PV generation and load profile data of net zero energy homes in South Australia
Fig. 1 shows the general configuration of a Net Zero Energy (NZE) home. The data are supplied in two separate Excel files: one containing raw half-hourly load data for a region in South Australia, and the other containing PV generation and load data scaled down for a single home. The second file consists of three variables: the PV power generation of a 3 kWp system, the residential load demand of a typical home, and the ambient temperature. Both the PV generation and the load data are expressed in kW. The data are specifically filtered for the state of South Australia and represent hourly values for a full year.

The data capture the PV generation pattern of the SA region; for example, the minimum PV generation occurs during the middle of the calendar year due to winter. The load data provide insight into the electricity usage pattern of South Australian homes. For example, electricity demand peaks in the early and late months of the year, when high summer temperatures increase the air-conditioning load. Fig. 2 illustrates the average daily PV generation and load profile of a typical SA household for the year 2015. It clearly demonstrates the significant mismatch between the PV generation and load patterns, which provides the justification for using battery storage with PV systems.

In Fig. 1, the exported power and the imported power represent the amount of power exchanged between the home and the grid due to this mismatch. The power balance equation at the point of common coupling is shown in Fig. 1. When the PV generation is higher than the load demand, PEXP > 0 and PIMP = 0. When the PV generation is lower than the load demand, PEXP = 0 and PIMP > 0. Because the dataset relates to an NZE home, the overall PV generated energy and the energy consumed by the home are equal over a year. Therefore, the annual exported energy is the same as the annual imported energy.

The method used to produce the PV generation and residential load profiles is given below. Hourly PV generation and ambient temperature data have been derived using the web platform Renewables.ninja. To generate the data, the following information is supplied to the system: the location of the PV system, the target PV system capacity, and the PV system loss. These factors can be varied according to the system requirements, and additional features such as the tracking type can be included. The data presented in this article are an hourly PV generation profile for the year 2015. Note that the data downloaded from this source are referenced to the UTC time zone. However, South Australian time is 10.5 hours ahead of UTC during the month of January. Therefore, a few rows of data from the bottom of the generated file need to be shifted to the top rows to align with South Australian time. The above steps can be used to generate household load and PV generation data for other Australian states.
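To make the power-flow and time-alignment steps above concrete, the sketch below derives the exported and imported power series from hourly PV and load columns and realigns the UTC-referenced Renewables.ninja output with South Australian time. It is a minimal illustration, not the authors' Excel/MATLAB workflow: the file name, the column names, and the assumed power balance P_PV + P_IMP = P_Load + P_EXP are hypothetical placeholders.

```python
# Minimal sketch (not the published workflow): split the PV/load mismatch into
# exported and imported power and shift UTC-referenced rows to local time.
# File and column names are hypothetical.
import pandas as pd

df = pd.read_excel("nze_home_profile.xlsx")      # hourly PV, load, temperature
pv, load = df["pv_kw"], df["load_kw"]            # both in kW

# Assumed power balance at the point of common coupling:
# P_PV + P_IMP = P_Load + P_EXP, with only one of P_EXP / P_IMP non-zero.
net = pv - load
df["p_exp_kw"] = net.clip(lower=0)               # PV surplus exported to the grid
df["p_imp_kw"] = (-net).clip(lower=0)            # deficit imported from the grid

# For an NZE home the annual exported and imported energy should roughly match
# (values are in kWh because the data are hourly).
print(df["p_exp_kw"].sum(), df["p_imp_kw"].sum())

# Align the UTC-referenced rows with South Australian time (UTC+10:30 in
# January): move the last rows of the year to the top. For hourly data this is
# roughly 10-11 rows; use about 21 rows for the raw half-hourly load file.
shift = 10
df_local = pd.concat([df.iloc[-shift:], df.iloc[:-shift]], ignore_index=True)
```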
This paper presents the hourly photovoltaic (PV) generation and residential load profiles of a typical South Australian Net Zero Energy (NZE) home. These data are used in the research article entitled “Energy Cost Minimization for Net Zero Energy Homes through Optimal Sizing of Battery Storage System” (Sharma et al., 2019). The PV generation data are derived using the publicly accessible Renewables.ninja web platform by supplying information such as the region of interest, the PV system capacity, losses and the tilt angle. The raw load profile data are sourced from the Australian Energy Market Operator (AEMO) website and are further processed and filtered to match the household load requirement. The processing of the data has been carried out using Microsoft Excel and MATLAB. The method used to obtain the required data from the downloaded raw dataset is described in this paper. While the data are generated for the state of South Australia (SA), the method described here can be used to produce datasets for any other Australian state.
The synthesis of recombinant membrane proteins in yeast for structural studies
The first crystal structures of mammalian membrane proteins derived from recombinant sources were solved in 2005 using protein that had been produced in yeast cells: the rabbit Ca2+-ATPase, SERCA1a, was overexpressed in Saccharomyces cerevisiae and the rat voltage-dependent potassium ion channel, Kv1.2, was produced in Pichia pastoris. Since then, several other host cells have been used for eukaryotic membrane protein production, including Escherichia coli, baculovirus-infected insect cells and mammalian cell lines. Whilst all host systems have advantages and disadvantages, yeasts have remained a consistently popular choice in the eukaryotic membrane protein field. As microbes, they are quick, easy and cheap to culture; as eukaryotes, they are able to post-translationally process eukaryotic membrane proteins. Very recent crystal structures of recombinant transmembrane proteins produced in yeast include those of human aquaporin 2, chicken bestrophin-1, the human TRAAK channel, human leukotriene C4 synthase, an algal P-glycoprotein homologue and mouse P-glycoprotein using P. pastoris-derived samples; the structures of the Arabidopsis thaliana NRT1.1 nitrate transporter, a fungal plant pathogen TMEM16 lipid scramblase and the yeast mitochondrial ADP/ATP carrier were solved using recombinant protein produced in S. cerevisiae.

Despite these successes, the overall rate of progress in membrane protein structural biology has, until very recently, been markedly slower than that in the soluble protein field. However, recent experimental breakthroughs mean that the gap is set to narrow. For example, the use of stabilizing mutants has had a revolutionary impact on increasing the crystallization propensity of some membrane protein targets, while incorporating fusion partner proteins such as T4 lysozyme (T4L) has been particularly important in structural studies of G protein-coupled receptors (GPCRs). From the perspective of the host cell, our improved understanding of cellular pathways controlling translation and protein folding, and how they influence functional recombinant protein yields, means it is now possible to select better expression strains; this knowledge has also allowed a more strategic approach to cell culture in order to maximise the productivity of each cell. Finally, new methods for extracting and solubilizing membrane proteins from the cell membrane using styrene maleic anhydride (SMA) co-polymers have enabled traditional detergents to be circumvented. The benefits of this approach include improved thermostability of the solubilized protein and retention of protein–lipid interactions that are normally disrupted during detergent extraction. This review focuses on current approaches to optimizing expression plasmids, yeast strains and culture conditions, as well as the extraction and purification of functional membrane proteins for crystallization trials using detergents and SMA co-polymers.

Over 1500 species of yeast are known, but only very few of them have been employed as host organisms for the production of recombinant proteins. The two most widely used for recombinant membrane protein production are S. cerevisiae and P. pastoris. These single-celled, eukaryotic microbes grow quickly in complex or defined media in formats ranging from multi-well plates to shake flasks and bioreactors of various sizes. P. pastoris has the advantage of being able to grow to very high cell densities and therefore has the potential to produce large amounts of recombinant membrane protein for structural analysis. This yeast has also been important in generating high-resolution GPCR crystal structures such as those of the adenosine A2A and histamine H1 receptors. However, because it is a strictly aerobic organism, the full benefits of P. pastoris are achievable only if it is cultured under highly aerated conditions; this is usually only possible in continuously stirred tank bioreactors. S. cerevisiae has the advantage that its genetics are better understood and that it is supported by a more extensive literature than P. pastoris. This has led to the development of a much wider range of tools and strains for improved membrane protein production. Consequently, projects requiring specialized strains may benefit from using S. cerevisiae as the host. Notably, the structure of the histamine H1 receptor was obtained from protein produced in P. pastoris, although initial screening to define the best expression construct was performed in S. cerevisiae. This is presumably because of the greater range of molecular biological tools available for S. cerevisiae at the screening stage, coupled with the superior yield characteristics of P. pastoris when cultured at larger scale in bioreactors. In principle, many of the tools established for S. cerevisiae could be transferred to P. pastoris, combining the strengths of both yeast species, although such work would be time-consuming. In our laboratory, we often start with P. pastoris and, if production is not straightforward, turn to S. cerevisiae to troubleshoot, thereby benefitting from the best attributes of the two hosts.

Having decided which yeast species will be used as the recombinant host, a suitable expression plasmid needs to be selected or designed. Table 1 lists examples of common plasmids that are used for recombinant protein production in S. cerevisiae and P. pastoris, while Sections 3.1–3.3 briefly review three key elements of such plasmids: the promoter, the nature of any tags and the codon sequence. Typically, episomal plasmids are used for expression in S. cerevisiae, whereas the expression cassette is integrated into the genome of P. pastoris. These continuing preferences may have resulted from the replication of early successes using particular plasmid/species combinations. Since the P. pastoris system depends upon very strong promoters, only a few copies of the gene are required to obtain sufficient levels of mRNA. In contrast, in S. cerevisiae, the promoter can be 10- to 100-fold weaker, so the use of episomal plasmids with high copy numbers is advantageous; episomal plasmids are available for P. pastoris, but are not yet widely used in structural biology projects.

Auxotrophic markers are routinely used in S. cerevisiae plasmids to select for successfully transformed yeast cells. Notably, the yield of the recombinant insulin analogue precursor protein was increased sevenfold simply by using the selection marker URA3 instead of LEU2. Truncations in the promoters of auxotrophic marker genes can further increase recombinant protein yields: by decreasing the promoter length, transcription of the marker gene on the plasmid is reduced and the cell compensates by increasing the plasmid copy number. A truncated LEU2 promoter was recently used to increase the yields of nine different transporters, including NRT1.1.

Most recombinant expression systems employed in structural biology pipelines depend upon strong, inducible promoters to drive high rates of mRNA synthesis. For example, the strong S. cerevisiae promoter, PGAL1, is induced with galactose, while PAOX1 is induced with methanol. In choosing a strong promoter, the idea is that transcription should not be rate limiting. However, high mRNA synthesis rates may be countered by high rates of mRNA degradation. Moreover, evidence from prokaryotic expression systems suggests that acquired mutations that lower promoter efficiency lead to improved functional yields of membrane proteins for some, but not all, targets. In a separate study, a series of E. coli strains that had been evolved to improve their yield characteristics were found to have a mutation in the hns gene, which has a role in transcriptional silencing. Together, these results support an emerging view that a suitable balance between mRNA and protein synthesis rates is desirable, although how this might be achieved in practice is not yet understood; one possibility might be a system based on slow, constitutive expression.

It has been proposed that the ideal inducible system would completely uncouple cell growth from recombinant synthesis, which requires the host cell to remain metabolically capable of transcription and translation in a growth-arrested state. In this scenario, all metabolic fluxes would be diverted to the production of recombinant protein. While this approach is yet to be demonstrated for membrane protein production in yeast cells, soluble chloramphenicol acetyltransferase was produced to more than 40% of total cell protein in E. coli, suggesting that this may be a strategy worth exploring in yeast. Indeed, growth rates often decline dramatically upon induction of yeast cultures, in part achieving this state. When wild-type P. pastoris cells were cultured in methanol, it was found that a higher proportion of the total mRNA pool was associated with two or more ribosomes compared with the same cells cultured in any other non-inducing growth condition. This observation suggests that high recombinant protein yields in methanol-grown cells are due not just to promoter strength, but also to the global response of P. pastoris to growth on methanol. However, PAOX1-driven expression is leaky; the recent characterization of pre-induction expression under the control of PAOX1 indicates that the uncoupling of growth and protein synthesis in P. pastoris cells has not yet been achieved. The response of a series of inducible S. cerevisiae promoters to different carbon sources has also been studied; this type of careful analysis of promoter expression patterns now opens up opportunities for dynamic regulation of recombinant protein production in S. cerevisiae.
In addition to the open reading frame (ORF) of the gene of interest, a typical expression plasmid will usually incorporate a number of other sequences in its expression cassette. The S. cerevisiae α-mating factor signal sequence is a common addition to commercial expression plasmids because it is believed to correctly target recombinant membrane proteins to the yeast membrane. For example, its presence had a positive impact on the yield of the mouse 5-HT5A serotonin receptor but dramatically reduced expression of the histamine H1 receptor. Alternative signal sequences have been used, such as the STE2 leader sequence of the fungal GPCR, Ste2p. Many expression plasmids contain tags as part of their DNA sequence, and it is straightforward to add a range of others by gene synthesis or polymerase chain reaction. Frequently used tags for recombinant membrane proteins are polyhistidine, green fluorescent protein (GFP) and T4L. These and others have been reviewed extensively elsewhere. Briefly, polyhistidine tags are routinely fused to recombinantly produced membrane proteins to facilitate rapid purification by metal chelate chromatography using Ni–NTA resins. In many cases, the tag is not removed prior to crystallization trials, although protease cleavage sites can be engineered into the expression plasmid if this is desired. GFP tags are used differently, typically to assess the functional yield or homogeneity of the purified recombinant protein prior to crystallization trials. In the former case, caution must be exercised because GFP tags remain fluorescent in eukaryotic cells irrespective of whether the partner membrane protein is correctly folded in the plasma membrane. GFP is therefore an inappropriate marker for assessing the folding status of recombinant membrane proteins produced in yeast prior to extraction, although it is still useful in analyzing the stability of a purified membrane protein by fluorescence size-exclusion chromatography. Finally, most GPCR crystal structures have been obtained using a fusion protein strategy in which the flexible third intracellular loop is replaced by T4L, with modified T4L variants having been developed to optimize crystal quality or promote alternative packing interactions. Overall, the precise combination and location of any tags needs to be decided based upon their proposed use and the biochemistry of the recombinant membrane protein.

The sequence of an mRNA transcript is critically important in determining the rate and accuracy of translation, meaning that optimal design of the corresponding DNA expression plasmid is essential to the success of a recombinant protein production experiment. Each organism is known to have a preference for some of the 64 available codons over others, but the biological reason for this is not yet clear. One idea is that each codon is decoded at a different rate: codons that are decoded quickly will be more resource efficient, while slower decoding will allow time for proper post-translational folding and translocation. Another idea is that different codons are read with different accuracy, which might affect proteolysis and degradation. Codon optimization involves manipulating the sequence of an ORF in order to maximize its expression. Several companies offer codon optimization services that account for codon bias in the host cell, mRNA GC content and secondary structure, while minimizing sites such as internal ribosome entry sites or premature polyA sites that may negatively affect gene expression. However, there is no guarantee that recombinant protein yields will be increased, as demonstrated for the production of two membrane proteins in E. coli. In contrast, careful codon optimization of the mouse P-glycoprotein gene for expression in P. pastoris led to substantially more recombinant protein compared with expression from the wild-type gene. It has been proposed that the mRNA sequence around the translation start site has a bigger influence on membrane protein yields than codon choice in the rest of the ORF, both in E. coli and P. pastoris, since strong mRNA structure in this region could affect translation initiation and therefore protein production. The use of degenerate PCR primers to optimize the codon sequence around the start codon therefore offers one approach to improving the expression plasmid.

As mentioned in Section 2, a wide range of S. cerevisiae resources are available, including comprehensive strain collections from which potential expression hosts can be selected. These resources are supported by a wealth of information in the Saccharomyces Genome Database. The yeast deletion collections comprise over 21,000 mutant strains with precise start-to-stop deletions of the approximately 6000 S. cerevisiae ORFs. The collections include heterozygous and homozygous diploids as well as haploids of both MATa and MATα mating types. Individual strains or the complete collection can be obtained from Euroscarf or the American Type Culture Collection. Complementing this, Dharmacon sells the Yeast Tet-Promoters Hughes Collection, with 800 essential yeast genes under the control of a tetracycline-regulated promoter that permits experimental regulation of essential genes. A number of specifically engineered S. cerevisiae strains also exist, including those with “humanized” sterol and glycosylation pathways. Notably, protease-deficient strains are a consistently popular choice in membrane protein structural biology projects. Use of specific strains from these collections offers the potential to gain mechanistic insight into the molecular bottlenecks that preclude high recombinant protein yields; we and others have used transcriptome analysis to guide strain selection. In an early study, we were able to identify genes that were up-regulated under high-yielding conditions for our target membrane protein but down-regulated under low-yielding conditions, or vice versa. This enabled us to select four high-yielding strains: srb5Δ, spt3Δ, gcn5Δ and yTHCBMS1. The use of the spt3Δ strain resulted in the largest yields of Fps1p in shake flasks. When the yTHCBMS1 strain was cultured in the presence of 0.5 μg/ml doxycycline, yields were increased 30-fold in shake flasks and over 70-fold in bioreactors compared with wild-type cells. Using the strains srb5Δ and gcn5Δ, Fps1p yields were increased 5- and 10-fold over wild-type, respectively. While these strains were originally selected to optimize Fps1p yields, we also noted generic advantages in that the functional yields of the adenosine A2A receptor and soluble GFP could be doubled using them. This suggests that both general and target-specific effects are likely to occur during recombinant protein production in yeast. It would be desirable to be able to distinguish between the two, but this remains challenging because of the limited number of studies that have been done, including those in yeast.

Specific metabolic pathways have been targeted in order to increase functional recombinant protein yields in yeast cells. For example, exploiting the global cellular stress response to misfolded proteins has been investigated as a route to improving functional yields of recombinant proteins for structural studies; it has even been argued that exposure to mild stress may enhance tolerance to a future stressful stimulus such as that imposed during recombinant protein production. A recent study of recombinant GPCR production in S. cerevisiae demonstrated that mislocalized proteins were associated with the endoplasmic reticulum chaperone, BiP, providing opportunities to regulate the chaperone network. The unfolded protein response (UPR) and the heat shock response (HSR) have also been examined; tuning expression levels to avoid or minimize UPR induction has previously been shown to increase functional membrane protein yields, while the HSR activates chaperones and the proteasome in order to relieve stress. HSR up-regulation has specifically been used to increase recombinant yields of soluble α-amylase in S. cerevisiae, but did not increase the yield of a recombinant human insulin precursor. Overall, studies such as these demonstrate that the manipulation of stress responses may influence recombinant protein yields in yeast, but that the magnitude of any effect is protein specific.

P. pastoris expression plasmids are usually integrated into the yeast genome to produce a stable production strain. Since it is not possible to precisely control the number of copies that integrate, the optimal clone must be selected experimentally. One approach is to screen on increasing concentrations of antibiotic to obtain so-called “jackpot” clones. Although the results in Fig. 3 suggest a correlation between the copy number of the integrated expression cassette and the final yield of recombinant protein, this is not always the case. Sometimes clones with lower copy numbers are more productive, suggesting that the cellular machinery is overwhelmed in jackpot clones. Consistent with this idea, adenosine A2A receptor yields were increased 1.8-fold when the corresponding gene was co-expressed in P. pastoris with the stress-response gene HAC1; Hac1 drives transcription of UPR genes. In contrast to the situation in S. cerevisiae, many fewer P. pastoris strains are available in which to integrate the expression plasmid for the generation of a recombinant production strain. The wild-type strain, X33, the histidine auxotroph GS115, and the slow-methanol-utilization strain KM71H have all been used to produce membrane proteins for structural studies. Protease-deficient strains such as SMD1163, which lacks proteinase A and proteinase B, are also available. The structures of recombinant membrane proteins produced using P. pastoris that were published in 2014 and 2015 were all obtained using one of the three mutant strains SMD1163, KM71H and GS115. Notably, production of human aquaporin 2 was actually carried out in an engineered GS115 strain in which the native aquaporin gene, AQY1, was deleted. In all these strains, P. pastoris post-translationally glycosylates membrane proteins by adding core glycan groups, but not the higher-order structures found in humans and other mammals; compared with S. cerevisiae, the mannose chains also tend to be shorter.
However, the effects of these non-native modifications are not necessarily detrimental and need to be assessed for each individual protein. Indeed, the high-resolution structure of a glycosylated form of the Caenorhabditis elegans P-glycoprotein demonstrates that yeast glycosylation does not necessarily hinder crystal formation. Nonetheless, in order to overcome potential bottlenecks in producing, purifying, characterizing and crystallizing human proteins in yeast, engineered strains have been developed, including strains with “humanized” glycosylation and sterol pathways.

The yeast membrane differs in composition from that of mammalian membranes. This is likely to be highly relevant to subsequent structural and functional studies of recombinant membrane proteins produced in yeast because lipids have a particularly important role in the normal function of membrane proteins; they contribute to membrane fluidity and may directly interact with membrane proteins. In an attempt to “humanize” the yeast membrane, yeast strains have been developed that synthesize cholesterol rather than the native yeast sterol, ergosterol. This was achieved by replacing the ERG5 and ERG6 genes of the ergosterol biosynthetic pathway with the mammalian genes DHCR24 and DHCR7, respectively. The gene products of DHCR7 and DHCR24 were identified as key enzymes that saturate sterol intermediates at positions C7 and C24 in cholesterol synthesis. Erg5p introduces a double bond at position C22 and Erg6p adds a methyl group at position C24 in the ergosterol biosynthetic pathway, and therefore competes with the gene product of DHCR24 for its substrate. The yeast tryptophan permease, Tat2p, was unable to function in a yeast strain producing only ergosterol intermediates, but in a cholesterol-producing strain activity was recovered to almost wild-type levels. Localization to the plasma membrane also appeared to correlate with the function of Tat2p. The yeast ABC transporter, Pdr12p, although correctly localized to the plasma membrane, was inactive in a cholesterol-producing strain because of the lack of the key methyl group at position C24. A similar scenario was observed for the function of yeast Can1p: the protein was localized to the plasma membrane regardless of the sterol produced, but function was lost when ergosterol production was disrupted. The native yeast GPCR, Ste2p, which is involved in signal transduction, partially retained its function when cholesterol was produced instead of ergosterol. The agonist of Ste2p, MFα, retained potency on this receptor in both wild-type and cholesterol-producing strains; however, the efficacy appeared to be only half of that observed in the wild-type strain. A positive outcome was observed when the human Na,K-ATPase α3β1 isoform was expressed in a cholesterol-producing P. pastoris strain: there was an improvement in recombinant yield and radio-ligand binding on intact cells, with the number of ligand binding sites in the cholesterol-producing strain increasing 2.5- to 4-fold compared with wild-type and protease-deficient strains, respectively, both of which are ergosterol-containing. Overall, studies on native yeast membrane proteins suggest that cell viability is not impaired in “humanized” yeast cells, although growth rates and densities are somewhat affected. However, this is likely to be an acceptable trade-off in return for higher yields of functional protein. Since a relatively small number of heterologous membrane proteins have been produced in cholesterol-producing yeast strains to date, potential exists to further optimize functional yields by using them.

Recovering functional protein from recombinant host cells is dependent upon their capacity to synthesize an authentically folded polypeptide. This requires the proper functioning of the transcription, translation and folding pathways. During a recombinant protein production experiment, the maintenance and processing of an expression plasmid places a substantial metabolic burden on a cell, which means that these pathways must operate under abnormally stressful conditions. A popular strategy to mitigate this burden is to decrease the culture temperature; however, transcription, translation, polypeptide folding rates and membrane composition are also affected by low-temperature stress. This probably explains why increased yields are not always observed experimentally using that approach. Furthermore, many other variables are likely to affect yields, including the composition of the growth medium, the pH and oxygenation of the culture, the inducer concentration and the point of induction.

Yeast cells grow quickly in complex or defined media; the selection and composition of suitable broths have been discussed elsewhere. While higher yields are typically achieved in complex media, more control is possible in defined media, such as the ability to incorporate selenomethionine for anomalous dispersion phasing in both S. cerevisiae and P. pastoris. The transcriptional and translational machinery of a cell respond to its growth rate, which is strongly affected by nutrient availability. For example, several inducible and constitutive S. cerevisiae promoters have recently been characterized following growth on different carbon sources and across the diauxic shift in glucose batch cultivation. The study demonstrates that constitutive promoters differ in their response to different carbon sources and that expression under their control decreases as glucose is depleted and cells enter the diauxic shift. Changes in nutrient source have also been found to alter the transcriptome and the global translational capacity of P. pastoris. As discussed in Section 3.1, when P. pastoris cells were cultured in methanol, the majority of the total mRNA pool was associated with two or more ribosomes per mRNA. Methanol is used to induce protein production under the control of PAOX1 in this yeast, suggesting that high recombinant protein yields may be associated with the global response of P. pastoris to methanol as well as promoter activity.

Several small molecules, sometimes referred to as chemical chaperones, have been investigated for their ability to enhance functional membrane protein yields. Specific improvements in yield have been reported following the addition to recombinant yeast cultures of dimethyl sulphoxide (DMSO), glycerol, histidine and protein-specific ligands. The effects of antifoams on protein yield, which are added to prevent foaming in bioreactor cultures, are discussed separately in Section 5.3. The solvent DMSO has numerous biological applications and is routinely used as a cryoprotectant and a drug vehicle. Addition of DMSO to yeast cultures producing membrane proteins has been reported to have a positive effect on yield. DMSO added at 2.5% v/v more than doubled the yield of 9 GPCRs produced in P. pastoris, with improvements of up to 6-fold. In another study, the production in S. cerevisiae of a range of transporters fused to GFP was enhanced on average by 30% following DMSO addition. However, DMSO has also been reported to have no effect or, in some cases, negative effects upon membrane protein yields. The underlying mechanisms are incompletely understood; DMSO is known to increase membrane permeability and to cross membranes itself. It has also been shown to up-regulate the transcription of genes involved in lipid biosynthesis and to increase phospholipid levels in S. cerevisiae. When DMSO is added with stabilizing ligands, it may therefore improve the ability of these compounds to pass through the membrane and reach receptors in compartments within the cell. Glycerol has been added to S. cerevisiae cultures producing human P-glycoprotein; at 10% v/v, yields were improved by up to 3.3-fold. Glycerol is not as membrane-permeable as DMSO, so it is thought to exert its effects by stabilizing protein conformation. However, in another study, glycerol addition had a negative impact upon the yields of several membrane proteins produced in S. cerevisiae.
When producing recombinant membrane proteins such as GPCRs, ligands may be added at saturating concentrations to boost yields. Functional yields of GPCRs such as the β2-adrenergic receptor were tripled, and those of the 5-HT5A and adenosine A2A receptors doubled, by adding receptor-specific ligands. An optimization study demonstrated that the addition of ligand could improve the functional yields of 18 out of 20 GPCRs, with increases of up to 7-fold; however, a decrease in Bmax was observed for two of the receptors investigated. It is thought that ligands able to pass through the plasma membrane may bind to receptors as they fold during biosynthesis, thereby stabilizing them in the correctly folded state. As a result, the level of functional receptors expressed at the plasma membrane is increased. The amino acid histidine has been shown to double the yields of some GPCRs when added to cultures at 0.04 mg/mL. Notably, its addition positively influenced fewer receptors than other additives such as DMSO. Histidine addition did not have any effect upon the growth of the cells; instead, it has been suggested that improved protein yields may result from its ability to protect yeast from oxidative stress. Overall, it is clear that the use of a range of additives has improved recombinant membrane protein yields for diverse targets. In some cases additive effects have been synergistic, while in others their addition has been detrimental. It is therefore important to systematically investigate the effects of additives on a case-by-case basis.

Membrane protein production is often carried out in bioreactors in order to obtain the large quantities of protein required for crystallization trials. The use of bioreactors enables tight control of critical parameters, such as culture temperature, pH and the level of dissolved oxygen, thereby enabling the design of highly reproducible bioprocesses. The most efficient way to select the optimal combination of these parameters is to use a design of experiments (DoE) approach. DoE applies a structured test design to determine how combining input parameters set at different levels affects the output. This efficient test design means that not all experimental combinations need to be tested in order to derive the empirical relationship between the input parameters and protein yield in the form of a deterministic equation. The DoE approach is therefore a highly efficient way to obtain a quantitative understanding of how each factor, and its interaction with all other factors, affects final protein yield; a brief illustrative sketch of such a factorial screen is given at the end of this review. While this strategy is ideally executed in a bioreactor format, even in shake flasks yields can be improved by careful control of culture conditions.

One of the most important parameters in bioreactor cultures, especially of P. pastoris cells, is appropriate oxygenation. This is achieved by vigorous stirring and sparging of gases, which usually leads to foaming. The addition of chemical antifoaming agents is therefore required to manage and prevent the formation of foam. As additives to the process, these chemicals can affect both the host cells and the recombinant proteins being produced; yields can be affected by the type of antifoam used, the concentration added, and whether production is undertaken in small shake flasks or in larger-scale bioreactors. Although the biological effects of antifoams are not well understood, they have been shown to affect the volumetric oxygen mass transfer coefficient, to influence the growth rates of yeast and are thought to alter membrane permeability. While it was possible to more than double the yield of soluble GFP secreted by P. pastoris cells following the optimization of antifoam addition, the same conditions had detrimental effects on the functional yield of a recombinant GPCR produced in yeast. These findings highlight the importance of investigating the effects of antifoam addition; this often disregarded experimental parameter can significantly affect recombinant protein yields.

In Section 3.1, we highlighted the fact that most recombinant expression systems employed in structural biology pipelines depend upon strong, inducible promoters. All promoters are known to vary in activity over time as well as in response to different carbon sources, which means that the timing of induction can be critical in obtaining the highest yields of functional protein; these parameters must be empirically determined. The response of a series of inducible S. cerevisiae promoters to different carbon sources has been studied, providing a framework for these types of experiments. We previously demonstrated the major impact of the induction regime on the yield of secreted GFP from P. pastoris cultures, showing the importance of matching the composition of the methanol feedstock to the metabolic activity of the cells. PAOX1 is induced on methanol; however, when glucose was the pre-induction carbon source, the adenosine A2A receptor and GFP were still produced in the pre-induction phases of bioreactor cultures. This study also revealed that a range of recombinant membrane proteins can be detected in the pre-induction phases of P. pastoris cultures when grown in bioreactors, but not shake flasks. The results of all these investigations suggest that a DoE approach to selecting and optimizing induction-phase conditions might be a particularly effective method of maximizing recombinant protein yields.

The first steps in isolating a recombinant membrane protein are to break open the host cells and harvest the membranes. In yeast this requires breaking the cell wall, which needs harsher conditions than those typically used for insect, mammalian or E. coli cells. Typical methods for achieving this include high-pressure disruption or homogenization using glass beads shaken at high frequency, followed by differential centrifugation. In isolating a recombinant membrane protein from yeast membranes, the goal is to maintain structural integrity and functionality. Depending on the protein target, this can be an extremely difficult task. However, approaches are available to optimize the extraction process and the environment into which the target protein is being transferred.

Traditionally, detergents have been used for membrane protein extraction, purification and crystallization; the general principles have been reviewed extensively elsewhere. Popular detergents include the non-ionic n-octyl-β-D-glucopyranoside, n-decyl-β-D-maltopyranoside (DM) and n-dodecyl-β-D-maltopyranoside. Interestingly, the most commonly used detergents to date are the same for yeast as for other expression systems, despite the differences in membrane composition. Optimization of detergent and buffer conditions must be carried out for each individual target membrane protein by assessment of protein stability and monodispersity. Unfortunately, membrane protein aggregation is a relatively common occurrence in these studies, since detergents do not provide an exact mimic of the lipid environment in which the protein natively resides. Alternative amphiphiles have been designed to overcome these limitations and include novel compounds such as maltose neopentyl glycol (MNG). It has been suggested that, for some target membrane proteins, MNG provides increased protein stability in comparison with detergents such as DM.

One useful technique to assess membrane protein stability prior to crystallization trials exploits a thiol-specific coumarin-maleimide fluorochrome (CPM), which enables the investigator to assess the thermal stability of a recombinant membrane protein in a high-throughput format, therefore requiring only small amounts of purified material. In order to use this assay, the target membrane protein must have cysteine residues buried within its hydrophobic interior. Such residues bind the thiol-specific CPM upon temperature-induced protein unfolding. CPM is essentially non-fluorescent until it reacts with a cysteine residue; therefore fluorescence can be recorded over time to determine the rate of protein unfolding. The influence of detergent type and concentration, salt concentration, pH, glycerol content and lipid addition on stability can all be investigated. Several studies have found a correlation between protein stability and the likelihood of obtaining well-ordered crystals for high-resolution structure determination.

When determining the structure of a protein it is important to demonstrate that it is functional. For many membrane proteins, measuring function in the detergent-solubilized state can be difficult, either because of detergent effects or because both ‘sides’ of the membrane are accessible. Therefore reconstitution of detergent-solubilized proteins into proteoliposomes is needed. Typically this involves the following steps: preparation of liposomes comprised of the desired lipids; destabilization of the liposomes with a detergent; mixing of detergent-purified protein with the liposomes; and removal of the detergent using methods such as adsorption onto Bio-Beads SM-2 resin or dialysis. Several proteins expressed in S. cerevisiae or P. pastoris have been reconstituted into proteoliposomes and studied, showing that proteins produced in yeast are fully functional and comparable to those expressed in other cell systems.
Although all crystal structures of membrane proteins to date, including those synthesized in yeast, have used detergents for extraction of the protein from the lipid bilayer, the use of detergents is not without problems. As mentioned in Section 6.1, screening for conditions and detergents that effectively extract the protein yet retain its structure and stability can be difficult, time consuming and expensive. The environment produced by a detergent micelle does not fully mimic the lipid bilayer: not only does the bilayer provide lateral pressure to stabilize the protein structure, but interactions between the protein and its annular lipids can affect protein function. Notably, the most effective detergents for extraction are often not the best detergents for crystal formation.

Recently, a new detergent-free method for the extraction of membrane proteins has emerged using SMA co-polymers. The SMA inserts into biological membranes and forms small discs of lipid bilayer surrounded by the polymer, termed SMALPs, also known as Lipodisqs or native nanodiscs. Membrane proteins within the SMALPs retain their annular lipid bilayer environment, yet the particles are small, stable and water soluble, allowing standard affinity chromatography methods to be used to purify a protein of interest. To date this approach has been successfully applied to a wide range of transmembrane proteins from many different expression systems, including both S. cerevisiae and P. pastoris, for protein targets including GPCRs, ABC transporters and ion channels. Proteins within SMALPs have been shown to retain functional activity. The small size of the particle and the lack of interference from the polymer scaffold mean that SMALPs are ideal for many spectroscopic and biophysical techniques. Importantly for structural studies, SMALP-encapsulated proteins have been found to be significantly more thermostable, less prone to aggregation, and easier to concentrate than detergent-solubilized proteins. The importance of maintaining the lipid bilayer environment and lateral pressure is highlighted in Fig. 5d. When the adenosine A2A receptor is extracted from P. pastoris membranes with detergent, it is necessary to supplement with the cholesterol analogue cholesterol hemisuccinate (CHS) in order to retain any binding activity. However, when the SMA co-polymer is used to extract the receptor, there is no requirement for CHS, suggesting that it is not the cholesterol per se that is required for the function of this protein, but some stabilizing interaction with lipids. Although, as yet, there are no reports of SMALP-encapsulated proteins being used to generate protein crystals, they have been used in both negative-stain and single-particle cryo-electron microscopy. With recent technological and analytical advances within the field of electron microscopy, the possibility of high-resolution membrane protein structures using electron microscopy has become a reality; SMALPs offer the ability for these structures to be obtained without stripping away the membrane environment from a transmembrane protein.

Yeast has an important role to play in membrane protein structural biology projects; since S. cerevisiae and P. pastoris are particularly amenable to genetic study, new insight may emerge that can lead to the design of improved experiments. One challenge is to identify which of the experimental parameters discussed in Sections 3–5 above should be the focus in devising a production trial for a novel target. This is particularly demanding since these parameters may affect both host-cell- and target-protein-specific responses. Our understanding of the interlinked processes of transcription, translation and protein folding offers new opportunities to improve functional yields of recombinant membrane proteins through strain selection and the choice of suitable culture conditions using DoE. Coupled with new approaches to extraction and solubilization, it is likely that the pace of solving new membrane protein structures is set to increase in the foreseeable future.
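As a concrete illustration of the DoE idea referred to in Section 5 and in the concluding paragraph above, the sketch below sets up a two-level full-factorial screen of three culture variables and fits the resulting "deterministic equation" by least squares. The factors, levels, and yield values are hypothetical, and a real study would typically use dedicated DoE software and fractional designs to reduce the number of runs.

```python
# Illustrative two-level full-factorial screen of culture conditions, in the
# spirit of the DoE approach discussed above. Factors, levels and the yield
# numbers are hypothetical placeholders.
from itertools import product
import numpy as np

factors = {
    "temperature_C": (20, 30),
    "pH":            (5.0, 7.0),
    "DMSO_pct":      (0.0, 2.5),
}
runs = list(product(*factors.values()))   # 2**3 = 8 combinations to test

# Coded levels (-1 for low, +1 for high) for each run, plus an intercept column.
X = np.array([[1] + [-1 if v == lo else 1
                     for v, (lo, _) in zip(run, factors.values())]
              for run in runs], dtype=float)

# Measured yields for each run would come from the experiment (mg/L, made up here).
y = np.array([1.0, 1.4, 0.9, 1.6, 1.2, 2.1, 1.1, 2.4])

# Least-squares fit gives the empirical "deterministic equation": an intercept
# plus one main effect per factor (interaction terms could be appended to X).
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
for name, c in zip(["intercept"] + list(factors), coeffs):
    print(f"{name}: {c:+.2f}")
```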
Historically, recombinant membrane protein production has been a major challenge meaning that many fewer membrane protein structures have been published than those of soluble proteins. However, there has been a recent, almost exponential increase in the number of membrane protein structures being deposited in the Protein Data Bank. This suggests that empirical methods are now available that can ensure the required protein supply for these difficult targets. This review focuses on methods that are available for protein production in yeast, which is an important source of recombinant eukaryotic membrane proteins. We provide an overview of approaches to optimize the expression plasmid, host cell and culture conditions, as well as the extraction and purification of functional protein for crystallization trials in preparation for structural studies.
ACP-DL: A Deep Learning Long Short-Term Memory Model to Predict Anticancer Peptides Using High-Efficiency Feature Representation
Cancer is one of the most devastating killers of human beings, accounting for millions of deaths around the world each year.1,2 Conventional physical and chemical methods, including targeted therapy, chemotherapy, and radiation therapy, remain the principal modes of treating cancer; they focus on killing the diseased cells, but normal cells are also adversely affected.3,4 Moreover, these treatments are expensive and inefficient, which means there is an urgent need to develop novel, efficient measures against this deadly disease.5 The discovery of anticancer peptides (ACPs), a kind of short peptide generally less than 50 amino acids in length and mostly derived from antimicrobial peptides (AMPs), often cationic in nature, has led to the emergence of a novel alternative therapy to treat cancer. ACPs open a promising perspective for cancer treatment, and they have various attractive advantages,6,7 including high specificity, ease of synthesis and modification, low production cost, and so on.8 ACPs can interact with the anionic cell membrane components of cancer cells only and, for this reason, can selectively kill cancer cells with almost no harmful effect on normal cells.4,9 In addition, a few ACPs, e.g., cell-penetrating peptides or peptide drugs, act by inhibiting the cell cycle or other cellular functions. Thus, they are safer than traditional broad-spectrum drugs, which makes them a highly competitive choice as therapeutics compared with small molecules and antibodies. In recent years, ACP therapeutics have been extensively explored and used to fight various tumor types across different phases of preclinical and clinical trials.10–14 However, only a few of them can eventually be employed for clinical treatment. Furthermore, it is time-consuming, expensive, and laboratory-limited to identify potential new ACPs by experiment. Given the huge therapeutic importance of ACPs, there is an urgent need to develop highly efficient prediction techniques.

Some notable research has been reported in the prediction of ACPs.15 Tyagi et al.16 developed a support vector machine (SVM) model using amino acid composition (AAC) and dipeptide composition as input features on experimentally confirmed anticancer peptides and random peptides derived from the Swiss-Prot database. Hajisharifi et al.17 also reported an SVM model using Chou’s18,19 pseudo AAC and the local alignment kernel-based method. Vijayakumar and Ptv20 proposed that there was no significant difference in AAC between ACPs and non-ACPs. They also presented a novel encoding measure that achieved better predictive performance than AAC-based features by considering both compositional information and centroidal, distributional measures of amino acids. Shortly afterward, based on optimal g-gap dipeptide components and by exploring the correlation between long-range residues and sequence-order effects, Chen et al.21 described iACP, which exhibited the best predictive performance at that time. More recently, Wei et al.22 developed a sequence-based predictor called ACPred-FL, which uses two-step feature selection and seven different feature representation methods. Because ACPs are short, it is difficult to exploit many of the mature feature representation methods that are widely used on longer protein sequences.23 With the rapid growth in the number of ACPs identified experimentally and through machine learning and bioinformatics research,24–40 computational prediction methods for ACPs still need further development.
In this study, we proposed a deep learning long short-term memory (LSTM) neural network model to predict anticancer peptides, which we named ACP-DL. Efficient features extracted from the peptide sequences are fed as input to train the LSTM model. More specifically, each peptide sequence is transformed into a k-mer sparse matrix over a reduced amino acid alphabet,41,42 a 2D representation that retains almost complete sequence-order and amino acid composition information. Each peptide sequence is also converted into a binary profile feature,43 which can be regarded as one-hot encoding of categorical variables and has been suggested to be an efficient feature extraction technique.16,22 Finally, these features are fed into our LSTM model to predict new anticancer peptides. To further evaluate the performance of our model, we tested ACP-DL on two novel benchmark datasets. We also compared the proposed ACP-DL with existing state-of-the-art machine-learning models, e.g., SVM,44,45 Random Forest (RF),46 and Naive Bayes (NB).47 The 5-fold cross-validation experimental results showed that our method is suitable for the anticancer peptide prediction task, with notable prediction performance. The workflow of ACP-DL is shown in Figure 1.

To begin, we compared the distributions of amino acids in anticancer peptides, non-anticancer peptides, and all peptides in the ACP740 and ACP240 datasets. The results for ACP740 are shown in Figure 2; the composition of all 20 amino acids in these peptides was counted and compared. Certain residues, including Cys, Phe, Gly, His, Ile, Asn, Ser, and Tyr, were found to be more abundant in anticancer peptides than in non-anticancer peptides, while Glu, Leu, Met, Gln, Arg, and Trp were more abundant in non-anticancer peptides. Similarly, as shown in Figure 3, in dataset ACP240, Phe, His, Ile, and Lys were abundant in anticancer peptides. Terminal residues also play essential roles in the biological functions of peptides.

First, we executed our model ACP-DL on the ACP740 and ACP240 datasets to evaluate its ability to predict anticancer peptides. The 5-fold cross-validation details are given in Tables 1 and 2. The average accuracy of 5-fold cross-validation on ACP740 was 81.48% with 3.12% SD, the average sensitivity (Sens) was 82.61% with 3.36% SD, the average specificity (Spec) was 80.59% with 4.01% SD, the mean precision (Prec) was 82.41% with 3.81% SD, and the Matthews correlation coefficient (MCC) was 63.05% with 6.23% SD. ACP-DL showed an outstanding capability to identify anticancer peptides, achieving an area under the receiver operating characteristic curve (AUC) of 0.894, as shown in Figure 4A, and achieved the best performance on the ACP740 dataset among all comparison methods. The mean accuracy of 5-fold cross-validation on ACP240 was 85.42%, the average Sens was 84.62%, the average Spec was 89.94%, the mean Prec was 80.28%, and the MCC was 71.44%; the AUC of ACP-DL was 0.906, as shown in Figure 4C. In general, the performance of a deep learning model improves as the scale of the data increases, and a model that achieves good results on smaller datasets will usually also achieve good results on larger data. In practice, the number of known anticancer peptides is not very large, so we did not implement a neural network model with a very complex architecture; to a certain extent, 5-fold cross-validation is also not conducive to a neural network model, because it further reduces the amount of training data. It is noteworthy that, although the dataset ACP240 was smaller than ACP740, our model ACP-DL still performed very well.
The experimental results of rigorous cross-validation on the benchmark datasets ACP740 and ACP240 confirmed that our model has a good capability to predict anticancer peptides.

To evaluate the ability of the proposed method, we further compared ACP-DL with other widely used machine-learning models on the same benchmark datasets, ACP740 and ACP240. Here we selected the SVM, RF, and NB models and built them using the same cross-validation datasets. The implementations of these three machine-learning models come from Scikit-learn,48 and they were tested with default parameters. All methods were evaluated using the same criteria; the results for the comparison models and the deep learning model ACP-DL are shown in Table 3 and Figures 4 and 5. ACP-DL achieved the best performance among the compared methods. Table 3 shows the details of the comparison. On the ACP740 dataset, our method ACP-DL significantly outperformed the other methods with an accuracy of 81.48%, a Sens of 82.61%, a Spec of 80.59%, a Prec of 82.41%, an MCC of 63.05%, and an AUC of 0.894; ACP-DL increased the accuracy by over 5%, the MCC by over 10%, and the AUC by more than 5%. On the ACP240 dataset, ACP-DL also performed remarkably well, with an accuracy of 85.42%, a Sens of 84.62%, a Spec of 89.94%, a Prec of 80.28%, an MCC of 71.44%, and an AUC of 0.906; ACP-DL improved the accuracy by over 8%, the Spec by over 10%, the MCC by over 14%, and the AUC by more than 5%. Clearly, the deep learning model shows its power, and our model is suitable for anticancer peptide identification and prediction. ACP-DL is a competitive model for predicting anticancer peptides and can accelerate related research. The comparison experiments supported our assumption.

In this study, we proposed a deep learning LSTM model to predict potential anticancer peptides using high-efficiency feature representation. More specifically, we developed an efficient feature representation approach by integrating the binary profile feature and the k-mer sparse matrix of the reduced amino acid alphabet to fully exploit peptide sequence information. Then we implemented a deep LSTM model to automatically learn to distinguish anticancer peptides from non-anticancer peptides. To the best of our knowledge, this is the first time that a deep LSTM model has been applied to predict anticancer peptides. Meanwhile, to evaluate the capability of the proposed method, we further compared ACP-DL with widely used machine-learning models on the same benchmark datasets, ACP740 and ACP240; the experimental results of the 5-fold cross-validation show that the proposed method achieved outstanding performance compared with existing methods, with 81.48% accuracy and an AUC of 0.894 on benchmark dataset ACP740, and an accuracy of 85.42%, a Spec of 89.94%, and an AUC of 0.906 on dataset ACP240. The improvement comes mainly from the deep LSTM model's parameter optimization and the effective feature representation of the original peptide sequences. In addition, we have contributed two novel anticancer peptide benchmark datasets, ACP740 and ACP240, in this work. It is anticipated that ACP-DL will become a very useful high-throughput and cost-effective tool that is widely used in anticancer peptide prediction as well as cancer research. Further, as demonstrated in a series of recent publications on developing new prediction methods,49–51 user-friendly and publicly accessible web servers significantly enhance their impact. It is our wish to provide in the future a web server for the prediction method presented in this paper.

In this study, we proposed a novel deep learning LSTM model to predict anticancer peptides, named ACP-DL, using high-efficiency features provided by the k-mer sparse matrix and the binary profile feature. Furthermore, we evaluated ACP-DL's predictive performance for anticancer peptides on the benchmark datasets ACP740 and ACP240. Moreover, we compared ACP-DL with three widely used machine-learning models on the same datasets, namely SVM,44 RF,46 and NB,47 to demonstrate the robustness and effectiveness of the proposed method. Finally, we provide a summary, analysis, and discussion of ACP-DL.

We constructed two novel benchmark datasets for ACP identification in this work, named ACP740 and ACP240. Following previous studies, each dataset comprises both positive and negative samples: experimentally validated ACPs were used as positive samples, while AMPs without anticancer function were collected as negative samples. We selected 388 samples as the initial positive data on the basis of Chen et al.'s21 and Wei et al.'s24 studies, of which 138 were from Chen et al.'s work and 250 were from Wei et al.'s work. Correspondingly, the initial negative data comprised 456 samples, of which 206 were from Chen et al.'s work and 250 were from Wei et al.'s work, respectively. To avoid dataset bias, the widely used tool CD-HIT52 was applied to remove peptide sequences with a similarity of more than 90%. As a result, we finally obtained a dataset containing 740 samples, of which 376 were positive and 364 were negative. Following the same procedure, and to validate the generalization ability of the predictive model, we constructed an additional dataset, named ACP240, which initially included 129 experimentally validated anticancer peptide samples as the positive set and 111 AMPs without anticancer function as the negative set. Again, sequences with a similarity of more than 90% were removed using the popular tool CD-HIT.52 The similarity threshold is consistent with previous studies.21,22 CD-HIT is available at http://weizhong-lab.ucsd.edu/cdhit-web-server. There is no overlap between dataset ACP740 and dataset ACP240, and both are non-redundant datasets. The two benchmark datasets are publicly available at https://github.com/haichengyi/ACP-DL.

We encoded each peptide sequence using the previously proposed k-mer sparse matrix.41 In this representation, k consecutive residues are regarded as a unit; a 3-mer of a peptide is therefore composed of 3 amino acids.53 First, the 20 amino acids were reduced into 7 groups based on their dipole moments and side-chain volumes: Ala, Gly, and Val; Ile, Leu, Phe, and Pro; Tyr, Met, Thr, and Ser; His, Asn, Gln, and Trp; Arg and Lys; Asp and Glu; and Cys.16,54,55 The peptide sequence was thus reduced to a 7-letter alphabet. Then we scanned each peptide sequence from left to right, stepping one amino acid at a time, so that the k-mer context around each residue is captured.
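The sketch below illustrates the two feature representations described above: the binary profile (one-hot) encoding and a k-mer representation over the 7-letter reduced amino acid alphabet, shown here as a flat k-mer count vector rather than the 2D sparse matrix used in the paper. It is only a schematic reading of the text; the exact matrix layout, padding length, and normalization in the published ACP-DL code may differ.

```python
# Minimal sketch of the two feature representations described in the text:
# a binary profile (one-hot) encoding and k-mer counts over the 7-letter
# reduced amino acid alphabet. Not the published ACP-DL implementation.
from itertools import product
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
# Reduction of the 20 amino acids into 7 groups (dipole moment / side-chain volume).
GROUPS = ["AGV", "ILFP", "YMTS", "HNQW", "RK", "DE", "C"]
REDUCE = {aa: str(i) for i, g in enumerate(GROUPS) for aa in g}

def binary_profile(seq: str, max_len: int = 50) -> np.ndarray:
    """One-hot (binary profile) matrix of shape (max_len, 20), zero-padded."""
    m = np.zeros((max_len, len(AMINO_ACIDS)))
    for i, aa in enumerate(seq[:max_len]):
        m[i, AMINO_ACIDS.index(aa)] = 1
    return m

def kmer_features(seq: str, k: int = 3) -> np.ndarray:
    """Normalised k-mer counts over the reduced 7-letter alphabet (7**k values)."""
    reduced = "".join(REDUCE[aa] for aa in seq)
    kmers = ["".join(p) for p in product("0123456", repeat=k)]
    index = {km: i for i, km in enumerate(kmers)}
    counts = np.zeros(len(kmers))
    for i in range(len(reduced) - k + 1):      # slide one residue at a time
        counts[index[reduced[i:i + k]]] += 1
    return counts / max(len(reduced) - k + 1, 1)

example = "FLPILASLAAKFGPKLFCLVTKKC"            # a made-up peptide sequence
print(binary_profile(example).shape, kmer_features(example).shape)
```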
Further, as demonstrated in a series of recent publications on developing new prediction methods,49–51 user-friendly and publicly accessible web servers significantly enhance their impact. We hope to provide a web server for the prediction method presented in this paper in the future. In this study, we proposed a novel deep learning LSTM model to predict anticancer peptides, named ACP-DL, using high-efficiency features provided by the k-mer sparse matrix and the binary profile feature. Furthermore, we evaluated ACP-DL's predictive performance for anticancer peptides on the benchmark datasets ACP740 and ACP240. Moreover, we compared ACP-DL with three widely used machine-learning models on the same datasets, namely SVM,44 RF,46 and NB,47 to prove the robustness and effectiveness of the proposed method. Finally, we summarized, analyzed, and discussed ACP-DL. We constructed two novel benchmark datasets for ACP identification in this work, named ACP740 and ACP240. As previous studies suggested, the new datasets comprise both positive and negative samples: experimentally validated ACPs were used as positive samples, while AMPs without anticancer function were collected as negative samples. We selected 388 samples as the initial positive data on the basis of Chen et al.'s21 and Wei et al.'s24 studies, of which 138 were from Chen et al.'s work and 250 were from Wei et al.'s work. Correspondingly, the initial negative data comprised 456 samples, of which 206 were from Chen et al.'s work and 250 were from Wei et al.'s work, respectively. To avoid dataset bias, the widely used tool CD-HIT52 was further used to remove peptide sequences with a similarity of more than 90%. As a result, we finally obtained a dataset containing 740 samples, of which 376 were positive samples and 364 were negative samples. Following the same procedure, to validate the generalization ability of the predictive model, we further constructed an additional dataset, named ACP240, which initially included 129 experimentally validated anticancer peptide samples as the positive dataset and 111 AMPs without anticancer functions as the negative dataset, respectively. Moreover, sequences with a similarity of more than 90% were removed using the popular tool CD-HIT.52 The similarity setting was consistent with previous studies.21,22 CD-HIT is available at http://weizhong-lab.ucsd.edu/cdhit-web-server. There was no overlap between dataset ACP740 and dataset ACP240, and both datasets are non-redundant. The two benchmark datasets are publicly available at https://github.com/haichengyi/ACP-DL. We also encoded the peptide sequence using the k-mer sparse matrix proposed previously.41 In detail, every k consecutive amino acids are regarded as a unit; a 3-mer of a peptide is thus composed of 3 amino acids.53 First, the 20 amino acids were reduced into 7 groups based on their dipole moments and side chain volumes: Ala, Gly, and Val; Ile, Leu, Phe, and Pro; Tyr, Met, Thr, and Ser; His, Asn, Gln, and Trp; Arg and Lys; Asp and Glu; and Cys.16,54,55 Thus, each peptide sequence was reduced to a 7-letter alphabet. Then we scanned each peptide sequence from left to right, stepping one amino acid at a time, to capture the k-mer composition at each position. LSTM is an improvement on the recurrent neural network (RNN), which is mainly used in the natural language processing and speech recognition fields.57–59 Different from a traditional neural network, an RNN can take advantage of sequence information.
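A small sketch may help illustrate the reduced-alphabet k-mer idea described above. The 7-group mapping follows the grouping listed in the text; the group letters, the dictionary layout, and the use of a simple 3-mer count vector (rather than the full sparse matrix) are simplifying assumptions for illustration only.

```python
from itertools import product

# Seven reduced groups (letters a-g are arbitrary labels), following the grouping in the text.
GROUPS = {
    "AGV": "a", "ILFP": "b", "YMTS": "c", "HNQW": "d", "RK": "e", "DE": "f", "C": "g",
}
AA_TO_GROUP = {aa: letter for aas, letter in GROUPS.items() for aa in aas}

def reduce_alphabet(peptide: str) -> str:
    """Map a peptide onto the 7-letter reduced alphabet, skipping unknown residues."""
    return "".join(AA_TO_GROUP.get(aa, "") for aa in peptide)

def kmer_counts(peptide: str, k: int = 3) -> list:
    """Count 3-mers over the reduced alphabet by sliding one residue at a time."""
    reduced = reduce_alphabet(peptide)
    all_kmers = ["".join(p) for p in product("abcdefg", repeat=k)]  # 7**3 = 343 bins
    counts = dict.fromkeys(all_kmers, 0)
    for i in range(len(reduced) - k + 1):
        counts[reduced[i:i + k]] += 1
    return [counts[m] for m in all_kmers]

vector = kmer_counts("FLPIIAKLLSGLL")  # hypothetical peptide
print(len(vector), sum(vector))        # 343 bins, one count per sliding window
```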
Theoretically, an RNN can utilize information from sequences of arbitrary length; however, because of the vanishing gradient problem in the network structure, in practical applications it can only effectively use information from time steps that are close to the current one. To solve this problem, LSTM was introduced with a specially designed network architecture that can naturally learn long-term dependencies. A general LSTM architecture is composed of an input gate, a forget gate, an output gate, and a memory block. The improvement of LSTM comes mainly from incorporating a memory cell that allows the network to learn when to forget previous hidden states and when to update hidden states according to the input information through time. It uses dedicated storage units to store information. To our knowledge, this is the first work in which a deep LSTM model has been applied to predict novel anticancer peptides. LSTM selectively passes information through gate units, each of which consists mainly of a sigmoid neural layer and an element-wise multiplication operation. Each element of the sigmoid layer output is a real number between 0 and 1, representing the weight with which the corresponding information passes through; for example, 0 means no information is allowed through, and 1 means all information passes. The implementation of the deep learning model is based on the Keras framework, which is capable of running on top of TensorFlow, Theano, or CNTK, is supported on both GPUs and CPUs, and was developed with a focus on enabling fast experimentation.61 H.-C.Y. and Z.-H.Y. conceived the algorithm, carried out analyses, prepared the datasets, carried out experiments, and wrote the manuscript. Other authors designed, performed, and analyzed experiments and wrote the manuscript. All authors read and approved the final manuscript. The authors declare no competing interests.
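Since the text above notes that the model is implemented with Keras, a minimal sketch of a Keras LSTM binary classifier is given here for orientation. The layer sizes, dropout rate, sequence length, and feature dimension are illustrative assumptions and are not taken from the ACP-DL source code.

```python
from tensorflow import keras
from tensorflow.keras import layers

MAX_LEN, N_FEATURES = 50, 20  # e.g., one-hot encoded peptides (assumed dimensions)

model = keras.Sequential([
    layers.Input(shape=(MAX_LEN, N_FEATURES)),
    layers.LSTM(64),                        # a single LSTM layer summarising the sequence
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),  # binary output: ACP vs. non-ACP
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()

# Training would then look like (with hypothetical arrays):
# model.fit(X_train, y_train, epochs=30, batch_size=32, validation_data=(X_val, y_val))
```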
Cancer is a well-known killer of human beings that has led to countless deaths and misery. Anticancer peptides open a promising perspective for cancer treatment, and they have various attractive advantages. Conventional wet experiments are expensive and inefficient for finding and identifying novel anticancer peptides. There is therefore an urgent need to develop a novel computational method to predict novel anticancer peptides. In this study, we propose a deep learning long short-term memory (LSTM) neural network model, ACP-DL, to effectively predict novel anticancer peptides. More specifically, to fully exploit peptide sequence information, we developed an efficient feature representation approach by integrating the binary profile feature and the k-mer sparse matrix of the reduced amino acid alphabet. Then we implemented a deep LSTM model to automatically learn how to identify anticancer peptides and non-anticancer peptides. To our knowledge, this is the first time that a deep LSTM model has been applied to predict anticancer peptides. Cross-validation experiments demonstrated that the proposed ACP-DL remarkably outperformed other comparison methods, with high accuracy and satisfactory specificity on the benchmark datasets. In addition, we also contributed two new anticancer peptide benchmark datasets, ACP740 and ACP240, in this work. The source code and datasets are available at https://github.com/haichengyi/ACP-DL.
3
Influence of alkalinity and temperature on photosynthetic biogas upgrading efficiency in high rate algal ponds
Biogas from the anaerobic digestion of organic matter constitutes a promising renewable energy vector for the production of heat and power in households and industry .Raw biogas is mainly composed of CH4, CO2 and other components at lower concentrations such as H2S, oxygen, nitrogen, siloxanes, ammonia and halogenated hydrocarbons .The high content of CO2 significantly reduces the specific calorific value of biogas, increases its transportation costs and promotes emissions of CO and hydrocarbons during combustion.On the other hand, H2S is a toxic and malodorous gas that severely reduces the lifespan of the biogas storage structures, pipelines, boilers and internal combustion engines .The removal of these biogas pollutants is mandatory in order to comply with the technical specifications required for biogas injection into natural gas grids or use as a vehicle fuel .State-of-the-art physical/chemical or biological technologies for CO2 removal often need a previous H2S cleaning step, while the few technologies capable of simultaneously removing CO2 and H2S from biogas exhibit a high energy and chemicals consumption, which limits their economic and environmental sustainability for biogas upgrading .In this context, algal-bacterial symbiosis represents a cost-effective and environmentally friendly platform for the simultaneous removal of CO2 and H2S from raw biogas in a single step process .Photosynthetic biogas upgrading in algal-bacterial photobioreactors is based on the light-driven CO2 consumption by microalgae coupled to the oxidation of H2S to either elemental sulfur or sulfate by sulfur-oxidizing bacteria using the oxygen photosynthetically produced .The environmental and economic sustainability of the process can be boosted with the integration of wastewater treatment in the photobioreactor devoted to biogas upgrading .In this regard, digestate or domestic wastewater can be used as an inexpensive nutrient source for microalgae and bacteria growth during photosynthetic biogas upgrading, which in turn would reduce the costs associated to nutrients removal .Recent investigations have focused on the optimization of the simultaneous biogas upgrading and digestate treatment in photobioreactors.These studies have identified the optimum photobioreactor configuration , the strategies for minimizing oxygen concentration in the biomethane and the influence of light intensity, wavelength and photoperiod regime on the final quality of the upgraded biogas under indoors conditions .Unfortunately, most of these previous works did not result in a biomethane composition complying with the specifications of most European regulations due to the limited CO2 mass transfer rates from the raw biogas to the aqueous phase .In this context, a recent study conducted outdoors in a high rate algal pond interconnected to an external absorption column for the simultaneous treatment of biogas and centrate suggested that both alkalinity and temperature in the algal-bacterial broth can play a key role on the final biomethane quality .Indeed, culture broth alkalinity determines the kinetics of both microalgae growth in the HRAP and CO2/H2S absorption in the absorption column .Likewise, culture broth temperature directly impacts on the gas/liquid equilibria and biomass growth kinetics .However, despite the relevance of these environmental parameters on the performance of photosynthetic biogas upgrading, no study has evaluated to date the effect of alkalinity and temperature on the final quality of biomethane in 
algal-bacterial photobioreactors.This work systematically evaluated the influence of inorganic carbon concentration and temperature in the cultivation broth on biomethane quality in a 180 L HRAP interconnected to a 2.5 L absorption column via external recirculation of the settled cultivation broth under indoor conditions.The tested inorganic carbon concentrations are typically encountered in high and medium strength digestates and domestic wastewater, respectively, while the tested temperatures are representative of spring-autumn and summer seasons in temperate climates.A synthetic gas mixture composed of CO2, H2S and CH4, was used in this study as a model biogas.Centrate was collected from the anaerobically digested sludge-dehydrating centrifuges at Valladolid wastewater treatment plant and stored at 4 °C prior to use.The average centrate composition was as follows: inorganic carbon = 459 ± 83 mg L−1, total nitrogen = 576 ± 77 mg L−1 and S-SO42− = 4.7 ± 3.4 mg L−1.NH4Cl was added to the raw centrate to a final TN concentration of 1719 ± 235 mg L−1 in order to simulate a high-strength digestate and thus minimize the flow rate of centrate used in the pilot plant.The experimental set-up was located at the Department of Chemical Engineering and Environmental Technology at Valladolid University.The set-up consisted of a 180 L HRAP with an illuminated surface of 1.2 m2 divided by a central wall in two water channels.The HRAP was interconnected to a 2.5 L absorption column via external liquid recirculation of the supernatant of the algal-bacterial cultivation broth from a 10 L conical settler coupled to the HRAP.The remaining algal bacterial biomass collected at the bottom of the settler was continuously recirculated to the HRAP in order to avoid the development of anaerobic conditions in the settler due to an excessive biomass accumulation.The HRAP cultivation broth was continuously agitated by a 6-blade paddlewheel at an internal recirculation velocity of ≈20 cm s−1.A photosynthetic active radiation of 1350 ± 660 μmol m−2 s−1 at the HRAP surface was provided by six high-intensity LED PCBs operated in a 12 h:12 h light/dark regime.Six operational conditions were tested in order to assess the influence of alkalinity and temperature on biomethane quality.The influence of IC concentrations of 1500, 500 and 100 mg L−1 was evaluated in stages I–II, III–IV and V–VI, respectively, while a temperature of 35 °C was maintained during stages I, III and V and a temperature of 12 °C during stages II, IV and VI.The HRAP was initially filled with an aqueous solution containing a mixture of NaHCO3 and Na2CO3 before inoculation to adjust the initial IC concentration to the corresponding concentration set in the operational stage.The IC concentration of the digestate fed to the HRAP during each operational stage was also adjusted accordingly.Thus, IC concentrations of 1500 and 500 mg L−1 were obtained by addition of NaHCO3 to the raw centrate, while IC concentrations of 100 mg L−1 were achieved via an initial centrate acidification with HCl aqueous solution to a final pH of 5.5 in order to remove IC by air-aided CO2 stripping followed by NaHCO3 addition to adjust the IC concentration.The temperature of the HRAP cultivation broth was controlled with an external heat exchanger.A consortium of microalgae/cyanobacteria from outdoors HRAPs treating centrate and domestic wastewater at the Department of Chemical Engineering and Environmental Technology at Valladolid University and at the WWTP of Chiclana de la 
Frontera, respectively, was used as inoculum in each operational stage.During the illuminated periods, the HRAP was fed with the modified digestate as a nutrient source at a flow rate of 2 L d−1, while synthetic biogas was sparged into the absorption column under co-current flow operation at a flow rate of 4.9 L h−1 and a recycling liquid flow rate to biogas flow rate ratio of 0.5 .Tap water was continuously supplied in order to compensate water evaporation losses.A biomass productivity of 7.5 g dry matter m−2 d−1 was set in the six operational stages evaluated by controlling the biomass harvesting rate.The algal-bacterial biomass was harvested by sedimentation after coagulation-flocculation via addition of the polyacrylamide-based flocculant Chemifloc CV-300 .This operational strategy resulted in a process operation without effluent.Approximately two weeks after the beginning of each stage, the system had already achieved a steady state, which was confirmed by the negligible variation of most parameters during the rest of the stage.The ambient and cultivation broth temperatures, the flow rates of digestate, tap water and external liquid recycling, and the dissolved oxygen concentration in the cultivation broth were monitored three times per week during the illuminated and dark periods.The PAR was measured at the HRAP surface at the beginning of each stage.Gas samples of 100 μL from the raw and upgraded biogas were drawn three times per week in order to monitor the CO2, H2S, CH4, O2 and N2 concentrations.The inlet and outlet biogas flow rates at the absorption column were also measured to accurately determine CO2 and H2S removals.Liquid samples of 100 mL of digestate and cultivation broth were drawn three times per week and filtered through 0.20 μm nylon filters to monitor pH, dissolved IC, TN and SO42−.In addition, liquid samples of 20 mL were also drawn three times per week from the cultivation broth to monitor the TSS concentration.Unfortunately, no analysis of the microbial population structure was conducted in this study.The DO concentration and temperature were monitored with an OXI 330i oximeter, while a pH meter Eutech Cyberscan pH 510 was used for pH determination.The PAR at the HRAP surface was recorded with a LI-250A lightmeter.CO2, H2S, O2, N2 and CH4 gas concentrations were analysed using a Varian CP-3800 GC-TCD according to Posadas et al. .The dissolved IC and TN concentrations were determined using a Shimadzu TOC-VCSH analyser equipped with a TNM-1 chemiluminescence module.SO4−2 concentration was measured by HPLC-IC according to Posadas et al. 
, while the determination of TSS concentration was carried out according to standard methods . The ambient and cultivation broth temperatures, pH, cultivation broth TSS concentrations, the flow rates of digestate, tap water and external liquid recycling, the dissolved oxygen concentration, and the flow rate and composition of biogas were obtained under steady state operation. CO2-REs and H2S-REs were calculated based on duplicate measurements of the biogas and biomethane composition. The results presented here are provided as average values along with their corresponding standard deviations. A Student's t-test was performed in order to determine the statistically significant differences between the pH values at the bottom and the top of the absorption column. In addition, the t-test was applied to determine the effect of temperature at the different alkalinities tested. Finally, a one-way ANOVA was performed to determine the effect of alkalinity and temperature on the quality of the biomethane produced along the six operational stages. The average water loss by evaporation in the HRAP during process operation at 35 °C was 15.9 ± 1.2 L d−1 m−2, while this value decreased to 1.9 ± 0.4 L d−1 m−2 at 12 °C. The maximum evaporation rate recorded in this study was ~1.8 times higher than the maximum reported by Posadas et al. in a similar outdoors HRAP during summer in a temperate climate and ~2.6 times higher than the highest value estimated by Guieysse et al. in an arid location. The high water losses recorded here were caused by the high and constant temperatures of the cultivation broth throughout the entire day and the high turbulence induced by the oversized paddlewheel typical of lab-scale systems . On the other hand, the lower temperature prevented water losses, the minimum value recorded being in the range obtained by Posadas et al. in a similar outdoors HRAP during spring in a temperate climate. The average DO concentrations in the cultivation broth during the illuminated period were 10.1 ± 2.1, 14.4 ± 0.9, 13.5 ± 0.8, 16.6 ± 1.9, 8.8 ± 0.8 and 16.5 ± 1.7 mg O2 L−1 during stages I, II, III, IV, V and VI, respectively, while the DO concentrations during the dark period averaged 1.3 ± 0.5, 6.2 ± 1.2, 3.7 ± 0.1, 7.0 ± 0.9, 4.6 ± 0.6 and 10.0 ± 0.5 mg O2 L−1 in stages I to VI, respectively. The higher DO concentrations recorded at 12 °C were attributed to the increased oxygen solubility at low temperatures . No pernicious effect of these DO concentrations on microalgae activity was expected, since inhibition of photosynthesis typically occurs above 25 mg O2 L−1 and the values remained within the optimal range to support nutrient and CO2 bioassimilation . The average pHs in the HRAP during stages I, II, III, IV, V and VI were 11.0 ± 0.0, 10.5 ± 0.3, 10.5 ± 0.4, 9.7 ± 0.2, 7.2 ± 0.3 and 7.5 ± 0.2, respectively. These findings confirmed that the IC concentration in the cultivation broth had a stronger influence than temperature on the steady state pH of the cultivation broth, which is in accordance with previous results from Posadas et al. . Moreover, the highest pH values recorded here matched those observed by Toledo-Cervantes et al. during the simultaneous treatment of biogas and digestate in a similar experimental set-up.
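The statistical comparisons described above (a Student's t-test for the effect of temperature at each alkalinity and a one-way ANOVA across operational stages) can be sketched as follows. The numbers below are hypothetical replicate removal-efficiency values, used only to show the structure of the tests; they are not data from the study.

```python
from scipy import stats

# Hypothetical replicate CO2 removal efficiencies (%) at one alkalinity level
re_35C = [99.2, 99.3, 99.4]   # stage operated at 35 °C
re_12C = [97.0, 97.8, 98.5]   # stage operated at 12 °C

# Two-sample t-test: effect of temperature at a given alkalinity
t_stat, p_val = stats.ttest_ind(re_35C, re_12C)
print(f"t = {t_stat:.2f}, p = {p_val:.3f}")

# One-way ANOVA: effect of operational stage (alkalinity/temperature combination)
stage_I, stage_III, stage_V = [99.2, 99.4], [48.0, 50.1], [30.2, 31.5]
f_stat, p_anova = stats.f_oneway(stage_I, stage_III, stage_V)
print(f"F = {f_stat:.2f}, p = {p_anova:.4f}")
```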
Lebrero et al. reported comparable pHs to the lowest values obtained in this study when evaluating biogas upgrading in a transparent PVC column photobioreactor. A higher pH in the cultivation broth enhances the mass transfer rate of the acidic gases from biogas to the liquid phase, which ultimately results in higher upgrading performances as discussed below . TSS concentrations of 0.4–0.5 g L−1 were recorded during process operation at both high and medium alkalinity. Thus, the biomass concentration in the cultivation broth at the imposed biomass productivity during stages I to IV was representative of the operation of conventional outdoor raceways, where TSS concentration typically ranges from 0.3 to 0.5 g L−1 . However, the biomass concentration and productivity during stages V and VI decreased to 0.2 g TSS L−1 and 5–7 g dry matter m−2 d−1, respectively, due to the lower carbon load supplied in the feed and the lower CO2 mass transfer in the absorption column mediated by the low pH of the cultivation broth. Average CO2-REs of 99.3 ± 0.1, 97.8 ± 0.8, 48.3 ± 3.6, 50.6 ± 3.0, 30.8 ± 3.6 and 41.5 ± 2.0% were recorded during stages I, II, III, IV, V and VI, respectively. During stages I and II, the high CO2 mass transfer rates between the biogas and the liquid phase were promoted by the high pH and high buffer capacity of the cultivation broth. The initial pH of the system was roughly maintained in the cultivation broth of the HRAP and along the absorption column as a result of the high alkalinity of the digestate. During stages III and IV, a slight decrease in the pH of the cultivation broth from the initial value occurred as a result of biogas absorption in the column, due to both the acidic nature of CO2 and H2S and the lower buffer capacity of the media, thus resulting in lower CO2-REs. This effect was more pronounced in stages V and VI, where the low buffer capacity of the cultivation broth was unable to maintain a constant and high pH, which resulted in the lowest CO2-REs recorded in this experiment. The pH of the cultivation broth significantly differed between the bottom and the top of the absorption column at medium and low alkalinity. Higher L/G ratios would have avoided these large pH variations along the absorption column. Nevertheless, a lower biomethane quality would be expected at high L/G ratios as a result of the enhanced O2 and N2 stripping from the recycling cultivation broth to the upgraded biogas .
These data were in accordance with those of Lebrero et al., who reported an average CO2-RE of 23% at a pH of 7 and of 62% when the pH of the cultivation broth was increased up to 8.1. Overall, these results showed the relevance of the inorganic carbon concentration for maintaining a high pH in the scrubbing cultivation broth during biogas upgrading. On the other hand, a negligible effect of temperature on CO2-RE was found at high and medium alkalinity. However, the higher CO2 solubility at lower temperatures resulted in a higher CO2-RE at 12 °C compared to that achieved at 35 °C under low alkalinity. This suggests that the lower alkalinity of the cultivation broth could be partially compensated by the decrease in temperature, which exerted a major effect on CO2 mass transfer. C-CO2 desorption ratios, defined as the ratio between the mass flow rate of IC desorbed from the cultivation broth and the total mass flow rate of IC supplied to the system (considering a carbon content of 50% in the microalgal biomass ), of 51, 50, 2 and 4% were recorded in stages I, II, III and IV, respectively. However, a negligible C-CO2 desorption was estimated at low alkalinities as a result of the low CO2 mass transfer in the absorption column and the low IC input via centrate addition, which ultimately resulted in process operation under carbon-limiting conditions. The highest CO2 desorption rates, obtained during stages I and II, were associated with the high IC concentration in the cultivation broth, which supported a positive CO2 concentration gradient to the atmosphere even though IC was mainly in the form of CO32−. On the contrary, at low alkalinity IC was preferentially used by microalgae rather than removed by stripping, despite the low pH prevailing in the cultivation broth. These results agreed with those reported by Meier et al. , who identified stripping as the main mechanism responsible for carbon removal in a 50 L photobioreactor fed with a mineral medium and connected to a bubble column. Similarly, Alcántara et al. observed a 49% CO2 loss by desorption in a comparable 180 L HRAP interconnected to an absorption column during the simultaneous treatment of biogas and centrate. Average H2S-REs of 96.4 ± 2.9, 100 ± 0, 93.4 ± 2.6, 94.7 ± 1.9, 66.2 ± 6.9 and 80.3 ± 3.9% were recorded during stages I, II, III, IV, V and VI, respectively. The higher H2S-REs compared to CO2-REs were attributed to the higher dimensionless Henry's Law constant of H2S, defined as the ratio between the aqueous phase concentration of H2S or CO2 and its gas phase concentration . The highest H2S removals were achieved at the highest alkalinities, corresponding to the highest pH along the absorption column. Similarly, Franco-Morgado et al. obtained an H2S-RE of 99.5 ± 0.5% during the operation of a HRAP interconnected to an absorption column using a highly carbonated medium at a pH of 9.5. On the other hand, the low pH in the cultivation broth, together with the large decrease in pH in the absorption column under low alkalinity, caused the poor H2S removal recorded.
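For orientation, the removal efficiencies and the C-CO2 desorption ratio discussed above can be written out as simple mass-balance calculations. The sketch below uses hypothetical flow rates and concentrations purely to illustrate the definitions (RE from inlet/outlet gas measurements, desorption ratio as desorbed IC over total IC supplied); none of these numbers are measurements from the study.

```python
def removal_efficiency(q_in, c_in, q_out, c_out):
    """RE (%) = (mass flow in - mass flow out) / mass flow in * 100."""
    m_in, m_out = q_in * c_in, q_out * c_out
    return (m_in - m_out) / m_in * 100.0

# Hypothetical biogas measurements: flow in L h-1, CO2 content as volume fraction
co2_re = removal_efficiency(q_in=4.9, c_in=0.30, q_out=3.6, c_out=0.01)
print(f"CO2-RE = {co2_re:.1f} %")

def c_co2_desorption_ratio(ic_desorbed, ic_from_centrate, ic_from_biogas):
    """Ratio (%) of IC desorbed from the broth to total IC supplied to the system."""
    return ic_desorbed / (ic_from_centrate + ic_from_biogas) * 100.0

# Hypothetical daily IC mass flows in g C d-1
print(f"C-CO2 desorption ratio = {c_co2_desorption_ratio(10.0, 12.0, 8.0):.0f} %")
```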
These results were in accordance with those reported by Bahr et al. , who observed a significant deterioration in the H2S-RE from 100% to 80% when the pH in the absorption column decreased from 7 to 5.4 in a similar HRAP–absorption column system. No significant effect of temperature on the removal of H2S was observed at high and medium alkalinity. On the contrary, higher H2S-REs were recorded at 12 °C under low alkalinity, likely due to the increase in the aqueous solubility of H2S. H2S oxidation ratios of 100%, 87% and 94% were obtained at 35 °C during stages I, III and V, respectively. However, an incomplete oxidation of H2S occurred at 12 °C, resulting in ratios of 55%, 67% and 33% during stages II, IV and VI, respectively; the remaining sulfur was most likely present as S-intermediates or in biomass. Incomplete H2S oxidation was also reported by Toledo-Cervantes et al. , who estimated that only 40% of the absorbed H2S was oxidized to SO42− in a similar experimental set-up. Interestingly, the high DO concentrations in the cultivation broth at 12 °C did not result in higher H2S oxidation ratios, likely due to the lower microbial activity at low temperatures. An average CH4 content of 98.9 ± 0.2, 98.2 ± 1.0, 80.9 ± 0.8, 82.5 ± 1.2, 75.9 ± 0.7 and 79.2 ± 0.7% was obtained in the final biomethane during stages I, II, III, IV, V and VI, respectively. The high CH4 contents in stages I and II were attributed to the high absorption efficiency of CO2 and H2S and the limited desorption of N2 and O2. Furthermore, a negligible CH4 absorption in the absorption column was observed along the six operational stages, with average losses of 2.8 ± 3.4% regardless of the alkalinity or temperature. Posadas et al. obtained slightly lower CH4 losses in an outdoors HRAP, while CH4 losses of 4.9 ± 2.4% were reported by Toledo-Cervantes et al. in a similar indoors system. It should be noted that the composition of the biomethane produced in stages I and II complied with most European regulations for biogas injection into natural gas grids or use as autogas in terms of CH4 content and a CO2 content below 2.5–4% . In fact, the CO2 content in the upgraded biogas accounted for 0.3 ± 0.1, 0.9 ± 0.3, 18.4 ± 1.0, 16.9 ± 0.8, 23.0 ± 0.9 and 20.3 ± 0.6% during stages I, II, III, IV, V and VI, respectively. During stages I to IV, H2S concentrations below 0.03% were recorded in the upgraded biogas, which complied with EU regulations. Moreover, no significant differences in the O2 and N2 content of the upgraded biogas were observed during the six operational stages, which also matched the levels required by most European regulations. These results might be explained by the low L/G ratio applied during the study, which entailed a limited O2 and N2 stripping from the cultivation broth to the biomethane in the absorption column . No significant effect of the microalgae population structure on the removals of CO2 and H2S, or on the stripping of N2 or O2, was expected above a certain photosynthetic activity threshold. In our particular study, the control of the biomass productivity guaranteed a constant rate of photosynthetic activity throughout the process regardless of the dominant microalgae species. In addition, previous works have consistently reported no correlation between the dominant microalgae species and biogas upgrading performance . The alkalinity of the cultivation broth was here identified as a key environmental parameter influencing biomethane quality.
A negligible effect of temperature on the quality of the upgraded biogas was recorded at high and medium alkalinity, while temperature played a significant role in biomethane quality at low alkalinity. Biomethane composition complied with most European regulations for biogas injection into natural gas grids or use as a vehicle fuel when photosynthetic biogas upgrading was carried out at high alkalinity. In addition, this study also revealed that low alkalinity media might induce inorganic carbon limitation, which ultimately decreases the CO2 mass transfer from biogas as a result of a rapid acidification of the scrubbing cultivation broth in the absorption column.
Algal-bacterial photobioreactors have emerged as a cost-effective platform for biogas upgrading. The influence on biomethane quality of the inorganic carbon concentration (1500, 500 and 100 mg L−1) and temperature (12 and 35 °C) of the cultivation broth was evaluated in a 180 L high rate algal pond (HRAP) interconnected to a 2.5 L absorption column via settled broth recirculation. The highest CO2 and H2S removal efficiencies (REs) from biogas were recorded at the highest alkalinity (CO2-REs of 99.3 ± 0.1 and 97.8 ± 0.8% and H2S-REs of 96.4 ± 2.9 and 100 ± 0% at 12 and 35 °C, respectively), which resulted in CH4 concentrations of 98.9 ± 0.2 and 98.2 ± 1.0% at 12 and 35 °C, respectively, in the upgraded biogas. At the lowest alkalinity, the best upgrading performance was observed at 12 °C (CO2 and H2S-REs of 41.5 ± 2.0 and 80.3 ± 3.9%, respectively). The low recycling liquid to biogas ratio applied (0.5) resulted in a negligible O2 stripping regardless of the alkalinity and temperature, which entailed a biomethane O2 content ranging from 0 to 0.2 ± 0.3%.
4
The genome of the protozoan parasite Cystoisospora suis and a reverse vaccinology approach to identify vaccine candidates
Cystoisospora suis is a protozoan parasite of the phylum Apicomplexa.This phylum contains almost exclusively obligate endoparasites of animals, including species of great medical and veterinary relevance such as Plasmodium falciparum and Toxoplasma gondii.According to recent reevaluations of the coccidian phylogeny, the position of C. suis in the family Sarcocystidae constitutes an outgroup of the cluster containing the genera Neospora, Hammondia and Toxoplasma.The closest outgroup genus of C. suis in the family Sarcocystidae is Sarcocystis, while the closest outgroup family of Sarcocystidae is Eimeridae, which contains the genus Eimeria.Cystoisospora suis is responsible for neonatal porcine coccidiosis, a diarrheal disease of suckling piglets that causes significant economic losses in swine production worldwide.The disease is commonly controlled with the triazinone toltrazuril, but drug costs and pressure to reduce the use of drugs in livestock production are increasing.Furthermore, resistance to toltrazuril has been described among parasites of the genus Eimeria and must be considered likely for C. suis.To date, no other drugs have been shown to be effective if administered in a way that is compatible with field use.Alternative options for the control of this pathogen include vaccination.Immunological control measures commonly provide longer lasting protection than chemotherapeutic interventions and do not leave chemical residues in the host or the environment.Vaccine development for apicomplexan parasites has been hindered in part by their relatively complex life cycles and lack of in vitro and in vivo models for screening.Moreover, many apicomplexans have proved capable of evading immune killing by targeting immunoprivileged sites or through extensive antigenic diversity.Vaccination against some apicomplexan parasites such as the Eimeria spp. that infect chickens has been possible using formulations of live unmodified or attenuated parasites, but vaccine production requires passage and amplification in live animals with implications for cost, biosafety and animal welfare.For other protozoan parasites, success has been harder to achieve.However, progress towards antigen identification that could lead to development of recombinant or vectored vaccines has improved for several coccidian parasites in recent years.To date, no vaccine has been developed against C. suis although previous studies have shown that cellular and humoral immune responses are induced upon infection, PhD Thesis, Auburn University, USA; Worliczek et al., 2010a; Schwarz et al., 2013; Gabner et al., 2014) and that superinfection of sows ante partum with high doses of oocysts can confer partial maternal protection.Since the use of live, virulent vaccines in large amounts is not practical and attenuated lines are not currently available for C. suis, a systematic search for proteins with antigenic properties is required to find appropriate vaccine candidates for testing and antigenic characterisation.A key step towards the identification of appropriate antigens for many apicomplexans has been the availability of genomic data, urging the development of a C. 
suis genome sequence assembly.The approach of finding vaccine candidates using a genome sequence has been termed “reverse vaccinology”.This strategy has become a powerful way to identify proteins that can elicit an antigenic response with relevance to host/pathogen interaction.Reverse vaccinology is based on in silico screening of protein sequences to search for motifs and structural features responsible for inducing an immune response.Examples include transmembrane domains, signal peptides for excretion or surface membrane targeting and binding sites for Major Histocompatibility Complex proteins.While this method has been successfully applied in bacterial pathogens, only a few studies have been performed on eukaryotic pathogens.Examples include the apicomplexan species T. gondii and Neospora caninum, as well as helminths such as Schistosoma.Recently, the program Vacceed was developed, providing a high-throughput pipeline specifically designed to identify vaccine candidates in eukaryotic pathogens.This program was tested on the coccidian T. gondii, where it showed an accuracy of 97% in identifying proteins that corresponded to previously validated vaccine candidates.In this work, we applied the reverse vaccinology paradigm to identify potential vaccine candidates in C. suis.To accomplish this, we used Illumina Next Generation Sequencing technology to sequence the C. suis genome and annotate protein-coding genes by combining ab initio and orthology predictions with gene models derived from a C. suis merozoite RNA-Seq library.Additionally, the annotation was manually curated at single gene resolution, greatly enhancing the quality of the gene models.Vacceed was then applied to perform a genome-wide screen for potential immunogenic proteins and identified 1,168 proteins with a high immunogenicity score.Finally, we validated the immunogenic potential of a C. suis-specific 42 kDa transmembrane protein by performing an independent immunoblot analysis using positive polyclonal sera from infected piglets.These results show how reverse vaccinology, combined with comparative genomics and transcriptomics, can be applied to a eukaryotic pathogen to guide the identification of novel vaccine candidates as a starting point to develop a vaccine against C. suis.Moreover, the C. suis genome represents the first genomic sequence available for a member of the Cystoisospora group and it might serve as a reference for future studies involving Cystoisospora spp.For preparation of genomic DNA for sequencing, C. suis oocysts were isolated from experimentally infected piglets, left to sporulate in potassium bichromate and purified using a caesium chloride gradient.DNA was extracted from 2.5 × 106 washed and pelleted sporulated oocysts using a Peqlab Microspin tissue DNA kit following the manufacturer’s instructions.RNAse A digestion was performed on the DNA before final purification.Cystoisospora suis merozoites were maintained in IPEC-J2 cells as described earlier.Free merozoites were harvested by collecting supernatant 6 days p.i. 
and purified on a Percoll® density gradient.Purified merozoites were then filtered through Partec CellTrics® disposable filters, washed twice with PBS and pelleted by centrifugation at 1000g for 10 min.Total RNA was extracted from purified pelleted merozoites using a QIAamp RNA blood mini kit according to the manufacturer’s instructions and was quantified using a spectrophotometer.A total of 80 million paired-end 100 bp reads were generated from an Illumina HiSeq 2000 platform.A genome sequence was assembled into contigs with CLC Genomics Workbench version 7.5 using default parameters.In this study, we did not attempt to assemble chromosomes, as the main interest was in identifying proteins to screen for immunogenic features.To remove possible contaminants, we aligned the contigs against the non-redundant RefSeq protein database using BLASTN with default parameters and removed all contigs with a hit to non-apicomplexan organisms.Alignment of C. suis contigs to N. caninum and T. gondii genomes was performed with PROmer version 3.0.7.Repetitive content was computed using RepeatMasker version 4.0.5.Genes were annotated using Maker version 2.31.8, which is a pipeline that combines different annotation tracks into a final set of gene models.Each annotation track was produced using the following programs:Augustus is an ab initio gene predictor that can be trained with an accurate set of gene models, if available.To construct the training set we started from the Cufflinks gene track, as it was based on the most accurate evidence for transcription, namely RNA-Seq data.Transcript sequences from the Cufflinks track were given to ORFPredictor version 2.3 to predict the location of each Coding Sequence.Transcripts were then filtered according to the following criteria: transcripts with incomplete CDS were removed; in the case of multiple isoforms, the isoform with the longest CDS was retained; genes without introns were removed; genes with at least one exon made entirely of untranslated regions were removed; genes separated by at least 500 bp from the previous or next gene were retained; genes with containing ambiguous nucleotides in the upstream or downstream 500 bp flanking regions were removed.The resulting gene set was used to train Augustus version 3.01 to generate a new species model that was provided as a species parameter for Augustus to predict gene locations on the contigs and generate the final track.Snap is an ab initio gene predictor.We used the same training set constructed for Augustus to train Snap version 2006-07-28 and generate the parameters file that was provided to Snap together with the contigs to produce the gene track.Exonerate is a homology based gene predictor which generates gene models based on the assumption that gene sequence and structure is conserved among closely related species.To create this gene track, protein sequences from Eimeria tenella, N. caninum, Hammondia hammondi and T. gondii were downloaded from ToxoDB release 24 and aligned to the contigs with Exonerate version 2.2.0.Cufflinks is an evidence-based gene predictor which constructs gene models from RNA-Seq data.Total RNA was extracted from merozoites harvested from an in vitro culture 6 days p.i. and sent for sequencing to GATC Biotech AG using an Illumina sequencer HiSeq 2500 to generate paired-end reads of 100 bp.A combined reference including the pig and C. 
suis genome was created, as the raw reads were likely to contain residual pig RNAs from the cell culture from which the merozoites were extracted. The Sus scrofa genome version 10.2 was downloaded from Ensembl and concatenated to the C. suis contigs. Reads were mapped to this combined reference with TopHat version 2.1.0. Reads mapped to the pig genome were filtered out and the resulting BAM file was provided to Cufflinks version 2.2.1 to reconstruct gene models. Cufflinks additionally computes the expression level of each gene using the standard FPKM measure, which normalises the number of reads mapped to a gene by the gene length and the total number of reads in the dataset. Transcripts with low expression were removed and the output was converted to GFF3 format using the cufflinks2gff3 script from the Maker pipeline. Additionally, junctions derived from TopHat were converted to GFF3 with the tophat2gff3 script from Maker and added to the Cufflinks GFF3 to get the final track. After generating the four gene tracks, Maker was run to generate the uncurated Maker track with the following parameters provided in the configuration file: genome = FASTA with contigs; augustus_species = species name from the Augustus training; snaphmm = HMM parameter file from the Snap training; protein = FASTA with proteins from E. tenella, N. caninum, H. hammondi and T. gondii. Furthermore, additional parameters were specified. The BAM file from TopHat, the four gene tracks and the uncurated Maker gene track were loaded into the Integrative Genomics Viewer. Each gene was independently curated in the following way: in the case of incongruences among tracks, priority was given to the Cufflinks evidence, followed by Exonerate, Augustus and Snap. We decided to prioritise Augustus over Snap, as Snap produced a high number of very short terminal exons, which we cautiously regarded as unreliable. Gene models corresponding to lowly expressed genes were resolved only in some cases, as their exon–intron structure was often very fragmented. The genome and annotation of C. suis are available in the National Center for Biotechnology Information (NCBI) database under the accession number PRJNA341953. To assess the quality of the annotation, cuffcompare was run to compute the fraction of final gene models confirmed by each type of evidence.
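The FPKM normalisation mentioned above can be made concrete with a short sketch. This assumes fragment counts per gene and gene lengths in base pairs are already available; the function simply applies the standard FPKM formula (fragments per kilobase of transcript per million mapped fragments) and is not code from the annotation pipeline itself.

```python
def fpkm(fragment_count: int, gene_length_bp: int, total_mapped_fragments: int) -> float:
    """FPKM = fragments / (gene length in kb * total mapped fragments in millions)."""
    length_kb = gene_length_bp / 1_000.0
    millions_mapped = total_mapped_fragments / 1_000_000.0
    return fragment_count / (length_kb * millions_mapped)

# Hypothetical example: 200 fragments on a 2 kb gene in a library of 40 million mapped fragments
print(round(fpkm(200, 2_000, 40_000_000), 2))  # 2.5 FPKM
```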
To evaluate the completeness of the annotation, the core eukaryotic proteins from Parra et al. were downloaded from http://korflab.ucdavis.edu/datasets/cegma/core/core.fa and aligned to the contigs with BLASTP, retaining only the best hit. Eukaryotic core proteins that did not align to contigs were further analysed to check for their presence in the RNA-Seq dataset using the following steps: RNA-Seq reads from merozoites were de novo assembled into transcripts using Oases with default parameters; unaligned proteins were then aligned to the assembled transcripts with TBLASTN. Functional annotation was performed with Blast2GO version 3.1.3 on the protein sequences using the following steps: blastp-fast was run against the local version of the nr database downloaded on 01 December, 2015 and used to generate a first version of the Gene Ontology (GO) functional annotation; InterProScan was run and its results were merged with the initial GO annotation in order to extend it; ANNEX was run to further augment the GO annotation. The final functional annotation also allowed the identification of additional transposable elements that were not found by RepeatMasker. Initially, 403 protein sequences containing unknown amino acids were filtered out. The software Vacceed was used to identify potential immunogenic proteins. Vacceed implements a machine learning approach that combines independent sources of evidence for immunogenic features computed by different tools: WoLf PSORT, SignalP, TargetP, Phobius, TMHMM, MHC-I Binding and MHC-II Binding. The program finally assigns a score between 0 and 1 to each protein, ranking proteins from low to high immunogenicity. We excluded MHC-II Binding in our analyses, as data about MHC-II allele binding affinity are unavailable for the pig. Moreover, prior to running Vacceed, the tool MHC-I Binding was trained using known immunogenic proteins as input. Since no known immunogenic proteins were available for C. suis, we used T. gondii proteins from Goodswen et al. to train MHC-I Binding for affinity against Swine Leukocyte Antigen alleles, as T. gondii was the closest relative for which a dataset of immunogenic and non-immunogenic proteins was available. Moreover, as the computational burden of the MHC-I Binding predictor was very high, we divided the Vacceed analysis into two steps: (i) we ran Vacceed on all the protein sequences using each tool except MHC-I Binding and ranked the proteins according to score; (ii) we selected all proteins with a score > 0.75 and reran Vacceed on this subset using all the tools, including MHC-I Binding. Finally, from the score distribution obtained after step (ii), we selected as candidates only proteins with a score ⩾ 0.998, corresponding to the top 25% of the score distribution. To assign orthologs of C. suis genes in the other coccidian species, protein sequences from E. tenella, S. neurona, N. caninum, H. hammondi and T. gondii were downloaded from ToxoDB release 24 and clustered with the C.
suis proteins using the Orthology MAtrix software with parameters LengthTol = 0.30.For genes for which OMA did not detect an ortholog in any species, we manually screened for its presence in the ToxoDB database by looking at the section “Orthologs and Paralogs within ToxoDB” within the gene entry.For 2D gel electrophoresis, 6 × 106 merozoites harvested from cell culture were purified and concentrated as described above and directly dissolved in DIGE buffer DTT, 20 mM Tris) and centrifuged again at 20,000g for 10 min at 4 °C.The protein concentration of each lysate was determined by Bradford assay.For separation in the first dimension, an aliquot of 40 μg of protein was diluted in 300 μl of rehydration solution CHAPS, 12.7 mM DTT, 2% immobilised pH gradient buffer, 0.002% bromophenol blue) and used to rehydrate 13 cm IPG strips with a non-linear gradient pH 3–10 for 18 h at room temperature.Isoelectric focusing was carried out using a Multiphor II electrophoresis chamber.After IEF, the IPG strips were equilibrated with 10 mg/ml of DTT in equilibration buffer SDS, 0.002% bromophenol blue, 1.5 M Tris–HCl) for 20 min and further incubated in the same buffer for another 20 min, replacing DTT with 25 mg/ml of iodoacetamide.The IPG strips were then washed with deionised water.In the second dimension, SDS–PAGE was performed using vertical slab gels under reducing conditions at 15 mA for 15 min, followed by 25 mA in a Protean II electrophoresis chamber.Each gel was stained with silver and scanned using the program ImageMaster™ 2D platinum v.7.0.Proteins separated by 2D gel electrophoresis were transferred onto a Trans-Blot® nitrocellulose membrane for 150 min at 35 V, 150 mA and 6 W on a Nova Blot semi-dry transfer system.The membranes were dried overnight in the dark at room temperature.The next day, blots were blocked for 1 h using 2% BSA.After three washes with TTBS buffer, the blots were incubated with porcine anti-C.suis serum diluted in TTBS buffer under gentle agitation at room temperature for 30 min.After rinsing in TTBS for 15 min, blots were exposed to biotinylated goat anti-pig IgG as secondary antibody for 30 min at room temperature, incubated with ABC solution and finally detected by 3,3′-5,5′-tetramethylbenzidine.Pre-colostral sera from non-infected piglets served as negative controls.The spot of interest from 2D gel was excised, washed, destained, reduced with DTT and alkylated with iodoacetamide.In-gel digestion was performed with trypsin according to Shevchenko et al. 
with a final trypsin concentration of 20 ng/μl in 50 mM aqueous ammonium bicarbonate and 5 mM calcium chloride. Dried peptides were reconstituted in 10 μl of 0.1% trifluoroacetic acid (TFA). Nano-HPLC separation was performed on an Ultimate 3000 RSLC system. Sample pre-concentration and desalting were accomplished with a 5 mm Acclaim PepMap μ-precolumn at a flow rate of 5 μl/min using a loading solution in 0.05% aqueous TFA. The separation was performed on a 25 cm Acclaim PepMap C18 column at a flow rate of 300 nl/min. The gradient started with 4% B and increased to 35% B over 120 min, followed by a washing step with 90% B for 5 min. Mobile phase A consisted of mQH2O with 0.1% formic acid. The injection volume was 1 μl in partial loop injection mode. For mass spectrometric analysis, the LC was directly coupled to a high-resolution quadrupole time-of-flight mass spectrometer. For information-dependent data acquisition, MS1 spectra were collected in the range 400–1500 m/z. The 25 most intense precursors with charge state 2–4 which exceeded 100 counts per second were selected for fragmentation, and MS2 spectra were collected in the range 100–1800 m/z for 110 ms. The precursor ions were dynamically excluded from reselection for 12 s. The nano-HPLC system was regulated by Chromeleon 8.8 and the MS by Analyst Software 1.7. Processed spectra were searched with the software Protein Pilot against T. gondii proteins extracted from the UniProt_TREMBL database as well as against our C. suis protein database, using the following search parameters: global modification: cysteine alkylation with iodoacetamide; search effort: rapid; FDR analysis: yes. Proteins with more than two matching peptides at > 95% confidence were selected. We generated a total of 14,776 contigs from 80 M paired-end Illumina reads. The assembly had an N50 of 29,979 bp, with minimum and maximum contig lengths of 89 bp and 285,055 bp. A graphical distribution of contig lengths is shown in Fig. 1A. To exclude bacterial and other contaminations, we aligned the contigs to the nr database and retained 14,630 contigs without matches to organisms outside the Apicomplexa, covering a length of 83.6 Mb. We used this set of contigs for the remainder of the analyses. In Table 1 we report a comparison of genome size and GC content between C. suis and other coccidians. To investigate evolutionary divergence from other coccidians we aligned the C. suis contigs to the genomes of the closest relatives, T. gondii and N. caninum, in the coccidian phylogeny. Only 27.8% and 28.1% of the C. suis bases aligned, respectively, to T. gondii and N. caninum. This translated into 722 C. suis contigs successfully aligned to T. gondii, covering a total of 3,143 C. suis genes, and 733 contigs aligned to N. caninum, including 3,195 C. suis genes. The GC content of the C. suis genome was 50.0%, similar to that of other coccidian species. Among repetitive sequences, 5.07% were simple repeats, 1.84% low complexity regions and 0.02% small RNAs. We identified 93 genes associated with transposons, distributed as follows: 58 genes from the ty3-gypsy subclass, six tf2 retrotransposons, one FOG transposon and 28 genes encoding retrotransposon accessory proteins, such as gag/pol. Most apicomplexan parasites possess a special organelle called the apicoplast, which is a plastid acquired through secondary horizontal transfer from an algal ancestor and has functions related to secondary metabolism. We attempted to identify contigs corresponding to the apicoplast genome. By aligning the T.
gondii apicoplast sequence to the C. suis contigs using BLASTN, we found that 89.4% of the T. gondii sequence mapped to three C. suis contigs.Most of the apicoplast sequence was covered by contig 294, with additional short fragments located on contigs 1252 and 6453.Gene annotation confirmed the apicoplast origin of these contigs, with a total of 25 protein-coding genes all well conserved with T. gondii.The genes encode for 14 ribosomal proteins, the elongation factor tu, orf c, d, e, f, the caseinolytic protease C, four RNA polymerases and the cysteine desulfurase activator complex subunit.Four of the ribosomal proteins contain a premature stop codon, which might suggest that they are pseudogenes.Finally, five T. gondii genes did not have an ortholog in the C. suis apicoplast.To further characterise apicoplast genes we looked at RNA-Seq expression data from merozoites, however we found none of the genes to be expressed in either C. suis or T. gondii at this stage.The complete list of apicoplast genes is available in Supplementary Table S3.To annotate protein-coding genes we applied the MAKER pipeline, as outlined in Section 2.3, followed by a manual curation of each individual gene.To quantify the effect of curation we computed the total number of genes, the percentage of exonic base pairs overlapping with exons generated from the RNA-Seq dataset) and the percentage of genes with UTRs before and after curation.The initial uncurated annotation contained 10,065 genes with a nucleotide level sensitivity of 68.8%, 5,553 genes with 5′ UTRs and 4,911 genes with 3′ UTRs.After curation, we obtained 11,572 genes, a nucleotide level sensitivity of 85.1%, 9,806 genes with 5′ UTRs and 8,485 genes with 3′ UTRs.These results showed that our curation greatly enhanced the quality of the annotation by increasing the concordance with transcriptional evidence and the number of genes with UTRs.The total gene number appears to be higher in C. suis compared with other coccidians, although lowly expressed genes might represent transcriptional noise and artificially increase the total gene count.To test this hypothesis, we removed genes with FPKM < 1, as this threshold is commonly used to distinguish transcriptional noise from real transcription.We detected 1,207 genes with FPKM < 1, implying that lowly expressed genes can only partially account for the higher number of genes in C. suis compared with other species.To verify the completeness of our annotation, we checked for the presence of eukaryotic core genes that should be conserved in all eukaryotes.From 458 eukaryotic core genes, 396 were present in our gene catalogue.We looked for the presence of the missing genes in filtered contigs and in transcripts reconstructed de novo from the RNA-Seq dataset.Of the remaining 62 eukaryotic core genes, none were found in the filtered contigs and 16 were found in the de novo assembled transcripts.A total of 46 eukaryotic core genes was thus not found in our C. 
suis data. To shed light on whether these might constitute missing annotations or genuine losses, we looked for their presence in the proteomes of the coccidian species listed in Table 1. We found 29 genes present in at least one coccidian species. The remaining 17 genes were not found in any species, and thus were likely true gene losses in this clade. If this set of genes is excluded from the initial list of eukaryotic core genes, a final completeness of (396 + 16)/(458 − 17) × 100 ≈ 93% is obtained. To further assess the quality of the annotation, we assigned to each gene a binary vector summarizing the kind of evidence that was used to annotate it. Generally, the most reliable source of annotation is transcription evidence, followed by orthology inference and ab initio predictions. Of a total of 11,572 genes, 10,452 were supported by transcription evidence. Among the remaining 1,120 genes, 435 were supported by orthology evidence and 338 only by ab initio predictors. Finally, 347 genes were only partially supported by any kind of evidence and would require further curation. Next, we predicted CDS for 11,545 genes using ORFPredictor. We classified genes according to the completeness of the CDS as follows: 7,801 genes had a complete CDS, i.e. with start and stop codon; 1,964 had only a start codon, 1,161 only a stop codon and 619 were CDS fragments, i.e. without start and stop codons. The complete annotation of C. suis, including exons, introns, CDS and UTRs, is available at the NCBI database under accession number PRJNA341953. Finally, the phylogenetic distribution of C. suis genes was assessed using OrthoMCL, which assigned 6,387 of the protein-coding genes to an ortholog group. We applied the tool Vacceed to screen the C. suis proteome for vaccine candidates. Due to the high computation time of the MHC-I Binding tool, the Vacceed analysis was divided into two steps. Fig. 3A shows the score distribution after the first step of Vacceed. This resulted in a clearly bimodal distribution, from which 2,905 proteins with a score > 0.75 were selected. During the second step of Vacceed we refined this set including the MHC-I Binding tool and obtained 1,168 final candidate proteins. Looking at the classification of candidates by functional class, we observed that most of the proteins had no annotated function or contained transmembrane domains but had unknown function. Among proteins with known function, the most abundant were channels and transporters, followed by proteins involved in metabolism and biosynthesis. Notable was the presence of apicomplexan-specific secretory organelle proteins such as rhoptry kinases, microneme and dense granule proteins, which are involved in parasite motility, attachment, invasion and re-modelling of the intracellular parasite environment. Next, we wanted to establish the contribution of different immunogenic features in defining a protein as immunogenic, as Vacceed utilises different sources of evidence from various tools: WoLf PSORT to predict protein subcellular localisation; SignalP, TargetP and Phobius for detecting signal peptides and transmembrane domains; TMHMM; and MHC-I Binding. For this analysis, we excluded MHC-I Binding due to its high computational burden. For each tool, Vacceed was run on the C.
By looking at the classification of candidates by functional class, we observed that most of the proteins either had no annotated function or contained transmembrane domains but were otherwise of unknown function. Among proteins with known function, the most abundant were channels and transporters, followed by proteins involved in metabolism and biosynthesis. Notable was the presence of apicomplexan-specific secretory organelle proteins such as rhoptry kinases, microneme and dense granule proteins, which are involved in parasite motility, attachment, invasion and re-modelling of the intracellular parasite environment. Next, we wanted to establish the contribution of different immunogenic features in defining a protein as immunogenic, as Vacceed utilises different sources of evidence from various tools: WoLF PSORT to predict protein subcellular localisation; SignalP and TargetP for detecting secretion signals; Phobius and TMHMM for transmembrane domains; and MHC-I Binding for epitope prediction. For this analysis, we excluded MHC-I Binding due to its high computational burden. For each tool, Vacceed was run on the C. suis proteome using all other tools except for the tool in question, and the score distributions were compared with the one computed using all tools. All correlations were very high, showing that removing any one of the tools from Vacceed did not significantly affect the score distribution. However, it appeared that TMHMM was the program that contributed most to the immunogenic score since, when it was omitted, the correlation with the score computed with all tools was the lowest. Vaccine candidates in coccidians can point to homologous proteins that may also induce immune protection against other coccidians. For this reason, we collected all tested vaccine candidates from an extensive screen of the published literature and from the VIOLIN database for E. tenella, S. neurona, N. caninum and T. gondii. No previously tested candidates were found for H. hammondi. We found 13 candidates for E. tenella, one for S. neurona, 19 for N. caninum and 43 for T. gondii, giving a union set of 58 total candidates. These mostly included apicomplexan-specific secretory organelle proteins and surface antigens, which are known to be directly involved in the invasion process, but also a heterogeneous set of other proteins with no overrepresented function. Out of the 58 candidates, 34 were also present in the C. suis proteome and seven overlapped with our set of C. suis vaccine candidates, namely proteins orthologous to microneme proteins TgMIC8 and TgMIC13, the dense granule antigen TgGRA1, the surface antigen TgSAG1, the cyclophilin CyP, the immune mapped protein IMP1 and the protein disulphide isomerase PDI. This enrichment of previously described vaccine candidates in our Vacceed set was highly significant.
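An overlap of this kind (seven of the 34 previously described candidates with a C. suis ortholog falling inside the 1,168-protein Vacceed set) is commonly assessed with a one-sided hypergeometric test. The snippet below is only an illustration of such a calculation; the choice of background universe (here the full predicted proteome) is an assumption, and the exact test used in the study is not reproduced here.

```python
# Sketch of a one-sided enrichment test for the overlap described above.
# The background universe (the full predicted proteome) is an assumption of this example.
from scipy.stats import hypergeom

proteome_size = 11572   # background: all annotated genes/proteins
vacceed_set = 1168      # proteins flagged as vaccine candidates
known_candidates = 34   # previously described candidates with a C. suis ortholog
overlap = 7             # known candidates that also fall in the Vacceed set

# P(X >= overlap) when drawing `known_candidates` proteins without replacement
p_value = hypergeom.sf(overlap - 1, proteome_size, vacceed_set, known_candidates)
print(f"one-sided hypergeometric p = {p_value:.3g}")
```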
Notably, most previously described rhoptry and dense granule candidate vaccine proteins were absent in C. suis, as they originated before the split of T. gondii, H. hammondi and N. caninum. On the other hand, many microneme proteins were present in C. suis but were not classified as vaccine candidates in our study: three of those had a Vacceed score that was only slightly lower than our threshold, while the remaining three had a very low score. Proteins that are phylogenetically restricted to C. suis or conserved only among closely related species might be more attractive candidates for experimental testing, since proteins with homologs and conserved epitopes in the host might induce an unwanted autoimmune response. To study the evolutionary conservation of vaccine candidates, we applied OrthoMCL and classified proteins according to taxonomic levels. Approximately 28% of the proteins were very conserved in eukaryotes or shared by all organisms, another 29% were restricted to coccidians, apicomplexans or alveolates, while the largest fraction was not assigned to any ortholog group. If the most conserved proteins were excluded from in vitro testing, there would still be a large set of candidates to be investigated. Looking at the GO functional enrichment of apicomplexan-specific and coccidian-specific proteins, we observed for apicomplexan-specific proteins a significant overrepresentation of "calcium ion binding" and "protein kinase activity". Similarly, for coccidian-specific proteins the most prominent functional terms were "calcium ion binding" and "cyclic-nucleotide phosphodiesterase activity". It is usually assumed that highly expressed genes are more likely to induce a sustained immunogenic response compared with lowly expressed ones. To gain insight into the expression of genes encoding immunogenic proteins we performed an RNA-Seq experiment using polyadenylated RNA purified from C. suis merozoites, as those constitute the primary intracellular reproductive stage of C. suis and interact directly with the host during invasion. Genes with functions related to invasion were the most highly expressed and included surface antigens, apicomplexan-specific secretory organelles, cell adhesion and motility, and parasitophorous vacuole-related genes. When looking at the total ranked set of candidates according to expression level, we observed that more than 50% of the highly expressed candidates had unknown functions. Other proteins, such as the transporter abcg89 and cytochromes b and c, were highly expressed and well characterised, but phylogenetically highly conserved and thus less suitable for further experimentation. Finally, 13 uncharacterised genes with very high expression and C. suis specificity might constitute attractive candidates for in vitro testing. To produce a more stringent list of candidates, we selected from the 1,168 vaccine candidates only those proteins that were highly expressed in merozoites and without orthologs outside the Apicomplexa. This resulted in a set of 220 proteins. These include 152 proteins with unknown function, of which 88 contain transmembrane domains. Additionally, there were 17 surface antigens related to the TgSAG/SRS gene families, 12 apicomplexan-specific secretory organelle proteins including orthologues of TgAMA1, TgMIC6, TgMIC13, TgROP6, TgROP12, TgROP27, TgROP32, nine proteins involved in metabolism and biosynthesis, seven channel and transporter proteins and three proteins related to cell adhesion. For the complete list of candidates see Supplementary Table S5.
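The refinement described above is essentially an intersection of three tables: the Vacceed candidate list, merozoite expression values and the OrthoMCL phyletic assignment. A minimal sketch of that intersection follows; the file names, column names, taxon labels and the FPKM cut-off taken to mean "highly expressed" are assumptions for this illustration, as the exact values are not stated here.

```python
# Sketch of the candidate-refinement step described above: keep Vacceed candidates
# that are highly expressed in merozoites and lack orthologs outside the Apicomplexa.
# Column names, taxon labels and the FPKM cut-off are assumptions for illustration.
import pandas as pd

candidates = pd.read_csv("vacceed_candidates.tsv", sep="\t")  # protein_id, vacceed_score
expression = pd.read_csv("merozoite_fpkm.tsv", sep="\t")      # protein_id, fpkm
phyletic = pd.read_csv("orthomcl_levels.tsv", sep="\t")       # protein_id, deepest_taxon

merged = (candidates
          .merge(expression, on="protein_id")
          .merge(phyletic, on="protein_id"))

apicomplexa_only = {"Cystoisospora suis", "Coccidia", "Apicomplexa"}  # assumed labels
stringent = merged[(merged["fpkm"] >= 100)                    # assumed "highly expressed" cut-off
                   & (merged["deepest_taxon"].isin(apicomplexa_only))]

stringent.to_csv("stringent_candidates.tsv", sep="\t", index=False)
print(f"{len(stringent)} stringent candidates")               # 220 in the study
```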
To test whether vaccine candidate proteins interact with the host immune system and induce an immune response during C. suis infection, we performed a 2D immunoblot experiment using positive polyclonal sera from experimentally infected piglets. We resolved crude lysates from cultured C. suis merozoites using a broad range of 2D gels. These revealed 18 spots that were easily visualised by silver staining, likely corresponding to the most highly expressed proteins in the merozoite proteome. To detect proteins that are recognised as antigenic by serum antibodies of infected hosts, we performed an immunoblot of the 2D gel probed with highly positive sera from experimentally infected piglets. This revealed one immuno-reactive spot, whereas no reactive spots were detected in the immunoblot probed with precolostral sera. Protein sequencing by mass spectrometry showed that the spot corresponded to a set of eight proteins. Remarkably, one of these proteins overlapped with our set of vaccine candidates. This protein had no annotated function, but showed a very high expression level and was C. suis-specific according to the OrthoMCL orthology assignment, making it a very attractive vaccine candidate. The protein was predicted to be short, with a molecular weight of 42 kDa, and encoded by a single-exon gene located on contig 2816. To further characterise this protein we analysed its sequence using Phobius, which identified two transmembrane domains interspersed by a short cytoplasmic region and followed by a longer extracellular tail. Screening of this protein with the B-cell epitope predictor from the IEDB Analysis Resource tools revealed the presence of several putative epitopes along the sequence. No additional information about the function of CSUI_005805 was available, as this protein lacks orthologs in other organisms. By virtue of all its features, such as a high Vacceed score, high expression, species specificity and in vitro immunoreactivity, we conclude that the CSUI_005805 protein constitutes an attractive vaccine candidate for further experimental testing. In this study, we sequenced, assembled and annotated the genome of C. suis, an apicomplexan species of worldwide veterinary relevance. We used this new resource to predict a panel of putative vaccine candidate proteins, which hopefully will serve to develop a novel subunit vaccine. We performed this analysis by combining in silico predictors of protein immunogenicity with transcription and comparative genomics data. Comparison with publicly accessible genome sequences for other coccidian species identified a relatively large assembly for C. suis, with only S. neurona found to be bigger. To understand whether this discrepancy was due to expansion of intergenic regions within the C. suis genome, we compared the length of genic and intergenic regions in C. suis and T. gondii and found no significant difference in proportions between the two species. Similarly, the proportions of exon to intron lengths were also not significantly different between the two species. Finally, repetitive regions were also not responsible for the difference in genome size. However, we caution that different genome assembly technologies and annotation strategies for different species might bias the comparison of genomic features among assemblies. Features such as average GC content were also consistent with those reported for other coccidians. Alignment of the C. suis contigs to assemblies representing the closest coccidian relatives showed that less than 30% of the C. suis assembly could be aligned to T. gondii or N. caninum. It has previously been shown that 90% of the N. caninum contig base pairs could be aligned to the T. gondii assembly, indicating a greater evolutionary divergence between C. suis and T. gondii.
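Estimating how much of an assembly is covered by cross-species alignment, as in the comparison above, amounts to merging alignment intervals per contig and summing their lengths. The sketch below shows this on a plain coordinate table; the file name, the column layout (contig, start, end; 1-based and inclusive) and the use of the reported ∼84 Mb assembly size are assumptions of this example, and the aligner actually used in the study is not specified here.

```python
# Sketch: fraction of assembly base pairs covered by whole-genome alignment blocks.
# The coordinate-table format (contig, start, end; 1-based, inclusive) is assumed.
import csv
from collections import defaultdict

def covered_fraction(coords_tsv: str, assembly_size: int) -> float:
    blocks = defaultdict(list)
    with open(coords_tsv) as handle:
        for contig, start, end in csv.reader(handle, delimiter="\t"):
            lo, hi = sorted((int(start), int(end)))
            blocks[contig].append((lo, hi))

    covered = 0
    for intervals in blocks.values():
        intervals.sort()
        cur_lo, cur_hi = intervals[0]
        for lo, hi in intervals[1:]:
            if lo > cur_hi + 1:                  # disjoint block: count the previous one
                covered += cur_hi - cur_lo + 1
                cur_lo, cur_hi = lo, hi
            else:                                # overlapping or adjacent: extend
                cur_hi = max(cur_hi, hi)
        covered += cur_hi - cur_lo + 1
    return covered / assembly_size

if __name__ == "__main__":
    frac = covered_fraction("csuis_vs_tgondii_blocks.tsv", assembly_size=84_000_000)
    print(f"{frac:.1%} of the assembly aligned")
```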
The next step will be to estimate the evolutionary divergence in millions of years between C. suis and its sister species. Our annotation of the predicted C. suis transcriptome suggests more than 11,000 genes, considerably higher than most other coccidians, but consistent with the larger genome size. We evaluated the quality of our annotation using a range of metrics and found 86% of the core eukaryotic genes to be present. Notably, if we exclude core eukaryotic genes that are absent in the whole coccidian clade, and thus also expected to be absent in C. suis, this proportion rises to 93%. Expression data validated 90% of the gene models, pointing to a high degree of completeness and reliability of the gene structures. Additionally, we performed a gene-by-gene manual curation, which greatly enhanced the quality of the annotation by increasing the concordance with transcriptional evidence by almost 20% and the number of genes with UTRs by ∼30%. The larger number of genes predicted compared with most other coccidians might be a consequence of more orphan genes, supported by the fact that more than 40% of the C. suis genes could not be assigned to any orthologous group. Alternatively, fragmented gene models might have resulted in an overestimation of gene number. However, gene numbers in other coccidians might also have been underestimated, since most recent RNA-Seq based annotations have not yet been incorporated into ToxoDB. In our annotation, most of the genes identified had coding potential. While non-coding RNAs were not explicitly annotated, it is likely that polyadenylated non-coding RNAs, such as long non-coding RNAs, constitute a minor fraction of the gene catalogue of C. suis, as also previously shown for T. gondii. Another feature specific to the apicomplexan clade is the presence of the apicoplast organelle in most of its members, except the gregarine-like Cryptosporidium. Two contigs were found to contain most of the C. suis apicoplast genome, confirming the presence of this organelle in this species. Comparison with T. gondii revealed a high level of conservation for the C. suis apicoplast genes, although some genes contained premature stop codons, implying a recent pseudogenisation event. This phenomenon has also been described in T. gondii, where it was suggested that internal stop codons might be interpreted as tryptophan coding by the translation machinery. The absence of transcripts derived from these genes within the RNA-Seq data precludes confirmation for C. suis, since transcription may simply have been low in the single merozoite lifecycle stage sampled. Consistent with these results, T. gondii orthologs of C. suis apicoplast genes also had very low expression levels in 3 days p.i. merozoites according to the expression data from the ToxoDB database. Screening the predicted C. suis proteome, 1,168 putative vaccine candidates were identified using the software Vacceed. We further characterised the candidates according to function, conservation, expression and overlap with candidates that had been tested in other coccidians. Most of the candidates were annotated as of unknown function and, remarkably, many had no orthologs in other coccidian species. Such diversity might be due to accelerated evolution of proteins that interact with the immune system of the host, as formerly reported for other apicomplexan species. Vaccine candidate proteins involved in host interaction and invasion, such as apicomplexan-specific secretory organelle proteins, surface antigens and cell adhesion proteins, were highly expressed in merozoites, as might have been expected given their function in the C. suis lifecycle. Interestingly, when vaccine candidates identified in other coccidian species were compared, only 22% of the candidates with orthologs in C. suis had a high Vacceed score. To understand why some known candidates had a low score in C. suis we looked at the partial scores from the various tools that constitute the Vacceed pipeline. Many proteins had very low partial scores, indicating the absence of specific signals for membrane, secretion or MHC-I-binding epitopes. Additionally, when we looked at protein domains from the InterPro database we did not find any domain related to membrane, secretion or interaction with the immune system. This indicates that membrane-related signals might not always be required features for an anticoccidial vaccine candidate. A relatively high proportion of candidates identified in other coccidians had no orthologs in C. suis. Looking at the phylogenetic patterns of these proteins, these candidates were found to be either specific to E. tenella or proteins that originated just before the split of N. caninum, H. hammondi and T. gondii, mostly rhoptry kinases, dense granule proteins and some surface antigens of the SRS family. These results also reflect the likely fast evolution of immune-related proteins. Finally, by overlapping the vaccine candidates obtained by Vacceed with proteins identified from an immunoblot experiment of pig serum, we pinpointed a promising new vaccine candidate corresponding to a 42 kDa transmembrane protein with unknown function. However, only a few proteins were recognised by positive sera from infected piglets. More sensitive detection methods or increased amounts of protein on the gel would certainly reveal more positive spots. To further confirm the usefulness of candidates identified by reverse vaccinology and immunoblotting, recombinant proteins must be generated and characterised in vitro and in vivo in further experiments. In summary, we combined reverse vaccinology with transcriptomics and comparative genomics to identify a list of vaccine candidate proteins for further experimental testing. In order to restrict this set of candidates, new indicators of immunogenicity could be incorporated into the Vacceed pipeline, which is feasible due to the modularity of this tool. Studies on putatively immunogenic proteins of C. suis will also greatly enhance our understanding of the immune mechanisms underlying protection in porcine cystoisosporosis. Lastly, the genome and annotation of C. suis constitute a new step in the genomic era of apicomplexans. As the genus Cystoisospora can also be found in other hosts such as dogs, cats and humans, we anticipate that these resources will help to unravel the evolutionary mechanisms of host specificity in apicomplexan parasites.
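Finally, the assembly-level GC comparison mentioned in the discussion reduces to counting G and C bases across the contigs. A minimal, dependency-free sketch is given below; the FASTA file name is a placeholder, and excluding only Ns from the denominator is a simplification made for this illustration.

```python
# Minimal sketch: average GC content of an assembly, as used in the comparison above.
# Pure-Python FASTA parsing; the file name is a placeholder.
def average_gc(fasta_path: str) -> float:
    gc = total = 0
    with open(fasta_path) as handle:
        for line in handle:
            if line.startswith(">"):
                continue
            seq = line.strip().upper()
            gc += seq.count("G") + seq.count("C")
            total += len(seq) - seq.count("N")   # ignore undetermined bases
    return 100.0 * gc / total if total else 0.0

if __name__ == "__main__":
    print(f"average GC content: {average_gc('c_suis_contigs.fasta'):.1f}%")
```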
Vaccine development targeting protozoan parasites remains challenging, partly due to the complex interactions between these eukaryotes and the host immune system. Reverse vaccinology is a promising approach for direct screening of genome sequence assemblies for new vaccine candidate proteins. Here, we applied this paradigm to Cystoisospora suis, an apicomplexan parasite that causes enteritis and diarrhea in suckling piglets and economic losses in pig production worldwide. Using Next Generation Sequencing we produced an ∼84 Mb sequence assembly for the C. suis genome, making it the first available reference for the genus Cystoisospora. Then, we derived a manually curated annotation of more than 11,000 protein-coding genes and applied the tool Vacceed to identify 1,168 vaccine candidates by screening the predicted C. suis proteome. To refine the set of candidates, we looked at proteins that are highly expressed in merozoites and specific to apicomplexans. The stringent set of candidates included 220 proteins, among which were 152 proteins with unknown function, 17 surface antigens of the SAG and SRS gene families, 12 proteins of the apicomplexan-specific secretory organelles including AMA1, MIC6, MIC13, ROP6, ROP12, ROP27, ROP32 and three proteins related to cell adhesion. Finally, we demonstrated in vitro the immunogenic potential of a C. suis-specific 42 kDa transmembrane protein, which might constitute an attractive candidate for further testing.
5
Drivers and emerging innovations in knowledge-based destinations: Towards a research agenda
Understanding the drivers and the typologies of innovation in destinations represents one of the main challenges for academics, policy makers and managers who are called on to define the evolutionary process of tourism in the complexity of human-technology interaction.The phenomenon of innovation has been receiving increasing attention in tourism research for the last 10 years and is considered to be a key factor in the competitiveness and sustainability of enterprises, organisations and destinations.Although innovation is an emerging topic of research, and innovation in destinations has been recognised as one of the main drivers of local development, the existing studies are fragmented; tourism innovation remains an empty buzzword that is extremely fragmented and largely ignored, and it lacks a specific theoretical framework.Several papers emphasise the key role of information and communication technologies in innovative processes, but technologies represent only a small part of the innovation drivers of the ‘complex world’ of destinations in which diverse actors interact; these actors are influenced by the social, economic and political factors of the destination and/or region and generate multidimensional and unusual forms of innovation.Considering the complexity of the tourism experience, which is co-created by the interaction among tourists, destination organisations and the local community, diverse forms of innovation can emerge and present new challenges for research on innovation in tourism destinations, in which human-technology interaction can play a significant role.New ways of thinking and interpreting tourism and innovation, including destination management, can capitalise on the connections between technological and societal changes by emphasising the local contexts in which innovation is nurtured.Innovation is a contextual process embedded in a geographical space.The literature considers a destination as a local innovation system in which public and private actors generate a co-evolutionary process of innovation that is dynamically influenced by the spatial dimension.Geographical proximity creates virtuous circles among knowledge, collective innovativeness and pervasive innovation and generates spill-over effects.Studies in diverse research fields − such as regional development, local systems of innovation, sociology, entrepreneurship and knowledge management − can advance the debate on innovation in destinations by combining the tourism theoretical domain with other conceptual frameworks in which local contexts play a significant role.This critical review paper aims to contribute to the debate on innovation in destinations, an emerging stream of research, by cross-fertilising diverse theoretical domains and proposing an integrated theoretical framework.This framework was constructed by adopting an integrative literature review as a useful research method for the emerging streams of research to discuss and integrate the existing fragmented studies of diverse research fields coherent with the destination management theoretical framework and to identify new challenges for research on innovation in tourism destinations.This critical review paper proposes an overarching theoretical framework for innovation in knowledge-based destinations.The paper identifies four forms of innovation in destinations – namely, experience co-creation, smart destinations, e-participative governance and social innovation − as a result of the synergies among four destination actors, the learning 
process and knowledge sharing that are facilitated by social capital and ICT platforms.The discussion and conclusion present the theoretical advances attained by this exploratory analysis of destination innovation and offer avenues for future research and challenges that should be explored by academics, policy makers and destination managers.This research was built on an integrative literature review, as useful qualitative research for emerging research topics would benefit from a holistic conceptualisation and synthesis of the literature.This method has consistently been adopted by other studies in tourism research.The integrative literature review is a distinctive form of research that integrates the existing literature and explores new knowledge through reviews, discussions, critiques and syntheses that allow a comprehensive literature review or a reconceptualisation of the existing frameworks.An integrative literature review differs from a systematic literature review because an integrative review can encompass any work design, with the implication that it is less standardised than a systematic review.The integrative review follows the conceptual structuring of the research topic, organised around the main concepts of the review topic, and provides a map in which the main concepts and streams of research have been connected.The work design used for this study consists of the literature that addresses the following nine concepts related to innovation in tourism destinations: innovation in tourism research; knowledge management and innovation in tourism local contexts; ICT infrastructures; social capital; political and institutional actors; destination management organisations; the local community; and local firms.The existing literature has discussed these nine concepts separately or as pairs but not all together, thus failing to cross-fertilise these diverse theoretical domains.This methodology allows the design of a conceptual framework that describes innovation in destinations and defines a preliminary research agenda that poses “provocative questions and provides direction for future research”.The research follows two phases.The first, through an analysis of the literature on innovation in tourism research, examines the main topic through a critical analysis and deconstructs and reconstructs works in the literature.Although the papers analysed in this phase present a classification and review of tourism innovation, thereby opening up spaces for new avenues of research, they present some limitations in capturing the complexity of innovation in tourism destinations."In the second phase, the main topic identified in the first phase is cross-fertilised and synthesised with different theoretical domains considering several seminal papers to examine the topic's ideas and concepts and proceed with a critical analysis.The literature, including various theoretical and empirical studies, has been organised by the main topics and is summarised in specific tables.The phenomenon of tourism innovation has gained relevance in academic research in recent years and has intensified the debate on the typologies of innovation and the drivers of innovativeness."Following Schumpeter's seminal classification of innovation, in which innovation can be interpreted as something ‘new’, as new or improved products, new production processes, new markets, new supply sources and new forms of organisation, scholars have introduced the concept of innovation and related classifications in diverse fields of 
research.From this perspective, innovation concerns the process of problem-solving and generating new ideas; however, it also requires the acceptance and implementation of processes, products, or services that involve the capacity to change or adapt.In this integrative literature review, only the following three articles address a review of the literature in tourism innovation and an analysis of the types of innovation that attempt to conceptualise this theoretical domain: ‘A review of innovation research in tourism’, with 432 citations; ‘100 innovations that transformed tourism’, with 32 citations; and ‘A systematic review of research on innovation in hospitality and tourism’, with 57 citations."Consistently, Hjalager's proposals of innovations in the tourism domain apply and consolidate Schumpeter's innovations and introduce specific innovations in tourism.Forms of innovation include the following: product or service innovations, as changes or new meanings of products or destinations are perceived by tourists to be new tourism experiences; process innovations, related to backstage activities, which often increase efficiency and productivity through technological investment and generate new combinations of processes; and managerial innovation, which impacts the organisational model and human resources management in new ways to empower human resources and enhance productivity and workplace satisfaction.Management innovation occurs when new destination governance models, such as tourism boards or destination management organisations, are introduced to co-ordinate, integrate and manage diverse stakeholders in destination strategies and marketing."Institutional innovation has been interpreted as a new collaborative/organisational structure or legal framework that redirects or enhances local actors' actions and generates network forms that change the institutional logic and power relations. "Furthermore, in a more recent work, Hjalager considers 100 innovations that have transformed tourism and identifies the following diverse categories of innovation: changing the product/service elements that create tourists' experiences and that increase the social and physical efficacy of the process; increasing the productivity and efficacy of tourism firms; building new destinations; enhancing mobility to and within destinations; enhancing opportunities to transfer and share information; and changing the institutional logic and power relations. 
"Gomezelj's study proposes a systematic literature review of innovation in tourism by analysing 152 papers that adopt diverse criteria, such as location, point of view, level of analysis, and the method and forms of innovation.The innovations discussed were classified considering the process as follows: general, institutional, product/service, knowledge importance, environmental process, entrepreneurial characteristics, green innovation, and managerial and theoretical.Applying a bibliometric analysis, Gomezelj identifies nine clusters of papers, namely, fundamental studies, the resource-based view and competitive advantage, organisational studies, networking, innovation in service, innovation systems, knowledge, management of organisational innovation and technology.These fields of research are discussed at the following three levels of analysis: the micro-level or firm level, at which innovative ideas are developed by enterprises, clusters and networks and are analysed considering the ICT and knowledge role; the macro-level, at which the effects of innovation on society, regions and tourism destinations are discussed, including their determinants and barriers; and the general level, at which innovation systems, or the collaborative approach of different institutions, aim to improve destination or regional development or the interweaving of ideas developed in firm clusters and their implementation in destinations.Although innovation is an emerging topic in tourism research, it remains fragmented, with the word ‘innovation’ often used as a ‘catchy tag’ with several different definitions.The consolidated literature classifies the diverse typologies of innovation in the tourism domain by adopting the traditional Schumpeterian approach that characterises the manufacturing industry as mainly technology-driven, which describes innovation in the tourism industry.The distinction between the different types of innovation manifests limitations and grey areas in the tourism and hospitality domains that impede the capture of the complexity of local tourism contexts but open spaces for new avenues of research.Tourism destination is a complex domain in which numerous private and public actors interact influenced by the social, economic and political factors of the context, and they generate a holistic tourism experience embedded in a specific local context that involves tourists.This type of tourism complexity calls for an interdisciplinary perspective to propose an interpretation of innovation in tourism destinations as a pervasive and contextual phenomenon, considering how the value of the context can play a significant role in generating and sharing knowledge, which nurtures innovation.The richness of knowledge in context, as a public good that does not involve rivalry, influences local innovation and provides opportunities at the spatial, sectorial and network levels.This broader perspective considers the conjoint effect of local and sectorial influences, in which co-location and transversal networks drive the process of connectivity and knowledge sharing, which enhances innovation generation and dissemination.Accordingly, through the triple helix model of innovation, the local co-evolution of different actors generates a spiral of innovation and knowledge transfer among the networks of institutions, universities, firms and other actors through relations exchange."Spatial proximity and concentration enhance learning through interaction; transform local contexts, including tourism's local contexts, 
as specific learning systems in which collective innovativeness is nurtured by tacit and explicit knowledge; and create a dynamic spiral of knowledge conversion that leads to innovation.In the tourism domain, the literature unanimously argues that knowledge management plays a significant role in facilitating innovation and competitive advantage not only at the firm level but also at the network, cluster and destination levels.A conceptual framework of the innovation process in destinations has been proposed that describes the role of the knowledge management theoretical framework in enhancing the comprehension of the destination innovation phenomenon.This framework explicates the process of innovation creation and management in destinations, which supports the knowledge and learning of multiple tourism-related agents at the local and national levels to define the following five stages: the development and sharing of tacit knowledge; the integration between tacit and explicit knowledge; the creation of innovative knowledge; the development of policies and strategies to transform knowledge into an innovation type; and the transfer and implementation of innovation.The role of cognitive, social and relational factors embedded in context in tacit and codified knowledge generation, knowledge dissemination and knowledge sharing within firms, network and clusters is a significant determinant of knowledge-based innovation in destination.Buzz and intensive face-to-face interactions between public and private actors nurtures the tacit knowledge embedded in local contexts and the collective learning that enhances the iterative process and dynamic spiral of knowledge conversion in the collective innovative capacity that leads to innovation and a spill-over effect.The development and integration of explicit and tacit knowledge represent a driver of innovation in destinations.Knowledge creation and the development of policies and strategies transform the destination into an incubator for the innovation of new products, new companies and new businesses at the local and regional levels.The transfer of innovation to and the implementation of innovation at the destination require destination managers to develop core competences and dynamic capabilities, including the ability to manage technology and to co-ordinate diverse actors.Tourism destinations are ideal contexts for generating innovation through clusters and informal and formal networks in which heterogeneous private and public actors interact; this innovation combines individual and collective knowledge and activates the value co-creation process with tourists to enhance destination competitiveness.Such forms of location-specific innovation are not easily transferable between places and are thus unique, which creates conditions for the defensible competitive advantage of the destination.Such examples and authentic and creative tourism or responsible tourism represent possible innovative forms of tourism in which tacit knowledge and collective learning can differentiate the destination.The theoretical frameworks of knowledge-driven innovation in local contexts guide us to define the concept of the knowledge-based destination as “a social community that serves as an efficient vehicle for creating and transforming knowledge into economically rewarding products and services for its stakeholders in an innovative process that continually facilitates the growth of its regional economy”."The knowledge-based destination summarises Nonaka and Konno's ‘ba’ 
concept.It represents the context in which collective and shared knowledge, both tacit and explicit, emerges through the interaction of diverse destination actors.‘Ba’ provides the physical, virtual and cognitive spaces to create, develop, codify, share and disseminate collective knowledge and facilitates diverse forms of innovation in the destination.The synergies among knowledge, collective learning and innovation are embedded in a specific local context and are activated by public and private actors, which creates conditions for a local system of innovation in which diverse learning systems enhance the opportunities to nurture tacit and explicit knowledge and facilitate collective innovation capacity.Consequently, collective innovation in the destination becomes a social process that transforms valuable individual and common knowledge through a learning system involving diverse actors that is facilitated by platforms that enhance knowledge sharing and communication processes.This critical review paper integrates existing but separate theoretical frameworks that describe the six drivers of four emerging innovations in knowledge-based destinations to reduce the grey areas in this emerging field of research.To overcome the traditional approach of innovation typologies, this paper attempts to capture the complexity of the local tourism context by identifying four emerging destination innovations as a holistic result of the collective and pervasive knowledge generated by the interaction among four destination actors – political actors, destination management organisations, enterprises and local communities – which is facilitated by two platforms.The framework considers two platforms that facilitate innovation in knowledge-based destinations, namely, the ICT infrastructure, which the consolidated literature confirms as one of the drivers of innovation in tourism, and social capital, which is an underdeveloped field of research in innovation studies.The two platforms create destination conditions that facilitate interaction, define soft and hard connections, facilitate knowledge sharing among diverse public and private actors and drive innovation.The four emerging innovations that result from the interaction and synergies among the six internal drivers of innovation in knowledge-based destinations are experience co-creation, smart destinations, e-participative governance and social innovation.The emerging innovations are presented in the following paragraphs that define the key questions creating possible avenues of research.In tourism as a knowledge-intensive industry, ICT has played a re-engineering role that changes the paradigm by which organisations, destinations and tourists communicate, collaborate and interact.ICT infrastructures have activated a process of restructuring traditional tourism products in the management of complex tourism experiences.Technological applications in the tourism sector can be summarised by considering the diverse opportunities for development that they have contributed to the creation of new firms, including tourists sharing experiences on social media, decision support tools for firms, marketing intelligence sources, e-learning tools, automation tools, game changers, transformers of the tourism experience and co-creation platforms.In the tourism domain, ICT infrastructures represent the drivers of innovation; they support managerial decision making and enhance openness and participation through their capacity to find new intermediation forms and develop 
their interactive interfaces between organisations and tourists.Indeed, by removing the traditional barriers of communication and interaction, ICT has facilitated the recourse to new forms of creation, organisation and consumption.Different ICT-based tools used at the destination level generate pervasive knowledge and drive innovation through the presence of platform connections among political actors, destination management organisations, enterprises and local communities.Examples of ICT-based tools include destination management systems, virtual and augmented reality, location-based services, computer simulations, intelligent transport systems, etc.Finally, the transition to the Web of Thought fosters innovation processes and enhances opportunities for co-creating destination value through the digital engagement of diverse stakeholders in social communication and knowledge sharing.The following table systematises the studies of the main authors.Social capital identifies a social structure based on norms, values, beliefs, trust and forms of interaction that facilitates tacit and codified knowledge sharing and generates collective actions.The research on social capital has received increasing attention and has become an interdisciplinary topic that involves the social structure of societies, organisations, networks and local contexts, which creates opportunities to interpret its role in the destination.Social capital in destinations can be analysed using three dimensions, namely, the structural, cognitive and relational dimensions.The structural dimension of destination social capital describes the non-hierarchical and hierarchical connections among the diverse stakeholders and actors that enable the generation of interpersonal and inter-organisational interactions and that facilitate collective actions and co-ordination among community members.The cognitive dimension refers to the values, attitudes, norms, and beliefs that create obstacles to or opportunities for sharing knowledge about and collaborating in local development.The relational dimension is a critical aspect of social capital that identifies the trust among stakeholders.The hard and soft linkages of social capital constitute an infrastructure in knowledge-based destinations that allows tacit and codified knowledge sharing and enhances the collaboration and co-creation among diverse destination actors – i.e. 
local governments, small businesses, residents and other stakeholders – which stimulate changes and nurture either incremental or radical innovations. The following table systematises the studies of the main authors. The consolidated literature recognises the centrality of political and institutional actors in creating advantageous conditions for innovative tourism clusters and networks in destinations. Actors play the roles of co-ordinators, planners, legislators, regulators, stimulators, promoters and financers of innovations in tourist destinations. Other functions of political and institutional actors include sharing educational resources among public and private actors to facilitate knowledge spill-overs, promoting networks and incubating tourist clusters, thereby reducing risk-financing or opportunism and free-riding, facilitating market access to all tourist actors and activating innovation co-creation and increasing productive entrepreneurial initiatives and technology transfers. In facilitating and guiding these processes, political actors attempt to search for the correct balance between innovation and community preservation in both planning and implementation. Political actors move towards polycentricity in effective policy formulation and implementation through a hybrid approach that reconciles the complex negotiation in a joint decision-making process in which policy agents, firms, residents, and other stakeholders participate in resolving common development problems. The following table systematises the studies of the main authors. Traditionally, DMOs have had the legitimacy and competence to plan and manage destination development, including the co-ordination of marketing processes, to facilitate place brand building and to engage stakeholders in destination decision making. The evolution of the DMO is changing the role played by stakeholders in destination management, shifting it towards the embedded governance model that reconciles the top-down and bottom-up perspectives and in which stakeholder co-ordination and integration results in participative models of destination management. Consequently, a DMO's legitimacy and institutional mechanisms, which are legitimised by political actors, are also derived from formal and informal interactions with diverse destination stakeholders based on destination social capital and knowledge sharing. In this redefined scenario, the DMO can play a new role and become a learning organisation that promotes the enhancement of trust and collaboration in social capital and the use of ICT infrastructures as intelligent platforms; this role enhances organisational, community and individual learning and knowledge sharing and guides stakeholders towards diverse forms of innovation. The following table systematises the studies of the main authors. The analysis of the possible influences of the local community on tourism development and destination competitiveness considers aspects strongly related to social capital, such as knowledge sharing, value and behavioural patterns, the quality of residents' lives, cultural identity and local community participation. Different levels of local community participation, which range from manipulative participation to citizen power, influence the effectiveness and pervasiveness of destination decision making. Although an active community role is becoming central in the academic debate, the manner in which it generates innovative processes in the destination remains an unexplored topic of research. Developing an innovative
community requires creating conditions that encourage a shift from residents' passive to active roles in knowledge generation, knowledge sharing and open communication channels, as social networks among residents and other types of actors increase co-operation, co-ordination and integration and innovative proposals and actions.Community participation can be analysed in three forms: coercive, induced and spontaneous participation.In coercive participation, local actors do not influence destination decision making; they assume a passive disposition and manifest a low level of interaction with key actors, such as government authorities and a restricted number of private actors who define the future of the destination.Coercive participation limits the conditions for innovation, which is relegated to tokenism.In induced participation, the community does not control the decision-making process, but it has a consultative role, which manifests conditions for proposing or contributing to the destination innovation process.In spontaneous participation, local actors have a high ability to participate in decision making and to interact and co-ordinate with other actors, which presents opportunities for innovative processes.The following table systematises the studies of the main authors.Small and medium-sized enterprises in tourism destinations play a significant role in enacting creative destruction processes, and they contribute to dynamic knowledge regeneration and promote innovation.Drivers and forms of local company innovation can be diverse, such as new organisational forms, new marketing approaches and experiential services, ICT infrastructures that facilitate networking and collaboration in the tourism destination and promotion of social changes that impact the community and economic sectors.The entrepreneurial capability to innovate in a destination is determined by three factors.First, the geographical proximity effect allows for knowledge generation, i.e. 
the sharing and assimilation of new information, innovation and technologies by competitors, residents or policy agents, which reduces R&D investments and costs.The second factor involves whether the organisational structure can innovate when presented with financial resources for R&D, a high level of deconcentration, a strategic orientation and high quality standards.The third factor, human capital, represents the driving force that innovatively connects all organisational resources and creates synergies with networks and destinations, thereby reducing the risk of failure in the innovation processes.Indeed, the entrepreneurial propensity to innovate is influenced by the co-operative relationships with other firms and is embedded in diverse innovation systems, such as tourism destinations.The following table systematises the studies of the main authors.Our conceptual framework proposes the following four emerging innovations in knowledge-based destinations as the holistic result of six drivers: experience co-creation, smart destinations, e-participative governance, and social innovation."After Pine and Gilmore's seminal work, the experience economy became a pervasive subject and has come to involve diverse topics and fields of research, including the tourism domain, in which the paradigm of experience co-creation nurtures the process of innovation.Innovation in destinations, as a result of experience co-creation, emerges as the collective action of diverse actors and is facilitated and triggered by the elements of social capital – such as trust, openness, networking and collaboration – and technological tools.ICT, e-tourism, virtual communities and gamification have reshaped the destination models to transform social interactions among destination actors and tourists in which experiences are dynamically co-created through stakeholder contributions, which thus defines a participatory approach to destination development."These technologies enable knowledge-based processes in destinations that are powered by user participation, openness and stakeholder engagement, making it possible to re-invent tourist experiences and enhance the differentiations among destinations.These results require maintenance to keep the experience alive over time.Experiential tourism, supported by social capital and ICT, poses significant challenges to reinterpreting the role of destination actors in generating innovation.Diverse questions emerge and create the following avenues for future investigations:How can DMOs and political actors exploit the disruptive power of ICT and digital platforms to facilitate knowledge sharing, trust and collaboration in the local community to enhance experience co-creation?,How can social capital building and stakeholder engagement enhance the maintenance of experience innovation to support dynamic experience co-creation with tourists?,How can actors capitalise on experience co-creation to generate value for stakeholders and destinations?,A smart destination can be seen as part of the evolutionary concept of smart cities, in which interconnected technological tools – ICT infrastructures, the Internet of Things, cloud computing and end-user Internet service systems, and augmented and virtual reality – connect destination stakeholders, which enhances the opportunities to communicate, collaborate and nurture knowledge.A smart destination creates opportunities to engage stakeholders in using ICT infrastructures dynamically as a neural system to allow knowledge sharing and the dispersing 
of innovation so that tourists can be included in the co-creation experience.Combining human capital, social capital and innovations, a smart destination combines efficiency with experience co-creation and sustainability.Regarding experience co-creation, a smart destination constitutes a pervasive innovation that includes diverse actors and stakeholders in the process and requires social capital that has the ability to facilitate knowledge sharing and trust.Diverse questions relate to this innovation, which create the following avenues for future investigation:How can smart destinations enhance the interactions between hosts and guests in various phases to thus improve their satisfaction?,Can smart destinations create opportunities for new destination models in which technological and social platforms enhance the quality of life and sustainable development?,How can DMOs and political actors create an inclusive process of smart destination building?, "The prevalent literature supports the shift towards forms of destination governance in which destination stakeholders' engagement plays a significant role and creates opportunities for innovation.The evolutionary process of destinations, where top-down governance models have been succeeded by hybrid models in which stakeholder engagement plays a significant role, has been accelerated by ICTs and digital platforms.ICTs and digital platforms provide digital spaces to enhance stakeholder engagement in decision making, which reduces the boundaries among diverse actors.E-participative governance models represent an emerging destination innovation that creates new avenues for future research.Some possible key questions include the following:How can the power of ICTs and digital platforms be enhanced to facilitate stakeholder engagement in destination planning, co-ordination and collaboration?,How can social capital be nurtured to facilitate community participation?, "Alternatively, how can e-participative governance impact social capital, transform culture, values, and so on, and consequently change the destination's identity?",What roles exist for DMOs and local actors?,Social innovation has received increasing attention by diverse academic fields of research and in political-institutional debates as a pervasive topic that impacts both society and local firms.The recent literature reviews diverse streams of research and analyses phenomena from a multidisciplinary perspective to present diverse definitions and to identify the challenges and implications for social and local development.Interesting implications for research on destination innovation emerge from these streams of research."In particular, Schumpeter's theories of entrepreneurship, social entrepreneurship and social innovation are closely related concepts, and organisational innovation impacts social wellbeing, which causes positive spill-over effects for society.The multidisciplinary approach adopted in these studies allows for the consideration of social innovation as a new concept that produces social change and introduces new solutions − products, services, models, processes, etc – that influence social capital, local development and knowledge capabilities.Social innovation involves both changes in the social capital structure and a new way to solve social imbalances, and it represents a novel social technology that creates social value to transform the destination patterns.This innovation influences attitudes, behaviour and the multiple levels of interactions of diverse actors in 
tourism destinations that involve unusual key players − such as local communities, non-profit and non-governmental organisations, etc – in the exploitation and exploration of destination resources and opportunities of innovation.Such forms of social innovation can drive new destination models redefining the relationships among actors.New relationships among destination actors debunk the consolidated top-down process, and forms of soft power prevail, thereby upending the traditional relationships and roles in the destination architecture and power.The following diverse challenges for future research have emerged:How can governance nurture social capital and entrepreneurship to facilitate diffused and successful social innovation?,How do local community bottom-up processes activate social innovation to create new solutions and creative spaces?,How do the local community and entrepreneurs interact in these processes?,How can social innovation enhance opportunities to activate spontaneous stakeholder participation in the experience of co-creation and e-participative governance?,How can social innovation drive a novel social technology that creates social value and reduce social imbalances in tourism destination?,Although academics and policy makers around the world consider innovation to be one of the main drivers of destination development and competitiveness, the research on innovation in tourism destinations is fragmented, manifests grey areas in the tourism domain and lacks an integrative theoretical framework that can capture the complexity of the destinations in which diverse public and private actors interact.This conceptual paper cross-fertilises and discusses the relevant literature in the tourism and other theoretical domains and proposes an integrative theoretical framework that interprets destination innovation as a complex and evolutionary knowledge-driven phenomenon resulting from human-technology interactions.This framework considers emerging innovations in knowledge-based destinations as a holistic and pervasive result of the collective knowledge generated by the interaction among four destination actors and facilitated by two platforms in a specific local context.Although it mainly explores this emerging stream of research, this paper also presents some preliminary contributions to the theoretical debate on innovation in destinations.First, the paper designates borders and differences among innovation in the tourism domain, innovation in the tourism destination and innovation in the knowledge-based destination.Innovation in the tourism domain, as defined by the consolidated literature, classifies the diverse typologies of innovation by adopting the traditional Schumpeterian approach and the lens of the manufacturing industry.This approach captures the forms of innovation at the level of the single tourism organisation/networks and manifests certain grey areas in interpreting the tourism and hospitality domains.This paper calls for overcoming the generic term of tourism innovation by defining specific research areas of innovation investigation, such as hospitality, destination, etc, in which a specific theoretical framework can be developed and consolidated.Consequently, innovation in destinations can follow the application of the traditional tourism innovation approach in which forms of innovation such as ICT tools do not embrace the complexity of the knowledge-based destination."Innovation in knowledge-based destinations, such as Nonaka and Konno's ‘ba’ concept, overcomes the 
borders of the single actors and/or ICT platform and emerges as the result of collective and shared knowledge, both tacit and explicit; this approach represents a holistic and pervasive result of human-technology interactions.Second, this paper argues that specific local contexts matter in destination innovation.Contexts assume a repository role of spatial and cross-sectorial knowledge generation and dissemination, which drives the pervasive and emerging innovations of the destination."The destination represents a specific learning system based on the geographical dimensions and multiple tourism-related agents' interactions to generate a dynamic spiral of knowledge sharing, collective innovativeness and pervasive innovation. "The destination's capacity to reach a high level of innovativeness is subject to the value in the context of six drivers of innovation, namely, four local public and private actors and two platforms.The four actors can play diverse roles with varying amounts of authority in driving the four typologies of innovation to leverage social capital and ICT.This paper opens up new avenues of research through which to analyse the role of public and private actors in this dynamic spiral of knowledge sharing, collective innovativeness and pervasive innovation facilitated by technological platforms and social capital.The paper suggests the creation of local conditions to facilitate offline and online stakeholder engagement as a key element to enhance knowledge generation, sharing and transformation to thus activate innovation processes at the destination.Third, the integrative framework presented here overcomes the limited focus on technology-driven innovation at the destination and introduces to the theoretical debate the complementary role of social capital and ICT infrastructures in creating conditions that facilitate innovativeness, stakeholder engagement and bottom-up processes for pervasive and holistic destination innovation.The consolidated literature emphasises the disruptive role of ICT in the tourism innovation process but neglects the significant role of social capital.Social capital and ICT represent the structural, cognitive and technological platforms of the destination in which human/organisational and technological factors converge to facilitate interaction, collaboration, trust building and knowledge sharing among the four diverse actors and to trigger the emergence of diverse forms of destination innovation.Accordingly, with the new way of thinking and interpreting tourism and innovation, this paper suggests capitalising on the connections between technological and societal changes in local contexts.It opens up a new scenario for the role of institutions and local actors in building social capital that can nurture innovation acceptance and innovativeness in local contexts to enhance the effectiveness of innovative ICT tools.Fourth, this approach goes beyond the current innovation paradigms that analyse innovations in the tourism domain, which usually present traditional forms of innovation based on the manufacturing paradigm that are considered in a single and reductive way.The complexity of the tourism experience co-created by the interaction between tourists and local actors is associated with the complex dynamic spiral of knowledge sharing, collective innovativeness and pervasive innovation, which requires a new interpretation of innovation in destinations.This paper identifies four emerging innovations as the pervasive and holistic results of the collective 
knowledge generated by the interaction among four destination actors and facilitated by ICT infrastructures and social capital.Overcoming the traditional innovation paradigms, this integrated framework proposes advances in academic research that presents four destination innovations as the result of the convergence of diverse typologies of innovations that are transforming tourism and local contexts, specifically, experience co-creation, smart destinations, e-participative governance and social innovation.In these innovations, difficulties emerge in defining the borders between the diverse determinants and the emerging typologies of innovation because they are strongly interrelated, and a synergetic process intervenes between the determinants and the emerging innovation.All innovations are the intangible result of interdependences among the six determinants of destination innovations, and they simultaneously redefine the six determinants.Finally, the preliminary key questions related to these four emerging innovations create avenues for future research and identify the challenges for academics, policy makers and destination managers to understand and strengthen the possible role of destination actors and their synergies in destination innovation under the conditions of knowledge-driven innovation in destinations.Emerging innovations that influence behaviour and multiple levels of interactions of diverse actors create changes in the social capital structure and introduce new ways to co-create value in the context that drives new destination models.New destination models can be derived from emerging innovations and can be designed and analysed in future theoretical and empirical research.Emerging innovations, such as social innovation, can open up new scenarios in which unusual relationships among destination actors debunk the consolidated top-down process to create new patterns of relationships, influences and power beyond the six innovation drivers.This paper is not without limitations.First, this integrative literature review overlooks the phenomenon of innovation.As previous literature suggests, some papers adopt words such as ‘creativity’ or ‘change’ to debate innovation.Consequently, the paper underestimates the ‘soft’ forms of innovation, such as creative cities, which are transforming the consolidated paradigms in destinations.Second, the paper discusses four emerging innovations that represent a preliminary synthetic design of possible destination innovations to contribute to a research agenda for academics, policy makers and destination managers.This review does not aspire to be exhaustive, and other possible innovations can be identified, discussed and validated by the theoretical research and empirical analysis in future papers.Third, there are other external factors that influence the innovation process of destinations, including tourists, which are unexplored in this paper.Future research will overcome this limitation with a more holistic and comprehensive model in which tourist participation in knowledge generation and destination innovation processes can play a significant role."Because this is still a relatively young field of research, further research is needed to underpin this conceptual framework and other diverse and related streams of research through theoretical contributions, in-depth case studies and empirical analysis, which would overcome this paper's limitations.
Research on innovation in tourism is fragmented and confined to traditional paradigms. This critical review paper, which cross-fertilises and discusses the relevant literature in tourism and other theoretical domains, proposes an integrative theoretical framework of innovation in destinations. The paper identifies four emerging innovations – experience co-creation, smart destinations, e-participative governance and social innovation – as evolutionary, knowledge-driven phenomena that are generated by the interaction among four destination actors and facilitated by information and communication technologies (ICTs) and social capital. The discussion and conclusion present some theoretical advances as follows: local contexts matter in destination innovation when assuming a repository role of spatial and cross-sectorial knowledge; social capital and ICT infrastructures facilitate innovativeness and stakeholder engagement; and emerging innovations are pervasive and the holistic results of the collective knowledge of four destination actors and are facilitated by ICT and social capital. The paper offers avenues for future research and challenges that should be explored by academics, policy makers and destination managers.
6
Quantitative multiplex one-step RT-PCR assay for identification and quantitation of Sabin strains of poliovirus in clinical and environmental specimens
Starting in 1988, the World Health Organization has led a worldwide campaign to eradicate poliomyelitis. As a result, the number of poliomyelitis cases has dropped dramatically. The disease has been eliminated from most countries except a few endemic regions, including Afghanistan and Pakistan, and a few others that still experience small outbreaks caused by vaccine-derived poliovirus. Poliovirus surveillance in clinical and environmental samples is an important part of the polio eradication campaign, and identification and quantitation of polioviruses in stool and environmental samples are part of this surveillance. Unlike the Oral Polio Vaccine (OPV), vaccination with Inactivated Polio Vaccine (IPV) does not induce adequate intestinal immunity that would prevent infection with the virus. Efforts are now underway to improve its ability to induce mucosal immunity, and the development of methods to evaluate mucosal immunity is an important part of this work. The most direct way of doing this is by challenging IPV recipients with OPV and then quantifying the level of virus excretion in stool; the method described in this paper could significantly facilitate this task. The conventional method for identification of polioviruses in clinical specimens is based on virus isolation according to a specific algorithm using inoculation of RD and L20B cells, followed by virus identification using tests such as enzyme-linked immunosorbent assay (ELISA), probe hybridization, micro-neutralization, EIA with type-specific polio antibodies, or quantitative RT-PCR using the ITD v5.0 kits. Finally, to confirm the virus identity, about 900 bases of the VP1 gene are sequenced. These methods are time-consuming and laborious. In addition, the need for increased poliovirus containment limits the use of this approach to laboratories that conform to strict GAP-III requirements. Several molecular procedures have been developed for identification of poliovirus serotypes, including ELISA, reverse transcription-PCR followed by hybridization with specific oligonucleotides, and quantitative RT-PCR that uses degenerate primers with mixed-base and inosine residues. Such modified primers make the assay broadly specific but may diminish its sensitivity. Also, both virus growth in cell culture followed by ELISA and RT-PCR followed by hybridization with specific oligonucleotide probes are multistep procedures, which complicates their use for large-scale analysis. Recently, we developed an osRT-PCR assay for quantitation of poliovirus and for identifying each serotype based on specific DNA amplicon sizes. The limitation of this method is its ability to quantify only one serotype per reaction. In this study, we propose a multiplex version of the method. We improved the previous osRT-PCR assay to include specific fluorescent TaqMan oligonucleotide probes for each OPV strain, which enabled multiplex identification and quantitation of all three serotypes of poliovirus in the same real-time RT-PCR reaction. Stocks of US neurovirulence reference vaccine with known virus titers were used as positive controls for qmosRT-PCR; they contained 10^8.90, 10^8.72 and 10^8.92 50% cell culture infectious doses (CCID50) per milliliter of the Sabin 1, 2 and 3 strains, respectively. Twenty-nine RNA samples extracted from sewage and stool samples were used to validate the qmosRT-PCR assay. These samples were collected in Israel in a study approved by institutional review boards, and were shown to contain poliovirus. Samples 1 to 23 in Table 1 are RNA samples that were 
extracted from viruses isolated from sewage in Israel.Samples 24 to 26 are poliovirus isolates recovered from stools collected in Israel from a poliovirus excretor.Samples 27–29 are virus isolates from sewage samples.The stool samples in Table 5 are collected from clinical trial of OPV2.Enteroviruses of species A, B, C and D were used to assess the specificity of qmosRT-PCR.The strains included Human Coxsackievirus B1, B2 and B3, and Echovirus 11 were purchased from American Type Culture Collection.Human coxsackievirus A13, A15, A16, A17, A18, A20, A21 and A24, and Human enterovirus 70, 71 and D68 were kindly provided by Dr. Steven Oberste.The specific primers for each of the three oral poliovirus vaccine strains, were described previously.They were based on nucleotide sequences of the P1 capsid region of RNA sequences that are unique for each poliovirus Sabin strains and are not present in genomes of other enteroviruses.PCR amplification with these primers resulted in DNA fragments 266, 122, and 199 nucleotides long for Sabin 1, 2, and 3 viruses, respectively.To use these primers in multiplex format of quantitative osRT-PCR, oligonucleotide probes were designed specifically for OPV strains and were synthesized by Thermo Fisher Scientific.Oligonucleotide probes specific to each serotype of poliovirus contained a fluorescent molecule at the 5′ end and a non-fluorescent quencher at the 3′ end.Fluorescent labels in oligonucleotides specific to types 1, 2, and 3 were 6-carboxy-fluorescein, 2′-chloro-7′phenyl-1,4-dichloro-6-carboxy-fluorescein and 2′-chloro-5′-fluoro-7′,8′-benzo-1,4-dichloro-6-carboxy-fluorescein, respectively.The probes were prepared at 1 μM concentration each.Forward and reverse primer mixtures were prepared at 40 μM for Sabin 1 and 3, and 20 μM for Sabin 2.Preparations were stored at − 20 °C."Viral RNA was extracted from clinical samples, environmental isolates, and from poliovirus-infected cell culture fluids using QIAamp viral RNA mini kit and according to the manufacturer's protocol.The extracted RNA was eluted in a final volume of 60 μl of sterile RNase-free water and stored at −80 °C freezer.Quantitative multiplex osRT-PCR reactions were prepared in 96-well optical plates in a final volume of 25 μl using 2 μl of viral RNA and QuantiFast Multiplex RT-PCR Kit.Briefly, oligonucleotide probes Sab1-FAM, Sab2-VIC and Sab3-NED were used at a final concentration of 25 nM each in a mixture with three pairs of primers at a concentration of 0.8 μM for Sabin 1 and 3, and 0.4 μM for Sabin 2.The qmosRT-PCR procedure was performed using real-time PCR System ViiA7 at the following thermal cycling conditions: one cycle incubation for 20 min at 50 °C and 5 min at 95 °C, followed by 45 cycles, each consisting of 15 s at 94 °C, 15 s at 50 °C, and 50 s at 60 °C.To check the sensitivity of the qmosRT-PCR in the worst condition, 30 μl of 10-fold dilutions of each Sabin strain were mixed and serially spiked in stool extract known to be free from poliovirus.The RNA was extracted from the prepared viruses-stool supernatants and was analyzed by qmosRT-PCR as described above.The stool supernatant was prepared as described previously.To check the sensitivity of the qmosRT-PCR in the absence of PCR inhibitors, 10-fold dilutions of Sabin strains RNAs were prepared in different combinations in cell culture medium and analyzed by qmosRT-PCR.In addition, to evaluate the ability of qmosRT-PCR to specifically quantify in the same reaction all three Sabin strains with varying concentrations, different 
combinations of RNA of Sabin strains at concentrations of 1, 10 and 100 CCID50/reaction were analyzed.The DNA library was prepared for Illumina sequencing using 0.25-0.5 μg of total RNA for fragmentation with ultrasonicator Covaris to generate fragments 300–500 nt of size suitable for illumina sequencing.The fragmented RNA samples were used to prepare DNA libraries using NEBNext mRNA Library Prep Master Mix Set for Illumina as described.The quality of DNA libraries was analyzed with BioAnalyzer.The deep sequencing was performed in multiplex format on MiSeq producing 250 nucleotide-long paired-end sequencing reads.The raw sequencing data were analyzed against the RNA sequences of Sabin 1, 2 and 3 strains respectively by the SWARM and HIVE custom software.Previously, three serotype-specific pairs of primers were selected to amplify specific DNA amplicons with different sizes for Sabin 1, 2, and 3 polioviruses.The specificity of the three multiplex primer sets was tested by conventional PCR performed with single or multiplex sets of primers.The primers were shown to be specific for each Sabin strain.In addition to Sabin strains the primers were also able to amplify wild type strains of type 1 and type 3 poliovirus that are closely related to Sabin 1 and Sabin 3 viruses, but not wild type 2 virus or type 3 virus, and also were unable to amplify any other enteroviruses.To demonstrate that the osRT-PCR can be used for multiplex identification and quantification of polioviruses in one reaction, the QuantiFast Multiplex RT-PCR Kit was used with the three primer pairs and fluorescent TaqMan probes described above that are specific for each OPV serotype.The specificity of this method was demonstrated by its ability to identify and quantify each poliovirus serotype individually and when mixed in different combinations, as well as by its inability to amplify other enteroviruses belonging to species A, B, C, and D.To evaluate the ability of qmosRT-PCR to specifically quantify in the same reaction Sabin strains that have different concentrations, different combinations of RNAs of Sabin strains at concentrations 1, 10 and 100 CCID50/reaction were analyzed by qmosRT-PCR; the method could specifically quantify each Sabin strain in all RNA combinations/concentrations analyzed.The smallest concentration of 1 CCID50/reaction was quantified for each Sabin strain in the presence of 100 CCID50/reaction of other two Sabin strains.This result is presented in Fig. 
2. The lower limit of detection of the qmosRT-PCR assay was evaluated in multiplex format by spiking stool extract known to be poliovirus free with known quantities of Sabin 1, 2 and 3 viruses. All three Sabin strains were analyzed in the same qmosRT-PCR reaction. The lowest detectable virus concentration was found to be 2.4–24, 0.2–2 and 2.5–25 CCID50/ml for Sabin 1, 2 and 3, respectively. The linearity range was about 6 log10 for Sabin 1 and 3, and about 7 log10 for Sabin 2. The sensitivity of the qmosRT-PCR in the absence of PCR inhibitors was evaluated by analyzing serial 10-fold dilutions of different combinations of RNAs of Sabin 1, 2 and 3. The method was found to have a very large linearity range of 7–9 log10 and to be very sensitive, being able to quantify 0.03–3.36 CCID50/ml of Sabin virus depending on the RNA combinations of the different strains. Twenty-nine clinical and environmental samples were analyzed with qmosRT-PCR. To confirm the results of this method, Illumina sequencing was performed on the same samples. Sequence analysis focused on the capsid-coding region, which is located from nucleotide 721 to 3539 for Sabin 1, nucleotide 721 to 3540 for Sabin 2, and nucleotide 721 to 3400 for Sabin 3. The qmosRT-PCR results showed the presence of type 1 poliovirus in 1 sample, type 2 in 11 samples, and type 3 in 5 samples. Deep sequencing confirmed the presence of type 1 poliovirus in 1 sample and type 3 in 5 samples, but detected type 2 virus in 25 samples. The analysis of the poliovirus structural region showed that the difference between the qmosRT-PCR and deep-sequencing results for type 2 poliovirus detection was due to the high divergence of the structural region of the undetected viruses and to the accumulation of mutations in primer-binding sites. Previously, these viruses were shown to be highly divergent vaccine-derived polioviruses. Four type 2, one type 1 and five type 3 poliovirus samples that were correctly identified by qmosRT-PCR were confirmed by deep sequencing to be Sabin strains. The qmosRT-PCR was designed specifically to identify the Sabin strains used in OPV. These results demonstrate that the method was able to identify and quantify 100% of all three Sabin strains and their closely related viruses in clinical and environmental samples. Several lots of monovalent and trivalent OPV from different manufacturers were analyzed by qmosRT-PCR; the assay accurately identified and quantified all OPV lots. Four lots of conventional IPV were analyzed and, as expected, the result showed the presence of only poliovirus type 1, as this assay was designed to identify Sabin strains and their closely related derivatives. Conventional IPV is made from the Mahoney, MEF-1 and Saukett strains; Mahoney is closely related to Sabin 1, while MEF-1 and Saukett are significantly different from Sabin 2 and Sabin 3. Similar tests with Sabin IPV showed that the qmosRT-PCR method correctly identified and quantified all three Sabin strains. The monovalent OPV2 was tested in a clinical trial as part of the Fighting Infectious Diseases in Emerging Countries study. Eighteen RNA samples extracted from stool samples collected from this clinical trial were analyzed by qmosRT-PCR. Fifteen samples were positive for poliovirus type 2, and no samples were positive for Sabin 1 or Sabin 3. To confirm the results of the qmosRT-PCR, the same samples were subjected to Illumina sequencing. No poliovirus was identified in samples 8, 13 and 18 by qmosRT-PCR, which was confirmed by Illumina sequencing. This result demonstrated that qmosRT-PCR can be used 
for analysis of OPV shedding during clinical trials.The worldwide campaign to eradicate poliomyelitis may result in complete polio eradication within a few years from now.Wild type 2 poliovirus was declared eradicated in 2015, and wild type 3 poliovirus has not been detected since November of 2012.Additional efforts are needed to complete eradication and to maintain polio-free status.They include continued high vaccination coverage, laboratory containment of poliovirus stocks and other infectious/potentially infectious materials, as well as surveillance of poliovirus in clinical and environmental samples.After circulation of wild polioviruses is stopped, the use of OPV will be discontinued to prevent emergence of vaccine-derived polioviruses, and replaced by immunization with IPV.This has already happened with type 2 poliovirus, and currently bivalent OPV containing only Sabin 1 and Sabin 3 component are used for routine immunization.Therefore, monitoring for Sabin viruses becomes an important tool to validate the switch from OPV to IPV.Another aspect of the switch is that IPV is unable to stimulate effective mucosal immunity to prevent poliovirus replication in the gastro-intestinal tract of vaccinees, which may result in continuous circulation of poliovirus.Therefore, an improved IPV is under development that would stimulate better mucosal immunity and secure better protection against poliovirus infection.In addition, a more genetically stable OPV is under development to be used for emergency response in post-eradication period.Evaluation of mucosal immunity induced by new polio vaccines can be done by challenging immunized individuals with OPV and measuring the level of the virus excretion in stool.Thus, the quantitative multiplex osRT-PCR for identification and quantification of Sabin strains in stool samples could be used in such clinical studies.The conventional method for poliovirus isolation from clinical and environmental samples involves growing poliovirus in cell cultures followed by its identification by qRT-PCR ITD v5.0 kit developed at CDC.It is a time-consuming and labor-intensive process that takes 1–2 weeks.In this context, several molecular methods were developed.However, most of them include more than one step to generate the final results and are not suitable for large scale analysis needed for poliovirus surveillance.Some of these methods are based on viral cDNA preparation and PCR amplification, followed by restriction endonuclease analysis, hybridization with specific oligonucleotide probes, or based on a specific amplicon size for each poliovirus serotype.A specific RT-PCR assay was developed used deoxyinosine degenerate primers for identification of poliovirus and was later adapted for quantitation of poliovirus based on the use of real-time PCR.The use of deoxyinosine residues in primers weakens the primers annealing to template consequently lowers the sensitivity of the assay."Recently we've developed two one-step RT-PCR methods for direct identification and quantitation of all three Sabin strains used in OPV in clinical and environmental specimens.The first is multiplex osRT-PCR based on amplicon size to identify Sabin-derived polioviruses in clinical samples, while the other is based on real-time osRT-PCR procedure to quantify individual poliovirus serotypes with SYBR green dye.In this communication the primers described in the previous work were used together with oligonucleotide TaqMan probes labeled with specific fluorescent dye for each OPV serotype to 
perform a qmosRT-PCR assay for identification and quantitation of the three OPV strains in the same reaction. The method proved to be very sensitive, able to detect the equivalent of 2.4–24 CCID50/ml of OPV type 1, 2.5–25 CCID50/ml of type 3 and 0.2–2 CCID50/ml of type 2 in stool supernatant, and 0.03–3.36 CCID50/ml of Sabin strains in DMEM supernatant, with large linearity ranges, and to be very specific. The method rapidly identified and accurately quantified all three serotypes of Sabin viruses from environmental specimens that were confirmed by deep sequencing and previously shown to be genetically closely related to Sabin strains, indicating that it could be used for multiplex identification of Sabin strains in environmental and clinical specimens. The analysis of different lots of monovalent OPV, trivalent OPV and IPV from different manufacturers showed that this method was able to identify and quantify each of their Sabin strains. The ability of qmosRT-PCR to quantify virus suggests that it could be used to develop rapid PCR-based titration and neutralization assays for polioviruses, as the traditional versions of such methods are time-consuming, labor-intensive and not suitable for automation. Such PCR-based methods were developed previously for other viruses. In conclusion, the methods described in this communication represent a simple and rapid alternative to traditional cell culture-based methods for identification and quantification of individual serotypes of Sabin polioviruses in samples generated during clinical trials of new poliovirus vaccines, as well as during routine poliovirus surveillance, and for the development of PCR-based poliovirus titration and neutralization assays that are suitable for the automation needed for consistency of results and high-throughput applications.
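The quantitation step of real-time assays such as this one typically relies on a standard curve built from serial dilutions of a reference stock with known titer, relating Ct values to log10 CCID50/ml. The sketch below illustrates only that arithmetic; it is not the authors' analysis pipeline, and the dilution points, Ct values and sample Cts are hypothetical placeholders.

```python
# Illustrative sketch of quantitation from a qPCR standard curve.
# The dilution points, Ct values and sample Cts below are hypothetical
# placeholders, not data from the assay described above.
import numpy as np

# Hypothetical standard curve: 10-fold dilutions of a reference stock with
# known titers (log10 CCID50/ml) and the Ct values they produced.
log10_titer = np.array([6.0, 5.0, 4.0, 3.0, 2.0, 1.0])
ct_values   = np.array([18.2, 21.6, 25.1, 28.5, 31.9, 35.4])

# Linear fit: Ct = slope * log10(titer) + intercept.
slope, intercept = np.polyfit(log10_titer, ct_values, 1)
# A slope near -3.32 corresponds to ~100% amplification efficiency.
efficiency = 10.0 ** (-1.0 / slope) - 1.0

def estimate_log10_titer(ct):
    """Invert the standard curve to estimate log10 CCID50/ml for a sample Ct."""
    return (ct - intercept) / slope

for sample_ct in (23.0, 29.7, 33.5):  # hypothetical sample Ct values
    print(f"Ct {sample_ct:5.1f} -> ~10^{estimate_log10_titer(sample_ct):.2f} CCID50/ml"
          f" (efficiency {efficiency:.0%})")
```

Checking that the fitted slope implies an amplification efficiency close to 100% is one of the routine sanity checks applied before trusting a reported linearity range.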
An improved quantitative multiplex one-step RT-PCR (qmosRT-PCR) assay for simultaneous identification and quantitation of all three serotypes of poliovirus is described. It is based on serotype-specific primers and fluorescent TaqMan oligonucleotide probes. The assay can be used for high-throughput screening of samples for the presence of poliovirus, for poliovirus surveillance, and for evaluation of virus shedding by vaccine recipients in clinical trials to assess mucosal immunity. It could replace conventional methods based on virus isolation in cell culture followed by serotyping. The assay takes only a few hours and was found to be simple, specific and sensitive, with a large quantitative linearity range. In addition, the method could be used as the readout in PCR-based poliovirus titration and neutralization assays.
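For the deep-sequencing confirmation step, reads are assigned to the Sabin serotype whose capsid sequence they match best; the study used the SWARM and HIVE custom software for this. As a toy illustration of that idea only, the sketch below assigns a read to the closest of three made-up reference fragments by counting shared k-mers; a real analysis would align reads with a short-read mapper against the full Sabin 1/2/3 genomes.

```python
# Toy sketch of serotype assignment by comparing a read with reference sequences.
# This is NOT the SWARM/HIVE pipeline used in the study; the reference fragments
# and the read below are hypothetical placeholders.
from collections import Counter

def kmers(seq, k=8):
    """Return the multiset of k-mers occurring in a sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

# Hypothetical short fragments standing in for the Sabin capsid references.
references = {
    "Sabin1": "ACCGTTAGGCTTACGATCCGTAGGCATTACGGATCCA",
    "Sabin2": "ACCGTTAGGCTAACGTTCCGAAGGCATAACGGTTCCA",
    "Sabin3": "TCCGTTAGGCTTACGATGCGTAGGCTTTACGGATGCA",
}

def assign_read(read, refs, k=8):
    """Assign a read to the reference sharing the largest number of k-mers."""
    read_kmers = kmers(read, k)
    scores = {name: sum((read_kmers & kmers(ref, k)).values())
              for name, ref in refs.items()}
    return max(scores, key=scores.get), scores

read = "GTTAGGCTAACGTTCCGAAGGCAT"  # hypothetical sequencing read
serotype, scores = assign_read(read, references)
print(serotype, scores)  # expected: Sabin2 with the highest shared k-mer count
```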
7
Acute Acalculous Cholecystitis due to primary acute Epstein-Barr virus infection treated with laparoscopic cholecystectomy; a case report
Epstein-Barr Virus is a human herpes virus 4, transmitted through intimate contact between susceptible persons and asymptomatic EBV shedders.It usually presents with fever, pharyngitis and lymphadenopathy .Majority of individuals with primary EBV infection recover uneventfully .Acute Acalculous Cholecystitis is usually seen in hospitalized and critically ill patients with major trauma, shock, severe sepsis, total parenteral nutrition and mechanical ventilation .This work has been reported in line with the surgical case report criteria .Not commissioned, externally peer reviewed.Written informed consent was obtained from the patient for publication of this case report and accompanying images.A copy of the written consent is available for review by the Editor-in- Chief of this journal on request.We report a 25-year- old woman who presented with fever T 38.6C, sore throat, abdominal pain, nausea, vomiting and anorexia.On physical exam, she had right upper quadrant abdominal tenderness without signs of lymphadenopathy.She was in her usual state of health as an average healthy woman with medical history of polycystic ovarian syndrome, gastroesophageal reflux disease and mild intermittent asthma until ten days prior to admission when she had headache, fever, cough and muscle aches.Her primary care physician prescribed her a prophylactic course of Oseltamivir 75 mg twice daily for five days and after the third day of treatment, her symptoms continued to worsen with development of abdominal pain and she walked into our emergency room.Initial investigations showed: negative Flu/RSV by PCR, WBC count, lymphocytes count, atypical lymphocytes, conjugated bilirubin, alanine aminotransferase, aspartate aminotransferase, alkaline phosphatase, positive EBV VCA IgM antibody, negative EBV VCA IgG antibody and high EBV count by PCR which confirmed presence of acute EBV infection.On day 2 of admission, increasing intensity of abdominal pain with worsening of liver function, warranted further investigations: conjugated bilirubin, alanine aminotransferase, aspartate aminotransferase and alkaline phosphatase.Abdominal ultrasonography was unremarkable and abdominal computed tomography showed mild gallbladder distension, mild gallbladder wall thickening and mild pericholecystic fluid collection with no layering stones, sludge or biliary ductal dilation.Hepatobiliary iminodiacetic acid scan showed non-accumulation of the isotope within the gallbladder which confirmed the presence of AAC , .On day 4 of admission, abdominal pain was worsening and her blood pressure dropped into 80/50 mm/Hg.Conservative management was advised initially, but her abdominal pain was intolerable so she opted for a laparoscopic cholecystectomy.Intraoperatively, the gall bladder was found to be edematous, markedly inflamed and no gallstones were found.The final pathology report of the removed gallbladder showed AAC.She was given a single dose of 2 gm of intravenous Cefazolin preoperatively.She was not given glucocorticoids or acyclovir.Her symptoms improved after surgery and she was discharged on the fifth day of admission.Liver function returned to normal levels two weeks after surgery.Although majority of cases with primary acute EBV infection recover without sequelae, few cases of AAC have been reported as a complication of primary acute EBV infection .Kottanattu et al. 
reported, in a systematic review of the literature, 37 cases of AAC in primary acute EBV infection published between 1966 and 2016; all cases recovered without surgery or corticosteroids, following a hospital stay of 25 days or less. In another literature review, Agergaard and Larsen identified 26 cases of AAC in acute primary EBV infection; only one patient had a laparoscopic cholecystectomy and the rest recovered without surgery, and broad-spectrum antibiotics had no impact on the severity of symptoms, the disease course or the length of hospital stay. The distinguishing feature of our patient was that she looked seriously sick on and throughout admission. Her abdominal pain was worsening, she did not tolerate pain medications, and she had a high EBV viral load, which usually correlates with disease severity. Conservative management was advised initially, but she opted for surgery and had a laparoscopic cholecystectomy. She received a single dose of intravenous Cefazolin preoperatively and was discharged on day 5 of admission. AAC is a rare complication of primary acute EBV infection and is usually managed conservatively without surgery; however, our patient had a laparoscopic cholecystectomy due to intolerable abdominal pain. AAC should be suspected in patients with acute EBV infection presenting with abdominal pain. Our institution does not require ethical approval for publishing a case report. This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. First Author: Kamal Rezkallah. Corresponding Author: Kamal Rezkallah. Kamal Rezkallah: literature review, data collection, original draft writing and final approval of the draft to be submitted. Khlid Barakat, Abdurraheem Farrah, Shesh Rao, Monica Sharma, Shyam Chalise and Teresita Zdunek: literature review, draft revising and final approval of the draft to be submitted.
Introduction: Epstein-Barr virus (EBV) is human herpes virus 4, transmitted through intimate contact between susceptible persons and asymptomatic EBV shedders. It usually presents with fever, pharyngitis and lymphadenopathy. The majority of individuals with primary EBV infection recover uneventfully. Acute Acalculous Cholecystitis (AAC) is usually seen in hospitalized and critically ill patients with major trauma, shock, severe sepsis, total parenteral nutrition and mechanical ventilation. Case presentation: We report a 25-year-old woman who presented with acute Epstein-Barr virus (EBV) infection and in whom a hepatobiliary iminodiacetic acid (HIDA) scan confirmed the presence of Acute Acalculous Cholecystitis (AAC). Conservative management was advised initially, but she had a laparoscopic cholecystectomy due to intolerable abdominal pain. Conclusion: AAC is a rare complication of acute EBV infection and is usually managed conservatively, although our patient had a laparoscopic cholecystectomy due to intolerable abdominal pain.
8
Enhancing board motivation for competitive performance of Thailand's co-operatives
The main stream of board research focuses on studying the relationship between board characteristics for instance board size, board diversity, chairman-CEO duality, and the performance of the firm.However, research results are inconclusive as the relationship is not clear and the mechanism that explains the relationship is still in a black box.Therefore, many researchers have been trying to understand this mechanism and board motivation is one interesting factor that should be studied further.Even though a lot of studies have been conducted on the topic of motivation, studies related to board motivation are very few.Most of the research on this topic does not study the directors’ motivation directly.The main stream of governance literature is dominated by the agency theory which assumes that directors are motivated to protect principles’ interest from opportunistic behavior of the agents.However, the validity of this assumption is argued by other theories.For example, the stakeholder theory assumes that board motivation is to protect the benefits of stakeholders whom the board members represent.The stewardship theory is another theory which assumes that the Board of Directors’ are motivated by altruism.Therefore, many studies focus on studying the theory with which board motivation is consistent with.Moreover, Hambrick et al. explained that knowledge on motivation of executive directors has increased.However, knowledge on motivation of non-executive directors or outside directors is still not clear.In the present, good governance practices encourage a company to have more outside directors compared to inside directors.The agency theory explains that outside directors are motivated to protect the shareholder’s benefits rather than their own benefits, because they are independent from a company’s management.Nevertheless, there is an argument that the non-executive director is also an agent for the company owner.Therefore, they will work for their own benefits, similar to the executive directors.Another explanation is that outside directors are motivated by self-interest, such as reputation.They expect that, for their good work, they will be re-elected as a board member or elected as another companies’ board member.Additionally, Fehr and Gächter and Osterloh and Frey used the economic theory of firm to explain that all behaviors of a board member are motivated by external factors such as financial compensation."For a democratic member-based organization such as a co-operative, it is unlikely to lead to an agency problem because all members of the Board of Directors are the co-operatives shareholders and are elected from the co-operative member's meeting.The main motivation of the co-operatives Board of Directors is supposed to work for member benefits because the philosophy of the co-operative focuses on cooperating and supporting each other.Still, there is an objection that motivation in terms of collective action, like working as a member of co-operative Board of Directors, depends on costs and benefits of that work too.Olson described that a person will work for the greater good, if the benefits from his/her works are greater than the costs he/she pays for that work.Therefore, shareholders are motivated to monitor a manager if they believe that the additional personal cost of monitoring will be lower than the benefits which will be received in return.From the researcher’s observation, the most important problem related to co-operative board of directors is the lack of motivation among 
board members, managers, as well as staff, to drive their co-operatives forward.This is because the co-operative is a collective enterprise and one core principle is “one person, one vote,” regardless of the number of shares owned, so that individual has no motivation to invest in the success of the co-operative.Moreover, board members receive very small financial returns.Co-operative board compensation ranged from $60–$2500/year in 1986 while total board compensation levels regardless of company size ranged from $134,000–$250,000 per year in 2014.Therefore, their volunteer work is without much motivation and they tend to spend minimal time, skill and effort in co-operatives compared to the time, skill and effort invested in work that will generate greater personal income and benefits elsewhere.However, past studies of the relationship between the motivation of co-operative Board members and co-operative performance in Thailand e.g. Rapeepat and CPD are qualitative research.It still does not explain the relationship based on clear evidence."Hence this article is a test of the relationship between the co-operatives board motivation and the co-operatives performance using Vroom's Expectancy Theory of motivation.This study would be beneficial to both theoretical and policy development to enhance co-operative board members’ motivation.The co-operatives operate on a self-help philosophy.Most co-operatives are governed by board members who work voluntarily and are elected by the members.This philosophy implies that the membership influences the management of board members and the co-operative manager while the board members are motivated to protect the member’s benefits."However, the results of empirical studies and experts' opinions on the co-operatives performance have pointed to various governance problems, such as financial scandals, the failure of democracy, faulty management, monopoly of power, the restriction of members’ participation, rent seeking behavior, corruption by boards of directors, lack of transparency in the decision-making process or weakness of monitoring and control mechanisms, etc.These raise doubts about the quality of co-operative governance.One question regarding the issue of governance is the duty of board members in working or not working as representatives of the members, since co-operative board members are not under internal or external pressure as an investor-owned firm’s board members are.Members rarely participate in their board members’ work, due to having a lack of involvement in the election of the board members and a lack of member participation in monitoring and controlling the board operations.Also, most co-operatives do not have board performance evaluation mechanisms because their operation is complex and multi-purpose which leads to conflict on how to evaluate board performance.Consequently, it is difficult for the co-operative Board of Directors to have important information and to monitor the co-operatives operations efficiently.Conversely, an investor-owned firm has the sole purpose of maximizing profit.Moreover, companies registered in the Stock Exchange have to comply with corporate governance practices related to board roles and responsibilities and board evaluation.Accordingly, to evaluate an investor-owned firm’s performance is quite simple.Normally, co-operative board members are not under pressure from outside because their shares are not traded on the market.Thus, there is no external control from the market, and the rules to protect members’ 
rights are weak.Oppositely, a public company limited is bound by the rules and regulations of the Stock Exchange which aim to protect minor shareholders’ rights which include for example, the right in nominating and selecting a person to be a board member, right in monitoring board operations, right in receiving reports on board operations and a company’s performance, etc.For a company to have good performance and be attractive to investors, Board of Directors has to put their effort in monitoring and controlling the company’s management.Based on the operations and governance problems of co-operatives and restricted understanding of board behaviors, particularly with Board conflicts between motivations to work on the basis of voluntary work for the public’s interest or for personal gain, this research studies the Thailand’ co-operatives board motivation and the factors influencing it which has never been studied before as it shows in Table 1.This research will improve the understanding of the co-operative board motivations which will help in understanding the co-operatives board behaviors and may lead to improved governance quality and co-operative’s performance in Thailand.Chareonwongsak developed a new conceptual framework on the co-operative board member’s motivation and the co-operative’s performance based on Vroom’s expectancy theory which explained that motivation is a function of three factors: expectancy, instrumentality and valence.Expectancy is the perception of how increased effort leads to better performance.This includes the availability of resources and occupational support.Instrumentality is the perception of how much benefit will result from performance.This factor depends on transparency in the evaluation process and how the relationship between performance and outcomes is understood.Finally, valence is the degree of value placed on performance outcomes.Also, board motivation depends upon expectancy, instrumentality and valence.Expectancy is affected by the decision making process, board authority and function, board composition, board size, board term, board structure, board meeting, CEO-chairman duality, board skill and support given to the work of the board.Instrumentality is affected by transparency in the evaluation process, other direct benefits and compensation whereas valence is affected by financial status and financial burden.The conceptual framework can be illustrated in Fig. 
1. The conceptual framework proposed by Chareonwongsak is used in this study, and the hypotheses to be tested are summarized in Table 2. In this study, the primary data were collected using questionnaire surveys of co-operative board members and managers. The secondary data, the financial and non-financial indicators of the co-operatives, were collected from the Co-operative Auditing Department and the Co-operative Promotion Department and were used to build the co-operative performance indicators. The sampling unit is the member of a co-operative board of directors in Thailand, and the population comprises 76,469 board members from 7165 co-operatives. The sample size was determined based on the data analysis method, structural equation modeling (SEM). SEM is a large-sample statistical technique that requires a sample size of at least 200; accordingly, 330 samples were planned to be collected for this study. To cover all types of co-operatives and to reach board members who are truly representative of the population, the most suitable sampling method for this study was found to be proportional stratified, multistage random sampling. The method involved stratifying all co-operatives by type and calculating the number for each co-operative type in proportion to the population; in the first stage, co-operatives were drawn randomly from each stratum according to the calculated number, and in the second stage, board members were drawn randomly from those sampled co-operatives. However, a stratified quota sampling method was used because some strata were too small and would not have been representative; therefore, the sample size of each stratum was adjusted. Then, co-operatives from each stratum were selected to meet their quota number, and board members were drawn randomly as representatives of those co-operatives. Data collection could have been done in several ways. The researcher used telephone interviews because the data needed from co-operative board members and managers are detailed and explanations, which can be provided over the phone, might be necessary. Moreover, compared with face-to-face interviews, this method reduces the cost and time of reaching co-operatives spread throughout Thailand. The response rate from the telephone interviews was about 95%, with only a few respondents refusing to participate or not completing the survey; these samples were excluded from the analysis. However, because of unforeseen circumstances during the survey process, the researcher was unable to reach the minimum planned number of respondents for some co-operative types. For one complete sample unit, data were needed from both a co-operative board member and the co-operative manager, but in many cases both were not available from the same co-operative. Another reason preventing the researcher from accessing samples was the lack of complete contact information for co-operative board members and managers. The actual number of respondents from each co-operative type is shown in Table 3. The objective of this study is to quantify the extent of the relationships among the latent variables (between expectancy, instrumentality and valence and motivation, and between motivation and performance). Most statistical methods cannot estimate latent variables. Factor analysis can estimate latent variables, but it is used to group variables or to test the hypothesis that a set of items is associated with a latent construct. For that reason, only structural equation modeling could meet the needs of this study, because SEM allows a 
set of relationships between one or more independent variables, either continuous or discrete, and one or more dependent variables, either continuous or discrete, to be examined.Both independent variables and dependent variables can be either measured variables or latent variables.Before the analysis, the collected data had to be examined for any missing data, outliers, nonlinearity and non-normality of data first because SEM is the analysis of correlation among variables.Having missing data, outliers, nonlinearity and non-normality of data will affect the variance-covariance among variables and affect the result of SEM analysis.The numbers of samples after data preparation process were 319.After finishing the data preparation process, the researcher calculated the sampling weights, which were used to adjust the proportion of sample size from each stratum to be similar to a population.The calculated sampling weights are shown in Table 3.As the data is not normal and most of it is categorical or ordinal data, the Weighted Least Squares Means and Variance Adjusted method in SEM, which does not assume normally distributed variables and provides the best option for modeling categorical or ordinal data, was used as the estimation method.It was found that the motivation level of the co-operatives board members was rather high.The mean average of the answers from different items ranged from 4.84 to 5.85, and the standard deviation was in the range of 0.91 and 1.44.The mean average of Expectancy was in the range from 4.05 to 4.48 with the standard deviation ranging from 0.63 to 0.97.The mean average of Instrumentality was in the range from 4.86 to 5.85 and the standard deviation was in the range from 0.96 to 1.40.The mean average of Valence was in the range from 4.07 to 5.74 and the standard deviation was in the range of 1.07–1.47.This variable had more dispersion of scores than the other variables.It appears that co-operative board members do not value financial benefits, or being honored for being a committee member as a major incentive.These two items had the lowest means.The compensation they value the most is feeling of having done good and gaining satisfaction from having utilized their full potential.In the measurement model, there are four latent variables to be measured: the motivation of co-operative board members, expectancy, instrumentality x valence, and the co-operative’s performance.The multiplicative form between instrumentality and valence is chosen because in this study, the questions used to measure Instrumentality and the questions used to measure Valence correspond with each other item by item.This multiplicative form is more consistent with the theory.This model specification is consistent with the study of Campbell and Pritchard.Finally, it was found that MOTIV3, MOTIV4 and MOTIV5 are appropriate indicators for measuring motivation, EXPECT1 and EXPECT2 for measuring EXPECT, IV2, IV7, and IV8 for measuring Instrumentality and Valence, and ROE52, COOPLEVE and NOLOSS for measuring co-operative performance.The final adjusted model had Goodness-of-fit indices that all passed the criteria.The model’s degree of freedom was 38.χ2/degree of freedom ratio was equal to 1.43.Root-mean-square error of approximation was 0.036.CFI was equal to 0.986 and Tucker–Lewis index was equal to 0.980.For the test of convergent validity, standardized factor loadings, composite reliabilities, and average variance Extracted were considered.The CR had to exceed the level of 0.6, while the AVE had to 
exceed the level of 0.5.From Table 7, it was found that all standardized factor loadings exceed the threshold value of 0.4; all CRs exceeded the threshold value of 0.6, and all AVEs exceeded the threshold value of 0.5, except the AVE of “PERFORM” which was equal to 0.489, a little lower than the threshold level.In an “interesting”, “first-time” study, Ping argued that AVE slightly below 0.50 might be “acceptable” and it is considered acceptable if it does not produce discriminant validity problems.According to Ping, AVE of 0.489 for “PERFORM” can be considered “acceptable” and can be summarized as all latent variables passing the test of convergent validity because this study is one of the first studies to be conducted on board motivation and performance of Thailand’s co-operatives and discriminant validity was not seen to be a problem.For the test of discriminant validity, the AVE values of any two constructs have to be greater than its squared correlation.It can be seen from Table 7 that no matter which pairs of constructs are considered, both AVE values are greater than its squared correlation.In conclusion, all the latent variables pass the discriminant validity test.In conclusion, the measurement of all the latent variables is valid and reliable.After the appropriate measurement model of latent variables was obtained from conducting CFA, a structural model which presents the causal relationships among those latent variables, supplemented with other observed variables, was then constructed using the SEM.Hypothesized causal relationships between latent variables and other observed variables were included.All causal relationships needed to be statistically significant and the sign of their coefficients had to be consistent with the theory or hypothesis.A number of variables that were expected to determine each element of motivation were tried in the model according to the predetermined hypotheses.Furthermore, control variables were also included to reduce the estimation bias that would emerge from the omission of variables.Control variables of the board motivation that were tested are sex, age and education level of co-operative board members, homogeneity/heterogeneity of goods and services offered, geographic dispersion and size of co-operative.The control variables of the co-operative performance which were tested are government support, co-operative league support, technology use, market condition, co-operative member characteristics, operating problems, business plan, law and regulation, member participation, level of communication in co-operative and co-operative type.The final structural model that satisfied all the criteria of model estimation is illustrated by Fig. 
2.There are five explanatory variables of Expectancy, which are BFUNC6, BCOMPO4, BAUTHO1, BMEET2, LNSEMINAR, and two explanatory variables of instrumentality and valence are included, which are LNMEETAL1Y, and COMRULE4.Three control variables of co-operative performance are included which are DUALITY,FINTYPE, and GOVSUP3.For control variables of board motivation, no variable were included in the model because some cause model misfit whereas some might fit the model but they are not statistically significant.All the fit indices indicate the model fits satisfactorily and the data is presented in Table 8.Table 9 presents the estimated standardized factor loadings of all causal relationships in the final model.All factor loadings are statistically significant at a confidence level of more than 95%.For the hypotheses testing, all 4 hypotheses are accepted.From the results, there are two points that should be discussed.The study result supports hypothesis No. 1, i.e. the work motivation of the co-operative boards has a significant positive relationship with the performance of the co-operative they serve, which is measured by return on equity, having no loss in the last 2 accounting periods and CPD’s classification of co-operative’s level of internal management quality.Expectancy and instrumentality multiplied by valence of co-operative boards was found to have a positive relationship with their work motivation, which is consistent with hypotheses No. 2, 3 and 4.The importance of the above result is that it provides knowledge about the direct relationship between the board’s work motivation and the co-operative’s performance, which has never been found in past related research.Some research work in the past addressed the motivation to become a board member or to work as a volunteer in a non-profit organization, but the studies did not investigate the relationship between such motivation and organizational performance.Although there were some studies emphasizing the relationship between extrinsic motivations, for example, financial compensation, positions, etc. 
and the performance of the firm, they neglected the difference between the motivation of board members and the process of motivations of the board members.Even though there was some research in the past that studied the relationship between board motivation and board effectiveness, Alby and Taylor, Richard, and Thomas, etc.), research in this area still had some problems.The results of this research lack reliability because the research method had several weaknesses, for example, the number of respondents was relatively small or there was a reliability problem with the data, etc.Another problem is the lack of criteria for defining and measuring board effectiveness due to the lack of a clear understanding about the mechanism that connects board effectiveness and firm performance."This research is an effort to study another dimension of the relationship between board motivation and the performance of organizations by using Vroom's expectancy theory, which is a theory that has been recognized and widely used in research related to the study of motivation.This research has attempted to cover the weaknesses of past research on the motivations of board members in nonprofit organizations by using quantitative research methods and utilizing a relatively large sample size and data from several sources.The descriptive statistics of variables which are a manifestation of instrumentality and valence reflect the fact that the people who took over the positions as board members had diverse motivations."By observing the variables which have the highest average scores, i.e. INSTRU6 and VALENCE6, it is shown that the co-operative board members gave the most importance to doing activities for the benefit of others.This is consistent with research in the past, which stated that altruism was an important motivation for people to work as a member of the co-operative and other non-profit organizations, while financial incentives, having increased financial benefit had the lowest average.However, the instrumentality and valence variables that are significantly correlated with effective motivations, which have an influence on the co-operative’s performance, are not those of altruism.These effective motivations are not those of financial motivation, but they are instead of receiving honors and awards from the co-operative or government agencies, feelings of accomplishment and being well-known in the community and society.The importance of this result is that it challenges beliefs about the good motivation of the co-operative’s board members."Research in the past has indicated that good motivations are those associated with altruism and one who wishes to volunteer which will make non-profit organizations or co-operatives successful and developed, while bad motivations are associated with seeking one's own interest.However, this study found that the motivation of doing well for the benefit of society and that financial motivation do not have a significant relationship with effective work motivation of the co-operative board members.Why the motivation to do activities for the benefit of society does not have a significant relationship with effective work motivation is probably connected with the lack of clarity of the indicators that reflect the interests of society.A co-operative is an organization that has the important principle to act for the benefit of members and society, causing the operation of a co-operative to be complex and have a variety of purposes, with no conclusion of how to correctly measure its 
performance.Co-operative board members with incentives consistent with the principles of the co-operative could have a different understanding regarding indicators about the performance of the co-operative.Furthermore, the indicators used for the analysis in this model are the financial ratios of the co-operatives, which might not be appropriate indicators to measure the social benefits."The reason why financial incentives do not have a relationship with the co-operative board's effective work motivation may be explained by the opinion of experts, who argued that the financial compensation that co-operatives provide to their board members is not much and people who become board members are aware of this issue already.In addition, it is possible that the respondents to the questionnaire survey did not truly answer this issue in order to maintain their own image.The reason why the three IV variables above have a positive relationship with effective work motivation of board members can be explained as follows.These motivations themselves require an obvious output and outcome resulting from the board performance because they are directly connected to the output and outcome of the co-operative’s performance that can be measured."For example, a director with the motivation of ‘Being honored and awarded by the co-operative or the public sector' will try to push the performance of the co-operative to meet the criteria to be awarded.This is consistent with the criteria of the government agencies, which often use the financial ratios to rank co-operatives.Another example is that if the directors have motivation to be well-known in the community and society, they will try to push their performance to the point that their co-operative will have no loss or to be classified as having good quality of internal management, so that the Co-operative will gain a good reputation.However, characteristics of the three kinds of motivation can be interpreted in two ways.The first is that these motivations come from the basic need of human nature to be respected and to feel proud.The second is that these motivations are the desire to grow in their careers.If the second interpretation is chosen, this will conform to past literature, which indicated that board members of the firms seek reputational benefit or opportunities in order to become a director in other organizations.In the case of co-operatives, board members must be elected from co-operative members, so there is less opportunity for a board member to be a board member in other co-operatives.However, according to the opinions of some experts, co-operative board members may have the motivation to build their reputation in order to be elected as board members of co-operatives at higher levels, such as the co-operative federation, co-operative league, etc.In other words, being a board member in a co-operative is used as a ladder to step up to a higher political level.Furthermore, the analysis result also shows that instrumentality multiplied by valence has more influence on work motivation than expectancy.This result can be explained as follows.Each co-operative board member may perceive that their effort does not have much effect on the performance of a co-operative, as a co-operative is an organization with multiple objectives, both business and social objectives.Moreover, there are many factors that affect the performance of a co-operative.This includes member participation, policies and promotions from the government, quality of the management team and 
employees, and the competitive environment of the market. Many of these factors are beyond the control of the board. However, what board members expect to gain from serving on the board is closely related to aspects of the co-operative's performance that can be measured. For example, if the co-operative is profitable and can pay large dividends to members, board members will be honored by the members and feel successful. In this study, it was found that the work motivation of co-operative boards has a significant positive relationship with the performance of the co-operatives they serve. Expectancy, instrumentality and valence of co-operative boards have a positive relationship with their work motivation. The importance of these results is that they provide knowledge about the direct relationship between the board's work motivation and co-operative performance, which has not been established in past related research. The instrumentality and valence variables that are significantly correlated with effective motivation, and which influence co-operative performance, are not those of altruism. Nor are these effective motivations financial; they are instead associated with being well known in the community and society, receiving honors and awards from the co-operative or government agencies, and feelings of accomplishment. The variables that have a significant positive relationship with expectancy are: board authority and function, measured by whether the board is the decision maker for strategy, policy and goals or a board that rarely participates in co-operative activities; board composition, measured by the degree to which board members understand the context of the co-operative because they are also members of the co-operative; board meeting quality, measured by the degree to which the meeting agenda concerns only minor issues and serves merely to inform; and knowledge of directors, measured by the log-transformed average number of seminars on co-operative development attended in a year. The variables that have a positive relationship with the instrumentality and valence of the co-operative board include the board members' opinion of the fairness and reliability of the compensation-setting process and the log-transformed meeting allowances that board members receive in one year. Results from the model show that other factors with a significant relationship with co-operative performance are the way the co-operative appoints its managers, whether the co-operative is of the financial type, and government price intervention. These findings lead to the following practical proposals to enhance the motivation of co-operative board members and to improve co-operative performance. In this study, board members whose motivation positively affects co-operative performance are those who value feelings of accomplishment, reputation and recognition. Therefore, creating appropriate motivation among co-operative board members means providing proper incentives or rewards for them. The co-operative movement should establish honor certificates or awards for co-operative board members in a hierarchical manner from the co-operative level to the national level. Clear, transparent and credible indicators and mechanisms for evaluating the performance of co-operative board members should be designed, to be used as information when providing awards or honor certificates to board members and boards of co-operatives at 
all levels across the country. Knowledge and skills raise the expectancy of co-operative board members, so one way to help improve the performance of co-operatives is to recruit board members with knowledge and ability. A succession plan that prepares co-operative members with knowledge about the co-operative system, an understanding of the roles of members and board members, and knowledge of the business and management system will give the co-operative a pool of quality members who can be elected to serve as board members in the future. Seminar attendance is positively correlated with expectancy. This implies that co-operative board members should develop the specific knowledge and skills necessary for working as board members. Co-operatives should announce the election of new board members well in advance, so that incoming board members have an opportunity to receive an orientation that equips them with the knowledge and skills needed to serve the co-operative. Board authority and functions are related to board member expectancy. Therefore, co-operatives should focus on the development of good governance in co-operative organizations, also called 'good co-operative governance', especially developing board members' knowledge and understanding of the principles and guidelines of good co-operative governance and of the roles and responsibilities of co-operative board members. Owing to its scope and resource constraints, this study has some limitations that deserve to be clarified. This study could not analyze the effect of factors that have or have not been applied to co-operatives in the past. Analyzing the effect of these factors is worth pursuing because it could point to solutions for transforming co-operatives; for example, changes in co-operative principles may lead to the emergence of a new generation of co-operatives. To reduce the bias that may arise in data about board motivation collected through self-assessment, as in this study, these data should also be collected using different methods, for example 
from other sources, such as co-operative members. The sample used in this research might not be the best representative of the population, owing to the limited availability of financial and non-financial data on the co-operatives and of contact details for all co-operative board members. The measurement of co-operative performance is still a challenge, particularly the measurement of performance in social or community terms. Developing better measures of co-operative performance will lead to more accurate performance assessment, and analysis of such data may lead to conclusions different from those of this study. In this study, the latent variable "PERFORM", which is measured by three manifest variables (ROE52, NOLOSS and COOPLEVE), has an AVE value of 0.489, slightly below the threshold of 0.5. This might be attributable to one of the variables, probably "ROE52", a financial indicator used to measure co-operative performance that has a high measurement error variance. The latent variable "PERFORM" could have higher reliability and validity if alternative indicators of co-operative performance are included in the model in future research. The financial and non-financial indicators used in this research to measure co-operative performance come from only one year's observation; using time-series data might produce different results. Different types of co-operative may differ in the details of their behavior. As this study analyzes co-operatives' behavior as a whole, it may have missed details of the behavior of each type of co-operative, and further study of each type is needed. The authors specifically state that no competing interests are at stake and there is no conflict of interest with other people or organizations that could inappropriately influence or bias the content of the paper.
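The expectancy-theory structure described above, in which work motivation is driven by expectancy together with instrumentality weighted by valence, can be illustrated with a small numerical sketch. The outcome names, the 1-5 scales and all scores below are hypothetical and are not taken from the study's questionnaire or its SEM specification; the sketch only shows how a recognition-driven profile can outscore an altruism-driven one when the perceived link between co-operative performance and the valued outcome is weak for altruistic goals.

```python
# Illustrative sketch of Vroom's expectancy framework as discussed above:
# work motivation ~ Expectancy x (Instrumentality x Valence across valued outcomes).
# Outcome names, 1-5 scales and scores are hypothetical, not the study's data.

from dataclasses import dataclass

@dataclass
class Outcome:
    name: str
    instrumentality: float  # perceived link between co-op performance and this outcome (1-5)
    valence: float          # personal importance of the outcome (1-5)

def motivation_index(expectancy: float, outcomes: list[Outcome]) -> float:
    """Expectancy x mean(Instrumentality x Valence), rescaled to 0-1 for comparability."""
    iv = sum(o.instrumentality * o.valence for o in outcomes) / len(outcomes)
    return (expectancy / 5.0) * (iv / 25.0)

recognition_driven = [
    Outcome("honoured or awarded by co-op or government", 4.5, 4.8),
    Outcome("feeling of accomplishment", 4.2, 4.6),
    Outcome("well known in the community", 4.0, 4.4),
]
altruism_driven = [
    Outcome("doing activities for the benefit of others", 2.5, 4.9),  # high valence, weak perceived link
    Outcome("increased financial benefit", 2.0, 1.8),
]

print(f"recognition-driven director: {motivation_index(3.0, recognition_driven):.2f}")
print(f"altruism-driven director:    {motivation_index(3.0, altruism_driven):.2f}")
```

Under these assumed scores the recognition-driven profile yields the higher index, mirroring the pattern reported above.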
This study aims to answer the main questions “does the motivation of co-operative boards of directors affect co-operative performance in Thailand, and to what extent?”, and “Which factors affect the motivation of a co-operative's board of directors?” The methods used for the study is the Structural Equation Modeling (SEM). Data used for the model estimation are collected primarily by questionnaire surveys from both Board of directors and the managers from the co-operatives in Thailand. Secondary data is the financial and non-financial indicators of the co-operatives, which were collected by Co-operative Auditing Department and Co-operative Promotion Department. The study result suggests that the motivation of co-operative boards of directors significantly affects co-operative performance. Factors that are found to affect board member motivation include board authority and function, board composition, board meeting quality, board members' skill, transparency in the evaluation and compensation setting process and financial compensation.
9
Growth faltering in rural Gambian children after four decades of interventions: a retrospective cohort study
The combination of fetal growth restriction, underweight, stunting, and wasting in later childhood, suboptimal breastfeeding, and micronutrient deficiencies have been estimated to cause more than 3 million child deaths annually, equivalent to 45% of the global total.1,Among these factors, the association between undernutrition and mortality is confounded by the effects of deprivation but is probably at least partly causal as evidenced by the greatly elevated hospital case fatality rates of undernourished children compared with better nourished children.2,The Millennium Development Goals adopted underweight as a key indicator for MDG1, but stunting has since been adopted as the preferred indicator because it offers a more stable index of long-term malnutrition.Latest estimates suggest that rates of stunting have been declining in most regions, but there remain 159 million children with stunting worldwide.3,The prevalence of stunting has declined most slowly in sub-Saharan Africa, and as a consequence of population growth the absolute number of children with stunting has increased.3,Stunting rates fall rapidly as countries pass through the economic transition, but the key elements of progress that alleviate growth faltering are poorly understood, thus limiting the design of interventions and the targeting of health and development inputs in populations that remain impoverished.In this study, we analyse a longitudinal dataset spanning almost four decades of growth monitoring in three rural African villages that have received an unprecedented level of health-orientated interventions.A meta-analysis4 of previous interventions for water, sanitation, and hygiene has not yielded strong grounds for optimism regarding the likely efficacy of such investments at the levels currently offered.The analysis of randomised trials included more than 4600 children studied over 9–12 months of intervention, and its findings showed no evidence of any beneficial effect on weight-for-age or weight-for-height and only a marginally significant effect on height-for-age of less than a tenth of a standard deviation.Additionally, the Lancet Series on Maternal and Child Nutrition5 reinforced the conclusion that nutrition interventions alone will have little effect on childhood undernutrition and estimated that, even if scaled up to 90% coverage, the implementation of all of the currently identified evidence-based interventions relating to nutrition would eliminate only about 20% of stunting globally.The results of ongoing trials to test the effect on growth of WASH interventions are keenly awaited.6,7,Evidence before this study,We searched PubMed and subsequent reference lists of relevant articles, with combinations of the terms “secular trends in growth”, “growth faltering”, “undernutrition”, “wasting”, “stunting”, “underweight”, “African children”, “rural”, “underfive”, and “infants” between June 1, 2012, and Feb 28, 2015.All studies published between Jan 1, 1980, and Feb 28, 2015, that had the relevant search terms were included.The quality of the evidence was inadequate for the research questions that we posed, including what the secular trends were in growth in rural African children younger than 2 years during the past four decades, and how the effect of seasonality on the growth of these children has changed during the past four decades.A small number of longitudinal studies from east and central sub-Saharan Africa described the growth patterns in cohorts of young African children over a period of less than a 
decade, assessing the effect of seasonality, immunisation uptake, and maternal health factors on the patterns of growth faltering.These findings showed that weight declined after the first 3 months in infants and that improved growth in infancy was associated with immunisation status and indices of adequate maternal nutritional status, whereas the rainy season was associated with reduced growth velocity.However, none of these studies described secular trends in these growth patterns.Additionally, the multicountry analyses used cross-sectional data, making interpretation of trends in growth faltering over time within individual populations difficult.Several studies from southern Africa assessed the secular trends in growth in children of school age and older children, whereas other researchers combined different cohorts in their analyses, making it difficult to contextualise the associated trends in the social environmental and health interventions within the respective populations.Added value of this study,To our knowledge, this study is the first to describe in fine detail the secular trends in longitudinal and seasonal growth patterns of children in a rural sub-Saharan African community with a constant sampling frame.We have documented the introduction of a series of nutrition-specific and nutrition-sensitive interventions resulting in an unprecedented level of health care in these villages.Simultaneous socioeconomic transitions have occurred with increased access to formal education, employment, and income through remittances from family members overseas.Families have become much less reliant on subsistence farming for their income and nutritional needs.These changes have resulted in reduction of mortality to a tenth of its former level in children younger than 5 years, and major reductions in diarrhoeal and other morbidity.Growth has improved but, despite these profound health and socioeconomic changes, the patterns of childhood growth faltering persist with stunting prevalence remaining at 30%.Our findings indicate that communities must exceed a very high threshold for health and environmental change before growth faltering will be eliminated.Implications of all the available evidence,Children in resource limited settings, particularly in sub-Saharan Africa, continue to have suboptimal growth patterns despite access to public health interventions such as immunisation, clean water, and sanitation."Our analysis suggests that mitigation of growth faltering will need these public health interventions to be combined with many other improvements in children's environments, perhaps including improved housing with the provision of piped water directly into the home.Evidence from countries that have passed through the economic transition suggests that poverty reduction promotes such improvements and is accompanied by rapid declines in stunting.The implication therefore is that there is a very high threshold for improvements in living conditions, disease elimination, dietary sufficiency, and access to health care that must be exceeded to eliminate malnutrition.On this basis, we predict that current WASH interventions might not be sufficiently intensive to yield a substantial improvement in child growth, and that greater efforts will be required to meet the new UN Sustainable Development Goals.8,In this study we assessed the aggregate improvements in child growth associated with progressive improvements in a wide range of nutrition-specific and nutrition-sensitive interventions in three rural 
Gambian villages that have been under continuous growth monitoring for almost 4 decades.We did a retrospective cohort study using routine growth monitoring data for all children whose date of birth had been recorded to assess trends in growth faltering in children younger than 2 years in the West Kiang region of The Gambia during the past four decades.Three rural villages in this region have benefited from free health care provided by the UK Medical Research Council for the past 40 years.Since the 1970s there have been increasing levels of support and interventions such that these villages have benefited from unprecedented levels of nutrition-specific and nutrition-sensitive interventions compared with other such communities in rural low-income settings.Growth monitoring was done on a monthly basis in the 1970s but from 1983 onward, measurements were made at birth, 6 weeks, 3 months and then every 3 months thereafter.Diseases were recorded both at regular child ‘well baby’ clinics and when mothers presented with a sick child, and here we focus on clinical diagnoses for pneumonia, chest infections, diarrhoea, and malaria.Malaria diagnoses were based on positive blood films and, since 2007, on rapid diagnostic tests.9,As described elsewhere,9 the climate in the intervention area has a long, dry harvest season and a wet so-called ‘hungry’ season when agricultural work, depletion of food supply, and infectious diseases are at their peak.Ethics approval for the demographic surveillance of the three villages was granted by the Joint Gambian Government/Medical Research Council Unit The Gambia Ethics Committee.Standard anthropometric measurements were done in the clinic by trained clinic staff and Z scores were calculated against the WHO 2006 growth standards.10,We defined stunting, wasting, and underweight as height-for-age, weight-for-length, and weight-for-age of less than 2 SDs below the WHO reference median.Further details are provided in the appendix.We fitted the effects of age and season on repeated growth parameters using random effects models.Models for boys and girls and each decade were fitted separately.To describe secular changes in rates of stunting, wasting, and underweight, we fitted random effects logistic regression of the binary variable on the first four orthogonal polynomials in age and the first pair of Fourier terms for season.To describe the effect of season on growth, we obtained seasonal patterns of body size by Fourier regression, as described in the appendix.11,To describe the changes in body size with age, we produced plots of mean Z score versus age by fitting age with ten-knot cubic regression splines and controlling for season by including the first pair of Fourier terms.We quantified growth faltering as the drop in Z score during the 18 month interval between 3 months and 21 months of age."These estimates are all simple linear combinations of the regression coefficients and their standard errors calculated using the variance–covariance matrix for the regression coefficients, using Stata's post-estimation command lincom.We did not do any formal statistical hypothesis tests.With such large volumes of observational data almost any difference examined would be significant, so statistical significances poorly discriminate between important and trivial patterns in the data.Instead, we focused on estimation of effect sizes and their confidence intervals.All analyses were done with Stata 12.The UK Medical Research Council has provided sustained support for our unique 
cohort over many years and approved our general research plans every 5 years.MRC played no other role in interpretation of the data or preparation of the manuscript.The corresponding author had access to all of the data and had final responsibility for the decision to submit for publication.From May 1, 1976, to Feb 29, 2012, 4474 children younger than 2 years from these villages were seen at the child clinics in Keneba.Children were included in this analysis if their date of birth was known accurately and they visited the clinic on six or more occasions, giving a total of 3659 children eligible for the study.Those ineligible included 24 with unknown date of birth and 791 visitors who attended the clinic on five or fewer occasions.The median number of visits per child was 16, resulting in a total of 59 371 visits at which anthropometric measurements were made.Most deliveries occurred at home in the presence of a traditional birth attendant but a trained midwife completed a baby check including anthropometric measurements within 5 days of delivery.We analysed secular trends in birth size, because it is an important determinant of postnatal growth and attained size.Data about birth size were available for 2728 babies."We excluded length data because there were more missing data than for weight and head circumference and because birth lengths measured with a length mat in the babies' homes are inherently less reliable than the other measurements.During the four decades of the study period, birthweight Z score increased by 0·26 from a starting point of −0·85.Head circumference at birth Z score increased by 0·58 from −0·36, thus ending up slightly above the WHO standards at 0·22.A small part of this increase might be attributable to a steady increase in maternal height totalling 28 mm.Figure 2 captures the characteristic growth patterns of these rural infants.They are born small and continue to fall away from the WHO standard length centiles throughout the first 2 years of life.Their weight shows early catchup while the infants are still fully breastfed and largely protected from infections; this trend is magnified in their weight-for-length due to the simultaneous decline in length.Mid-upper-arm and head circumferences show a similar resilience in very early infancy.The figure also illustrates the secular trends in growth during the four decades.Length shows a consistent, but limited, improvement.At 2 years, length-for-age Z score had improved by 0·74 from a starting point of −2·10.Weight and head circumference showed an initial improvement by the second decade but little further gain.Weight-for-length showed absolutely no change in the second year of life.Mid-upper-arm circumference increased by a quarter of a Z score.The prevalence of stunting at 2 years almost halved from 57% to 30% and the prevalence of underweight decreased from 39% to 22%.There was no change in the prevalence of wasting.Growth failure is markedly seasonal in this environment, with greater deficits occurring in the rainy season when infections are more common and maternal care declines due to the pressures of farming.Figure 4 shows that there has been a substantial attenuation of the seasonality of growth during the four decades studied.When assessed as the amplitude of Z score fluctuation, this measure was significant for all indices in the order of a tenth of a Z score.We defined growth faltering on the basis of the differences in Z score between 3 months and 21 months post partum.In the 1970s, Z scores for 
length-for-age, weight-for-age, weight-for-length, and head circumference all fell by between 0·79 and 0·95.Over time, this fall was slightly attenuated for length-for-age and weight-for-length, and more markedly attenuated for head circumference.The decline in Z score for weight-for-age and mid-upper-arm circumference did not change during the period studied.The incidence of diarrhoea, malaria, and bronchiolitis in the children younger than 12 months fell by 80% during the four decades studied.Conversely, the incidence of pneumonia seemed to increase during the four decades.Goal 2 of the SDGs, “to end hunger, achieve food security and improved nutrition, and promote sustainable agriculture”, is accompanied by the target to achieve the internationally agreed goals for stunting and wasting in children younger than 5 years by 2025.For stunting, this goal would require a 40% reduction from the current estimate of 159 million stunted children to reach the target of less than 100 million.In Africa there has been a disappointing decline in the prevalence of stunting from 42% in 1990 to 32% in 201512 and, because of population growth, the absolute numbers of children with stunting actually increased from 47 million to 58 million during this period.The prevalence of stunting is now predicted to stabilise at that level because continued population growth offsets a slower-than-required decline in prevalence.By comparison, during the same period the prevalence of stunting in Asia decreased from 48% to 25% and the total number of children with stunting declined from 189 million to 84 million.12,Elimination of stunting creates a complex and paradoxical challenge, which suggests that one or more key causative factors remain unknown.On the one hand, nutrition-specific interventions have repeatedly shown very limited efficacy even when implemented under the optimal conditions of randomised trials,2,13–17 whereas on the other hand, stunting resolves rapidly as wealth and living conditions improve in countries passing through the economic transition.11,18,The longitudinal data presented in this study add to this challenge.During almost four decades the Medical Research Council has made sustained investments in health care and nutrition-related infrastructure within our core study villages; these inputs are unparalleled across rural Africa and would be prohibitively expensive for governments of low-income countries to roll out nationwide.These villages have access to antenatal and postnatal care, and round-the-clock access to clinicians and nurses in a well equipped and efficient primary health-care clinic.All health services are free of charge.All children are fully vaccinated, receive vitamin A, mebendazole, and other health interventions as per WHO protocols.Breastfeeding rates are among the very best worldwide and are further supported by Baby Friendly Community Initiatives accompanied by regular messaging in support of exclusive breastfeeding for 6 months.Open defecation and water obtained from contaminated open wells have been universally replaced by latrines in all compounds and tube well water supplied through clean pipes to standpipes around the villages.These interventions have had a profound effect on mortality in children younger than 5 years9 and the incidence of most diseases, especially diarrhoea.19,Further, children attend regular well-baby checks with growth monitoring and we provide a dedicated treatment centre for severely malnourished children to treat those who do become malnourished.The 
remittance economy from village members who have migrated overseas, together with incomes from employment at the Medical Research Council, have greatly improved food security and attenuated the stress of the so-called hungry season as reflected in the reduction in the amplitude of seasonal growth faltering in figure 4.This increased wealth has also improved housing conditions and dispersed families over a wider area, reducing overcrowding.Child mortality has fallen, birth spacing has increased, and family size has decreased.There is now free universal primary education with enrolment of about 97%, although this figure drops for secondary education particularly for girls to 30%.22,Furthermore we have, over the years, conducted and published a series of randomised trials of nutritional interventions targeted at pregnant and lactating mothers, infants, and children, with the main aim to improve growth.Our findings have shown at most modest improvements in infant growth, consistent with results from systematic reviews and meta-analyses.2,13,23,The modest increase of 2·8 cm in the mean maternal height is indicative of a small degree of improvement in maternal nutrition during the four decades.Meta-analysis of the relationship between maternal height and birthweight24 yielded an expected effect of 8 g more birthweight per cm of maternal height.The COHORTS group reported a similar value of 0·024 Z scores per cm.25,Therefore the increase in maternal height probably contributed only about 20–30 g of the observed 120 g increase in birthweight.A limitation for our data was the difficulty in deriving a consistent sampling frame for the population under study in an area undergoing rapid change, particularly in the later decades.Changes in the population structure during the past 40 years might have influenced the trends we have reported.We attempted to control for this factor by excluding the children who attended our clinic fewer than six times as likely visitors.However, exclusion of these children might have created a sampling bias, because infrequent attenders might represent resident children who engaged poorly with health care.This potential bias would only affect the trends displayed if the population prevalence of poor attenders changed during the period studied.Another limitation was missing data from the 1970s, particularly birth data, limiting our ability to evaluate the trends in these parameters.Additionally, we omitted birth length data because of poor reliability in the measurements and the small number of measurements that were available in all the four decades.Comparison of the trends noted in our core study villages with those in neighbouring villages receiving less intensive intervention would have been desirable, but such data were not available.Growth has improved during these four decades but, despite the unprecedented levels of investment, the prevalence of low birthweight, childhood stunting, and underweight remains high.The prevalence of wasting has not changed, and growth faltering between 3 months and 21 months has been only marginally attenuated.These data suggest that the refractory stunting must be caused by factors that are corrected as nations pass through the economic transition and advance from low-income and lower-middle-income status.Environmental enteropathy affecting almost all children in low-income settings has been proposed as the mechanism linking growth failure with WASH deficits.26,Our results, together with a previous analysis27 of associations between poor 
child growth and a range of indicators of socioeconomic status and living conditions in this same community, suggest that there is a very high threshold for WASH improvements that must be achieved before growth faltering can be eliminated.Improved housing conditions, possibly including the provision of piped water directly into the home, might be a necessary step in the global challenge to eliminate childhood malnutrition.Our study villages of Keneba, Kantong Kunda, and Manduar are highly unusual in having the combination of intensive interventions over a protracted period accompanied by systematic growth monitoring; our results might therefore not be generalisable.However, before Medical Research Council inputs and in all other respects such as environment and farming practices they share many characteristics with countless other rural villages in sub-Saharan African in areas of low malaria endemicity.Therefore, we believe that our findings and suggestions for future interventions are likely to be applicable to other similar settings in rural Africa.
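Two of the quantities used in this analysis, the first pair of Fourier terms for season and growth faltering expressed as the drop in Z score between 3 and 21 months, can be sketched as follows. The original analysis was done in Stata 12; this Python fragment is only an illustration, and the column names and the six example records are invented.

```python
# Minimal Python sketch of two calculations described above: (1) the first pair of
# Fourier terms used to model seasonality, and (2) growth faltering expressed as the
# drop in mean Z score between 3 and 21 months of age. This is not the authors'
# Stata code; the example records are hypothetical.

import numpy as np
import pandas as pd

visits = pd.DataFrame({
    "child_id":    [1, 1, 1, 2, 2, 2],
    "age_months":  [3, 12, 21, 3, 12, 21],
    "day_of_year": [32, 305, 175, 60, 334, 200],            # date of measurement
    "laz":         [-0.9, -1.6, -2.1, -0.4, -1.1, -1.5],    # length-for-age Z score
})

# (1) First pair of Fourier terms for season (one annual sine/cosine cycle).
theta = 2 * np.pi * visits["day_of_year"] / 365.25
visits["season_sin"] = np.sin(theta)
visits["season_cos"] = np.cos(theta)
# These two columns enter the regression alongside the age terms; the fitted
# amplitude sqrt(b_sin**2 + b_cos**2) summarises the size of the seasonal swing.

# (2) Growth faltering: drop in mean Z score between 3 and 21 months.
z3 = visits.loc[visits["age_months"] == 3, "laz"].mean()
z21 = visits.loc[visits["age_months"] == 21, "laz"].mean()
print(f"mean LAZ at 3 mo: {z3:.2f}, at 21 mo: {z21:.2f}, faltering: {z3 - z21:.2f} Z")
```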
Background Growth faltering remains common in children in sub-Saharan Africa and is associated with substantial morbidity and mortality. Due to a very slow decline in the prevalence of stunting, the total number of children with stunting continues to rise in sub-Saharan Africa. Identification of effective interventions remains a challenge. Methods We analysed the effect of 36 years of intensive health interventions on growth in infants and young children from three rural Gambian villages. Routine growth data from birth to age 2 years were available for 3659 children between 1976 and 2012. Z scores for weight-for-age, length-for-age, weight-for-length, mid-upper-arm circumference, and head circumference were calculated using the WHO 2006 growth standards. Seasonal patterns of mean Z scores were obtained by Fourier regression. We additionally defined growth faltering as fall in Z score between 3 months and 21 months of age. Findings We noted secular improvements in all postnatal growth parameters (except weight-for-length), accompanied by declines over time in seasonal variability. The proportion of children with underweight or stunting at 2 years of age halved during four decades of the study period, from 38.7% (95% CI 33.5–44.0) for underweight and 57.1% (51.9–62.4) for stunting. However, despite unprecedented levels of intervention, postnatal growth faltering persisted, leading to poor nutritional status at 24 months (length-for-age Z score −1.36, 95% CI −1.44 to −1.27, weight-for-age Z score −1.20, −1.28 to −1.11, and head circumference Z score −0.51, −0.59 to −0.43). The prevalence of stunting and underweight remained unacceptably high (30.0%, 95% CI 27.0–33.0, for stunting and 22.1%, 19.4 to 24.8, for underweight). Interpretation A combination of nutrition-sensitive and nutrition-specific interventions has achieved a halving of undernutrition rates, but despite these intensive interventions substantial growth faltering remains. We need to understand the missing contributors to growth faltering to guide development of new interventions. Funding UK Medical Research Council, UK Department for International Development.
10
Valuation of vegetable crops produced in the UVI Commercial Aquaponic System
Aquaponics is a food production technology that combines aquaculture and hydroponics in an integrated system.The combination is symbiotic, with each component adding advantages to the other.Fish, which are daily fed a protein-rich diet, generate waste that flows into the hydroponic system.The hydroponic system has an environment suitable for bacteria to convert the waste into compounds required for plant growth.The primary waste product of fish metabolism is ammonia which is excreted by the fish and dissolved in the water.Nitrifying bacteria convert the ammonia to nitrite and then nitrate, which the vegetable crops use for growth.Solid fish waste, eliminated after digestion, contains many of the other macro- and micro-nutrients required by the plants.The hydroponic system serves the purpose of providing an area for nitrification and uptake of nutrients by the plants.This improves water quality for the returning water to the fish component.The integration of the aquaculture and hydroponic systems reduces water discharge into the environment.Aquaponic farmers can produce a great variety of vegetable crops in their systems to meet customer needs and preferences.The UVI Commercial Aquaponic System has been in operation since 1993.Design modifications happened in 1999 and 2003 to improve the system performance.The system has been used to determine the best crops and varieties that could be grown commercially.Lettuce was continuously produced in the system for three years growing a number of types and different cultural conditions.Basil was produced using batch and staggered cropping systems.Several okra varieties were produced over a 3-month period in batch culture.Economic studies of lettuce and basil production have also been made.In general, leafy vegetables grow well with the abundant nitrogen in the system, have a short production period, and are in high demand.Fruiting crops have longer production periods and produce less marketable yield but their value is often higher than the value of leafy produce.Existing economic analysis of commercial aquaponic farms is limited.Several studies develop hypothetical farms based on research data.Chaves et al. incorporated hydroponic tomato production in an aquaponic system.Bailey et al. 
analyzed three farm sizes growing lettuce. These studies evaluated farm profitability by considering all farm revenues and subtracting all variable and fixed costs to determine a return. There is a growing number of case studies using farm data, including those from the University of Hawai'i and the University of Kentucky. These studies use standard accounting techniques, the Modified Internal Rate of Return and Cost of Goods Sold, to analyze farm profitability. The Hawai'i case studies included two farms growing only lettuce and one farm with mixed produce. The University of Kentucky studies do not mention the product grown. A method of valuing each crop for comparison was used to quantify the contribution to revenues that each crop can make to the enterprise. This paper provides a method for valuation of vegetable crops produced in the UVI Commercial Aquaponic System, drawing on different studies with variable plant spacing, yield, and time to harvest. The UVI Commercial Aquaponics System consists of three main components: fish rearing, solids removal for water treatment, and hydroponic vegetable production troughs. The hydroponic troughs are 30 × 1.2 × 0.3 m, with a volume of 11.3 m3 and a surface area of 214 m2 for vegetable production. The flow rate of water through the troughs is 125 L/min, for a retention time of 3 h. The system pH is maintained around 7.0 with the addition of either calcium hydroxide or potassium hydroxide on alternating days. Chelated iron is added every three weeks in a quantity equal to 2 mg L−1 Fe. Total dissolved solids range from 62 to 779 mg L−1. The fish are fed three times daily, ad libitum for 30 min, with a complete, floating pelleted diet containing 32% protein. Details about fish size and initial and final weight are available in each particular study performed in the UVI Commercial Aquaponics System and used for the valuation of vegetable crops. Fish waste products are the source of nutrients for plant growth. Vegetable crops are cultivated on styrofoam rafts floating on the surface of the hydroponic troughs. The rafts are 2.4 m × 1.2 m × 3.9 cm and are prepared for planting by painting with white non-toxic roof paint. Holes 4.8 cm in diameter are drilled in the rafts at different spacings for the various plant requirements. Planting density ranges from 0.67 to 30 plants per square meter depending on the crop and mature plant size. Net pots, 5 × 5 cm, are inserted into each hole to hold the rooted seedling. Seedlings are produced in an open-ended, covered greenhouse. Seedling flats, 25.4 × 50.8 cm with 98 cells of 2.54 × 2.54 × 2.54 cm, are filled with ProMix® potting mix, a mixture of 79%–87% peat moss, 10%–14% perlite and 3%–7% vermiculite. Depending on the seed's requirements, seeds are either surface-sown with a vacuum seeder or manually drill-seeded into 1.5 cm deep holes made in the ProMix® media. The seedling flats are watered to begin the germination process and then covered for 2–3 days until the cotyledons emerge. The flats are then uncovered and the seedlings allowed to develop over a 2- to 3-week period. The seedlings are watered twice daily and fertilized once weekly with Peters Professional Plant Starter 4-45-15. Seedlings are ready to transplant when 1–2 sets of true leaves have developed and the roots have grown to encircle the media. They are transplanted into clean rafts in the aquaponic system. Pest management requires spraying once weekly with Bacillus thuringiensis subsp. 
kurstaki strain ABTS-351, fermentation solids, spores, and insecticidal toxins on all crops to control caterpillars and with potassium salts of fatty acids insecticidal soap to control aphids and white fly on crops susceptible to infestations of those pests.Plants are grown in the system for the required period to come to maturity.Plants yield different mass quantities depending on the part of the plant harvested: whole plant, leaves, or fruit.Lettuce and pak choi were harvested by removing the plant and cutting the roots from the stem.The plant is trimmed of old discolored or insect damaged leaves, and packaged for market.Other leafy plants were harvested by the “cut and come again” method which leaves 15 cm of plant stem to regrow or removes mature leaves and retains the young leaves to continued growth.Kale, collards, swiss chard, and basil were harvested by this method.Basil ‘Genovese’ was produced in staggered production for a 12-week trial.Transplants were placed in one-quarter of the system for 4 weeks.After 28 days the first crop was harvested by the “cut and come again” method which allows for regrowth of the 15 cm of plant remaining after harvest.Each planting was harvested twice for a total of eight harvests.Okra, cucumber, and zucchini yield fruits that are harvested frequently during production.Melon and sorrel yield fruits to harvest at the end of a long growing period.Crops were harvested at maturity.Okra was planted for one trial in fall 2002.Three varieties were planted at two densities.The varieties were ‘Annie Oakley’, ‘North-South’ and ‘Clemson Spineless’.The two densities were 2.8 and 4.0 plants/m2.Two-week old seedlings were transplanted on Oct 1, 2002 and the first harvest was made 33 days later.Pods were harvested three times each week for 49 days for 22 harvests on an 82-d growing season.Fresh fruits and vegetables are shipped in commonly used containers designated by volume or product count and expected weight of the container.Typical shipping containers include carton, ½ carton, carton with 24 units, basket, and cartons with 38.8 L, 35.2 L and 17.6 L.Produce prices were obtained from USDA/AMS.Weekly prices were obtained from the Miami Terminal Custom Report for the period May 1, 2015–April 30, 2016.The most frequently occurring low price and high price were sorted from each products’ weekly prices as representative of the price most likely to be received by a farmer.St. 
Croix farm price is used as the price for sorrel since this product is not included in the USDA market prices because of its low volume of sales. Production data, product value, and time to harvest were summarized and calculations made to determine crop value on a weekly basis. The crops are grouped by product type and, for leaf products, by the product harvested. Expected value, in $/m2/week, was calculated with formulas (1) and (2): Crop value ($/m2) = Unit value × Yield (1), and Expected value ($/m2/week) = Crop value ÷ Weeks in cultivation (2), where Unit value is the weekly low or high price of the crop, Yield is the biomass or number of units harvested per square meter during research trials in the UVI Commercial Aquaponic System, and Weeks in cultivation is the period between transplanting the seedlings and either harvest of the plant or removal of the plant after multiple harvests of its fruit. Production yield, time to harvest and value for romaine, leaf and bibb lettuce are listed in Table 4. Each crop was planted at a different density because the final size, and expected yield, of each plant is different. The densities are 16, 20 and 30 plants per square meter. Bibb lettuce requires three weeks of grow-out to market size while romaine and leaf lettuce require four weeks. Each type has a different value per head: $0.75–$0.92 for bibb, $0.75–$0.83 for leaf and $0.87–$0.92 for romaine. A farmer assessing value on the individual price per head would select romaine lettuce, which has the highest value, followed by bibb and then leaf. Calculated on value per square meter, the farmer would select bibb, with the highest density and value, followed by leaf and romaine lettuce. The final step is to consider value per square meter per week. Expected value was calculated as the crop value divided by the length of the crop cycle. In this case, bibb lettuce has a higher value than leaf and romaine because of its higher value per area and its shorter growing period. A farmer would choose to grow bibb lettuce, with returns of $7.50–$9.20 per square meter per week; the higher planting density and the shorter production period overcome the low individual value of the bibb lettuce. Other leafy greens with different densities, growth periods and values are presented in Table 5. Whole heads of pak choi are harvested, while the others (kale, collards, swiss chard and basil) have only their mature leaves harvested or are harvested by "cut and come again". Time to harvest is three or four weeks. Basil, a culinary herb, stands out with the highest value per kilogram, $8.80–$11.03. Yield was 1.2 kg/m2 in the first harvest and 2.4 kg/m2 in the second. However, the crop has a low planting density and a 4-week growth period. This reduces the value per square meter per week, and its value of $3.96–$4.96/m2/week is comparable to that of pak choi, $3.92–$4.32/m2/week. The three other crops range from $0.23 to $1.19/m2/week. Fruiting crops are also evaluated by their density, growth period, yield and value. Sorrel and cantaloupe are planted in the system for long growth periods and harvested at the end of that time. Okra, cucumber and zucchini have shorter growth periods, and harvests made several times each week are summarized in a total yield. The most productive okra variety and density was 'North-South' planted at 4.0 plants/m2, which yielded 3.04 kg/m2 over the harvest period. Sorrel has the highest value per area per week. The value per kilogram is the local St. 
Croix, USVI price at farm stands during the harvest season, December–January. The expected value of $1.89/m2/week is the highest for a fruiting crop. Cucumber has high yield, a moderate growth period and low value, with an expected value of $1.24–$1.32/m2/week. Cantaloupe has low value, low yield and a long cultivation period; its expected value is the lowest of the fruiting crops, $0.14–$0.16/m2/week. Rakocy et al. indicated that the income from herbs is much higher than that from fruit crops. For example, in experiments in UVI's Commercial Aquaponics System, basil yielded 5000 kg annually at a value of $110,000, compared to okra production of 2900 kg annually at a value of $6400. Expected values are based on Miami terminal prices reported by USDA/AMS. Other terminals have different prices, and farmers should use reports from their nearest market for crop valuation. Seasonal availability of locally grown products also affects wholesale prices. Even if a farmer is selling directly to customers, an understanding of wholesale prices is needed to assess competition. Understanding product values helps a farmer select the crops that give the highest returns to the enterprise. Because production densities and times to harvest differ, a common measure of value per area over time ($/m2/week) provides a common frame for comparison. Market demand for specific products and the desirability of offering consumers a product mix also play a role in crop selection. A method of valuing each crop was provided to help growers quantify the contribution that each crop can make to business revenue. Historical data provided yields from different varieties, seasons, plant spacings and times to harvest, indicating an expected value to allow proper marketing planning. Bibb lettuce showed the highest expected value, followed by basil, pak choi, leaf and romaine lettuce. However, crop values can change over time and should be carefully evaluated before being used as an investment reference.
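The valuation formulas (1) and (2) can be applied directly to the lettuce figures reported above. The per-head prices, the 16, 20 and 30 plants/m2 densities and the grow-out times come from the text, but the pairing of each density with a particular lettuce type is inferred from the ordering the text implies and from the reported $7.50–$9.20/m2/week return for bibb; the sketch below reproduces that range.

```python
# Sketch of valuation formulas (1) and (2) applied to the three lettuce types.
# Per-head prices, densities and grow-out times come from the text; the pairing of
# each density with a lettuce type is an inference, not a figure from Table 4.

crops = {
    # name: (plants per m2, low $/head, high $/head, weeks in cultivation)
    "bibb":    (30, 0.75, 0.92, 3),
    "leaf":    (20, 0.75, 0.83, 4),
    "romaine": (16, 0.87, 0.92, 4),
}

def expected_value(density, unit_price, weeks):
    crop_value = density * unit_price   # formula (1): $/m2 per crop cycle
    return crop_value / weeks           # formula (2): $/m2/week

for name, (density, low, high, weeks) in crops.items():
    lo = expected_value(density, low, weeks)
    hi = expected_value(density, high, weeks)
    print(f"{name:8s} ${lo:.2f}-${hi:.2f} per m2 per week")
```

Running the sketch gives roughly $7.50–$9.20 for bibb, $3.75–$4.15 for leaf and $3.48–$3.68 for romaine per square meter per week, consistent with the ranking discussed above.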
The UVI Commercial Aquaponic System is designed to produce fish and vegetables in a recirculating aquaculture system. The integration of these systems intensifies production in a small land area, conserves water, reduces waste discharged into the environment, and recovers nutrients from fish production into valuable vegetable crops. A standard protocol has been developed for the production of tilapia yielding 5 MT per annum. The production of many vegetable crops has also been studied but, because of specific growth patterns and differences of marketable product, no single protocol can be promoted. Each crop yields different value per unit area and this must be considered when selecting varieties to produce to provide the highest returns to the farmer. Variables influencing the value of a crop are density (plants/m2), yield (unit or kg), production period (weeks) and unit value ($). Combining these variables to one unit, $/m2/week, provides a common point for comparison among crops. Farmers can focus production efforts on the most valuable crops or continue to produce a variety of crops meeting market demand with the knowledge that each does not contribute equally to profitability.
11
Reprint of Transport poverty and fuel poverty in the UK: From analogy to comparison
Domestic and transport energy consumption have traditionally belonged to distinct academic and policy silos.Recent developments, however, suggest the need for convergence.The UK is committed to reducing greenhouse gas emissions by 80% by 2050, and reductions have to be achieved across all sectors.This includes both transport and domestic energy uses, which together account for most of household emissions.Strongly connected to this agenda is the need for technological decarbonisation of the private car fleet with a shift to electric vehicles powered through charging from the grid or hydrogen generated from ‘green’ electricity.Affordability in both the domestic and the transport sector is a critically important issue, which has high political salience.However, the approaches to conceptualising energy need and affordability are currently quite different within these two sectors.With an ever closer coupling of domestic energy and energy for mobility these conceptual gaps will become difficult to defend, and this paper, therefore, seeks to explore and propose ways to close that gap.A reflection on energy affordability is also particularly salient now because, whilst the status quo of affordability is unevenly distributed, a transition to a new lower carbon system for domestic energy and mobility could imply quite radical shifts in prices and access to alternatives.This has generated an initial literature which raises concern for the vulnerability of different social groups to the current energy transition, as well as for the accessibility and affordability of energy services across both sectors.This paper begins by situating the debate about energy affordability in the British context, where substantial research traditions exist in both domestic and transport energy consumption.The UK has long established the notion of ‘fuel poverty’, which refers to the affordability of domestic energy and most notably heating.This notion underpins established research and policy agendas in the UK, and it influences how these issues are now being framed in an increasing number of countries.Similarly, the worldwide influence of the UK ‘transport and social exclusion’ research tradition within transport poverty policy in the UK cannot be understated.However, this research has focused largely on low mobility individuals and carless households, while transport affordability, the costs of motoring, and vulnerability to fuel price increases have received less attention than in other countries.In this context, British researchers and NGOs have put forward the notion of ‘transport poverty’, building on an implicit analogy between fuel poverty and transport affordability issues.However, the justification for this analogy, and its implications for how transport affordability should be defined, measured and tackled have rarely been discussed.This paper aims to fill this gap, by critically comparing and contrasting the notions of fuel poverty and transport poverty.In doing so, it questions the assumption of a simple equivalence between the two problems, illustrating how transport consumption is conceptually different from domestic energy and heating consumption in a number of key respects.To the best of our knowledge, this is the first English-language publication to offer a thorough critical discussion of the two problems in a comparative perspective.The article is structured as follows.Section 2 focuses on domestic energy affordability.After an overview of debates in the UK, the notion of fuel poverty is discussed under 
four headings: consequences, drivers, measurement and policies.Section 3 focuses on transport affordability, starting with a discussion of the fuel poverty - transport poverty analogy in British debates, followed by a comparison of the two problems, which is structured under the same four headings.In Section 4, we conduct a critical assessment of similarities and differences, and outline directions for future research and discuss policy implications.Brenda Boardman’s book “Fuel poverty: from cold homes to affordable warmth” provided a first and well-known definition of fuel poverty as being “unable to obtain an adequate level of energy services, particularly warmth, for 10 per cent of income”.The first UK Fuel Poverty Strategy adopted Boardman’s ‘ten per cent ratio’ definition and committed the government to the ‘eradication’ of fuel poverty by 2016, publishing data and reports annually.Following growing criticism of this definition, in 2010 the government commissioned an independent review.The outcome was the ‘Low-Income-High-Costs’ indicator, which was adopted as the new official definition of fuel poverty in England.LIHC defines fuel poor households as those who have “required fuel costs that are above the median level” and “were they to spend that amount they would be left with a residual income below the official poverty line”.In 2014, 10.6% of English households were fuel poor.An important characteristic of the British debate is the ambiguity about which domestic energy uses are considered.While all energy uses within the home are considered in the official indicators, policy and public discourse typically focus on heating only.For simplicity, in this article we refer to fuel poverty as a space heating issue only.The negative physical health consequences of living in cold and damp conditions have been emphasised, and this magnifies the political salience of fuel poverty in the UK.Living at cold temperatures has been linked to the incidence of cardiovascular events, respiratory problems, rheumatisms and infections, and to increased rates of mortality during winters.In 2014/2015 “an estimated 43,900 excess winter deaths occurred in England and Wales”, 83% of which among people aged 75 and over.Beyond health impacts, fuel poor households face a choice between enduring cold temperatures, incurring debt, and cutting expenditure in other areas, such as food consumption.In mainstream fuel poverty research, lack of warmth is seen to arise from three factors: income, energy prices and energy efficiency.Fluctuations of energy prices over time have been reflected in estimates of the extent and depth of fuel poverty.Recent increases in domestic energy prices reflect changes in global energy markets, but also the cost of environmental obligations put by the government on energy suppliers, which are recouped through higher energy prices.The thermal efficiency of homes is a second key driver, with fuel poverty rates higher for households in dwellings that are larger, older, poorly insulated and/or not connected to the gas grid.At a given moment in time, fuel poverty correlates strongly with low-income: in 2014, fuel poverty rates were highest among the lowest income quintile group.Research has highlighted two types of ‘mismatches’ between the drivers of fuel poverty, i.e. 
situations where they could offset each other, but they do not: (1) a mismatch between income and energy efficiency. Boardman argues that because "the lower the income of the household, the more energy efficient the property has to be to ensure that they are not in fuel poverty", the poorest people should have the most energy-efficient homes. In Britain, lower income households are more likely to live in smaller properties, in flats and in modern or recently renovated social housing, all factors that tend to result in higher thermal efficiency. On the other hand, they are more likely to use expensive fuels and less likely to be able to make capital investments in energy efficiency improvements, and this can leave them "locked-in to high energy costs". Overall, the Hills review found no significant differences in thermal efficiency between income groups after controlling for tenure; (2) a mismatch between income and fuel prices: low income households generally pay higher tariffs, as a result of payment method, marginal cost pricing and inability or unwillingness to "shop around for the best deals". Overall, while Boardman argues that the poorest households should have access to the cheapest options, the opposite seems to be the case. There are four key components to the official definitions of fuel poverty adopted in the UK, which are important to bear in mind when we later discuss the measurement of transport poverty. First, the unit of analysis is the household, reflecting the reasonable assumption that household members share income and house space, and therefore will be affected to the same extent by cold temperatures and/or by the negative consequences of spending disproportionate amounts on domestic energy. Second, both official indicators of fuel poverty for England are based on modelled estimates of required spending on domestic energy services. This means that "households whose actual expenditure is low because they cannot afford enough fuel to be warm are not wrongly considered not to be in fuel poverty [and] households who have high expenditure while wasting energy are not considered to be fuel poor". The modelling of 'required spending' on space heating consists of four steps: specification of a temperature standard; application of one of four 'heating regimes'; estimation of required energy consumption; and estimation of required expenditure. A third key component of the measurement is the calculation of the threshold of affordability: having to spend more than the critical value is the criterion, or one of the criteria, for being defined as fuel poor. In TPR, the threshold is defined in terms of cost burden, i.e. the ratio between spending and income. This originally corresponded to twice the median cost burden actually observed in the British population, and to the average expenditure ratio among the lowest 30% of the income distribution in 1988. The LIHC indicator is based on a threshold of required domestic energy costs, i.e. 
not normalised based on income.Here the critical value is not fixed, but corresponds to the median required costs estimated based on the annual sample, equivalised to reflect the different energy needs of different types of households.A fourth component of the measurement, the definition of a critical threshold of income, only applies to LIHC.With the new LIHC indicator, non-poor households are excluded a priori from fuel poverty, unless high required domestic energy costs bring their residual income below the official poverty line.Under Boardman’s previous TPR indicator, all households having to spend more than 10% of their income on domestic energy were considered fuel poor, regardless of income.Fuel poverty policies are generally categorised based on the three drivers.Table 3 presents examples of UK policy schemes fully or partially aimed at fuel poverty alleviation, alongside a discussion of similar measures for transport poverty.Income policies consist of government transfers to households to help them pay domestic energy costs.In England, a large share of the expenditure on fuel poverty mitigation is accounted for by these measures.The fact that Winter Fuel Payments are made to all households with an elderly member, regardless of other factors such as income, has drawn criticism as this is seen as a poorly targeted and inefficient measure to mitigate fuel poverty.Price policies.In the UK the degree of government control over energy tariffs is limited due to market liberalisation and privatisation.However, the government regulator has the power to impose obligations on the energy suppliers, and since 2011 these are required to provide rebates to low-income or vulnerable households.Energy efficiency.Grants are provided to low-income households that need support to undertake capital expenditure for heating efficiency improvements.There is some overlap with climate policy here, with some schemes aiming at improving the thermal efficiency of the housing stock across the board, but including components that target specifically low-income households and deprived areas.Having discussed the key components of the definition, measurement and policy responses to domestic fuel poverty, we now turn our attention to the issue of transport poverty.While the concept of fuel poverty has a well-established set of definitions, binding policy targets and monitoring processes, there is a relative lack of academic and policy interest in questions related to transport poverty in the UK.So far in this article the term ‘transport poverty’ has been used in a generic way.It is crucial, however, to ensure a clear and consistent use of terms in this area.In the international academic literature, the term is used in two essentially different ways.In a broader understanding, it is used to refer to all kinds of inequalities related to transport and access, i.e.
as poverty of transport.In this meaning, the term is used alongside other notions such as ‘transport-related social exclusion’, ‘transport disadvantage’, etc.In a more specific meaning, ‘transport poverty’ is used to refer to the affordability of transport costs.In this understanding, it is used alongside other notions such as ‘transport affordability’, ‘forced car ownership’ and ‘car-related economic stress’.From here on in this paper, we will use ‘transport poverty’ to refer to the former, broader problem, and ‘transport affordability’ to refer to the latter, more specific issue.In developed countries, research on transport affordability has focused mostly on households who need to spend a disproportionate amount of money on car-based mobility in order to access essential services and opportunities.This reflects the fact that motoring accounts for around 80% of all household spending on transport in OECD countries, and the assumption that car ownership and use can be a necessity in car-dependent societies.In the UK, there have been few attempts to quantify the prevalence of transport affordability problems in the population.These have typically been based on an analogy with fuel poverty, but have produced very different findings, as illustrated in Table 1.On the policy side, the Environmental Audit Committee of the House of Commons’ inquiry into “Transport and the accessibility to public services” asked stakeholders whether “a measure of the transport accessibility of key public services, in a similar manner as ‘fuel poverty’”, would “be useful for policy-making”, finding “some support” for the idea.Overall, then, the British debate on transport affordability has been dominated by an analogy with the well-established fuel poverty agenda, in which the substantial equivalence between the two issues has been largely implied and taken for granted.As a result, a critical discussion of the similarities and differences between domestic energy affordability and transport affordability has not been undertaken in the UK yet.Internationally, however, a number of studies have explicitly compared the two issues, both theoretically and empirically, mostly concluding that there are important conceptual differences between them.In the following sections, we build on these contributions and offer a discussion of transport affordability as seen through the lens of British fuel poverty research and policy.This is useful in light of the international influence that the ‘fuel poverty’ notion has had to date.The less obvious causal chain between lack of affordable transport and its negative social consequences might explain the relative lack of policy interest in the problem of transport affordability, especially if compared to the political salience of fuel poverty and its clear negative health consequences.Transport is a derived demand, i.e.
a certain amount of mobility is generally required to access services and opportunities, as well as to engage in social activities and networks.Hence transport and access problems have been suggested to be contributory factors to a wide range of poor life outcomes including unemployment, reduced participation in education and training, poor diets, reduced health services usage, as well as exclusion from a wider range of social activities and social networks.In each of these areas, however, transport is generally only one factor among many others in explaining the observed and associated social inequalities.Furthermore, while there is quantitative evidence to demonstrate the relationships between transport disadvantage, reduced social inclusion, social capital and well-being, these are less likely to draw public and political attention than reported figures for excess winter deaths.As with fuel poverty, limiting spending on transport is not the only short-term option for households struggling to afford transport costs, as they can curtail spending in other areas and/or incur debt.There is evidence to suggest that transport and motoring costs are given high priority by households, who prefer to cut other costs first.Notably, empirical studies on energy-related economic stress suggest that households are more likely to maintain their travel patterns and reduce domestic energy consumption than the other way round, suggesting a possible causal link from transport affordability to fuel poverty.The prioritisation of transport costs is often explained by the fact that travel is a precondition for employment and income generation for households.The broad issue of transport poverty has a wide range of drivers, including non-economic factors such as disability, age, gender, ethnicity, household type, and cognitive and psychological factors.If the focus is on the more specific issue of transport affordability, however, it makes sense to assume that the drivers are the same as those of fuel poverty: income, prices and energy efficiency.These are critically discussed below.Most research on transport affordability in developed countries has focused on motoring among low income groups.More specifically, two distinct manifestations of affordability problems in relation to car ownership and use have been brought to light.First, low-income households are more likely not to be able to afford car ownership.The lack of car ownership is often associated with reduced accessibility to key service facilities and everyday activities, most notably in car dependent areas where modal alternatives are few.This can result in reduced overall travel as a result of ‘suppressed travel demand’.To adopt the terminology of fuel poverty research, many low-income households underspend on travel, as a result of being unable to afford the capital expenditure on car ownership, which would enable them to travel more and satisfy their accessibility needs.The second manifestation is that of households who own and use cars despite limited income, and therefore have to trade off transport costs against spending in other essential areas, resulting in ‘forced car ownership’ and ‘car-related economic stress’.Mattioli et al.
show that around 9% of UK households have low income, high motoring costs, and low capacity to reduce fuel demand in response to higher prices, which leaves them in a situation of vulnerability.In both cases, the recursive relationship between transport expenditure and income generation adds a further layer of complexity.Individuals can be unemployed because they are unable to afford car ownership and/or commuting costs.At the same time, some households are willing to spend large amounts on commuting travel, curtailing other expenses, as the alternative is an even lower standard of living as a result of reduced income.Indeed, empirical studies have found that commuting costs can be very large as compared to household income and that employed households are overrepresented among those spending disproportionate amounts on transport.This recursive relationship has no clear equivalent in fuel poverty.Historically, the affordability of all transport modes has increased in real terms in most countries.However, as a result of the massive increase in travelled distances and the associated shift towards car travel, the share of transport in total household spending has remained relatively constant over time at the aggregate level, although it varies greatly across different social groups.Lower-income households generally spend a smaller share of their transport budget on vehicle purchase, and more on running motor vehicles and public transport fares.Fig. 1 depicts the evolution of transport costs in the UK between 1996 and 2015, showing that while vehicle prices have significantly declined, other components have increased or remained stable over time.Arguably, these trends are not beneficial to transport affordability, as the components of transport costs that are most significant for low-income households have increased since 2003.Internationally, high oil and motor fuel prices between the early 2000s and 2014 have resulted in a surge of studies on transport affordability.As with fuel poverty and heating, in this paper we adopt a broad definition of energy efficiency as the total amount of transport-related energy required to satisfy a given set of accessibility needs.In this perspective, energy efficiency consists of three components.The first two are the required travel distances to activity destinations and the practicability of energy-efficient modes, two factors which are strongly influenced by urban form, land use and the characteristics of the built environment.The third is the energy efficiency of motor vehicles.These components are discussed in turn below.There is a large body of evidence on the relationships between land use, the built environment and travel behaviour, showing that low density, distance from city centres and mono-functional land use patterns are associated with increased car travel.Empirical studies have shown that higher-density urban areas have lower transport-related energy consumption and carbon emissions, because distances between residences and activity destinations are shorter, and this makes more energy-efficient modes like walking, cycling and public transport more practicable, reducing car dependence.This relationship holds at a lower geographical level: in England there is a negative relationship between degree of urbanity and household energy usage from motor vehicles, as well as with motor running costs, at the small area level.Other studies confirm the inverse relationship between degree of urbanisation and total transport spending.This highlights the importance of
the residential location choices of households.These are driven by a wider range of factors than just transport costs, most notably housing costs, but also factors such as proximity to social networks and lifestyle choices.Also, from the perspective of households, improving transport energy efficiency through residential relocation is a more difficult and disruptive choice than improving home heating efficiency through housing renovation.Overall, this suggests that the lock-in to low energy efficiency may be stronger for transport affordability than for fuel poverty.A third component of transport energy efficiency is vehicle efficiency.While the energy efficiency of the housing stock increases almost by definition over time, this is not the case for transport energy efficiency as we have defined it here.Historical trends towards suburbanisation have meant relative population gains for the areas with the highest transport-energy consumption.At the same time, technological improvements in vehicle fuel efficiency have historically been offset by other factors, although this may change in the future.As with fuel poverty, there are possible ‘mismatches’ between the drivers of transport affordability listed above.We discuss two examples of mismatches between income and energy efficiency here.First, urban research demonstrates the existence of a variety of ‘urban socio-spatial configurations’, i.e. patterns in the distribution of income groups across city-regions.To put it simply, in some urban areas the rich tend to live in the urban core, and the poor in peripheral areas, while in others the opposite pattern is observed.Since central areas are generally characterised by better levels of accessibility and lower car dependence, these configurations have opposite effects on transport affordability problems, i.e.
they may compound them, as in Australian cities, or alleviate them, as in Christchurch, New Zealand.This suggests the need for a context-sensitive analysis of the relationships between transport affordability and urban socio-spatial configurations.A second mismatch concerns socioeconomic lags and gradients in access to energy- and cost-efficient vehicle technology.If low-income households owned the most fuel-efficient vehicles, this would help with transport affordability.However, high upfront costs of new and technologically superior vehicles, including electric vehicles, might mean that low-income households run less energy-efficient vehicles, compounding transport affordability problems.Here the parallel with fuel poverty is accurate, as in both domains high initial capital expenditure is a condition for benefiting from low running costs.Australian studies have found lower vehicle efficiency in areas of lower socio-economic status, due to more frequent ownership of old and large-engine vehicles.The adoption of metrics developed in fuel poverty for use in the transport domain is tempting, but it is not without its challenges, because of the conceptual differences between fuel poverty and transport affordability.In Table 2, we identify four key components of the official English indicators of fuel poverty, along with factors of complexity and proposed solutions for developing a similar metric for transport.Elsewhere we present the results of an empirical study where a metric has been developed according to the solutions proposed in Table 2.A first factor of complexity is that, while fuel poverty is clearly a household attribute, transport and accessibility problems reside with individuals rather than the whole household - i.e. one member of a household may experience them whilst another member of the same household does not.Therefore, while income is better treated as a household attribute, transport needs should be assessed at the individual level, and this complicates the definition of a metric of transport affordability.We propose that, while transport affordability should be quantitatively assessed at the household level, complementary approaches should be developed to investigate within-household variation.A second and key characteristic of English fuel poverty metrics is that they are based on a modelled assessment of required domestic energy use, i.e.
of households’ heating needs.As previous research has pointed out, the adoption of this approach for transport affordability runs into extremely serious obstacles.Modelling required transport spending would require the definition of normative standards of out-of-home activity participation, allowing for sufficient variation between different types of individuals.While fuel poverty modelling allows for just four different heating regimes, the highly individualised nature of accessibility needs results in much greater, and potentially overwhelming, complexity.For example, older people would need to access different services, and with a different frequency, than commuting parents, etc.Also, the required activity sets would be different for different household members.Moreover, while fuel poverty measures are based on expert knowledge of healthy temperature ranges, there is no comparable knowledge on standards of out-of-home activity participation, with greater scope for disagreement on what ought to be included.Given these complexities, we argue that it is not currently advisable to develop metrics of transport affordability based on the modelling of required transport spending.The definition of ‘accessibility benchmarks’ for different types of household remains an interesting area of research, and further studies into normative mobility standards could contribute to the development of better transport affordability metrics in the future.We argue, however, that the drawbacks of employing wrong models of required travel currently outweigh the benefits.We therefore suggest that it is preferable to adopt affordability metrics based on actual transport expenditure.Adopting this approach, Mattioli et al. find, for example, that 9.4% of UK households spend more than 9.5% of their income on costs related to running motor vehicles, while having residual income below the poverty line.The main drawback of this approach is that it does not allow the identification of ‘under-spending’ households, who spend less than they ought to because they curtail travel to essential activities.Arguably, however, this is not such a limitation, since other approaches exist to quantify the prevalence of reduced mobility and suppressed travel demand.Transport affordability metrics based on actual expenditure can complement these insights with an assessment of households spending an excessive amount of money on travel, i.e. possibly curtailing spending in other parts of the budget.Another possible limitation of this approach is that it does not exclude ‘overspending’ households, i.e. those spending more than they ‘need’ on transport.However, this issue can be mitigated by building an income threshold into the indicator, as discussed below.The third and essential component of English fuel poverty metrics is an affordability threshold.TPR and LIHC both derive this threshold from the average level of spending on domestic energy.It is clearly inappropriate to adopt the exact same thresholds for transport, particularly given the fact that on average households spend more on transport than on domestic energy.A more sensible approach is to base the threshold on figures for transport spending.For example, Nicolas et al.
propose a threshold of 18% of income spent on transport, corresponding to twice the median of actual expenditure in France.A similar approach has been adopted by further studies.Finally, while the LIHC fuel poverty indicator includes an income threshold, TPR does not.However, in practice this does not lead to major differences, as domestic energy spending is regressively distributed.This is not the case for transport: in a review of ‘stylised facts about household spending on transport’ in OECD countries, Kauppila finds that the cost burden of transport increases as income increases, i.e. richer households spend proportionately more of their income on transport, mainly as a result of greater car ownership and use.This suggests that the TPR indicator is potentially misleading when applied to transport spending, as it may lead to the inclusion of mid-to-high income households with sufficient residual income, who may be ‘overspending’ on travel for reasons such as a preference for distant activity destinations.We argue therefore that the LIHC approach is better suited for use in transport, as it excludes such households.While it is possible that some households below the poverty line spend more on travel than they need to, it is reasonable to assume that overspending is not so common among households whose resources are very limited.As with fuel poverty, we categorise policies to tackle transport affordability based on the three drivers.Our focus in this section is mainly on the UK context, although for illustrative purposes we also refer to examples of policies in other developed countries.With regard to income policies, demand-side subsidies such as transport vouchers and direct transfers using the welfare system have been implemented to improve the affordability of public transport in a number of countries.On the other hand, given the high degree of car dependence in many developed countries, it is often argued that financial aid should be provided to the poor to help them operate and get access to private cars.In France, the UK and the US, such measures have been implemented locally as part of welfare-to-work programmes but have not been adopted on a large scale, for reasons related to cost to the public purse, the conflict with environmental policy goals and the risk of encouraging car ownership among households who are not able to afford the associated running costs.A discussion of price policies brings to the fore hidden or little-discussed realities regarding the different taxation treatment of domestic energy and energy for mobility, with the latter being generally much more highly taxed through the petrol pump.In countries such as the UK where the domestic energy market is deregulated, this arguably gives the government a greater degree of control over motor fuel prices as compared to domestic fuel.In the UK, this has been reflected in public debates about the appropriate level of taxation of motor fuels, which are often linked to concerns about transport affordability.For example, the Royal Automobile Club Foundation has argued that the rate of fuel duty should be lowered in order to alleviate ‘transport poverty’.Internationally, public transport is an area where the affordability of fares is often ensured through supply-side price subsidies.In most of the UK, early deregulation of local public transport in the 1980s has resulted in very large reductions in the level of public subsidies and in a parallel increase in fares, clearly limiting the scope for pricing policies.However,
concessionary fares are offered to groups such as the disabled and older people, and funded through general taxation.The English National Concessionary Bus Travel Scheme offers free bus travel to seniors, and was introduced specifically to address affordability problems among them.It has been criticised for being very expensive and poorly targeted, as it is not restricted to low-income households, as well as for diverting resources from more effective investments.With regard to energy efficiency, improving the viability of energy- and cost-efficient transport modes, such as walking, cycling and public transport, can relieve low-income households of ‘forced car ownership’, ‘car-related economic stress’ and expensive reliance on taxis.For instance, increasing public transport services in deprived areas can bring substantial cost savings to individuals who use them.Arguably, in order to tackle affordability problems, new public transport services need to be specifically targeted towards low-income areas and individuals, as it cannot be assumed that they will automatically benefit from such initiatives.In the UK’s deregulated public transport market, this is possible to an extent with subsidised services and agreements between operators and local authorities, although there are serious limits to what has been achieved in this area to date.Densification and ‘compact city’ policies reduce car dependence and the need to travel long distances, thus reducing the household expenditure required to satisfy travel needs and increasing resilience to fuel price increases.However, living in high-density areas may be associated with worse local environmental conditions and higher housing costs, and in many developed countries households’ location preferences remain strongly oriented towards low-density living.As a result, improving the transport-related energy efficiency of the urban fabric through densification is more politically controversial than retrofitting the housing stock is in the case of fuel poverty.For example, in the UK many of the spatial planning policies introduced by the Labour governments to encourage higher density development were rolled back after the change of government in 2010, while retrofitting investments are still part of the current Conservative Government’s fuel poverty policy package.In many countries, initiatives have emerged to increase households’ awareness of the transport cost consequences of residential location choices, through e.g.
the development of online calculators and ‘housing and transport’ affordability indices.These aim to counter the phenomenon whereby households are attracted to car dependent areas by low housing costs, but underestimate the increase in transport costs they will face once settled in these areas.Mortgage policies such as location efficient mortgages have also been developed to take into account the better repayment capacity of those buying properties in accessible areas with lower transport costs, although not in the UK.In both cases, the goal is to encourage inner-urban residential location choices and reduce household transport costs.Incentives for the purchase of low-carbon vehicles may improve transport affordability through reducing running costs, although they generally aim primarily at carbon reduction.However, these vehicles currently remain out of the reach of the majority of low-income customers, due to high upfront prices even after grants are deducted.Therefore, there is a risk that these incentive policies may worsen the mismatch between income and vehicle energy efficiency, thereby deepening inequalities in terms of transport affordability.In UK fuel poverty policy, some government grants for heating improvements are targeted specifically at low-income households, even though social gradients in the diffusion of low-carbon micro-generation technologies are a point of concern.Only a few studies have considered the synergies and the trade-offs between climate change and transport poverty policies.They conclude that both cost and income policies result in trade-offs, as reducing the cost of travel tends to result in greater emissions, while increasing costs risks pricing out the poor from access to essential opportunities.Only energy efficiency policies, such as reducing the need to travel through densification or improving the viability of alternative modes, are regarded as a win-win for both climate change mitigation and transport affordability.A very similar policy tension between social and environmental goals is highlighted in fuel poverty research.This paper started with the observation that there has been an uncritical transfer of concepts and indicators from fuel poverty into the transport field in the UK.Our comparative discussion of fuel poverty and transport affordability highlights a number of similarities and differences, which have not been clearly identified and discussed in the literature to date.In this section, we critically discuss the most important insights emerging from the comparison.These are summarised in Table 4, along with related guidelines for policy and future research.The transfer of concepts from one field to another always comes with opportunities and risks.Where similarities between fuel poverty and transport affordability exist, developing parallels and transferring concepts can be instructive.For example, our review demonstrates that the ‘triad framework’ of fuel poverty drivers can deliver useful insights when applied to transport affordability and related policy making.In this paper, we have put forward the notion of ‘mismatches’ to highlight situations where the misalignment between income, prices and energy efficiency compounds energy affordability issues.We argue that the concept provides a useful lens to look at transport affordability.From a research perspective, it draws attention to important questions such as: do low-income households live in the most car dependent areas?Do they tend to own older, larger and less fuel-efficient
vehicles?From a policy perspective, our review shows that such mismatches are implicitly or explicitly taken into account in fuel poverty policies, e.g. through the targeting of housing retrofit to low-income areas.They have, however, received less attention within transport policy.Notably, we argue that the transport affordability implications of lags and gradients in the diffusion of energy- and cost-efficient vehicle technology deserve more policy and research attention, given the emphasis currently placed on the electrification of the vehicle fleet.There is a need to avoid a future in which electric vehicles are adopted by the middle and upper classes, while low-income households rely on internal combustion engine vehicles which are cheaper to buy, but more expensive to run.At the same time, the rise of electric vehicles may blur the distinction between domestic and transport energy consumption, while creating interesting interactions between the two.Here again, there is potential for mismatches, as higher-income households in detached housing may be more likely to take advantage of home ‘solar plus storage’ packages, with resulting lower electricity costs.Similar considerations apply to the vehicle-to-grid concept.Another similarity between the fuel poverty and transport poverty literatures is that both identify synergies and trade-offs between social policies to improve affordability and environmental policies to reduce carbon emissions, and argue that synergies can only be achieved if energy efficiency policies are given priority.On the other hand, however, our discussion suggests that improving energy efficiency in the transport sector is more challenging, as it involves reducing the car dependence of urban and transport systems, something which is more long-term, resource-intensive and politically controversial than even large-scale retrofitting of the housing stock.Also, currently the most cost- and energy-efficient transport modes are not necessarily those providing the best levels of access to services and opportunities – a situation that has no parallel in heating.This complicates the task of reconciling social and environmental goals in transport policy.One area where the analogy can be particularly misleading is the development of empirical indicators.As discussed at length in Section 3.4, there are a number of important differences between the two problems, which make a direct transfer of indicators inappropriate.Notably, studies that have uncritically adopted the TPR metric to measure transport affordability in the UK have produced inconsistent results, with rates of incidence varying between 3% and 80%.This is problematic not just from a scientific perspective, but from a policy perspective as well.This paper puts forward specific guidelines for future research looking to adapt fuel poverty indicators for use in the transport sector, arguing notably that the LIHC approach is a better blueprint for such efforts.More broadly, it must be stressed again that ‘transport affordability’ problems are only a subset of a broader set of ‘transport poverty’ issues.In practice, this means that, while fuel poverty policy typically relies on a single metric to assess the extent of ‘lack of warmth’ in homes, it is unreasonable to expect the same for transport, as there is a wider range of non-economic factors that may result in ‘lack of access’.Therefore, any fuel-poverty-inspired metric of transport affordability would need to sit alongside a variety of different concepts and
multi-layered measurement approaches in helping us to grasp the multiple facets of transport poverty.A further direction for future research emerging from our review is investigating to what extent fuel poverty and transport affordability problems affect the same types of households or areas.Initial empirical evidence from France suggests that there is only limited overlap, but it is an open question whether this applies to other countries such as the UK.Clearly, establishing a sound methodology to assess transport affordability is a crucial precondition for carrying out this kind of empirical research.There is also a need to better understand how households react to increases in domestic and transport energy costs, notably with regard to the dynamic trade-offs between the two.The findings reviewed in this paper suggest that, given the recursive relationship between transport expenditure and income generation, households tend to prioritise commuting costs over other expenses, including heating, although the evidence is not yet robust.A number of studies in the field of energy economics have modelled the cross-elasticity of domestic- and transport-energy demand, although typically they do not have an explicit focus on affordability.This leaves ample scope for empirical studies to bring together and integrate these approaches.More broadly, we argue there is a need for a joined-up approach to energy affordability, in contrast to the current situation where domestic and transport-related energy affordability belong to distinct academic and policy silos.The evidence suggests that households make important trade-offs across all of their expenditure areas, and spillover effects exist.From a fuel poverty perspective, it seems unreasonable to maintain a set of policies to subsidise those who are the worst off whilst simultaneously allowing them to spend disproportionate amounts of income on travel.From a transport affordability perspective, a unisectoral approach limits us to looking at travel behaviour responses to price changes, while the most serious negative outcomes might be elsewhere.Finally, our review highlights a striking parallel between fuel poverty and transport affordability policies in the UK: currently, significant public resources are invested in ensuring that older people, regardless of income, receive free public transport and ‘winter fuel payments’, whilst others in similar or greater need receive no help.This is arguably a key obstacle to the effective alleviation of affordability problems, and the case for better targeting is compelling.Overall, it appears that domestic energy and transport affordability policies are currently being aligned to indirectly improve the financial welfare of the elderly, possibly reflecting a ‘moral status’ attached to old age, or their importance as a key electoral constituency.In a context where transport and energy exhibit two parallel policy worlds, our critical review has highlighted lessons that can be learned from a systematic comparison, as well as the need for a more joined-up approach to energy affordability.At the same time, our analysis also highlights critical differences between the two sectors and how and why these matter.Notably, metrics come with a set of assumptions and their own history, and in working across sectors it is necessary to have a critical eye to where they have come from and why differences matter.As we embark on an ever closer union between our domestic energy and transport energy systems, the importance of these
contradictions will become increasingly evident and problematic.This work contributes to the long-term debate about how best to manage these issues in a radical energy transition that properly pays attention to issues of equity and affordability.
The notion of ‘fuel poverty’, referring to affordable warmth, underpins established research and policy agendas in the UK and has been extremely influential worldwide. In this context, British researchers, official policymaking bodies and NGOs have put forward the notion of ‘transport poverty’, building on an implicit analogy between (recognised) fuel poverty and (neglected) transport affordability issues. However, the conceptual similarities and differences between ‘fuel’ and ‘transport’ poverty remain largely unaddressed in the UK. This paper systematically compares and contrasts the two concepts, examining critically the assumption of a simple equivalence between them. We illustrate similarities and differences under four headings: (i) negative consequences of lack of warmth and lack of access; (ii) drivers of fuel and transport poverty; (iii) definition and measurement; (iv) policy interventions. Our review suggests that there are important conceptual and practical differences between transport and domestic energy consumption, with crucial consequences for how affordability problems amongst households are to be conceptualised and addressed. In a context where transport and energy exhibit two parallel policy worlds, the analysis in the paper and these conclusions reinforce how and why these differences matter. As we embark on an ever closer union between our domestic energy and transport energy systems, the importance of these contradictions will become increasingly evident and problematic. This work contributes to the long-term debate about how best to manage these issues in a radical energy transition that properly pays attention to issues of equity and affordability.
12
Envisioning surprises: How social sciences could help models represent ‘deep uncertainty’ in future energy and water demand
Urban and regional infrastructures in the energy and water sectors tend to have a long lifespan.For this reason, strategic infrastructure-related planning has long-term consequences, shaping the systems of provision and demand patterns for decades ahead.Strategic planning is often enacted in conditions of uncertainty related to political, economic, social, technological, legal and environmental factors, commonly abbreviated as ‘PESTLE’.While uncertainty is inherent in long timeframes, a ‘paradigm shift’ is ushering in new uncertainties, with the provision of water and energy being among the resources affected .This ‘paradigm shift’ refers to a radical change in some of the PESTLE aspects over the lifetime of infrastructures, giving rise to uncertainties not considered previously.The shift is driven by a combination of dynamic factors and interactions, including uncertainties about worsening climatic impacts on resources and infrastructure, an increasing probability of tipping points in the climate system, rapid adoption of new technologies, societal responses to both climate change mitigation and impacts, and wider changes in patterns of resource use.These dynamic factors display the characteristics of ‘deep uncertainty’.In the context of long-term decision-making, the definition of deep uncertainty includes three elements: uncertainty about variables and their probability distributions; uncertainty about the interactions between those variables; and uncertainty about the consequences of alternative decisions .This definition of deep uncertainty captures some of the challenges infrastructure operators face when considering long term investments in their assets and assessing how future demand may evolve.Many future-exploring methods rely on historic trends and relationships which may no longer hold throughout this ‘paradigm shift’.Accordingly, it is important to consider how these complex systems can be explored methodically to develop integrative approaches reflecting their complexity.This paper follows Cilliers in avoiding a single definition of complexity, which would be inherently reductionist, and in identifying several attributes of complexity instead.In this paper, the emphasis is placed on studies using 10-year and longer horizons defined here as a medium to long term – merging the two temporal scales since both are important for strategic planning.These horizons are relevant to contemporary strategic planning for two main reasons.Firstly, due to lock-in , decisions on infrastructural investment made today will influence demand/supply systems several generations into the future.Energy and water infrastructures in particular can last up to a hundred years .Secondly, significant climate change impacts are likely to become more apparent towards the end of the century .The long lifespan of assets means that they need to be robust to climatic changes to avoid supply shortages .Considering future uncertainties within water resources management, there are concerns about the significant changes to the hydrological cycle and to how water resources interact with an evolving population and other social, political, cultural and technological changes .In the energy sector, resource availability and infrastructure are affected by the potential decentralisation of energy supply and decarbonisation of the fuel mix , including renewable technologies that are intermittent and weather-dependent .Both sectors are also dealing with the need to modernise decaying urban and regional infrastructure , to 
make traditional infrastructures resilient to a changing climate , and to develop new decentralised infrastructures such as renewable technologies or water-sensitive cities .Putting aside other elements of the nexus, this study focuses on future demand in energy and water sectors in industrialised contexts, as these two sectors share specific characteristics that shape long-term planning.For example, both are resources typically provided by public utilities; both have historically needed major network infrastructure development to meet demand; and for both, particularly at a household level, everyday practices underpinning demand intersect, such as in the case of hot water used for laundry or showering, cooking and cleaning.Consequently, when it comes to strategic planning, the issues faced by decision-makers in the two sectors share similarities and overlaps, but tend to be governed separately .With the interconnections between energy and water resources being increasingly recognised , both decarbonisation and climate change would have systemic impacts across the two sectors.For instance, the installation of carbon capture and storage on coal-fired power plants would increase overall water demand, while climate change exacerbates water stress .For both water and energy, the impending changes in demand and supply are complicated by social, economic, environmental and technological uncertainties at a range of scales: from individuals and households to international and global levels.While some degree of uncertainty is unavoidable, in the past five years there have been numerous calls for ‘nexus’ thinking to clarify these interlinked uncertainties and complexity .In the UK and internationally, this is reflected in a range of conferences and funding calls aimed at exploring the water-energy-food nexus challenge .Connected to this programme of work and the burgeoning international profile of research on the water-energy-food nexus, the development of new interdisciplinary, cross-sectoral understanding of energy and water demand is now a strong pillar of the UK Research Councils supporting a number of dedicated multidisciplinary research centres, such as The DEMAND Centre and the Centre on Innovation and Energy Demand .The nexus approach attempts to synthesise insights across knowledge domains, for example, by integrating the areas of energy and water demand.While there is a proliferation of research funding being directed to these areas, the challenges remain for methodological innovation within the field of energy and water demand—the development of shared languages and the integration of methods across ontological divides .Through exploring the approaches currently used to understand future demand and their ability to provide insights into the challenges of the ‘paradigm shift’, this paper contributes to developing new interdisciplinary methods.This paper explores how future energy and water demand is modelled, using the term modelling to encompass both quantitative and qualitative methods of envisioning future demand, and offers ideas on improving the modelling techniques, as a basis for supporting long-term strategic planning.A range of disciplines are brought together, from across the environmental, psychological and social sciences, to develop a more sophisticated conceptualisation of demand modelling than exists currently.This aim is achieved by, first, establishing four main attributes of deep uncertainty to be captured when modelling future energy and water demand: the diversity of
behaviour, stochastic events, policy interventions and the co-evolution of the variables that shape demand.Second, the paper develops a comprehensive typology of methods for exploring future energy and water demand.This new, interdisciplinary, inter-sectoral typology is used to identify and critique areas of current modelling to be improved.It uses, as the basis of discussion, the complexity highlighted by the UK Research Councils and Government.However the findings have salience beyond this national case, as the focus is on industrialised countries in general.Third, based on the conceptual areas for improvement identified in existing methods, the paper offers insights from disciplines currently under-represented in dominant modelling methods, to challenge and enrich the methodological possibilities for understanding future water and energy demand.By exploring methods of modelling future demand for water and energy, the paper seeks to identify ways of supporting long-term decision-making regarding infrastructure investments and to contribute to the nexus debate.While not all strategic decision-making in relation to demand depends on modelling future demand—decisions are often based on expert opinions or rules of thumb—it is increasingly used as a way to support planning .In this paper, modelling is framed as ways of imagining future demand.Such approaches are usually quantitative and use programmable machines, although modelling can also be qualitative in nature or take advantage of mixed methods, and they do not necessarily provide ‘one’ answer, more often producing a range of plausible representations of future demand .In the past two decades, modelling has experienced its own paradigm shift, with more powerful computing capacity and better availability of input data than in the past.Despite a proliferation of models, few studies explore the totality of modelling methods across both quantitative and qualitative disciplines within the energy-water nexus.The dominance of a particular type of economics is still evident and shapes representations of energy and water futures within policy domains .Poor representation of rapid change, of the diversity of practices and behaviour, and of societal responses to uncertainty and change highlight the need for more integrative approaches .These limitations suggest that demand-side uncertainty is particularly difficult to capture in futures studies when relying solely on mainstream economics.The attempts to deepen interdisciplinary and transdisciplinary demand modelling approaches remain a niche minority, although a range of new approaches are emerging .Since addressing the socio-technical complexities of demand-side management is increasingly seen as an essential way to promote a less resource-intensive society , this article discusses how to improve the representation of future demand in modelling to inform strategic planning in the water and energy sectors.There are many types of uncertainty, depending on whether it is categorised by its location, level or nature .The main types are parametric and structural.Much literature focuses on dealing with parametric uncertainty , whereas structural uncertainty is less well explored .Some ways of dealing with structural uncertainty are experimentation and expert input, where social science insights might be particularly relevant, leading to a truer representation of diverse socio-technical realities in models.In relation to uncertainty, literature on demand-side modelling identifies a number of limitations 
that models grapple with, including unusual peak consumption days , informal economies and inequalities , penetration of renewables and water quality .Additional quantitative data and field measurements would enhance the modelling of energy and water demand; however, qualitative sources of insight such as case studies, expert interviews, and social practice theory are also mentioned as essential and under-used .As methods used for exploring future energy and water demand strongly affect planning and decisions in these sectors, it is important to continue refining, expanding and improving such methods to ensure relevant and state-of-the-art insights.The methods used in this study included a literature review, a survey of academic experts and two interdisciplinary expert workshops.Firstly a non-systematic literature review was used to identify attributes of coupled human and natural systems .This was followed by a systematic review of typologies of methods for exploring future energy and water demand, to derive an aggregate typology.The expert survey informed initial development of the typology, including its preliminary structure and suggestions of further literature to consult.An updated typology was then presented to the experts at the workshops who added extra methods that they thought were absent in the typologies found in the literature.In addition, the expert workshops together with relevant literature were used to identify which of the methods were amenable to representing the attributes of coupled natural and human systems earlier selected.Finally, the section on insights from psychological and social sciences regarding these attributes was developed through non-systematic literature review across social science disciplines of sociology, psychology and human geography.Within the framework of interactions between human and natural systems, House-Peters and Chang identify the following four themes to reflect such dynamics: scale, uncertainty, non-linearity and dynamic processes.Other studies identify further themes that may need to be captured by research methods exploring longer-term future demand for energy and water: systemic change, stochastic events, path dependency, people’s behaviour, policy interventions, emergent qualities, infrastructural changes, temporal scales and spatial levels, as well as interactions between these attributes.Many of these themes are highly intertwined and insufficiently specific to be useful for improving a model’s ability to represent rapid and systemic change.To short-list attributes useful for the purposes of this paper, the following three criteria were used.Firstly, each attribute should correspond to one of the key drivers of the paradigm shift identified in the Introduction.Secondly, each selected attribute should be distinct, i.e. 
not have significant overlaps with the other selected attributes.Finally, an attribute should be specific enough to be usefully defined and provide variables that can potentially become part of a modelling environment.Each of the themes in the previous paragraph was qualitatively scored against these three criteria, and the four attributes eventually selected were given new names to distinguish them from the themes identified in the literature.The attributes were then used as an analysis framework when evaluating how well the methods in the typology integrated those attributes.Following the development of the attributes, a systematic literature review of peer-reviewed journal papers on methods of demand modelling was conducted, in order to identify those that may fulfil the attributes and work across nexus boundaries.The aim was to develop a comprehensive typology of demand-based methods across energy and water studies, with a focus on exploratory methods suitable for long timeframes.The databases and search engines accessed included Scopus, Google Scholar and Science Direct.Scopus was used as the main search engine, given that it captures the majority of peer-reviewed journals in the relevant areas.Google Scholar then helped to identify relevant grey literature, e.g. reports that also presented typologies.The search keywords covered combinations of ‘typology’, ‘forecasting’, ‘forecast’, ‘prediction’, ‘predicted’, ‘demand’, ‘energy’, ‘power’, ‘electricity’, ‘water’, ‘future’, ‘behaviour’/‘behavior’ and technique names such as ‘agent-based modelling’ used with Boolean operators ‘AND’ and ‘OR’.The search keywords were determined by the research objectives of this study discussed in the introduction and literature review sections.The primary selection criterion was that each study should present a typology for exploring future demand in the area of either energy or water, rather than the application of a particular method.The studies that contained typologies or taxonomies of anything other than methods for exploring future demand were excluded.A general selection principle was saturation, representing a point at which further literature search stopped contributing new insights to the creation of a comprehensive typology of energy and water demand futures modelling methods.In total, six peer-reviewed typologies were selected and combined into an aggregate typology presented in Section 5.To this end, we applied a ‘framework synthesis’, which required the establishment of a framework in advance of synthesising the literature, while keeping the framework flexible in order to absorb new findings .In our case, this a priori framework had been based on an expert survey of 13 interdisciplinary scholars across the University of Manchester who were engaged with energy and water demand research.Participants were recruited by email, telephone and in person from across the physical and social sciences, including engineers, computer scientists, economists, human geographers, sociologists, and psychologists.The survey used the SurveyMonkey platform and had a 100% completion rate, i.e.
all 13 experts responded.The survey generated data on the range of techniques, methods and data sources used to understand the future energy and water demand by researchers from different disciplines.The survey included questions on the advantages, limitations and uncertainties of each of these methods alongside questions on how data was analysed and used.Particular attention was paid to the representation of ‘behaviour’ across the disciplines, and the conceptualisation of ‘demand’.The typology was then further developed and refined during two interdisciplinary expert workshops.The workshops were conducted to discuss and validate the findings of the survey and the emerging typology, and explore which methods were employed, or under-used, within non-academic water and energy sectors.The first workshop included the respondents that took part in the expert survey, whereas the second workshop engaged both academic and non-academic stakeholders who applied modelling techniques to the UK’s energy and water industries.During the first workshop, participants critiqued a survey-informed version of the typology and contributed several methods to it, such as the Delphi method, multicriteria decision analysis, and meta-analysis.The second workshop helped to identify knowledge gaps relevant for practitioners.During this workshop, the non-academic stakeholders’ contribution was in verifying whether certain methods of exploring future demand were under-used in the private sector.The stakeholders were selected based on their long-term experience and the authors’ contacts in the energy and water sectors.Finally, examples of each of the methods in the developed typology were identified through a narrative literature review.In this instance the key selection criterion was that each study should use a particular method in Fig. 1, or a combination thereof, for exploring future demand in the area of either energy or water.Recent examples of modelling methods i.e. 
those published since 2000 were predominantly selected, although in two instances older studies were chosen where more recent relevant work could not be identified.The ongoing paradigm shift and the need to inform planning in the medium to long term present a challenge to existing decision-support models and tools.It is essential for both demand-related research and policy that modelling reflects these uncertainties and dynamics.To achieve it, this section explores what qualities of the human and natural systems can help to represent uncertainty of future demand in planning and decision-support tools for the energy and water sectors.Based on the literature and selection criteria considered in the methods section, four attributes have been identified, named here as ‘stochastic events’, ‘diversity of behaviour’, ‘policy interventions’ and ‘co-evolution’.Table 1 summarises the sources of uncertainty captured within each of the four attributes and gives examples of variables related to those attributes that can help to explore energy and water demand under conditions of deep uncertainty.The first three attributes include examples that can be categorised as input variables for models, while ‘co-evolution’ covers key relationships between the variables ensuring that those relationships are not simplified to the extent where the reality is compromised.Relationships between the attributes include, for example, the effects that policy interventions can have on technological breakthroughs and on practices, or the ways that diversity of behaviour drives social unrest or changes in service provision.The unpredictability and randomness element relates to the first of the four selected attributes, ‘stochastic events’ – a concept that climate change science borrows from statistics.This term usually refers to climate change impacts that are difficult to predict, such as extreme weather events that may cause immediate disruptions and fluctuations in the energy and water supply.House-Peters and Chang contrast stochastic events with changes in income and demographics that tend to influence demand gradually, with the impact being spread across several years.Stochastic events can be reflected in models as system shocks whose effect may be either one-off or lasting.Examples of variables for ‘stochastic events’ to be represented in models include a stochastic representation of climate change impacts, technological breakthroughs or social unrest.The second attribute is the ‘diversity of behaviour’.This paper adopts the concept ‘diversity of behaviour’, rather than ‘individual behaviour’, with the intention to capture behavioural patterns at a systems level.Such patterns arise because particular ways of doing things are embedded within the surrounding systems i.e. infrastructures, institutions, social norms and the rule of law .This concept reflects the diversity of people’s actions in relation to the resources they consume, and why.Explanations for why people engage in particular forms of resource consumption vary substantially along theoretical and disciplinary lines.Examples of variables for ‘diversity of behaviour’ to be represented when modelling future demand include such impacts on demand as social networks exerting group/peer pressure; attitudes towards energy and water conservation; social practices related to the dynamics of everyday life; the diffusion of information and consumer classifications e.g. 
‘early adopters’ of technology.‘Policy interventions’ is the third attribute identified here.This attribute is broader than the first two as it can contribute to nonlinearity, produce new system dynamics, and partly capture infrastructural change and systemic transformation.The challenge is to establish how long the effect of policy interventions would last, how it would percolate through the system, what other elements of the system would be affected, and what feedback loops would emerge.These questions are also valid for the first two attributes and are explicitly captured in the final attribute, ‘co-evolution’.Examples of variables for the ‘policy interventions’ attribute are standards for fuel and water efficiency, a feed-in tariff or a carbon tax.This paper defines the fourth attribute, ‘co-evolution’, as the way that infrastructures, technologies, institutions and practices jointly develop in a nonlinear manner over time.The concept of co-evolution is to capture key interactions, relationships and feedback loops between variables specified within the previous three attributes.In particular, a feedback loop arises when some of the information about a process is fed back to a starting point of the process, affecting that starting point; i.e. the response of a system affects inputs into that system.Socio-economic systems display various aspects of co-evolution.For example, supply and demand are said to co-evolve when increasing supply leads to a disproportionate increase in demand through raising people’s expectations of a service .These expectations exist at the societal, rather than individual, level and create multiple flow-on effects in related sectors.Similarly, the rise of showering as the dominant way of achieving bodily cleanliness in the UK reflects the co-evolution of household technologies, wider systems of infrastructure and social norms and expectations for cleanliness and comfort .With socio-economic systems being strongly coupled with biophysical systems, they constantly co-evolve and adapt to ongoing changes .At the very least, modelled variables within the ‘co-evolution’ attribute should attempt to capture interactions and feedbacks between new technologies, policy interventions, the changing climate and the diversity of emerging behaviours.Before assessing how existing methods can represent the four attributes identified so far, this subsection consolidates method typologies from the literature, the expert survey and the first workshop, to help broaden the ‘menu’ of methods available.The new comprehensive typology devised for the purposes of this paper draws on the method classifications frequently applied to studying both energy and water demand.Such classifications typically address a particular area of application within a specific time horizon; however qualitative and mixed methods are often not well represented.This research finds that studies of both future water and energy demand rely on broadly similar modelling methods and explore comparable timeframes.Typical classifications are dominated by methods from two main groups named here as ‘traditional statistical/mathematical methods’ and ‘machine-learning methods’, occasionally adding some qualitative or mixed methods, such as the Delphi technique and conceptual models.In the new typology, the integrated method classifications from the literature are augmented with complementary qualitative and mixed methods such as multicriteria decision analysis and transitions theory, as discussed by the experts on demand 
modelling at the workshops.The ‘Misc.’, branch of the typology has been created based on Bhattacharyya who classifies methods for energy demand forecasting into ‘simple’ and ‘advanced’.He covers both end-use and input-output modelling in the advanced group, alongside econometric models that are included here in the ‘traditional statistical/mathematical methods’.Examples of his ‘simple’ methods are trend analysis and direct surveys.The only qualitative or mixed method Bhattacharyya suggests is scenario analysis.In general, his simple-vs.-,advanced classification is insufficiently detailed for the purposes of this paper, as this classification does not reveal the principles that underlie particular modelling methods.While his classification is not adopted here, the methods discussed by Bhattacharyya are included in the typology in Fig. 1.Memon and Butler offer an alternative classification of methods for forecasting water demand, implying a direct correlation between time horizons and data intensity.They argue that long-term forecasting requires conceptual techniques and relatively little data, while short-term forecasting calls for data-intensive methods .Their short term appears to refer to hourly and daily forecasting; and, although they do not define ‘long term’ explicitly, their example of scenario ‘prediction’ refers to 2025 .Examples of methods are given that “were designed to make long-term predictions” , such as statistical methods, “scenario-based forecasting” and forecasting methods for “network operations”.Memon and Butler’s idea that exploring long-term future demand may need to go beyond quantitative modelling is consistent with the diversity of methods presented in Fig. 1.The elements of the typology are neither uniform nor clearly demarcated by scope and purpose, indicative of the complexity of managing resources at the nexus.The key groups of methods overlap substantially: regression methods draw on time-series analysis, while machine-learning methods draw on both statistical and qualitative methods.Other overlapping methods include agent-based models that combine qualitative and quantitative approaches .Generally, qualitative methods, in addition to being used in a stand-alone fashion, are often applied in combination with the other three groups .For example, some of the qualitative methods are ways of gathering data; others are systems of ideas for framing the analysis of future demand; yet others are applied to modelling tools.The term ‘scenarios’ here refers to a particularly broad concept: they are extensively used in futures studies in combination with virtually all methods listed in the typology.For example, Memon and Butler view scenarios as another forecasting technique, which is contested according to the original and commonly used definition of scenarios stating that they are non-predictive .On balance, traditional statistical methods are still the most common in futures studies, particularly forecasting, despite machine-learning methods often having more accurate predictive capacity both in short and longer terms , due to the ability of machine-learning methods to better incorporate systemic complexity and interactions.The prevalence of traditional statistical methods confirms the bias in the way research questions are asked with an in-built agenda – something that Asdal calls a “shared technical interest” to solve a problem by generating a number.The studies reviewed are not always explicit about defining the timeframe used.Their short-term time horizon varies 
from real-time to daily to monthly forecasting, while long term is “one to ten years, and sometimes up to several decades” .The next subsection will explore how suitable the methods summarised in Fig. 1 are for informing strategic planning, bearing in mind the infrastructure lifespan and significant climate change impacts that will be observed towards the end of the century.The Introduction discussed the dynamics and conditions of uncertainty that should underpin emerging research methods to explore future demand for energy and water.Following the translation of the uncertainties into the four attributes, this subsection analyses the ability of existing methods summarised in Fig. 1 to capture, at least to some extent, these dynamics.As a caveat, only typical rather than potential applications of the methods to the four attributes are discussed here.This is because, theoretically, machine-learning methods in particular have almost unlimited possibilities if they are applied unconventionally, further developed and/or combined with qualitative or other machine-learning methods .According to the typology literature analysed here, ‘policy interventions’ is the attribute that most methods try to incorporate , followed by ‘stochastic events’ .Among the traditional methods, it is the stochastic models that include these attributes, while causal and extrapolation methods appear unable to do so.However, even within the stochastic methods, consequences of policy interventions tend to be modelled in a linear way, with little regard for the ‘ripples’ throughout the system.Machine-learning methods are reported to be more suitable for taking into account the dynamics and nonlinearity .These qualities might make machine-based methods even more useful for strategic decision-support than for prediction , which they tend to be used for.Difficulties for both types of methods arise when capturing unintended consequences of policies, as well as when identifying whether policy interventions and stochastic events are lingering or one-off events.The diversity of behaviour is another attribute addressed with varying levels of success.Similar to policy interventions, some of the methods incorporate it in a reductionist way.For example, stochastic models use dummy variables to reflect such aspects as gender, race or age groups, whereas least-cost optimisation accounts for the diversity of behaviour via ‘rules’ and ‘preferences’.These traditional methods usually pre-set behaviour based on the ‘rational choice’ theory .By contrast, agent-based modelling is designed to let behavioural patterns emerge as a result of individuals’ interactions with each other and with the environment.The attribute least represented within the reviewed methods is co-evolution.Only machine-learning methods attempt to integrate aspects of co-evolution in the form of feedback loops and interrelationships between variables.The literature emphasises the limits of traditional modelling in application to longer-term and systemic issues.With the ongoing ‘paradigm shift’ in mind, machine-learning methods emerge as more appropriate for this purpose, owing to their ability to capture dynamic processes, nonlinear interactions and behavioural patterns .At the same time, their disadvantages include their complexity and data intensity that can compromise the transparency of the models and obscure the interpretation of results .Of particular interest are the participative methods among the ‘Qualitative or mixed methods’ category in Fig. 
1, including interviews and the Delphi method; literature suggests that appropriately engaging with stakeholders opens up new ways of exploring futures .Problems may arise when specific methods claim to have a purpose they are not designed to deliver while continuing to inform both policy and practice.An example of this issue is the UK Water Industry where in the latest 25 year planning exercise stochastic modelling was used for the first time to explore supply side planning, but the demand side is still resolved with deterministic models i.e. extrapolation .In the energy sector, a similar example is optimisation-based Integrated Assessment Models used for the long-term study of energy systems .Such models are inadequate for the purpose of supporting long-term decision-making under conditions of deep uncertainty and risk misleading non-expert users of these studies.The issues discussed in this section relate to one of the main aspects of complex systems – uncertainty.Futures studies deal with uncertainty by introducing sensitivity analysis, scenarios and probability distributions, and by drawing on other disciplines and qualitative approaches, such as expert review.Some futures studies and modelling approaches regularly attempt to represent policy interventions and stochastic events.However, the ‘diversity of behaviour’ and ‘co-evolution’ perspectives are under-conceptualised in modelling current and future demand.The next section draws on disciplines that could inform the integration of these two attributes into the wider modelling literature.While demand studies in general can consider changing demand of a range of actors, here two bodies of literature are explored that have specifically focused on household demand and were highlighted during the expert workshops.Both psychological and sociological sciences emphasise the social, or demand, side that has long been neglected in favour the technologies and the supply side .At the same time, these two relatively distinct sets of literature speak in different ways to the understanding of uncertainties presented in Table 1, and to planning and managing water and energy resources.The forthcoming section is not intended as a comprehensive review, but rather tentative ideas on how mainstream and largely quantitative modelling methods can learn from other research areas.One prevailing approach to understanding demand has focused on the individual as a unit of analysis and employed models that seek to understand pro-environmental behaviour and motivations, and their impacts on energy and water demand .This approach typically explores how attitudes, beliefs and values shape human behaviour, with a focus on the individual’s agency.The rational choice model and theories of planned behaviour and reasoned action represent individuals as independent decision-makers.Others, such as the norm activation model, allocate a level of agency to social norms, as a person’s behaviour is influenced by their awareness of the consequences of their actions and their acceptance of personal responsibility .Although this literature increasingly acknowledges the attitude-behaviour gap in relation to environmental decision-making , the rational choice model has been influential in both environmental economics and policy .This view is linked to the information-deficit model: to make rational choice, individuals need to be provided with information to assist them with their decisions.This approach is useful for identifying drivers to behavioural change , exploring routines 
and conventions of resource use , and factoring in ‘rebound effects’ ; for instance, if a person is motivated by values rather than by money to implement an environmental measure, the rebound effects might be smaller .The elements of the MINDSPACE framework developed for influencing behaviour through policy are another example of potential inputs in decision-support modelling .Several criticisms have been levelled at these approaches to resource use and pro-environmental behaviour.Jackson argues that they assume ‘methodological individualism’, where social behaviour is understood to be the result of an aggregation of individual behaviours.The focus on individuals and the relative decline of group research has been noted historically , although organisational psychology has since started unpacking group dynamics of work teams and tracing the impacts of adopting energy-related behaviours in workplaces .However, research on resource consumption at this level of analysis is relatively limited .Therefore, the risk of modelling future demand from this perspective is that demand at a population level is seen as a multiplied effect of individual decisions, divorced from system-level constraints.For example, in relation to the attributes outlined in Table 1, these theories struggle to analyse changes in behaviour arising from stochastic events, or to explain behaviour from the perspective of the co-evolution and interaction of demand and wider factors.Demand is thus explored in a deterministic way, with the actions of the individual isolated from their cultural or socio-technical contexts.While several psychological studies include contextual determinants such as socio-demographic variables in their models , this understanding of the social and physical context is different from the interactive and relational dynamics uncovered by the ‘co-evolution’ approach.Given the criticisms directed at the psychological perspective, demand-related developments within sociology and human geography have been positioned as alternatives .These approaches have emphasised the material and social structures implicit in processes of consumption, highlighting the way that technologies, infrastructure, social norms and practices co-evolve across space and time .Such perspective provides insight into the processes underpinning historical and current demand for water, energy and other resources.This insight includes reflections on the social nature of demand, the material nature of demand, and how demand ‘is done’ in peoples’ day-to-day lives.It provides insights into the diversity of behaviour as understood at a population level.These perspectives also contribute strongly to understanding how demand co-evolves in relation to social, cultural and infrastructural elements, and as a result of policy interventions.The focus of sociological and human geographical approaches, together representing the main locus of social science research on energy and water demand, is on how context and structures give rise to practices, characterised by the co-evolution attribute described in Section 4 .In particular, such disciplinary perspectives are attentive to notions of difference and unevenness, within and across societies and space .Work on energy justice highlights the complex historical and geographically specific constructions of energy production and distribution, and how these socio-material histories “may be limiting the current conditions and choices for ethical and sustainable consumption” .The notion of ethics in this 
literature also extends to intergenerational equities and the gendered politics of research on demand and climate change .In relation to futures modelling, this research emphasises that the past-present-future is not an evenly shared or homogenous entity to be modelled as a singular ‘demand’ outcome .The challenge for futures studies is to understand how the water and energy supply-demand systems vary across time and space .One could consider a framework of services, for example how water and energy resources satisfy human and environmental needs .What is considered a need also changes over space and time .A focus on services enables policy and scientific discussions to shift from the supply of the resource to the effects .Thinking about services re-focuses analysis on how complex systems may co-evolve to meet and create diverse demand effects.Although the co-evolution is methodologically challenging to capture, some recent methods attempt to consider these diverse socio-material entanglements, through backcasting and transition planning of resource use , developing a population-level understanding of practice-based changes and including practice theory in agent-based modelling .Strategic planning of energy and water provision has long-term and far-reaching consequences, as the long lifespan of these infrastructures shapes patterns of demand and consumption for decades ahead.From this perspective, demand appears relatively fixed; however, the on-going major changes in climate, society and technology create an increasingly dynamic environment with manifold effects that themselves interact to drive demand in different ways.This transformation is particularly pertinent to energy and water resources, where future demand ought to be explored under conditions of rapid change and deep uncertainty .This paradigm shift requires new types of research questions and, accordingly, new ways of answering those questions .To inform strategic planning in the water and energy sectors, the interdisciplinary demand literature has called to clarify these uncertainties, both conceptually and methodologically.However, there has been little reflection on how ‘demand studies’ on water and energy represent the uncertainties, and the area has been dominated by mainstream, quantitative economics.To address this lack of reflection and to highlight a wider range of methods available, this paper has developed a comprehensive typology of methods for exploring future energy and water demand.After identifying the four attributes, the paper posits that methods should be able to represent or capture deep uncertainty; and has provided examples of how insights from psychological and social science disciplines can assist in conceptualising these uncertainties.The analysis in this paper has a number of limitations raising further questions for researchers, policy-makers and industry.In particular, questions arise as to whether these uncertainties can or should be quantified and whether there are other ways to take them into account in a way that supports more responsive and effective planning about demand/supply systems.One challenge is whether to integrate the uncertainties represented in the attribute ‘co-evolution’ in existing methods, or to keep it as part of the context.Another example is given by Kandil et al. 
, who warn that in fast-developing systems, such as electricity grids in industrialising economies, many events are unpredictable and not currently quantified, while existing models still cannot accommodate such information.This offers a question for future research and international decision-making as to whether demand-related models developed for industrialised countries can be applied to the context of industrialising countries with different policy and regulatory systems .The challenge for research on futures studies for the nexus of energy and water demand is two-fold: whether, and how, methodological approaches to future studies capture complexities and co-evolution, and how more sophisticated futures studies can be used by policy makers and decision makers.Demand-modelling methods are designed for various purposes and different scales and scopes .Future demand is just one element of the information taken into account by decision-makers when planning new infrastructure.Decision-makers are often presented with a range of investment options with respect to infrastructure development within the context of deep uncertainty that renders optimisation techniques that rely upon known values and probabilities impractical .Accordingly, new ways of engaging with the complexities and uncertainties are needed to prepare our supply-demand systems in a way that is ‘resilient’ to future climatic and other social/technological changes.Given that both ‘diversity of behaviour’ and ‘co-evolution’ are currently under-represented within the modelling literature, it is important to reflect on the nature of this conceptual and empirical gap and opportunities for further integration.The lack of application of these approaches in interdisciplinary research on the water-energy nexus is partly due to how the future, and the scale of analysis, is conceptualised in the majority of modelling literature.For example, co-evolution approaches consider the future to be emergent and changeable , even though history continues to have some influence on the future, such as infrastructure legacies.By contrast, modelling perspectives largely carry forward historical configurations as the baseline for the future.Histories of relationships between infrastructures, social factors and practices have shown that the precise configuration of ‘future practices’ is in itself unpredictable.The literature engaging with ideas about co-evolution does enable a different set of questions to be asked about the implication for demand of material, social and policy investments in the water and energy sectors.While they have not been applied in any systematic way to futures studies and demand modelling, it does not mean that it cannot be so.The challenge for the researchers and modellers working in the areas of energy and water demand is to experiment with new conceptual and methodological resources that accommodate such dynamic uncertainties, unevenness and complexities.The analysis of current methods highlights that no single method is able to meet all the attributes of stochastic events, diversity of behaviour, policy interventions, and co-evolution.Instead, a combination of both quantitative and qualitative methods may genuinely be able to address the four attributes of deep uncertainty.Such whole-systems approaches should ideally reflect the nexus thinking across the energy and water sectors, and be enhanced with interdisciplinary insights.This would change the balance of modelling from focusing predominantly on technology and isolated 
individuals towards systems thinking.For example, combining simulation models with participative methods would provide more information about people’s interactions with the system, better reflect the complexity of the real world, and potentially increase buy-in for infrastructure projects .The principal disadvantage of combining or even integrating different methods is that it can be expensive, time-consuming and logistically challenging.In addition, some insights from sociology and human geography may baffle policy-makers , as quantitative outputs of traditional models are arguably easier to convert into policies, thereby perpetuating in-built agendas of dominant methods and avoiding systemic changes.However, ultimately the aim of broadening the range of social science methods in use is to address the paradigm shift through challenging the current “shared technical interest” of both policy and methods in producing incremental change.Rather than attempting to predict the future and then plan accordingly, methods should seek to assess and challenge policy and planning options in relation to pertinent parameters with the aim to identify strategies which are robust to the uncertainties within these parameters.In particular, it is necessary to represent future demand through multiple plausible futures reflecting the unevenness of the futures and their emergent nature.The challenge for those investigating future demand under these circumstances is to capture as far as is reasonable a range of both quantitative and qualitative characteristics of demand including extremes, not solely a ‘best estimate’.The ranges of futures derived need to be regularly reviewed and adapted, as new data and circumstances arise.In summary, considering a range of futures, involving stakeholders and adaptivity are essential for improving the ability of futures studies to envision surprises and inform planning and long-term policy-making across the energy and water sectors.
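To make the closing argument more concrete, the following sketch (illustrative only, not drawn from any of the studies reviewed) shows one minimal way the four attributes could enter an exploratory model: demand is simulated as an ensemble of plausible futures rather than a single best estimate, with random shocks standing in for stochastic events, a household-level growth factor for the diversity of behaviour, a hypothetical efficiency standard for policy interventions, and a simple demand-expectation feedback for co-evolution. All parameter values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

N_FUTURES, YEARS = 1000, 30          # ensemble of plausible futures, planning horizon
BASE_DEMAND = 100.0                  # arbitrary units; hypothetical starting demand

demand = np.empty((N_FUTURES, YEARS))
for i in range(N_FUTURES):
    # Diversity of behaviour: each future draws its own mix of household types,
    # giving a different underlying growth rate rather than one average consumer.
    behaviour_factor = rng.normal(loc=0.010, scale=0.008)

    # Policy interventions: an efficiency standard arrives at a random year and
    # dampens growth thereafter (effect size is hypothetical).
    policy_year = rng.integers(5, 20)
    policy_effect = -0.012

    d = BASE_DEMAND
    for t in range(YEARS):
        growth = behaviour_factor + (policy_effect if t >= policy_year else 0.0)

        # Stochastic events: rare shocks (e.g. extreme weather) hit demand abruptly.
        if rng.random() < 0.05:
            d *= rng.uniform(0.90, 1.15)

        # Co-evolution (crudely): rising demand raises service expectations,
        # feeding back into slightly faster growth.
        feedback = 0.002 * (d / BASE_DEMAND - 1.0)

        d *= 1.0 + growth + feedback
        demand[i, t] = d

# Report a range of futures, including extremes, instead of a single best estimate.
p5, p50, p95 = np.percentile(demand[:, -1], [5, 50, 95])
print(f"Demand in year {YEARS}: median {p50:.1f}, 5th-95th percentile {p5:.1f}-{p95:.1f}")
```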
Medium- and long-term planning, defined here as 10 years or longer, in the energy and water sectors is fraught with uncertainty, exacerbated by an accelerating ‘paradigm shift’. The new paradigm is characterised by a changing climate and rapid adoption of new technologies, accompanied by changes in end-use practices. Traditional methods (such as econometrics) do not incorporate these diverse and dynamic aspects and perform poorly when exploring long-term futures. This paper critiques existing methods and explores how interdisciplinary insights could provide methodological innovation for exploring future energy and water demand. The paper identifies four attributes that methods need to capture to reflect at least some of the uncertainty associated with the paradigm shift: stochastic events, the diversity of behaviour, policy interventions and the ‘co-evolution’ of the variables affecting demand. Machine-learning methods can account for some of the four identified attributes and can be further enhanced by insights from across the psychological and social sciences (human geography and sociology), incorporating rebound effect and the unevenness of demand, and acknowledging the emergent nature of demand. The findings have implications for urban and regional planning of infrastructure and contribute to current debates on nexus thinking for energy and water resource management.
13
Anti-tyrosinase, total phenolic content and antioxidant activity of selected Sudanese medicinal plants
Tyrosinase is known to be a key enzyme in melanin biosynthesis and is widely distributed in plants and mammalian cells. The tyrosinase enzyme catalyzes two different reactions: the hydroxylation of monophenols to o-diphenols and the oxidation of o-diphenols to o-quinones. o-Quinone is transformed into melanin in a series of non-enzymatic reactions. Melanogenesis is a physiological process resulting in the synthesis of melanin pigments, which play a crucial protective role against skin photocarcinogenesis. In humans and other mammals, the biosynthesis of melanin takes place in a lineage of cells known as melanocytes, which contain the enzyme tyrosinase. Melanin synthesis inhibitors are used topically for treating localized hyperpigmentation in humans such as lentigo, nevus, ephelis, post-inflammatory states and melasma of pregnancy. Melanin formation is also considered to be deleterious to the color quality of plant-derived food, and prevention of this browning reaction has always been a challenge to food scientists. Tyrosinase is furthermore one of the key enzymes in the insect molting process, so its inhibition offers a possible alternative to conventional insecticides. Therefore, inhibitors of this enzyme may lead to novel skin-whitening agents, anti-browning substances or compounds for insect control. Tyrosinase-inhibiting agents are increasingly used in cosmetic products for maintaining skin whiteness. Plants and their extracts are inexpensive and rich resources of active compounds that can be utilized to inhibit tyrosinase activity as well as melanin production. The idea behind using antioxidants for skin-lightening purposes lies in the hypothesis that the oxidative effect of UV irradiation contributes to the activation of melanogenesis. UV irradiation can produce reactive oxygen species (ROS) in the skin that may induce melanogenesis by activating tyrosinase, as the enzyme prefers the superoxide anion radical. Additionally, these ROS enhance DNA damage and may induce the proliferation of melanocytes. Therefore, ROS scavengers such as antioxidants may reduce hyperpigmentation. Total phenolic and flavonoid contents have been reported to correlate significantly with free radical-scavenging and tyrosinase-inhibiting activities; thus, the strong free radical-scavenging and tyrosinase-inhibiting properties increased proportionally with the level of antioxidants in Sorghum distillery residue extracts. Medicinal plants represent an important component of traditional medicine in the world, including in Sudan. The flora of Sudan is relatively rich in medicinal plants, corresponding to the wide range of ecological habitats and vegetation zones of the country. Owing to this rich plant diversity, it is very encouraging to explore the potential of Sudanese plants for cosmeceutical purposes. Although only a few of these medicinal plants are traditionally used for skin decoration and softening, the authors decided to investigate the potential of Sudanese medicinal plants as skin-whitening agents, which could be useful for the cosmeceutical industry. The ability of different extracts of Sudanese medicinal plants to act as skin-lightening agents was tested through their ability to inhibit tyrosinase, the rate-limiting enzyme in melanogenesis, initially using a cell-free mushroom tyrosinase system, which has commonly been employed for the testing and screening of potential skin-lightening agents. In this study, fifty methanolic extracts of Sudanese medicinal plants were evaluated for their anti-tyrosinase activity, total phenolic content and antioxidant properties in order to
identify the most promising plant extracts as skin-lightening agents. The plant materials, which have medicinal value, were randomly selected from Khartoum and Gadarif states of Sudan in March 2011. Identification of the plant materials was done at the University of Khartoum, Faculty of Agriculture and Faculty of Forestry. Authentication voucher specimens are deposited in the Horticultural Laboratory, Department of Horticulture, Faculty of Agriculture, University of Khartoum. Dimethylsulfoxide, iron chloride hexahydrate, Folin–Ciocalteu reagent, L-tyrosine and L-dihydroxyphenylalanine (L-DOPA) were purchased from Wako Pure Chemical Industries, Ltd. Mushroom tyrosinase and 2,4,6-tri(2-pyridyl)-s-triazine (TPTZ) were purchased from Sigma. Quercetin, butylated hydroxytoluene, ascorbic acid and gallic acid were purchased from Naka Lai Tesque, Inc. Other chemicals were of the highest grade commercially available. Plant materials were shade dried at room temperature and powdered before extraction; each was extracted with absolute methanol for 12 h, three times. The extracts were filtered and the solvent was then removed under vacuum using a rotary evaporator. All extracts were stored at 4 °C prior to analysis. The tyrosinase inhibition assay was performed by the method described by Batubara et al. Briefly, the sample was added to a 96-well plate. Tyrosinase and 110 μl of substrate were added. After incubation at 37 °C for 30 min, the absorbance at 510 nm was determined using a microplate reader. The percent inhibition of tyrosinase activity was calculated at extract concentrations of 125 and 500 μg/ml. For extracts that reached at least 50% inhibition of the enzyme, the activity was additionally expressed as an IC50 value. Kojic acid was used as a positive control.
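As a minimal illustration of how the tyrosinase plate readings described above are reduced to the reported quantities, the sketch below computes percent inhibition from control and sample absorbances at 510 nm and estimates an IC50 by log-linear interpolation between the two tested concentrations. The absorbance values are hypothetical, and both the inhibition formula and the interpolation are common conventions rather than the exact calculation used in this study.

```python
import numpy as np

def percent_inhibition(a_control: float, a_sample: float) -> float:
    """Percent inhibition from endpoint absorbances at 510 nm.

    Assumes the common formula (A_control - A_sample) / A_control * 100,
    with both values already corrected for their respective blanks.
    """
    return (a_control - a_sample) / a_control * 100.0

def ic50_log_interp(concs_ug_ml, inhibitions_pct):
    """Estimate IC50 by interpolating log10(concentration) at 50 % inhibition.

    Assumes inhibition increases with concentration (ascending order).
    """
    logc = np.log10(concs_ug_ml)
    return 10 ** np.interp(50.0, inhibitions_pct, logc)

# Hypothetical plate-reader data for one extract (not from the paper).
a_control = 0.82                       # enzyme + substrate, no inhibitor
a_extract = {125: 0.47, 500: 0.21}     # extract concentrations in ug/ml

inhib = {c: percent_inhibition(a_control, a) for c, a in a_extract.items()}
print(inhib)                           # e.g. {125: 42.7, 500: 74.4}

# An IC50 is only meaningful if >50 % inhibition is reached at the top concentration.
if max(inhib.values()) > 50.0:
    concs = sorted(inhib)
    ic50 = ic50_log_interp(np.array(concs, dtype=float),
                           np.array([inhib[c] for c in concs]))
    print(f"estimated IC50 ~ {ic50:.0f} ug/ml")
```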
The total phenolics assay was performed as described previously by Ainsworth and Gillespie. Plant extract was dissolved in 50% methanol and 100 μL was transferred into test tubes, followed by 200 μL of 1 N Folin–Ciocalteu reagent (10%). The mixtures were then combined with 800 μl of sodium carbonate and maintained at room temperature for 2 h. Two hundred microlitres of sample from each assay tube was transferred to a 96-well microplate and the absorbance was read at 765 nm using a microplate reader. Total phenolic concentrations were expressed as micrograms of gallic acid equivalents (GAE). The total antioxidant potential of the extracts was determined using the ferric reducing ability of plasma (FRAP) assay described by Tachakittirungrod et al. Briefly, the FRAP reagent was freshly prepared. The extracts were dissolved in ethanol to a concentration of 1 mg/ml. An aliquot of 20 μL of test solution was mixed with 180 μL of FRAP reagent. The absorption of the reaction mixture was measured at 590 nm with a microplate reader. Ethanolic solutions of known Fe(II) concentration, in the range of 50–500 μM, were used to construct the calibration curve. The reducing power was expressed as an equivalent concentration (EC). This EC was defined as the concentration of antioxidant having a ferric reducing ability equivalent to that of 1 mM FeSO4. Quercetin, ascorbic acid and butylated hydroxytoluene were used as positive controls. The values for tyrosinase inhibition (IC50), total phenolic content and antioxidant activity were expressed as the mean. The significant differences between extracts and positive controls were assessed by one-way analysis of variance followed by pairwise comparison of the means with the positive control using Tukey's multiple comparison test. Values were considered significant when p was less than 0.05. Table 1 summarizes the botanical name, family, voucher specimen and traditional uses of the investigated plant species. Among these selected Sudanese medicinal plants, Lawsonia inermis, Combretum hartmannianum, Acacia seyal var. fistula, Solanum dubium, Citrullus colocynthis and Acacia tortilis are used, among other applications, for preventing dryness and bacterial infection of the skin. The 50 plant extracts used in this research were prepared from different plant parts and belong to 39 plant species distributed among 27 families. L-tyrosine and L-DOPA were used as substrates to determine the monophenolase and diphenolase activities of mushroom tyrosinase, respectively. The tyrosinase inhibitory activities of all extracts are presented in Table 2 as percentage inhibition with L-tyrosine and L-DOPA at concentrations of 125 and 500 μg/ml, as well as IC50 values. The study revealed that 36% and 24% of the extracts presented good tyrosinase inhibitory activity, inhibiting the enzyme by more than 50% with L-tyrosine and L-DOPA, respectively. At an extract concentration of 500 μg/ml, with both L-tyrosine and L-DOPA as substrates, Z. spina-christi, A. digitata, A. nilotica, K. senegalensis, G. senegalensis, T. brownii, A. seyal var. seyal, A. seyal var. fistula and P. glabrum showed inhibition levels of more than 70%. In addition, kojic acid, which was used as a positive control, showed inhibition levels of 99.65 and 94.40% with L-tyrosine and L-DOPA, respectively. A. nilotica, A. seyal var. seyal and kojic acid demonstrated significantly lower IC50 values for inhibition with the L-tyrosine substrate than the other extracts. Moreover, A. nilotica, A. seyal var. fistula and the positive control exhibited the lowest IC50 values for inhibition with the L-DOPA substrate, significantly different from the other extracts. Maldini et al. reported the isolation of compounds from A. nilotica pods, including galloylated catechin and gallocatechin derivatives along with galloylated glucose derivatives. Also, previous studies of the bark of A.
nilotica resulted in the separation of gallic acid, catechin 5-galloyl ester and galloylated derivatives of catechin 5-O-gallate. Salem et al. demonstrated the isolation of the compound gallocatechin 5-O-gallate from A. nilotica pods, exhibiting in vitro activity and selectivity towards uveal melanoma cell lines comparable to that of the known compound epigallocatechin gallate (EGCG) from green tea. EGCG has been reported as a potential whitening agent that reduces melanin production in mouse melanoma cells. Sato and Toriyama clarified that the catechin group inhibited melanin synthesis in B16 melanoma cells through inhibition of the melanogenic protein tyrosinase. They also suggested the catechin group as a candidate anti-melanogenic agent that might be effective in hyperpigmentation disorders. Therefore, the in vitro anti-tyrosinase activity of A. nilotica pods and bark may be due to catechin derivative compounds. Polyphenolic compounds are commonly found in both edible and inedible plants, and they have been reported to have multiple biological effects. Lin et al. mentioned that phenolics with ion-reducing ability diminish the possibility of hydroxyl radical formation from superoxide anion radicals and additionally inhibit enzymes through their ability to chelate copper at the active site. Owing to this, polyphenols have attracted the attention of researchers for topical skin applications. The total phenolic content was measured by the Folin–Ciocalteu method. The phenolic content varied widely among the tested extracts and ranged from 9.63 to 46.02 μg GAE/mg. The highest level of phenolic content was found in T. brownii, while the lowest was in Z. spina-christi. There were significant differences in total phenolic content between plant extracts. The highest phenolic concentrations were found in the T. brownii, A. nilotica, A. nilotica, P. glabrum, Z. spina-christi, T. brownii, C. hartmannianum, K. senegalensis, S. dubium, A. seyal var. seyal, T. laxiflora, G. senegalensis, A. seyal var. fistula, A. precatorius and A. maritima extracts. Our finding agrees with Cock, and Sulaiman et al., who stated that Terminalia and Acacia species, respectively, are rich in plant phenolic compounds. Comparatively moderate phenolic concentrations were found in the L. sativum, A. digitata, C. carvi, A. tortilis, X. brasilicum, C. hartmannianum, Z. spina-christi, H. sabdariffa, A. seyal var. seyal, T. indica, A. tortilis, H. thebaica, S. persica, M. oleifera, A. visnaga, L. inermis, S. argel, B. aegyptiaca, H. tuberculatum, P. aculeate, K. africana and F. cretica extracts. The G. tenax, B. aegyptiaca, M. oblongifolia, A. bracteolata, S. persica, A. pannosum, N. sativa, C. colocynthis, A. seyal var. fistula and C. decidus extracts demonstrated relatively low levels of phenolic content, whereas B. aegyptiaca and Z. spina-christi revealed quite low phenolic levels. Karou et al. found that aerial parts of B. aegyptiaca had a low content of phenolics. Although phenolic compounds are a diverse and ubiquitous group of secondary metabolites in the plant kingdom, their distribution and concentration vary across and within species.
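Quantification in both the total phenolic content and FRAP assays described above comes down to reading sample absorbances against a linear standard curve (gallic acid standards for GAE, FeSO4 standards for the equivalent concentration, EC). The sketch below illustrates that conversion with hypothetical calibration and sample values; converting the phenolics result into μg GAE per mg of extract would additionally require the dilution factor and the amount of extract assayed, which are omitted here.

```python
import numpy as np

def linear_calibration(std_conc, std_abs):
    """Fit A = slope * c + intercept to the standards and return (slope, intercept)."""
    slope, intercept = np.polyfit(std_conc, std_abs, deg=1)
    return slope, intercept

def conc_from_abs(a_sample, slope, intercept):
    """Invert the calibration line to get a concentration from an absorbance."""
    return (a_sample - intercept) / slope

# --- FRAP: FeSO4 standards (uM) vs absorbance at 590 nm (hypothetical values) ---
fe_conc = np.array([50, 100, 200, 300, 400, 500], dtype=float)
fe_abs  = np.array([0.08, 0.16, 0.31, 0.47, 0.62, 0.78])
slope_fe, icept_fe = linear_calibration(fe_conc, fe_abs)

a_extract = 0.55                                  # extract assayed at 1 mg/ml
ec_um = conc_from_abs(a_extract, slope_fe, icept_fe)
# EC is expressed relative to 1 mM FeSO4, i.e. the Fe(II)-equivalent reducing ability.
print(f"FeSO4-equivalent reducing ability ~ {ec_um:.0f} uM at 1 mg/ml extract")

# --- TPC: gallic acid standards (ug/ml) vs absorbance at 765 nm (hypothetical) ---
ga_conc = np.array([10, 25, 50, 100, 200], dtype=float)
ga_abs  = np.array([0.05, 0.12, 0.24, 0.49, 0.97])
slope_ga, icept_ga = linear_calibration(ga_conc, ga_abs)

a_phenolics = 0.40
gae_ug_ml = conc_from_abs(a_phenolics, slope_ga, icept_ga)
print(f"total phenolics ~ {gae_ug_ml:.0f} ug gallic acid equivalents per ml assayed")
```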
The ferric reducing ability of plasma assay was used to evaluate the antioxidant potential of the extracts of the selected Sudanese medicinal plants. The FRAP assay depends upon the reduction of ferric tripyridyltriazine to ferrous tripyridyltriazine at low pH. The complex has an intense blue color and can be monitored at 590 nm. The FRAP assay is used in many studies because it is quick and simple to perform, and the reaction is reproducible and linearly related to the molar concentration of the antioxidants present. The reducing power property indicates that the antioxidant compounds are electron donors and can reduce the oxidized intermediates of the oxidative damage process, so that they can act as primary and secondary antioxidants. As shown in Table 3, there were significant differences in total antioxidant capacity between the plant species and the plant parts used for the extracts. The FRAP values can be divided into four groups: very low FRAP (n = 29), low FRAP (n = 4), good FRAP (n = 6) and high FRAP (n = 11). Quercetin, ascorbic acid and BHT, used as positive controls, demonstrated antioxidant values of 3.96, 3.79 and 2.84 mM/mg, respectively. The strongest antioxidant properties were observed for the T. laxiflora, A. nilotica, T. brownii, A. seyal var. seyal, K. senegalensis, T. brownii, C. hartmannianum, P. glabrum, Z. spina-christi and G. senegalensis extracts. This result is similar to the findings of Muddathir and Mitsunaga, Siddiqui and Patil, and Hassan et al. They demonstrated that A. nilotica, T. laxiflora, C. hartmannianum, K. senegalensis, A. seyal var. seyal, G. senegalensis and Z. spp. showed excellent antioxidant activities when measured by the 1,1-diphenyl-2-picrylhydrazyl radical scavenging assay. Similarly, high antioxidant activities measured by the trolox equivalent antioxidant capacity assay have also been reported for the genera Acacia and Terminalia, which seem to be correlated with their phenolic contents. For the extracts that showed high EC values, it could be considered that the compounds in these extracts were good electron donors and could terminate oxidation chain reactions by reducing the oxidized intermediates into more stable forms. It is worth mentioning that a significant relationship was observed between the total phenolic content and the antioxidant components of the extracts that showed potent anti-tyrosinase activity. This result is in agreement with Katalinic et al., who confirmed a significant linear correlation between total phenolics and the related FRAP values of medicinal plant extracts. The greater the number of hydroxyl groups in the phenolics, the higher the antioxidant activity. The inhibition of tyrosinase activity might depend on the hydroxyl groups of the phenolic compounds of the extracts, which could form hydrogen bonds to a site of the enzyme, leading to lower enzymatic activity. Some tyrosinase inhibitors act through hydroxyl groups that bind to the active site of tyrosinase, resulting in steric hindrance or a changed conformation. The antioxidant activity mechanism may also be one of the important reasons for tyrosinase inhibition activity. Ma et al. suggested that ROS scavengers such as antioxidants may reduce hyperpigmentation; therefore, we propose that the extracts with high EC values could be useful in reducing the hyperpigmentation process. According to this study, a number of Sudanese medicinal plants revealed anti-tyrosinase activity, high phenolic content and antioxidant capacity. A.
nilotica expressed higher potency and is very promising for further research, since it combines high tyrosinase inhibitory activity, strong antioxidant activity and a good phenolic content. Consequently, these findings could be applied in industry for obtaining natural antioxidants and tyrosinase inhibitors. However, the results obtained need further investigation in pigment cell assays and in clinical studies.
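For the statistical treatment described in the methods above (one-way ANOVA followed by Tukey's multiple comparison against the positive control at p < 0.05), a minimal sketch with invented triplicate values is shown below; the group names and numbers are placeholders and not results from this study.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical triplicate IC50 values (ug/ml) for two extracts and the kojic acid control.
data = {
    "extract_A": [8.4, 8.9, 8.5],
    "extract_B": [31.2, 33.0, 32.1],
    "kojic_acid": [9.8, 10.1, 10.2],
}

# One-way ANOVA across all groups.
f_stat, p_value = f_oneway(*data.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Tukey's multiple comparison; the pairs involving 'kojic_acid' are of interest here.
values = np.concatenate(list(data.values()))
groups = np.repeat(list(data.keys()), [len(v) for v in data.values()])
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```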
The flora of Sudan is relatively rich in medicinal plants and represents an important component of traditional medicine. Fifty methanolic extracts of selected Sudanese medicinal plants were evaluated for their in vitro tyrosinase inhibitory effect, antioxidant activity and total phenolic content (TPC). The standard method of antioxidant evaluation, ferric reducing ability of plasma (FRAP), was employed to determine the antioxidant activity while the enzyme based tyrosinase inhibition was used for the anti-tyrosinase activity. Acacia nilotica (pods, bark) and Acacia seyal var. seyal (wood) demonstrated comparable anti-tyrosinase inhibitory activity using L-tyrosine as substrate (08.61, 10.47 and 10.77 μg/ml respectively) to Kojic acid (10.02 μg/ml) which was used as a positive control. A. nilotica (bark) and Acacia seyal var. fistula (bark) exhibited good tyrosinase inhibitory activity using L-DOPA as substrate (IC50: 31.93, 36.32 μg/ml) compared to positive control (IC50: 37.63 μg/ml). The results revealed significant differences in TPC between plants extracts. The highest level of phenolic content was found in Terminalia brownii (bark; 46.02 μg GAE/mg) while the lowest was in Ziziphus spina–christi (fruits; 09.63 μg GAE/mg). The study indicated significant differences in total antioxidant capacity between the extracts. Terminalia laxiflora (wood), A. nilotica (pods, bark), T. brownii (bark), A. seyal var. seyal (bark), Khaya senegalensis (bark), T. brownii (wood) Combretum hartmannianum (bark), Polygonum glabrum (leaves), Z. Spina-christi (bark) and Guiera senegalensis (leaves) extracts displayed the high antioxidant equivalent concentration (EC) values. A. nilotica (pods, bark) expressed promising activity that warrant further research since it has high tyrosinase inhibitory activity, antioxidant activity and could be a good source of phenolic compounds. To the best of our knowledge, this is the first data presenting comprehensive data on anti-tyrosinase, TPC, antioxidant activity of the Sudanese medicinal plants.
14
Cofactor specificity engineering of Streptococcus mutans NADH oxidase 2 for NAD(P)+ regeneration in biocatalytic oxidations
Enzyme catalyzed oxidation reactions have gained increasing interest in biocatalysis recently, reflected also by a number of excellent reviews on this topic published in the last years .Oxidoreductases constitute an important group of biocatalysts as they facilitate not only the widely used stereoselective reduction of aldehydes and ketones but also the less well exploited oxidation of alcohols and amines.Oxidoreductases catalyzed oxidations are also used for production of chiral alcohols and amines by deracemization .Oxidoreductases, especially aldo-keto-reductases and dehydrogenases, act on the substrate by the transfer of electrons from or to a cofactor, mostly the nicotinamide-based nucleotides NAD and NADP.As nicotinamide cofactors are expensive, regeneration of cofactors is necessary for economically feasible biocatalytic processes.While for the regeneration of the reduced cofactors NADH and NADPH several systems are well established and widely used, universal regeneration systems for the oxidized forms NAD+ and NADP+ are less well developed.Enzyme based, electrochemical, chemical, and photochemical regeneration methods are known.Coupled substrate or coupled enzyme systems constitute two possibilities for enzymatic NAD+ recycling.In these reaction set-ups the cofactor is regenerated via the reduction of a carbonyl group of a cosubstrate, catalyzed either by the production enzyme itself or by an additionally added dehydrogenase.Carbonyl cosubstrate reductions by dehydrogenases normally provide little thermodynamic driving force for mostly energetically unfavorable biocatalytic alcohol oxidations.Generally, it is therefore necessary to supply the cosubstrate in excess to achieve high substrate conversion rates.In recent studies several smart concepts have been introduced to reduce the need for cosubstrate.The use of one-way cosubstrates or cofactor regeneration as an integral part of a redox neutral multi-enzyme network was reported.Several cofactor regeneration systems benefit from the high driving force of molecular oxygen as hydrogen acceptor.One example therefore is a 9,10 phenantrenequinone/xylose reductase system where the quinone is auto-reoxidized by oxygen ."O2 reduction also drives cofactor regeneration via mediators as ABTS or Meldola's blue, which are reoxidized by a laccase under H2O formation .Instead of using laccase the mediator reoxidation can also be achieved by electrochemical means, albeit at moderate turnover numbers.To overcome the still rather low productivity of electrochemical regeneration processes careful reaction and cell design is necessary .In pure chemical regeneration processes the chemical agent directly reoxidizes the cofactor without biocatalyst.Often Ruthenium complexes are used as oxidants .The direct regeneration of NAD+ via FMN was found to be strongly accelerated by light-induced excitation of FMN .A very promising NAD+ regeneration method is the application of soluble NADH oxidases from bacteria or archaea which use molecular oxygen as oxidant.This regeneration method has the advantage of being cheap as no cosubstrate or mediator is needed.Straightforward downstream processing is possible as only hydrogen peroxide or water is formed as byproduct.Moreover, the high redox potential of oxygen results in a high thermodynamic driving force.The electron and hydrogen transfer from NADH to oxygen is catalyzed by known soluble NADH oxidases via a two electron transfer producing hydrogen peroxide or a four electron transfer producing water .The four 
electron-transferring oxidases are the preferred choice for cofactor regeneration as they form only water as a byproduct. In the case of H2O2 production, catalase has to be added to the system to prevent enzyme damage by the peroxide. Water-forming NADH oxidases have been studied from several bacteria such as Streptococcus, Lactobacillus, Lactococcus, Clostridium, Serpulina, Leuconostoc and Bacillus. NOX enzymes from P. furiosus and T. kodakarensis produce H2O together with a significant level of H2O2. NADH oxidases belong to the pyridine nucleotide disulfide oxidoreductases together with, among others, glutathione reductase and CoA-disulfide reductase. NOX enzymes contain a single conserved redox-active cysteine that circulates between the thiol/thiolate and the sulfenic acid state during catalysis. Overoxidation of this cysteine leads to enzyme deactivation. Several NADH oxidases need FADH or DTT addition for optimal performance. Enzymes with high specific activities were recently reported from L. sanfranciscensis, L. plantarum, L. rhamnosus and S. pyogenes. A drawback in using NADH oxidases for cofactor regeneration is that almost all water-forming NADH oxidases are specific for NADH; in wild-type form, only two water-forming NOXs and one hydrogen peroxide-forming NOX show activity with NADPH. The aim of this work was therefore to develop an NADH oxidase that is universally applicable for cofactor regeneration with NADH as well as NADPH. As the starting point, the NADH-specific, water-forming S. mutans NADH oxidase 2 (SmNOX) was chosen. SmNOX was chosen as it was well characterized to be stable, highly active and not dependent on FADH or DTT addition, and had already been expressed in E. coli before. Moreover, a crystal structure of a closely related enzyme from S. pyogenes was available, which enabled us to model the SmNOX structure. In a thorough mutation study of the cofactor binding site, a SmNOX mutant with matched activities with NADH and NADPH was generated. Mutants with increased NADPH/NADH activity ratios were identified by SmNOX library screening. The conversion of 2-heptanol to 2-heptanone with NADP+ regeneration by engineered SmNOX was shown. For the determination of hydrogen peroxide formation, enzyme assay reaction mixtures containing 1 mM NADH were fully converted with purified SmNOX wild type or variant 193R194H. The assay solution or glucose standard solutions were diluted 1+1 with an o-dianisidine/glucose oxidase/horseradish peroxidase mixture and the absorption was measured at 460 nm. The SmNOX wild-type gene and variant 193R194H were recloned into the pEHISTEV vector via EcoRV/HindIII restriction sites. A pEHISTEV version was used in which a second EcoRV restriction site had been eliminated by introduction of a silent mutation. Plasmid preparations checked for correct sequence were transformed into E.
coli BL21 Star.Expression and cell free extract preparation were done as described above but in LB/Kanamycin medium.The cells were harvested by centrifugation.The pellet was resuspended in 50 mM KPi pH 7.0 and disrupted by ultrasonication.The cell free extract was applied to a 5 mL Ni-Sepharose 6 Fast Flow column.The tagged enzymes were obtained by a one-step purification using the buffers recommended in the manual.After purification, the enzyme buffer was exchanged to 50 mM KPi and enzymes were concentrated to protein concentrations above 5 mg/mL by Vivaspin 20 tubes with 10 kDa molecular weight cutoff before storage at −20 °C.The 6xHis tag was cleaved off from 1 mg NOX by incubation with tobacco etch virus protease in a reaction mixture containing 0.2 mM EDTA, and 1 mM DTT in 50 mM Tris/HCl buffer, pH 8.0 by overnight incubation at 4 °C.10 µg TEV protease were used per mg NOX.The mixture was applied on the Ni-Sepharose 6 Fast Flow column and washed through with 15 mL of 30 mM sodium phosphate buffer containing 0.3 M NaCl and 20 mM imidazole, pH 7.5.The flow through was collected.Buffer exchange to 50 mM KPi samples and concentration to > 1 mg/mL protein was done in Vivaspin 20 centrifugal concentrators, aliquots of concentrated NOX solutions were stored at −20 °C.A sequence saturation mutagenesis library of SmNOX gene with random mutations was bought at SeSaM Biotech.The library was based on mutant SmNOX 194H200K."The library was cloned into pMS470 vector and transformed into E. coli TOP10F'.Transformants were picked into 60 µL LB/Ampicillin media in 384 well plates, grown overnight at 37 °C and 60 % of humidity and stored as 15 % glycerol stocks at −80 °C.Cultivation of the expression library was done in 96 well plate format.Preculture plates with 150 µL of LB/Ampicillin media per well were inoculated from glycerol stock plates and cultivated at 37 °C at 60 % humidity for at least 12 hours.Main culture plates with V-shaped bottom contained 80 µL of LB/Ampicillin media and were inoculated from the preculture plates.After 8 hours of growth at 37 °C and 60 % humidity SmNOX2 expression was induced by addition of 20 µL of a 0.5 mM IPTG solution in LB/Ampicillin media.The plates were kept at 28 °C and 60 % humidity for 16 hours.Cells were harvested by 15 minutes of centrifugation at 2500 g. 
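Activities in this work are followed as the decrease in absorbance at 340 nm, and screening hits are ranked by their NADPH/NADH activity ratio in the screening described next. The sketch below shows one way such plate-reader slopes can be converted into volumetric activities and ratios; the extinction coefficient of 6.22 mM-1 cm-1 for NAD(P)H is the standard value, while the effective path length and the example slopes are assumptions (the 20-fold dilution follows the 10 µL of lysate in a 200 µL screening reaction).

```python
EPSILON_340 = 6.22      # mM^-1 cm^-1, molar absorptivity of NAD(P)H at 340 nm
PATH_CM = 0.58          # assumed effective path length of 200 uL in a 96-well plate
DILUTION = 200 / 10     # 10 uL diluted lysate in a 200 uL reaction -> 20-fold dilution

def volumetric_activity(slope_abs_per_min: float) -> float:
    """Convert a (positive) rate of A340 decrease into U/ml of the applied lysate.

    One unit (U) is taken as 1 umol NAD(P)H oxidized per minute; mM/min in the
    well equals umol/ml/min, which is then scaled back by the lysate dilution.
    """
    rate_mm_per_min = slope_abs_per_min / (EPSILON_340 * PATH_CM)
    return rate_mm_per_min * DILUTION

# Hypothetical initial slopes (delta A340 per minute) for one screening well.
slope_nadh, slope_nadph = 0.060, 0.045

act_nadh = volumetric_activity(slope_nadh)
act_nadph = volumetric_activity(slope_nadph)
ratio = act_nadph / act_nadh        # variants with a higher ratio are re-screened

print(f"NADH: {act_nadh:.2f} U/ml, NADPH: {act_nadph:.2f} U/ml, ratio {ratio:.2f}")
```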
Supernatant was decanted and the cell pellets were frozen at −20 °C for at least two hours.Screening assays were carried out in 96 well plates.After thawing cell lysis was accomplished by addition of 100 µL lysis buffer and an 1 h incubation at 28 °C at 600 rpm.Cell debris was separated by centrifugation at 2500 g for 15 min at 4 °C.The supernatant was diluted 1+1 with 50 mM KPi, pH 7.0 and used for screening assays.140 µL of 50 mM KPi, pH 7.0, were added to 10 µL of diluted supernatant in two plates in parallel.Reactions were started by addition of 50 µL of a 0.8 mM NADH or NADPH solution.Initial rates of NADH and NADPH conversion were measured by detection of decrease in absorption at 340 nm over three minutes.Activity with NADH and NADPH was compared for each well.2800 clones were screened, 480 thereof were chosen for a re-screen and the best 40 thereof were measured in a re-re-screen.Best variants were finally cultivated in shake flasks.From best variants plasmid DNA was isolated with Gene JetTM Plasmid Miniprep Kit and sent for sequencing.Conversion experiments were set up in 1.5 mL reaction tubes.The reaction mixture contained NADP+ and 2-heptanol in potassium phosphate buffer in a total volume of 500 µL.Sphingobium yanoikuyae ADH was applied as crude E. coli lysate and SmNOX 193R194H was applied as purified enzyme in amounts to give 1 U/mL.After 12 hours at 25 °C and 600 rpm 100 µL of n-butanol was added as internal standard for GC analysis and the mixture was extracted with ethyl acetate.Substrate conversion was determined by GC-analysis on a Varian CP7503 gas chromatograph equipped with an FID detector and a Phenomenex ZB-FFAP column with a Restek Hydroguard MT precolumn.H2 was used as carrier gas.The following temperature program was used: 65 °C to 110 °C, 9 °C/min; 110 °C to 160 °C, 25 °C/min.The retention time for 2-heptanol was 2.48 min and for 2-heptanone 3.44 min.Water-forming Streptococcus mutans NADH oxidase 2 is a monomeric 50 kDa enzyme which is NADH specific .We intended to establish SmNOX as universal NAD+ regeneration system by engineering the NADH specific wild type towards the effective usage of both cofactors, NADH and NADPH.Ideally, the created variant should have comparable characteristics for both cofactors to simplify application for cofactor regeneration in industrial processes with varying oxidizing enzymes.In NOX enzymes the nicotinamide cofactor is bound in the well described Rossmann fold manner .In Rossmann fold enzymes often an acidic residue, typically an aspartate at the C-terminus of the second β-strand of the alternating βαβαβ-regions plays a key role in NAD binding by forming hydrogen bonds to the 2′-OH and 3′-OH of the adenine ribose.In contrast, NADP specific Rossmann fold enzymes typically miss this acidic residue and instead carry a basic residue at the following amino acid position.Positive charges in the cofactor binding site facilitate the binding of the negatively charged phosphate group present in NADP but not in NAD.The relevance of the described positions for cofactor specificity has first been shown in a mutation study of glutathione reductase by Scrutton et al. .NADH oxidase from Lactobacillus sanfranciscensis exhibits a limited NADPH activity in parallel to NADH activity .An alignment of the SmNOX sequence with the glutathione reductase, B. 
anthracis coenzyme A-disulfide reductase and LsNOX sequence indicates several positions of possible importance for NADPH activity.SmNOX and LsNOX show an aspartate at the expected position indicative for NADH activity.The positive charge important for NADPH activity is missing at the following amino acid in both enzymes but in LsNOX at the +2 position counted from the aspartate a positively charged histidine is found.Position 196 and 200 are also occupied by positively charged residues.Like LsNOX, B. anthracis coenzyme A-disulfide reductase with dual cofactor specificity shows the negatively charged residue in combination with a positively charged residue.The subtle side chain rearrangements and reformation of hydrogen bonds enabling the dual cofactor specificity of B. anthracis coenzyme A-disulfide reductase were recently described .For SmNOX we built a structural model based on the X-ray structure of NADH oxidase from S. pyogenes.A comparison of the cofactor binding site of SmNOX, LsNOX, and E. coli glutathione reductase indicated that amino acid residues at SmNOX position 192–194 and 199–200 might possibly interact with the cofactor, while residues at position 195–198 are turned away from the cofactor.Positions 192–194, 199 and 200 were chosen for rational engineering of SmNOX with the main aim of introducing positively charged residues to enable NADPH activity.In addition, single mutations were combined to double, triple, and quadruple mutants.For recombinant expression and engineering, a synthetic S. mutans NOX 2 gene was inserted into pMS470Δ8 vector to give the vector pMSsN1Wt.Site directed mutagenesis of the S. mutans NOX 2 gene was performed following the Stratagene Quikchange protocol.Wild type and mutants were expressed in E. coli BL 21 Gold cells.Moreover, LsNOX was expressed under the same conditions from vector pMSsN2, which is identical to pMSsN1Wt, except that it carries a synthetic gene for LsNOX instead of the SmNOX gene.SDS-PAGE gel analysis of cell free extracts indicated NOX over-expression for all variants with a strong band migrating to the expected position for 50 kDa.The band was not detected in the cell free extract of an E. 
coli BL21 strain without plasmid.In Figure 3 the SDS-PAGE analysis of cell free extracts for SmNOX wild type and single mutants are shown.For SmNOX and LsNOX wild type and fourteen SmNOX variants initial rate data for the oxidation of NADH or NADPH in cell free extracts at air saturated oxygen levels were recorded.Maximal specific activities and apparent Km values were calculated by fitting velocities measured over a cofactor concentration range of up to 5–10 fold the Km value to equation 3.Results are listed in Table 2.All mutants were cultivated under identical conditions and SDS-PAGE analysis showed a comparable expression level for all variants.Ratios of NADPH and NADH activities and efficiencies measured from one cell free extract in parallel clearly indicated cofactor specificity changes between wild type and the variants.As expected, wild type SmNOX showed only marginal activity with NADPH, while LsNOX wild type showed NADPH oxidation, as reported .In SmNOX Mut1 and Mut2 the negatively charged aspartate, which is known to be a key residue for NADH binding, is missing.In absence of the negative charge the activity with NADH was slightly reduced and higher NADH Km values were detected.Remarkably, only by the exchange of the aspartate to non-polar or positively charged residues NADPH conversion rates increased to levels comparable to NADH rates, albeit with 10 fold higher Km values.The introduction of a positively charged residue next to the aspartate in Mut3 and Mut4 without removal of the aspartate also enabled activity with NADPH although to a lower extent than the aspartate removal.NADH activity was not reduced in Mut3 and Mut4 compared to wild type SmNOX containing extracts.The introduction of a positive charge was even more effective at position 200.NADH activity stayed unchanged compared to wild type for a Mut6 variant, while NADPH activity already reached around 80 % of the activity with NADH.A positively charged arginine at position 199 in Mut5 could not increase NADPH activity.While single mutations already introduced quite high levels of NADPH activity, only the combination of mutations further decreased NADPH Km values to values lower than 10 µM.The best combination for creating a mutant with high and matched NADH and NADPH activities at low Km values is mutant Mut10.Quadruple mutant Mut14 showed the highest NADPH to NADH activity ratio of all mutants with a maximal NADPH activity three times as high as the NADH activity.Sequence comparison of H2O2 and H2O forming NADH oxidase from T. kodakarensis and SmNOX showed that TkNOX features an arginine at positions equivalent to SmNOX positions 194 and 199 and a lysine at the position equivalent to SmNOX position 200 .The results from the SmNOX mutation study indicate that these positively charged residues might be responsible for making TkNOX the only known bacterial wild type NOX showing higher activity with NADPH than with NADH.Since Scrutton et al. 
reported the first NADH specificity engineering of an enzyme, a vast number of enzymes have been mutated to alter cofactor specificities.The outcome of these mutation studies provided evidence that the effects obtained by site directed mutagenesis of positions known to be relevant for cofactor specificity, especially the aspartate or glutamate at the end of the second β-sheet, vary tremendously.Only in very few cases the catalytic efficiency of reactions with the originally disfavored cofactor could be increased to values of the same order of magnitude as for reactions with the originally used cofactor in the unmutated enzyme .For mutants in which the conserved aspartate or glutamate was exchanged to smaller non-polar residues, the efficiency with NADP+ ranged from 1/4000 to 1/3 of the efficiency of the unmutated enzymes with NAD+ .Especially rare are examples of increased efficiency with one cofactor while keeping also the efficiency with the other cofactor high .In the outstanding case of B. subtilis lactate dehydrogenase the mutation of valine, the residue equivalent to SmNOX V193, into arginine led to a 140 fold increased NADPH kcat value.The increase in NADPH kcat did not lead to a decrease but even to a four-fold increase in NADH kcat.Km values stayed at wild type NADH level for both cofactors for the variant.In NADH oxidase from L. plantarum the introduction of positively charged residues next to the conserved aspartate enabled kcat values for NADPH of up to 69 % of wild type level with NADH but decreased kcat for NADH to 58 % in variant G178R and to 16 % in G178V/L179R.Interestingly, the single mutation G178R led to a low NADH Km of 6 µM compared to 50 µM in the wild type form.NADPH Km of the same variant was 490 µM and could be decreased to 9 µM in the G178V/L179R double mutant.In summary, also in comparison to other enzymes, SmNOX turned out to be an excellent choice for the generation of an NADH oxidase with comparable kinetic characteristics for both nicotinamide cofactors without drastic loss of activity or increase in Km compared to the wild type enzyme.Around 3000 variants of a sequence saturation mutagenesis library built on SmNOX 194H200K were screened for increase in NADPH/NADH activity ratios.We chose a starting point for the library without the well-studied mutation 193R as we rather aimed at finding new promising mutation combinations that were so far unknown to give high activities with NADPH.No variant with significantly increased NADPH activity without loss in NADH activity compared to SmNOX 194H200K could be detected in the library screening.However, three mutants were identified with clearly increased NADPH/NADH activity ratio albeit with concomitant decrease in NADH activity.Figure 4 shows NADH and NADPH activity levels of crude lysates of mutant 194H200K202N, 194H200K202C and 194H200K340V388T compared to variant SmNOX 194H200K.A 194H200K340V variant without 388T mutation was constructed to check for the influence of the mutation at position 340 without mutation at position 388.Variant 194H200K340V showed the same NADH/NADPH activity ratio as variant 194H200K340V388T.NADH activity was around 60 % of NADPH activity in both cases.We conclude that mutation of position 340 has a high impact on cofactor specificity while mutation 388T probably has no influence on cofactor binding.Position 202 is located at the end of the loop which connects cofactor binding site strand βB and helix αB and starts after aspartate 192.The modeled SmNOX structure indicates that residue 
202 is positioned too far away from bound NADH in order to allow direct interaction with the cofactor.Strikingly, position 340 is located at the beginning of a β-sheet in close vicinity to the before described loop end.The SmNOX wild type model indicates a potential hydrogen bond between Y202 and N340.In LsNOX the equivalent positions are occupied by a tyrosine and a valine, which cannot form the hydrogen bond.We speculate that a presumably higher flexibility of the loop caused by removal of the hydrogen bond hampers the activity with the more structurally demanding NADPH less than activity with NADH.SmNOX wild type and variant 193R194H were recloned in vector pEHISTEV for expression with an N-terminal 6xHis tag.The enzymes were purified to apparent electrophoretic homogeneity by Ni-affinity chromatography as demonstrated in Figure 5.After purification the tag was cleaved off by treatment with TEV protease.Apparent kinetic values for NADH oxidation were determined in air saturated buffer as described in the methods section.kcat and Km values for SmNOX wild type and mutant 193R194H are shown in Table 3.The SmNOX wild type kcat value corresponded to a specific activity of 44 U/mg.Higuchi et al. reported a specific activity of water-forming S. mutans NOX 2 of 100 U/mg.The lower specific activity measured here was not unexpected due to a lower assay temperature than in the previous study.However, this lower activity could also be caused by enzyme deactivation during the tag cleavage procedure which included an overnight incubation at 4 °C.With SmNOX mutant 193R194H now an efficient NADH oxidase is available which has very similar kinetic values for oxidation of NADH as well as NADPH.Possible H2O2 formation was analyzed by detecting H2O2 values after SmNOX catalyzed total oxidation of up to 1 mM of NADH with o-dianisidine and HRP.For SmNOX wild type with NADH 2.7 % of the catalytic conversions led to H2O2 formation, for SmNOX variant 193R194H with NADH in 3.3 % of conversions and for NADPH in 4 % of conversions H2O2 was formed.Other NOX enzymes were reported to lose FAD during purification .Concentrated purified SmNOX was clearly yellow, indicating that FAD was still bound.Initial rate measurements after 30 min pre-incubation in 25 µm or 250 µM and with addition of the same concentration of FAD to the assay showed an insignificant increase of activity for SmNOX wild type and an insignificant decrease of activity for variant 193R194H.Several NADH oxidases have also been shown to be activated by addition of DTT , probably by preventing the overoxidation of the catalytically active cysteine.SmNOX activity was only slightly increased by addition of 5 mM DTT to the initial rate measurements.Application of water-forming NADH oxidases as cofactor recycling system coupled to a biocatalytic oxidation has so far been demonstrated for L. sanfranciscensis NOX , L. brevis NOX and L. 
rhamnosus NOX .Here, we demonstrate the NADP+ regeneration by cofactor specificity engineered SmNOX for 2-heptanol oxidation to 2-heptanone.SmNOX was applied in a coupled enzyme system with alcohol dehydrogenase from Sphingobium yanoikuyae.10 mM of 2-heptanol were converted to 2-heptanone in the presence of 100 µM of NADP+ with and without SmNOX added to the assay.SyADH exhibits a low enantioselectivity for 2-heptanol and therefore allows complete conversion if enough cofactor is supplied.NADP+ is cheaper than NADPH and therefore the preferred starting compound for cofactor recycling.Conversion rates are shown in Table 5.Nearly complete conversion could be achieved within 12 hours by addition of SmNOX.Without SmNOX, conversion rates around 1 % as expected for a stoichiometric conversion of the 100 µM cofactor were found.The NADH specific S. mutans NADH oxidase 2 belongs to the few NADH oxidases that produce water instead of hydrogen peroxide and is therefore well-suited to be used as cofactor recycling system.For general applicability SmNOX was engineered towards efficient usage of both cofactors, NADH and NADPH.The dual cofactor specificity was achieved by introducing positively charged amino acid residues for increased NADPH binding, while still retaining the aspartate which facilitates NADH binding.SmNOX variant V193R/V194H showed comparable high kcat and Km values for NADH and NADPH oxidation with only slight decrease in activity compared to SmNOX wild type.NADPH specificity was also found to be increased by the mutation of Y202 or N340.These two residues form a hydrogen bond between the end of the nucleotide binding loop and a near-by positioned β-sheet in SmNOX wild type, which is destroyed by the mutations.SmNOX variants were shown to be well active without a need to add FAD or DTT.Cofactor engineered S. mutans NADH oxidase 2 is therefore well suited for application as a versatile NAD+ regeneration system, as was demonstrated for combination with S. yanoikuyae ADH for oxidation of 2-heptanol to 2-heptanone.E. coli TOP10F' was originally bought from Invitrogen, E. coli BL21-Gold was from Stratagene.NADH was from Roche Diagnostics GmbH or Roth.Materials for cloning were from Fermentas if not stated otherwise.All other chemicals were purchased from Sigma-Aldrich, Fluka or Roth if not stated otherwise.Homology modeling for Streptococcus mutans NOX 2 was based on an X-ray structure of NADH oxidase from Streptococcus pyogenes.The sequence identity between target and template was 77.5 %.The homology model was created with the automated protein structure homology-modeling server SWISS-MODEL developed by the Protein Structure Bioinformatics group at the SIB – Swiss Institute of Bioinformatics and the Biozentrum University of Basel .A synthetic S. mutans NOX 2 gene was ordered at DNA2.0 and ligated into a NdeI/HindIII cut pMS470Δ8 vector downstream of the tac-promoter to give the vector pMSsN1Wt.Site directed mutagenesis of the S.
mutans NOX 2 gene in vector pMSsN1Wt was performed following the Stratagene Quikchange Site-directed Mutagenesis Kit instruction.Primers used for site directed mutagenesis are listed in Table 1.PCR reaction mixtures contained template plasmid, primers, dNTPs and 10 × reaction buffer supplied with the polymerase.Pfu turbo polymerase was added to each tube.The amplification protocol comprised 30 seconds of initial denaturation at 95 °C, 18 cycles of denaturation, annealing and extension, and a final 7 minutes extension period at 68 °C."After 1 h of DpnI digestion competent E. coli TOP10F' cells were electrotransformed with the reaction mixture.Successful incorporation of the desired mutations was verified by dideoxy sequencing.Electrocompetent E. coli BL21 Gold cells were transformed with plasmid pMSsN1Wt or one of 15 variants thereof.Additionally, a plasmid pMSsN2 was transformed into an E. coli BL21 giving a strain overexpressing Lactobacillus sanfranciscensis NOX.pMSsN2 is identical to pMSsN1 except that it carries the gene coding for LsNOX instead of the SmNOX.Precultures of all resulting E. coli strains were cultivated in LB media containing Ampicillin in baffled shake flasks at 37 °C and 130 rpm overnight.Main cultures were inoculated to an OD of 0.05 in the described medium and cultivated at 37 °C and 130 rpm.NOX production was induced at OD 0.8-1 by addition of IPTG.Cells were harvested after an overnight induction period by centrifugation.Cell pellets were diluted in potassium phosphate buffer to a final volume of 25 mL.Cell breakage was achieved by ultrasonication with a Branson sonifier 250 for 6 minutes at 50 W with continuous cooling, pulsed with one 700 ms pulse per second with a 1 cm diameter tip.Cell free lysates were prepared by collecting the supernatant of centrifugation at 36000 rcf for 45 minutes and concentrating it to half the volume via Vivaspin 20 centrifugal concentration tubes with 30 kDa molecular weight cutoff.Cell free extracts from an E. coli strain expressing S. yanoikuyae ADH from the pEamTA based plasmid pEam_SyADH was prepared following the same protocol.The protein content was determined with bichinonic acid protein assay kit using BSA as standard.For SDS-PAGE NuPAGE® 4–12 % Bis-Tris Gels, 1.0 mm, from Invitrogen, were used with a NuPAGE MOPS SDS Running Buffer for Bis-Tris Gels.All strains were stored as glycerol stocks at −80 °C.Cell free extracts were stored in aliquots at −20 °C.Initial rate data of NADH oxidation were acquired measuring the decrease in NADH absorption at 340 nm in potassium phosphate buffer at 25 °C.The tempered buffer was vortexed for saturation with oxygen before mixing with the enzyme and cofactor solution.Absorption measurements were performed on a Spectramax Plus 384 or on a Synergy MX in UV-star micro titer plates.The total reaction volume was 200 µL, reactions were started by addition of NADH. 
Data collection was started after 5 seconds of mixing.For activity measurements with FAD, enzyme preparations first were pre-incubated in 10 fold concentration in 50 mM KPi containing 25 µM or 250 µM FAD and then were diluted 1:10 in the same buffer for initial rate detection after addition of NADH.In case of DTT addition the assay buffer contained 5 mM of DTT.Apparent kinetic parameters were obtained from initial rate measurements at air saturation oxygen level with eight cofactor concentrations varying over a concentration range of 5 to 10 times the apparent Km or to a maximum NADH concentration of 1 mM.Enzymes were applied as crude lysates in dilutions chosen to give rates between 0.001 and 0.05 ΔAbs/min and rates were constant for ≥ 1 minute.Appropriate controls containing crude lysate without overexpressed NOX verified that blank rates were insignificant for all conditions used.Results from initial rate measurements were fitted to a Michaelis-Menten type equation (equation 3), v = vmax·A/(Km + A), using unweighted least-squares regression analysis performed with Sigmaplot program version 11, where v is the initial rate, vmax is the apparent maximum rate, A the cofactor concentration and Km the apparent Michaelis constant for NADH at air saturation oxygen levels.
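For readers without access to SigmaPlot, the same unweighted least-squares fit of equation 3 can be reproduced with standard open-source tools. The sketch below only illustrates the fitting step and assumes the initial rates are already available as numeric arrays; the concentrations and rates shown are invented placeholders, not data from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(A, vmax, Km):
    """Equation 3: v = vmax * A / (Km + A)."""
    return vmax * A / (Km + A)

# Placeholder data: cofactor concentrations (µM) and measured initial rates (arbitrary units)
A = np.array([5, 10, 20, 40, 80, 160, 320, 640], dtype=float)
v = np.array([8.1, 14.2, 22.5, 31.0, 37.8, 41.9, 44.0, 45.1])

# Unweighted least-squares fit; p0 provides rough starting guesses for vmax and Km
popt, pcov = curve_fit(michaelis_menten, A, v, p0=[v.max(), np.median(A)])
vmax_fit, Km_fit = popt
stderr = np.sqrt(np.diag(pcov))   # standard errors of the fitted parameters

print(f"vmax = {vmax_fit:.1f} +/- {stderr[0]:.1f}   Km = {Km_fit:.1f} +/- {stderr[1]:.1f} µM")
```

Dividing the fitted vmax by the molar enzyme concentration then gives the corresponding apparent kcat.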
Soluble water-forming NAD(P)H oxidases constitute a promising NAD(P)+ regeneration method as they only need oxygen as cosubstrate and produce water as sole byproduct. Moreover, the thermodynamic equilibrium of O2 reduction is a valuable driving force for mostly energetically unfavorable biocatalytic oxidations. Here, we present the generation of an NAD(P)H oxidase with high activity for both cofactors, NADH and NADPH. Starting from the strictly NADH specific water-forming Streptococcus mutans NADH oxidase 2, several rationally designed cofactor binding site mutants were created and kinetic values for NADH and NADPH conversion were determined. Double mutant 193R 194H showed comparable high rates and low Km values for NADPH (kcat 20 s−1, Km 6 μM) and NADH (kcat 25 s−1, Km 9 μM) with retention of 70 % of wild type activity towards NADH. Moreover, by screening of a SeSaM library S. mutans NADH oxidase 2 variants showing predominantly NADPH activity were found, giving further insight into cofactor binding site architecture. Applicability for cofactor regeneration is shown for coupling with alcohol dehydrogenase from Sphingobium yanoikuyae for 2-heptanone production.
15
A Closer Look at the Design of Cutterheads for Hard Rock Tunnel-Boring Machines
A tunnel-boring machine is a “tunnel-production factory”; as such, all parts of the production line should be functional in order to make the final product, which is the next meter of excavated tunnel.TBMs have existed since the mid-19th century, both in concept and in reality, and have been an integral part of the tunneling industry since the 1950s.The continuous improvement of TBMs and their capabilities since their introduction, especially in the past two decades, has made them the method of choice in many tunneling projects longer than ∼1.5 km.Of course, other issues related to the tunnel application or ground conditions may change this choice, and may require the use of competing systems such as drill and blast and/or the use of the sequential excavation method, also known as the new Austrian tunneling method, which primarily uses roadheaders.Although the selection and choice of TBM specifications appear to be straightforward, this seemingly simple task has proven to be challenging in several projects .Problematic situations include deep tunnels, where shield machines can be used but risk getting trapped, and mixed ground conditions, where the choice of open-type machines for higher cutting speed has resulted in dramatic setbacks.In any case, the choice of machine type and specifications overshadows the operation of the machine and its performance during tunnel construction.Thus, it is critical to understand the implications of the choice of various machine types and related specifications when estimating the potential performance of tunneling machines.Although the choice of machine type is very important to the success of an operation, the design of the cutterhead is the single most critical part of the TBM operation, irrespective of the type of machine.This is because the TBM cutterhead is the “business end” of the machine—the place where the cutting tools meet the rock for the first time.Designing the cutterhead involves the following factors: the choice of the cutter type, spacing of the cutters for the given geology along the tunnel, cutterhead shape and profile, balance of the head, efficient mucking, position and design of the muck buckets, access to the face and allotted space for letting miners reach the face, consideration for the structural joints and assembly of the head, and cutting clearance for the cutters and the body of the TBM.Each of these design parameters has some implication for the efficiency of the cutting process as well as the maintenance of the cutters, cutterhead, and cutterhead support.Another issue with the design of the head is the smooth operation and balance of the head, which allows for better steering of the machine, especially in mixed face conditions.Despite the importance of the cutterhead design of a TBM, the amount of published literature on this subject is very limited .This is because cutterhead design is mainly performed by the machine manufacturers, and the end-users often do not get involved with this level of detail.There has been limited academic interest on this topic due to a lack of opportunity to perform tests or follow normal procedures to validate hypotheses or obtain results.As a result, it is difficult to design different cutterheads and try them on an equal basis in order to assess their field performance or compare their design implications.Miniaturization of the head to assess its performance is not very attractive because rock excavation is widely viewed as not being scalable.On a large and full scale, it is very rare for a 
project to allow significant changes or modifications to the cutterhead design, unless something drastic happens.This is because it is very expensive and time-consuming to change the cutterhead in the field, so alterations are often limited to structural repairs and minor modifications of the mucking system.Some activity on this topic has taken place in recent years, as the TBM market seems to be growing in Asia.Research on this topic has mainly taken place in the state key laboratories in China, and has also been done by researchers in Turkey and Korea .The focus of these activities has been to make the machines more effective, primarily to address the dire need and pressure to improve the speed of tunneling and increase efficiency.However, some of the work in the past has focused on modeling without a discussion of design steps , while other work has looked at the design from a purely mechanical engineering point of view, without an in-depth discussion of rock behavior as it pertains to cutterhead design and machine operation .This paper is intended to shed some light on the topic and to cover some basic principles of the cutterhead design procedure for hard rock TBMs.The content is not intended to be a discussion of a specific research project; rather, it is a reflection on the experiences of the primary author in cutterhead design during the past two decades.This section offers an overview of cutterhead design in terms of simple steps to allow the reader to understand the process and be able to evaluate the critical design issues when dealing with the acquisition of a new rock TBM or the refurbishment of an existing machine for a given tunnel geology.The first step in the process of cutterhead design and in the evaluation of a TBM for a project with a given geology is cutter selection.More information and a general guide on cutter selection for rock-cutting applications can be found in a paper by Rostami .In addition, a discussion on various disk cutters and general trends in the application of disk cutters can be found in other publications .The trend in the industry has been to use 432 mm diameter constant cross-section disk cutters as the base choice in various applications, especially on hard rock TBMs.An exception has been the use of larger 483 mm disk cutters on TBMs working on very hard and abrasive rock, in order to minimize the need for cutter replacement.Another exception has been the use of >500 mm disk cutters on TBMs larger than 10.5 m in diameter .Smaller cutters, such as 150 mm, 300 mm, and 365 mm cutters, are used for smaller cutterheads.The implications of the disk cutter size are as follows: Cutter load capacity.This determines the depth of penetration.The typical load capacities of the 432 mm and 483 mm cutters are 250 kN and 310 kN, respectively. Required cutting forces.These increase with the size of the cutter for the same rock type. 
Cutter velocity limit.This is imposed by the maximum allowed rotational speed of the bearings.The typical velocity limits are 165 m·min−1 and 200 m·min−1 for 432 mm and 483 mm disk cutters, respectively.The cutter tip width, T, is another parameter to be selected; this controls the cutting forces, F, in an almost linear fashion.The typical tip width varies from 12.5 mm to 25 mm.The higher the capacity of the cutter and the higher the strength and abrasivity of the rock, the higher tip width is needed.The second step in cutterhead design involves the selection of the cutting geometry, including the spacing and location of the cutters on the profile.Selection of the spacing and penetration is a function of the cutting forces.Although the allowable cutter load is the first parameter to check when selecting the cutting geometry, it is necessary to keep in mind that an overall check of the TBM thrust, torque, and power may be needed in order to verify the assumption of the penetration at the end of the design cycle.Optimum spacing is a concept that has been discussed in the literature; it refers to the spacing at which the required energy of rock cutting/excavation is minimized for a given depth of penetration .The most common measure of optimization is the use of specific energy, which is the amount of energy required to excavate a unit volume of rock.SE is typically expressed in hp·h·cyd−1, hp·h·ton−1, kW·h·m−3, or in similar units that express energy per volume or weight of excavated rock.It has been proven that the magnitude of SE is minimized when plotted against the spacing-to-penetration ratio.The range of S/P ratios that require a minimum SE, or a so-called optimum S/P ratio for disk cutters, is typically within 10–20, although it has been reported to be as low as 6 and as high as 40.The optimum range of S/P ratio is a function of rock type; it increases with rock brittleness and can change slightly with varying penetration.However, for the most part and for practical design, an S/P ratio of 10–20 is often used in order to select the optimum spacing for a given range of penetration.For example, if the anticipated penetration is about 5 mm·r−1, which is typical for granitic rock, the range of optimum spacing is between 50 mm and 100 mm.In general, however, in order to avoid ridge buildup in high-strength and tough rocks, a spacing of 75–100 mm is selected for most cutterhead designs.It should be noted that the cut spacing should be selected based on the hardest/strongest rock on the alignment.Other approaches for the selection of cut spacing exist , which involve direct measurement of forces and experiments.Individual cutting forces can be estimated as follows: normal force FN = FTcosβ, and rolling force FR = FTsinβ, where β = φ/2 and the cutting/rolling coefficient is the ratio of the rolling to normal forces, or RC = FR/FN = tanβ.The estimated forces can be used as a measure to find the maximum penetration into the rock within the cutter load capacity for the selected disc, and hence the spacing from the abovementioned S/P ratio.Users can use other formulas for estimating cutting forces as per Refs. 
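To make these force relations concrete, the sketch below decomposes an assumed total cutter force FT into its normal and rolling components and derives a candidate cut spacing from the S/P range quoted above. It treats φ as the cutter-rock contact angle and approximates it as φ = arccos((R − p)/R) for a disk of radius R at penetration p, which is a common assumption rather than a formula taken from this paper; the disk size, penetration, and FT used here are placeholder inputs.

```python
import math

def cutter_force_components(F_T_kN, disk_dia_mm, penetration_mm):
    """Split an assumed total cutter force F_T into normal and rolling components.

    Uses beta = phi / 2, with phi taken as the cutter-rock contact angle and
    approximated here as phi = arccos((R - p) / R)  (an assumption, see text).
    """
    R = disk_dia_mm / 2.0
    phi = math.acos((R - penetration_mm) / R)   # contact angle, rad
    beta = phi / 2.0
    F_N = F_T_kN * math.cos(beta)               # normal force, kN
    F_R = F_T_kN * math.sin(beta)               # rolling force, kN
    RC = math.tan(beta)                         # cutting (rolling) coefficient F_R / F_N
    return F_N, F_R, RC

# Placeholder inputs: 432 mm disk, 5 mm/rev penetration, assumed total force of 230 kN
F_N, F_R, RC = cutter_force_components(F_T_kN=230.0, disk_dia_mm=432.0, penetration_mm=5.0)
print(f"F_N = {F_N:.0f} kN, F_R = {F_R:.1f} kN, RC = {RC:.3f}")

# Sanity check against the nominal 250 kN capacity of a 432 mm disk, then derive a
# candidate spacing from the optimum S/P range of roughly 10-20 discussed above.
assert F_N <= 250.0, "normal force exceeds the cutter load capacity"
penetration = 5.0
print("candidate spacing range:", (10 * penetration, 20 * penetration), "mm")
```

In practice, FT itself would come from one of the force-prediction models cited in the text or from full-scale cutting test data.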
TBM cutterheads can have a cone, dome, or flat shape.Cone and dome shape cutterheads have gradually been phased out, and new machines primarily use flat-profile cutterheads.The flat-profile cutterhead has proven to be more efficient, easier, and more convenient to maintain; it also accommodates back-loading cutters for cutter change from within the cutterhead.The end of the head is curved in order to allow the gage cutters to cut clearance for their hub and the cutterhead support/shield.A detailed cutterhead design starts with the development of the cutterhead profile.The profile is the cross-section of the face where the cutters excavate the rock and leave marks of their tracks.An example of a TBM cutterhead profile is given in Fig. 2.Developing a cutterhead profile simply means that the location of the cutters on a half cut of the face is defined and quantitatively expressed.This involves providing the coordinates of the tip of the cutters using a Cartesian coordinate system.In addition to the location of the cutter tips, the orientation or tilt angle of the cutters must be defined.Angle α, or the tilt angle, is the angle between the direction of the disk cutter centerline and the tunnel axis.As such, α = 0° for the cutters that are perpendicular to the flat face.This is typically the case for the cutters at the center of the cutterhead and the face.As the transition and gage cutters start and the profile enters a curvature, α typically increases to 65°–70°.The purpose of the tilt angle is twofold: ① For cutters that are at the outer gage, the tilt angle cuts a clearance for the hub and cutter mounting assembly; and ② for the rest of the cutters in the gage area, the tilt angle ensures that the cutters are perpendicular to the face at the point of contact.The second requirement ensures the endurance of the cutters, since full-scale laboratory testing has shown that the side force that is acting on the cutter is minimized when the cutter is perpendicular to the face it is cutting at the point of contact, and increases when the cutter has an angle relative to the surface that it cuts.This is shown in Fig. 4.In practice, the first four disk cutters are combined in a set called the "center quad".This is because of the lack of space at the center, where there is no room for the installation of individual cutters, and because the mounting assembly for the cutters does not allow the placement of the cutters in such a way that the desired spacing can be reached.Fig.
5 shows a picture of a center quad along with a schematic example of center quad positioning in which reasonable spacing is achieved.The distance between the blades in the quad set is typically fixed; by allocating one of the inner cutters at a certain distance from the center, the others will automatically assume a spacing and thus radius from the center.For example, if the distance between the center quad disks is 200 mm, when one of the inner blades is positioned at a radius of 50 mm, the second blade automatically assumes a radius of 150 mm from the center, which means a spacing of 100 mm from the first one.The third will be located at a distance of 250 mm from the center, which implies a spacing of 100 mm from the second cutter track, and the fourth cutter will be located at a radius of 350 mm from the center, which means a spacing of 100 mm from the third cutter track.This takes care of the first four cutters and the first three spacings.Of course, for harder rock, the spacing of the cutters can be reduced by 10–15 mm in the center quad, which will reduce the spacing of the center cutters to 85–90 mm.There are other arrangements for the center in which six cutters are placed together; however, the overall arrangement is the same as that of the quad, except that there are six cutters instead of four.Other cutters can be allocated along the line of the profile based on the assigned spacings.This means that when a quad is allocated, the fifth cutter can assume a radius of about 450 mm.Given the clearances of the cutter housing, the cutter can be allocated to the area adjacent to the center quad without much interference.The same applies for the sixth cutter and onward.Thus, these cutters in the so-called face area can be allocated to the profile without much of a problem, until they reach the transition and gage area.The cutter tilt angles start at the transition area, and the offset from the face also increases.Some of the new flat-type heads have a very small transition area, meaning that only one or two cutters are present in the transition, and then the gage curve starts.To allocate the cutters in the gage area, once the curvature of the head is established, cutters can be assigned to follow this curvature at an angle of about αmax = 65°–70°.As noted before, the typical curvature radius of a flat cutterhead is 450–550 mm.This provides sufficient curvature to allow for a gradual transition to gage cutters and to cut clearance for the cutterhead and cutter mounting assemblies.The cutters in the gage area are placed on the curvature at line spacings that are progressively smaller than the line spacing at the face.For example, the line spacing of 100 mm at the face will be gradually reduced by 4–5 mm in every iteration (e.g., SLk+1 = SLk − 5).For every position, the cutter should be tilted to match the curvature at the point of contact.Allocating the cutters along the curvature means that the radial spacing will decrease at a faster rate.Fig.
6 shows the profile for the given cutterhead, which has nearly 30 cutters and a diameter of about 4400 mm in this case.Additional cutters can be placed on the gage, and particularly at the last position.These cutters are called “copy cutters” and are effectively placed on the profile at the location of the last cutter to provide relief to the last cutter, thus ensuring that the diameter of the tunnel is not reduced due to wear on the gage cutters.Although machines designed for softer and less abrasive rocks do not typically have copy cutters, TBMs used in harder, more abrasive rocks have one copy cutter—that is, there are two cutters per position in the last spot.It is a common practice to use cutters with a wider tip or disks with a carbide insert in the gage cutters at the last ∼2–3 positions, in order to ensure that the cutting diameter is not compromised.Some machines also have one or two cutters that are mounted on an assembly and that can be extruded by 10–25 mm beyond the nominal profile.These cutters can be used for overcut on single or double shield machines to avoid jamming due to ground convergence.They can also cut a relief slot for the gage cutter in case of excessive wear on the gage disks, which can result in a reduced tunnel diameter.In such cases, this slot is needed to avoid overloading the gage cutters in the first few rotations after the changing of the old cutter, or even to make room for the installation of the old cutters, which otherwise cannot be secured in place, especially in back-loading cutterheads.With the profile of the head selected and the position of the cutters defined, the next question is how to spread the cutters around the head for a uniform distribution.The implications of the distribution of the cutters around the head are the balance of the cutterhead in uniform material and, more importantly, the balance of the resultant forces in mixed ground conditions and the magnitude of side loading on the disk cutters.For a given profile, if the cutters are clustered in a certain location on the head, they can cause unbalanced and eccentric forces.In these cases, the resultant force is away from the center of the cutterhead, resulting in non-uniform loading of the main bearing.Eccentric forces are caused by the summation or superimposition of the cutting forces of various disk cutters.If the cutterhead is fully balanced, it will ideally create a resultant force that is parallel to the tunnel/TBM axis and at the center.If there is a shift in the resultant force causing it to move away from the axis of rotation, or if the resultant force is at an angle to the machine axis, it causes moments relative to the X and Y axes that are undesirable and detrimental to the main bearing.Fig. 7 shows a schematic drawing of a TBM and the global coordinate system that defines the axis of the tunnel/machine, the plane of the cutterhead, the resultant force FZ, and the eccentricity DE, which is defined as the distance of the resultant force from the center of the cutterhead.A good cutterhead design and cutter distribution avoids clustering of the cutters in any area of the head, and thus avoids eccentric forces and moments.Fig. 
8 shows a normal and an exaggerated cutterhead with cutters clustered in the first quadrangle.The best and easiest approach to assign the location of the cutters on the cutterhead is to use the concept of angular spacing.This refers to using a polar coordinate system to allocate cutters using the radius from the center and an angle relative to a reference line.The radius from the center is already defined by the profile, and the angle can be defined relative to an axis.Thus, the location coordinates for a cutter will be in two-dimensional space or on a plane, or in three-dimensional space or on a cylindrical coordinate system, as shown in Fig. 9.Given these parameters, it is possible to develop an algorithm for cutter distribution around the head using a program.That is, it is possible to define θi+1 = f(θi); for example, θi+1 = θi + θs, where θs is the angular spacing.Using this methodology allows the distribution of the cutters on the head to be controlled.To avoid unbalanced cutter distribution around the head, the angular spacing used in the design should permit the optimal distribution of cutters around the head .Another advantage of using this system is that it is possible to define this algorithm in a program in order to help the designer visualize the cutterhead design and various arrangements on the head.General principles for good and optimized cutter distribution for the cutterhead design are as follows:A cutterhead should have a uniform distribution of cutters around the head.For example, if the cutterhead is broken into q sections, the number of cutters in each section should ideally be the same.If this trend continues as q increases, there is a better distribution of cutters on the head.Of course, there are other limits on where to allocate the cutters on the head, which will be discussed later.The easiest way to achieve a good distribution is to try to maintain cutterhead symmetry as much as possible.This is easier to maintain when the number of cutters is an even number.Then for cutter i, there is a cutter i + 1 across the cutterhead and at the θi+1 = θi + 180° angular position.It is preferable to avoid placing cutters over muck buckets, cutterhead joints, and cutterhead structure, if it is known.It is important to be cognizant of the minimum space required to fit a cutter on the head.In other words, the cutters should be able to physically fit the prescribed pattern.Although the designer attempts to create a uniform distribution and maintain symmetry, it is nearly impossible to obtain a fully symmetrical design and perfectly uniform distribution of cutters due to practical reasons.In such cases, the designer can use gage cutters to try to maintain the balance of the head and minimize the eccentric forces.With these guidelines in mind, it is possible to either design a cutterhead or be able to check the balance of a given design.The available patterns for cutterhead designs can be divided into three categories as follows: Spiral design.Here, θi ∼ Ri, meaning that as the radius increases, the angular position will increase as well.A double-/multi-spiral design can be developed using this algorithm but using angular spacing on every other cutter.An example is the double spiral θi+1 = θi + 180° and θi+2 = θi + θs.
Spoke or star design.Here, the cutters are aligned along radial lines at equal angular distances; for example: 3 spoke/star, 4 spoke, 6 spoke, 8 spoke, …, where the cutters are placed on lines at a position angle of 120°, 90°, 60°, 45°, …, respectively, from the reference line. Random or asymmetrical design.Here, the cutters are allocated based on the availability of the space, and do not follow a particular pattern.Figs. 11 and 12 show some examples of these design types.Once the cutterhead design type is selected, the cutter allocation can be defined.Next, once the cutter allocation in a pattern is identified, the designer can check for other constraints such as joints in the cutterhead, interference with buckets, and so forth, and make minor adjustments.One important note to keep in mind is that the design of the cutterhead and the cutter allocation are not purely mathematical exercises as indicated in some publications, and that the result may be somewhat asymmetrical and unbalanced.At this stage, the design of the cutterhead is an interactive task between the selection of the number and location of the buckets and the adjustment of the location of the gage cutters to prevent interference with the buckets.This is done by manually changing the angular position of the cutters in this area to place them between the buckets or within the allowed space.The same logic applies to the cutterhead joints, where the cutterhead may be split into pieces to accommodate a certain size requirement for assembly, for transfer into the shaft or starter tunnel, or to contain the weight of the cutterhead in larger sized machines).The selection of the number, size, and allocation of the buckets is an integral part of the cutterhead design.The number and size of the buckets are proportional to the anticipated volume of material excavated, and increase with the expected ROP of the TBM in softer rocks.This is to accommodate efficient mucking and removal of the cut material from the face in order to avoid erosion of the face plate, wear of the cutters, and accumulation of muck and fines in the invert, the latter of which can cause excessive load on the gage cutters and premature failure.Another issue is the size of the opening of the buckets, which is somewhat controlled by the expected sizing of the muck and is selected to allow certain size blocks into the muck chutes.The typical range of material that is allowed to enter the chute is about 100 mm × 100 mm or 100 mm × 150 mm, as the upper limit of the size; blocks larger than this range are kept in the face to be broken by the disks.This is done by the face plate or face shield holding such blocks in the face.Once the number of buckets is known, the buckets are systematically and evenly spread around the head; thus, their angular position will be determined as 360°/NBuckets.This is to make sure the volume of muck picked up from the invert is uniform.In addition, there are some cases in which buckets of different lengths have been used.In these cases, some longer buckets were placed in between regular buckets.The most common number of buckets is four for small cutterheads in very hard rock, six to eight for medium-sized machines, and more than 12 for machines larger than 9 m.The buckets are designed with respect to the softest formation along the tunnel in order to accommodate efficient mucking in the highest flow of material, whereas the cutter allocation and profile are selected with respect to the hardest formation along the alignment in order to ensure 
that the spacing of the cuts is not excessive, which would create a ridge between the cuts.Some examples of programs using an algorithm for cutter distribution around the head are presented here.The basis for cutterhead modeling and for related spreadsheets is discussed elsewhere .For this purpose, a 7.23 m diameter TBM that was studied for a project featuring 54 cutters is used to show the impact of various values of θs on the design of a double-spiral layout.It is interesting to see that even though the design is for a double spiral, it can be configured into a multi-star arrangement when reaching certain values of θs.In this example, θs is varied from 0, which involves theoretically lining up the cutters along the same line, to different values that will show the spread of the cutters around the head.The first six cutters are arranged in a cluster.The cutter angular position starts from cutter 7, which is set to sit at 90°, and cutter 8, which is set to be across the center, at 270°.The other cutters will be shifted by θs, as can be seen in Fig. 13.A closer examination of the angles shows a repeating pattern at certain values of θs.An interesting setting is the distribution of the cutters at θs = 30°, 45°, 60°, and 90°, which corresponds with a 12, 8, 6, and 4 spoke cutterhead design pattern, respectively.Some examples of angular spacing forming a spoke pattern for 45° and 60° are shown in Fig. 13 and.Similarly, it is interesting to observe that the pattern can be completely uniform and symmetrical, that is, if θs = 40° or 50°, as shown in Fig. 13.The algorithm permits fine-tuning of the cutterhead design to achieve the best distribution.A quick look at the design shows that in many patterns, buckets can be easily allocated without interference with the cutters.This is one of the advantages of using a fully symmetrical design.One of the spots close to the transition or near outer flanks of the face cutters can be designated as the location of the access or manhole for entry to the cutterhead.The location of the manhole is not prescribed, since it can be literally anywhere that a 0.5–0.6 m radius hole can be placed.The difference in the performance of TBMs using cutterheads with different patterns in various conditions can be seen by cutterhead modeling, which will be discussed in the following section.Meanwhile, it is important to note that since the cutterhead is rotating, when the cutters are lined up in a spoke pattern, it is likely that quite a few of them will enter or exit a certain formation together, especially if the contact surfaces of different rocks are at the center of the cutterhead.This creates huge variations in the required forces and torques on the cutterhead, significant eccentric resultant force, and uneven loading of the main bearing.The procedure permits the identification of potential problem areas where a cutter could be overloaded due to the lacing pattern, and can thus provide a warning.Overloading of a particular cutter can happen despite the fact that the overall thrust and corresponding estimated average cutterloads are well within the thrust limits and nominal capacity of the cutters, as set by the machine manufacturer.In the model, the cutting forces are estimated, a full vector analysis of the forces is performed, and the amount of eccentric forces and moments can be determined.This modeling system also allows for full rotation of the head relative to a reference line in the face, and provides a powerful tool for cutterhead optimization.The model runs for 
full rotation of the head by changing the value of ψ, and records the estimated cutterhead thrust, torque, power, and eccentric forces and moments.The ideal situation and best cutterhead design are when the amount of eccentric forces are zero and the only resultant force and moment are in line with the Z or tunnel/machine axis.This situation is best for the main bearing and cutterhead support, while indicating a smooth operation with better alignment control.This situation is an ideal one, however; in reality, there are some levels of eccentricity in the forces, due to many factors.These include: the properties of dissimilar rock types at the face, joints, and fallouts; different wear patterns on the disks; and the accumulation of muck at the invert.However, a well-balanced cutterhead lacing can minimize these problems and provide better chances of survival for the main bearing as well as improved cutter life due to true tracking.The main bearing is typically designed to take 10%–15% of the nominal total thrust as eccentric force.Cutterhead balancing at this stage is performed by evenly distributing the cutters around the head in order to achieve minimum eccentric forces; this is often achieved using cutterhead symmetry.For this purpose, cutterhead simulation allows fine tuning of the location of the gage cutters to achieve a balanced cutterhead in case of any interference with muck buckets or cutterhead joints.Detailed cutterhead modeling permits the objective evaluation of various designs and head patterns.It allows quantitative comparison between different designs in any given geological setting.Although the variation of forces for a well-balanced cutterhead in a uniform face is minimal, the variability of forces and moments in mixed ground conditions could be significant.A major advantage of cutterhead modeling is its simulation of mixed ground conditions, in which dissimilar materials are present at the face.Programming the individual cutters allows the cutting forces in each rock type to be estimated, and thus provides the designer with actual forces in each section of the face.The highest contrast can be observed when the face is split between two formations at the center.In this situation, the components of the eccentric forces and moments are at their maximum.An example of such a condition is given in Figs. 15 and 16.Accurate estimation and quantification of these parameters are essential to evaluate the potential of imbalanced forces on the main bearing, as these can inflict major damage on the main bearing and cutterhead.A quick look at the figures shows that the lacing can impact the magnitude of the eccentric forces and moments, especially when dissimilar materials are cut at the face.In reality, it is very common to have some dissimilarity in the material at the face due to different lithologies, different locations of a joint or a joint set, variability in the strength of the rock, directional properties, anisotropy, and so forth.A comparison of Figs. 
15 and 16 shows the impact of an even distribution of the cutters on the eccentric forces and moments, even in a fully symmetrical cutterhead design.Lower out-of-center forces and moments result in better loading conditions on the cutterhead and main bearing.Thus, a comparison of the magnitude of the forces and moments permits a quantitative evaluation of the performance of various designs.This paper is a summary of the principal concepts involved in the design of cutterheads for hard rock TBMs.The general approach for developing an optimum design has been described in a step-by-step manner.Some design patterns were presented and their implications shown using various examples.It is necessary to keep in mind that the cutterhead will experience unbalanced forces and moments irrespective of the head design; however, uniform distribution of the cutters will minimize the variation of the eccentric forces and out-of-axis moments.An optimum design of the cutterhead will reduce the out-of-axis loading of the bearing, reduce the side forces on the cutters, and generally improve the performance of the machine; it will also reduce the maintenance requirements of the cutters, cutterhead, and drive system.The importance of cutterhead balance is paramount, and design optimization can be done using computer models that allow for variation of the design and evaluation of the forces and moments acting on the cutterhead.These models permit the simulation of various cutting scenarios and their impact on the forces, torque, power, and cutterloads.They can be used to compare various cutterhead design patterns for application under certain working conditions, and to identify possible modifications.These models also allow the estimation of the anticipated forces acting on individual cutters as well as examination of the forces and moments acting on the entire cutterhead or main bearing under various conditions.The result of a well-designed cutterhead is improved machine performance through higher ROP, low cutter and cutterhead maintenance, and higher machine utilization.Jamal Rostami and Soo-Ho Chang declare that they have no conflict of interest or financial conflicts to disclose.
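As a closing illustration of the workflow summarized above, the following sketch allocates cutters at a constant angular spacing, assigns an assumed thrust to each cutter for a hypothetical two-rock mixed face, and sums the forces and moments to estimate the eccentricity of the resultant. All radii, thrust values, and the split-face assumption are invented for demonstration and do not come from the designs discussed in the paper.

```python
import math

def layout_cutters(radii_mm, theta_s_deg, theta0_deg=0.0):
    """Assign angular positions around the head: theta_(i+1) = theta_i + theta_s."""
    return [(r, math.radians(theta0_deg + i * theta_s_deg) % (2 * math.pi))
            for i, r in enumerate(radii_mm)]

def face_demand(x_mm, y_mm):
    """Assumed per-cutter thrust (kN): harder rock on the left half of the face."""
    return 200.0 if x_mm < 0 else 120.0

def head_balance(cutters, demand):
    """Sum cutter thrusts over the head and return F_Z, M_x, M_y and eccentricity D_E."""
    F_Z = M_x = M_y = 0.0
    for r, theta in cutters:
        x, y = r * math.cos(theta), r * math.sin(theta)
        F_N = demand(x, y)            # thrust acting on this cutter, kN
        F_Z += F_N                    # resultant thrust along the tunnel axis, kN
        M_x += F_N * y                # moment about the X axis, kN*mm
        M_y -= F_N * x                # moment about the Y axis, kN*mm
    D_E = math.hypot(M_x, M_y) / F_Z  # offset of the resultant from the centre, mm
    return F_Z, M_x, M_y, D_E

# Simplified placeholder profile: center quad plus face cutters at 100 mm line spacing
radii = [50, 150, 250, 350] + list(range(450, 2151, 100))

for theta_s in (17.0, 60.0):          # well-spread layout vs. a 6-spoke-like layout
    F_Z, M_x, M_y, D_E = head_balance(layout_cutters(radii, theta_s), face_demand)
    print(f"theta_s = {theta_s:4.1f} deg:  F_Z = {F_Z:6.0f} kN,  D_E = {D_E:5.1f} mm")
```

Comparing the printed eccentricity for a well-spread angular spacing against a spoke-like spacing loosely mirrors the qualitative comparison of lacing patterns made in Figs. 15 and 16.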
The success of a tunnel-boring machine (TBM) in a given project depends on the functionality of all components of the system, from the cutters to the backup system, and on the entire rolling stock. However, no part of the machine plays a more crucial role in the efficient operation of the machine than its cutterhead. The design of the cutterhead impacts the efficiency of cutting, the balance of the head, the life of the cutters, the maintenance of the main bearing/gearbox, and the effectiveness of the mucking along with its effects on the wear of the face and gage cutters/muck buckets. Overall, cutterhead design heavily impacts the rate of penetration (ROP), rate of machine utilization (U), and daily advance rate (AR). Although there has been some discussion in commonly available publications regarding disk cutters, cutting forces, and some design features of the head, there is limited literature on this subject because the design of cutterheads is mainly handled by machine manufacturers. Most of the design process involves proprietary algorithms by the manufacturers, and despite recent attention on the subject, the design of rock TBMs has been somewhat of a mystery to most end-users. This paper is an attempt to demystify the basic concepts in design. Although it may not be sufficient for a full-fledged design by the readers, this paper allows engineers and contractors to understand the thought process in the design steps, what to look for in a proper design, and the implications of the head design on machine operation and life cycle.
16
Draft genome of the brown-rot fungus Fomitopsis pinicola GR9-4
We present the draft genome assembly and gene prediction of the fungus Fomitopsis pinicola, an important and ubiquitous brown-rot fungus of boreal forests that causes a cubical brown rot in both softwood and hardwood. The broad host range and the phenotypic differences in color and form, already observed by Fries, led to speculation that cryptic species might be present. Early findings based on crossings of single-spore isolates of F. pinicola confirmed intersterility among lineages within North America and between North America and Europe. Recently, a multi-locus phylogenetic study confirmed that F. pinicola is a species complex comprising four well-supported phylogenetic species, three in North America and one in Eurasia. The genome of F. pinicola GR9-4 was sequenced in order to provide a basis for population genomics, transcriptomics and genome-wide association studies. The final assembly contained 1920 contigs larger than 1000 bp (largest contig 788,230 bp), which were assembled into 1613 scaffolds larger than 1000 bp (largest scaffold 1,100,126 bp). In total, the genome sequence adds up to a size of 45 Mb, with a GC content of 56%. The sequencing read coverage depth of the total assembly was 127-fold. The gene prediction resulted in 13,888 gene models. The draft genome assembly statistics of F. pinicola compare well with other sequenced genomes within the Agaricomycotina. Genes with predicted functions in plant polysaccharide degradation, involved in the breakdown or modification of glycoconjugates, were identified in the genome. We found 403 carbohydrate-active enzymes (CAZymes) using the dbCAN pipeline with E-values below 1e-4. These were divided into 209 Glycoside Hydrolases, 81 Glycosyl Transferases, 5 Polysaccharide Lyases and 108 Carbohydrate Esterases. In addition to the CAZymes, 51 redox enzymes that act in conjunction with CAZymes and 48 Carbohydrate-Binding Modules were found. The CAZyme-coding gene profile of F. pinicola GR9-4 was similar to that of the North American isolate F. pinicola FP-58527 SS1, but their secondary metabolite profiles differed substantially, as analyzed by antiSMASH 3.0 genome mining of biosynthetic gene clusters. The next-generation sequencing data are available from NCBI under GenBank assembly accession GCA_001931775.1. We also applied a maximum likelihood analysis to infer the phylogeny of the three nuclear genes ITS, EF1A and RPB2 from a representative collection of isolates within the species complex, using Daedalea quercina as an outgroup, analyzed in MEGA 7. This analysis was performed in order to reveal the phylogenetic position of the two sequenced F. pinicola genomes within the four well-supported clades: European clade C, which is distinctly separated from North American clades A, B and D. Our sequenced isolate, GR9-4, belongs to F. pinicola clade C, while the previously sequenced North American isolate, F. pinicola FP-58527 SS1, belongs to the distinctly different F. pinicola clade D. This difference will permit future estimation of positive selection associated with the divergence between these species. Furthermore, information about the draft genome sequence of F.
pinicola GR9-4 will be helpful for future studies in population genomics and genome-wide association aimed at revealing the wood decay mechanisms of this brown-rot fungus. Basidiospores were collected overnight on a glass slide placed below living sporocarps. Serially diluted spores were spread on Hagem-agar petri dishes, a sterilized medium containing 0.5 g each of NH4NO3, KH2PO4 and MgSO4·7H2O, 5 g of glucose, 5 g of malt and 10 g of agar in 1 L of deionized water. Three to five days after inoculation, between 1 and 20 single germinated spores were collected under a dissecting microscope and cultured on fresh media to produce single monokaryon isolates. Monokaryon isolates were cultured in liquid Hagem medium for 3 weeks. DNA was extracted by a standard CTAB protocol. 3 μg of DNA was used for sequencing on the Illumina platform and 5 μg was used for the mate-pair library (ABI SOLiD). The raw sequencing data produced are shown in Table 1. The obtained reads were assembled, scaffolded, gap-filled and validated using ABySS 1.3.6, SSPACE-LongRead, GapCloser v1.12-r6 and BWA-MEM v0.7.4. The draft genome sequence was annotated with the MAKER pipeline, using transcripts and gene catalogues from other basidiomycete fungi as evidence.
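As an illustration of the CAZyme tallying reported above (hits kept at E-values below 1e-4 and counted per enzyme class), a minimal post-processing sketch is given below. It is not part of the published workflow; the file name and column positions of the dbCAN hit table are assumptions that would need to be matched to the actual parser output.

```python
import csv
import re
from collections import Counter

# Assumed inputs: a tab-separated dbCAN hit table with the HMM family name in
# the first column (e.g. "GH5.hmm") and the E-value in the fifth column.
# Adjust the file name and column indices to the actual parser output.
EVALUE_CUTOFF = 1e-4
CLASSES = ("GH", "GT", "PL", "CE", "AA", "CBM")

counts = Counter()
with open("dbcan_hits.tsv") as fh:
    for row in csv.reader(fh, delimiter="\t"):
        family, evalue = row[0], float(row[4])
        if evalue >= EVALUE_CUTOFF:
            continue                       # keep only confident hits
        m = re.match(r"GH|GT|PL|CE|AA|CBM", family)
        if m:
            counts[m.group(0)] += 1        # e.g. "GH5.hmm" contributes to "GH"

for cls in CLASSES:
    print(cls, counts[cls])
```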
Basidiomycete brown-rot fungi have a huge importance for wood decomposition and thus the global carbon cycle. Here, we present the genome sequence of Fomitopsis pinicola GR9-4 which represent different F. pinicola clade than the previously sequenced North American isolate FP-58527 SS1. The genome was sequenced by using a paired-end sequence library of Illumina and a 2.5k and 5k mate-pair library (ABI SOLiD). The final assembly adds up to a size of 45 Mb (including gaps between contigs), with a GC-content of 56%. The gene prediction resulted in 13,888 gene models. The genome sequence will be used as a basis for understanding population genomics, genome-wide association studies and wood decay mechanisms of this brown-rot fungus.
17
Antibody H3 Structure Prediction
Antibodies are proteins that bind to foreign objects that find their way into an organism, preventing them from causing harm and marking them for removal. A huge number of different antibodies can be produced – estimates vary, but it is thought that humans have the potential to produce up to 10¹³ different antibodies – making them capable of binding to a huge range of substances, from proteins on the cell surface of bacteria to non-biological small molecules. The substance that an antibody binds to is known as an antigen, and the specific region of the antigen to which the antibody binds is called the epitope. Mature antibodies bind with high affinity and are specific, meaning that they bind to other epitopes only very weakly, or not at all. The ability of antibodies to bind with high affinity and specificity to their targets means that they are good candidates for therapeutic and diagnostic applications. Since the first antibody treatment, muromonab, was approved in 1986 for the prevention of transplant rejection, the market has grown rapidly. By 2012, antibody therapies accounted for over a third of the total sales in the biopharmaceutical sector in the US, and they are currently the biggest-selling class of biopharmaceuticals. Although molecules from biological sources tend to be larger, more complex and far more difficult to characterise than traditional small-molecule drugs, they are promising as therapeutic agents. Antibodies have been used in many disease areas: some currently on the market include infliximab and adalimumab for the treatment of rheumatoid arthritis; trastuzumab and bevacizumab for cancer; and alemtuzumab for multiple sclerosis. Knowledge of an antibody's structure is extremely useful when developing a novel therapeutic, allowing it to be engineered more rationally. This knowledge can be used to increase binding affinity by guiding the choice of residues to be mutated, through the use of computational techniques such as binding affinity prediction, epitope and paratope prediction, stability measurements, and docking. Computational tools have already been used successfully to increase the binding affinity of antibodies. However, since experimental structure determination is time-consuming and expensive, the ability to computationally build accurate models of antibody structures from their sequences is highly desirable. This has become even more important as next-generation sequencing data for antibodies have become available. Antibodies vary from large, multi-chain and multi-domain complexes, like those found in humans, to small, single-domain molecules, such as nanobodies. However, binding always occurs in a similar fashion, through interactions between the antigen and a number of loops on the antibody called complementarity determining regions (CDRs). In standard mammalian antibodies there are six of these loops: three on the heavy chain and three on the light chain. In contrast, camelid antibodies, which lack a light chain, have only three. The CDRs are the most variable parts of the whole antibody structure, and they govern the majority of the antigen-binding properties of an antibody. The conformational diversity of five of the six CDRs is thought to be limited. For these CDRs, only a small number of different shapes have been observed, forming a set of discrete conformational classes known as canonical structures. Since its proposal in 1987, the idea has been reinvestigated many times as the number of known antibody structures has increased. These studies have led
to the identification of particular amino acids at certain positions that are thought to be structure-determining; the canonical class of a CDR of unknown structure can therefore be predicted from its sequence with high accuracy. The least diverse CDR is L2, with around 99% of known structures belonging to the same class. Unlike the other five CDRs, the H3 loop has not been classified into canonical forms; a huge range of structures has been observed. This is due to how antibody sequences are encoded in the genome. The complete nucleotide sequence coding for an antibody heavy chain is created by combining gene segments from different locations (a process known as V(D)J recombination, after the ‘variable’, ‘diversity’ and ‘joining’ segments). The DNA encoding the H3 loop is found at the join between the V, D and J gene segments, which, together with a process called junctional diversification, leads to a huge range of possible sequences. H3 loops vary widely in length: most are between 3 and 20 residues, but they are occasionally far longer. Bovine antibodies, for example, have H3s that are 50 or even 60 residues in length. For comparison, the canonical CDRs each have at most 8 different lengths, and are normally far shorter: the longest canonical form is 17 residues long, but there are few examples of these five loops with lengths over 15. The ‘torso’ of H3 loops has been observed to adopt one of two conformations, labelled kinked or extended. The majority of H3 loops are kinked. Proposals have been made about why this is the case, such as the interaction of a basic residue in the C-anchor with an asparagine located within the loop, and these have led to the development of rules that aim to predict which conformation will be adopted. However, as more antibody structures have become available, these guidelines have been revisited and found to fail in some cases. It is the H3 loop that is thought to contribute the most to an antibody's antigen-binding properties. It is located in the centre of the binding site, and normally forms the most contacts with the antigen. It has also been shown to have the greatest effect on the energetics of binding, and to be the part of the antibody structure that changes the most upon binding. Due to its location, the H3 loop contributes largely to the topography of the binding site — long H3s can create finger-like protrusions, and short H3s create cavities in the antibody surface with a specific shape that only allows certain antigens, with smaller or protruding epitopes, to bind. Knowledge of H3 structures is therefore extremely useful, enabling predictions to be made about antibody binding properties. H3 structure prediction is a specific case of protein loop modelling. The starting point of a loop modelling problem is a series of missing residues in a protein structure, where the sequence of the missing segment is known but the three-dimensional structure of those residues is not. The protein structure used as input may be an experimentally determined one, or a model. Predicting the structure of the loop requires three main steps: decoy generation, filtering, and ranking. In a similar way to the prediction of whole protein structures, where methods can be template-based or template-free, loop modelling algorithms can be divided into two categories depending on whether known structures are used in the decoy generation step. These categories are known as knowledge-based and ab initio. When predicting any loop structure, the first step is to generate a set of candidate conformations, or
decoys, that connect the residues on either side of the gap in the protein structure.These neighbouring residues are termed the anchors; specifically the N-anchor for the one closest to the N-terminus of the sequence and C-anchor for the one nearest the C-terminus.As previously stated, methods for predicting protein loop structures are divided into two categories, knowledge-based or ab initio, depending on how they generate possible conformations.Knowledge-based methods rely upon databases of previously observed protein structure fragments.Structures are selected according to certain criteria such as fragment length, fragment-target sequence similarity and how closely the anchor geometry of the fragment matches that of the target loop.Methods of this type are fast, and can be very accurate when the structure of the target loop is similar to one previously observed .However, there is not currently enough structural data to cover the conformational space, especially for long loops .When a similar loop structure has not been observed previously, knowledge-based methods either give poor predictions or fail to return a prediction at all.Examples of this type of algorithm include FREAD , SuperLooper , LoopWeaver and LoopIng .Ab initio methods do not rely on previously observed structures; instead, decoys are produced computationally.Ab initio methods work by exploring the possible conformational space, for example by randomly sampling the ϕ and ψ dihedral angles of the loop.While this allows novel structures to be generated, like knowledge-based methods ab initio algorithms have their limitations: they are computationally expensive, since many decoys must be generated to sample the conformational space sufficiently; and their prediction accuracy decreases with loop length.Ab initio algorithms include PLOP , Modeller , Loopy , LoopBuilder , LEAP , and the loop modelling routine within Rosetta .The idea of a hybrid loop modelling algorithm, combining knowledge-based and ab initio approaches, has been explored.CODA generates decoys using a knowledge-based method and an ab initio method separately, then combines the two decoy sets and makes a consensus prediction.Martin et al. , Whitelegg and Rees , and Fasnacht et al. have used similar approaches, and applied it to modelling H3 loops — initial conformations are selected from a database of structures, and the middle section is then remodelled using ab initio techniques.An alternative approach using Rosetta is described by Rohl et al. 
— this used a Monte Carlo-based fragment assembly method, in conjunction with a minimisation protocol.Depending on how the loops are built, the continuity of the protein backbone may need to be enforced through the implementation of a closure algorithm.Alternatively, a minimisation step may be introduced, where the energy function has a term that penalises an ‘open’ loop.Three types of loop closure algorithm exist: analytical, iterative or build-up.Analytical methods calculate the values of particular degrees of freedom that are required to produce a continuous backbone.This approach was first introduced by Go and Scheraga — they showed that the ϕ/ψ values necessary to close a loop can be solved mathematically for up to six angles.This approach is used to maintain loop closure in the loop modelling routine within Rosetta, in the algorithm called kinematic closure or KIC ."Similar algorithms are used in robotics, to move multi-jointed ‘arms' to specific locations in space .Iterative methods normally start with an open conformation, and gradually enforce its closure through a series of steps.A key example of this type is cyclic coordinate descent, or CCD — starting at one end of the loop, each ϕ or ψ angle is altered so that the distance between the free end of the loop and the fixed anchor is minimised.This continues iteratively, until the distance between the two ends is low enough to consider the loop closed.The change in angle required is calculated analytically; CCD can therefore be thought of as both an analytical and an iterative method.Build-up methods attempt to guide loop building such that a closed loop conformation is automatically generated.RAPPER, for example, builds loops starting from the N-anchor, and places restraints on each Cα atom added to the structure, limiting the distance they are allowed to be from the C-anchor .Loop closure is enforced by making the restriction gradually tighter as more residues are added.Some of the decoys generated will not be physically possible.For example, ϕ/ψ angles of the structure may be in the disallowed regions of the Ramachandran plot, or atoms may be too close together.A filtering step is therefore required to remove these structures.This step may be combined with the other parts of the loop modelling process; for example some algorithms combine it with decoy generation itself.The Direct Tweak loop closure method, for example, enforces a continuous backbone while monitoring the loop for clashes .Once all decoys have been generated, a ranking system is needed to select a final prediction; i.e. 
the one that is predicted to be closest to the true structure of the target.This is a vital step; even if decoys close to the native structure have been generated at a previous stage, an ineffective ranking system means that the structure chosen as the final prediction will be inaccurate.For knowledge-based methods, the ranking system may use properties of the decoy/fragment structure — for example the similarity between the target sequence and the decoy sequence, or between the geometry of the decoy anchors and the anchors of the target.FREAD, for example, ranks the fragments selected from a database by the root mean square deviation between the atomic positions of the target and fragment anchor residues .More commonly, especially for ab initio methods, an energy function is used to predict which structures are lower in energy and therefore more likely to be near-native.There are two main types of energy function: physics-based force fields and statistical potentials .Force-fields are equations with separate terms for the contribution of different properties to the energetics of a system."These include bonded interactions, such as bond lengths, bond angles, and dihedral angles; and non-bonded interactions, like electrostatics and van der Waals' forces .Further terms must also be added that consider the effect of solvation; this can be done using either an implicit model, which treats the solvent as a continuous medium, or the water can be treated explicitly, meaning that individual water molecules are added to the system.The terms are parameterised using empirical evidence.Some examples of force fields are AMBER , CHARMM and OPLS .Statistical potentials use pre-observed structures to infer the relative energy of a protein, based on the assumption that the distributions of particular structural features seen in nature reflect energetics .For example, the carbon to oxygen bond in the carbonyl of the protein backbone is regularly observed in experimentally-determined structures to have a length of 1.23 Å — a decoy with a C–O length of around this value is therefore likely to be more energetically stable than one with a C–O distance of 2 Å.Statistical potentials are attractive because the protein energetics do not necessarily need to be deciphered — these functions incorporate unknown or poorly-understood interaction terms into their calculation without having to explicitly include them .In addition, they are often faster to run than force field calculations, automatically include solvation and the potential can be smoother — small changes in conformation do not lead to huge differences in energy.Examples of statistical potentials include DFIRE , DOPE , GOAP and SOAP-Loop .Due to its structural diversity, structure prediction of the H3 loop is challenging.However, it is possibly the most important part of the structure to model correctly, since it is thought to be mainly responsible for the antigen-binding properties of an antibody .While some algorithms exist that do not treat it any differently to the other CDRs, this is not usual and a special approach is normally implemented, using a knowledge-based or ab initio approach, or some combination of the two.Some algorithms have tried to use the presence of a kinked or extended conformation to guide H3 loop modelling, by using a series of rules to predict which conformation is adopted from sequence.This information can then be used to either pre-filter a database of solved structures in the case of knowledge-based methods , or limit the 
conformational search of an ab initio algorithm .The current accuracy of antibody structure prediction is monitored through a CASP-style blind prediction test called the Antibody Modelling Assessment.The first AMA was conducted in 2011 — participants were given the sequences of several unpublished, high-resolution antibody structures and asked to model them.More recently, the results of the second assessment were published .AMA-II featured two rounds: the first entailed modelling the entire variable region from its sequence; in the second, the accuracy of H3 structure prediction was tested in isolation by giving the participants the native structures of the Fv regions with the H3 loop residues missing.Any errors introduced into the H3 model caused by inaccuracies in the framework structure are therefore eliminated, giving an impression of the current accuracy of H3 prediction.Each group was required to submit five predictions for each of ten H3 targets, with loop lengths ranging from 8 to 14 residues.The group that achieved the best results was Schrödinger, using the commercial Prime software.The loop modelling algorithm is freely available under the name PLOP .For eight of the ten targets, Prime produced the most accurate model.However, as is the case for all the groups, once all five of the predictions are taken into account instead of only the best, average RMSDs become far worse.There are several possible reasons for this: the set of loop models generated may only contain a couple of good models; or the ranking system used to select good models is inadequate.Alternatively, the five predictions for each target may have been purposefully chosen so that they cover a larger conformational space between them, preventing the submission of five very similar but incorrect models.This indicates that the ranking method used cannot consistently choose the best conformations.The results obtained during AMA-II, as well as some other H3 prediction studies, are shown in Table 1.Reasonable accuracies are currently being achieved for short H3 loops, but predictions become far worse for loops beyond that length.There is an appreciable difference in accuracies achieved modelling H3 loops compared to the other CDRs.For example, the knowledge-based method FREAD has been shown to produce sub-ångström predictions for the five canonical CDRs, while the average accuracy for H3 loops is 2.25 Å .RosettaAntibody also produces sub-ångström predictions for the other CDR loops, while the accuracy of H3 prediction ranges between 1.6 Å and 6.0 Å depending on length.The following sections provide details of the algorithms whose accuracy is reported in Table 1.Although the algorithms described are all H3-specific, or have been used to model H3 loops, they give an overview of the methodologies used for loop modelling in general.ABGEN is an antibody modelling tool published by Mandal et al. 
There are two parts to the algorithm: ABalign, which selects a template structure for each part of the antibody by sequence similarity; and ABbuild, which is responsible for generating the three-dimensional structure. The CDRs are modelled using a knowledge-based approach, and the H3 loop is not treated any differently — candidate templates are found from known antibody structures and selected based on sequence and length. If no loop of the same length exists, then the closest is selected. The loops are grafted onto the framework structure by superimposing the anchor residues. Residue mismatches between the template and target are then dealt with by replacing the sidechains, and clashes are avoided by iteratively changing the sidechain torsion angles. Prediction of the whole antibody structure is reported to take around 5 min. Accelrys is a software company that has produced an antibody prediction tool for commercial use; its performance was evaluated during AMA-II. Three different methods are used to predict the H3 loop; during the second round of AMA-II, the second of these was used. The final decoy selection is carried out based on clustering — all conformations are grouped by structural similarity, and the clusters are ranked according to the energy of their members. The lowest-energy model from the top-ranked cluster is given as the final prediction. On average, the algorithm takes 30 min to produce a prediction. The protocol used by the Chemical Computing Group (CCG) is a knowledge-based algorithm, used in conjunction with molecular dynamics; its performance was evaluated during AMA-II. Known H3 structures are scored based on backbone topology, bond lengths and angles, probability of ϕ/ψ angles, crystallographic occupancies, and temperature factors. After clustering, the member of each cluster with the highest score is put into a database. This database is enriched by running molecular dynamics simulations on these structures. Possible structures are selected from the database according to anchor RMSD, using a tight cutoff of 0.25 Å, and a final prediction is made using a score that takes into account H3-specific properties, such as solvent-accessible surface area, ϕ/ψ angles, and the interaction of the loop with the rest of the Fv. During AMA-II, the production of each full antibody model took around 30 min. FREAD is a knowledge-based method that selects possible loop conformations from a database of experimentally determined protein fragments. It is freely available to use online, and is used as the CDR structure prediction method within the ABodyBuilder antibody modelling software. Loops are initially selected as potential predictions according to the separation of their anchor residues compared to that of the target, and their sequence similarity to the target loop. The fragments are filtered depending on whether their insertion into the protein structure would cause clashes. These fragments are then ranked by the RMSD between the anchor residues of the fragment and those of the target loop; the loop with the lowest RMSD is assumed to have the most similar structure to the target and is hence returned as the final prediction. If no suitable fragments are found, however, FREAD does not produce a prediction. The computational time required varies with the size of the database being searched, but is normally around 1–2 min. Research into improving FREAD's ability to predict H3 loops led to a new version with an additional filter that considers the contact profiles of the fragments within the database. Each residue of each fragment in the
database is annotated with a number depending on the contacts it forms in its native environment: 0 for no contacts; 1 for external contacts; 2 for internal contacts; and 3 for both internal and external contacts.The actual contact profile of the fragment is then compared to its profile when inserted into the target structure — only fragments with matching pairs are retained.The final prediction was chosen in the same way as in the original FREAD algorithm.While this led to an increase in prediction accuracy, coverage was significantly lower.H3Loopred is a knowledge-based method that uses machine learning to predict which of a set of H3 structures is closest to the desired target structure.The software is available for download from biocomputing.it/H3Loopred.A Random Forest model was developed that uses several features to predict the similarity of a known loop structure to the structure of the target, using a measure called the TM score .The features used are a mixture of general and H3-specific properties: loop sequence, the canonical classes and lengths of the other CDRs in the antibody, source organism, germline family, and the similarity scores for each residue and the whole loop.If the structure from the database that is predicted to be the best has a predicted TM score of less than 0.5, then this loop is returned as the final prediction.Otherwise, the top 50 templates are ranked using a score that considers contacts.The average computation time required is 5 min per target .Kotai Antibody Builder is a simplified and automatic version of the software used in AMA-II by the joint Osaka University Astellas team .An online server is available at kotaiab.org.In the second round of AMA-II, H3 decoys were generated using a combination of a knowledge-based approach with molecular dynamics simulations.Spanner selects fragments from a database, filtering them using sequence similarity, secondary structure similarity, a clash score, the geometry of the anchor residues, and the predicted kinked/extended conformation of the loop.Minimisation of these structures is carried out using the OSCAR-loop energy function , and the top 20 structures are used as initial conformations for a series of MD simulations.Snapshots of the simulations are then grouped into five clusters, with the final set of predictions including one structure from each.BioLuminate and Prime are software packages produced by the Schrödinger company; their performance was evaluated in the AMA-II .Prime is the commercial version of the loop modelling algorithm PLOP.BioLuminate models antibodies using homology; CDRs are modelled by selecting templates from a database.Prime, on the other hand, is an ab initio algorithm.For stage 1 of AMA-II, where H3 predictions were made onto model frameworks, the three submitted models were generated in different ways: a straightforward template selection based on sequence similarity; template selection after clustering known H3 structures, taking the structure with the highest sequence similarity from the largest cluster; and ab initio prediction using Prime.In the second AMA-II round, the ab initio approach was used exclusively, but the target loop was extended by one residue on each side to make the terminal residues flexible.Prime uses a hierarchical approach to model protein loops: a ‘full’ prediction job is made up of many ‘standard’ jobs.Like many other ab initio methods, in a standard job loops are built by choosing random ϕ/ψ angles from Ramachandran distributions.However, instead of 
building loops by adding all residues onto one of the anchors with subsequent closure, they are built in two segments — half onto the N-anchor and half onto the C-anchor.Many structures are created for each half of the loop.All N-anchor segments are then compared against all C-anchor segments to find pairs that meet in the centre, thereby forming a complete loop structure.Decoys that have unrealistic dihedral angles or clash with the rest of the protein are filtered out, and all remaining loop structures are then clustered, and the representative structures undergo energy minimisation.A full prediction job is then made up of a series of standard jobs, with the conformational search space becoming narrower at each stage.By using the predicted best structures generated during previous stages to constrain the conformational search, the algorithm is guided towards creating structures of low energy.The final step is the ranking of all loop structures that were generated from all steps, according to their calculated energy; the loop with the lowest energy is returned as the final prediction.RosettaAntibody, which was one of the algorithms used during AMA-II , models the H3 loop using an ab initio approach."Loop modelling in the Rosetta protein modelling software is carried out using a kinematic closure protocol, made up of ‘KIC moves' .Prediction begins with the generation of a random loop structure."During a KIC move, three Cα atoms of the loop segment are chosen as ‘pivots, leaving the remaining Cα atoms as ‘non-pivots'.Dihedral angles of the non-pivots are sampled from Ramachandran distributions.The dihedral angle changes of the pivots required to maintain loop closure are then calculated analytically.The full protocol, which includes sidechain optimisation and backbone energy minimisation, involves carrying out KIC moves iteratively, with different pivot atoms each time.The lowest scoring model, according to the statistical Rosetta scoring function, is reported as the final loop prediction.‘Next-generation KIC’ is an new version of this algorithm .This improved protocol includes the sampling of ω dihedral angles, neighbour-dependent ϕ/ψ sampling, and annealing.For the first stage of AMA-II, models were generated using next-generation KIC without using any neighbour dependence during ϕ/ψ sampling.In stage 2, both this approach and ‘legacy KIC’ were used, and for those targets predicted to have a kinked conformation, constraints were added to enforce it.A more recent paper has explored this idea further, and has shown that the addition of the kink constraint improves sampling and therefore overall prediction accuracy .SmrtMolAntibody, the commercial antibody modelling software developed by Macromoltek, was also tested during AMA-II .An ab initio approach is used to model the H3 loop.Firstly, the first and final three residues of the loop are modelled according to their predicted kinked/extended conformation.The remaining residues are then added as dimers, where the ϕ/ψ angles of the two residues have been observed together in nature.After all decoys are generated, the structures are filtered, by checking each trimer for non-physical neighbouring dihedral angles, and finally ranked using a statistical potential.The reported time required to produce a full antibody model is 30 min .WAM uses different approaches to model H3 loops depending on their length .If the loop is shorter than eight residues, then a traditional knowledge-based algorithm is used.Specific databases are used depending on 
whether the loop is predicted to have a kinked or extended conformation.For loops of eight residues or more, the database search is followed by the remodelling of the middle five residues of the loop using an ab initio method, CONGEN .CONGEN produces decoys by calculating ϕ/ψ angles that form a closed structure, using the work of Go and Scheraga .The decoys undergo minimisation, and are clustered to remove duplicate conformations.The final prediction is selected from the pool of decoys using a score that considers surface accessibility, the RMSD of the decoy to known kinked H3 structures, and the calculated energy.Sphinx is a recently-developed method that integrates knowledge-based and ab initio approaches .An H3-specific version is freely available for use at opig.stats.ox.ac.uk/webapps/sabdab-sabpred/SphinxH3.php.The algorithm starts with a database search; loop fragments that are shorter than the target loop are extracted based on their sequence similarity to the target."The structural information within a fragment is then used to build decoys according to the alignment of its sequence to the target's — i.e. for residues in the target loop that are matched with a fragment residue in the alignment, the residue is modelled using the bond lengths, angles and dihedral angles of the matching fragment residue.If a target residue is not matched with a fragment residue, then the necessary information is drawn at random from relevant distributions, as in a straightforward ab initio algorithm.Loop closure is enforced using the CCD algorithm .Each selected fragment is used to generate 100 decoys, leading to a large set of possible conformations.Using a knowledge-based energy function, the number is reduced to 500, and these are subsequently minimised using Rosetta and ranked using the statistical potential SOAP-Loop .While the accuracy of H3 structure prediction has improved in recent years, as evidenced by the results of the two Antibody Modelling Assessments , the modelling of H3 loops remains the biggest challenge in producing accurate and useful antibody models.There remains a marked difference between the accuracy of H3 prediction compared to that of the canonical CDRs: these five loops are regularly predicted with sub-ångström accuracy while H3 prediction accuracy is much more variable, typically with an RMSD of between 1.5 and 3 Å, but often worse, in particular in the non-native environment.Since overall, the aim of this research area is to produce accurate models that can assist in the rational design of antibody therapeutics, the key results are those reported for H3 prediction in the non-native environment — it is obvious from results reported in the literature so far that improvements must still be made to enable the production of useful antibody models.An aspect of H3 prediction that is particularly challenging, identified as difficult by the organisers of AMA-II , is the accurate scoring of decoy structures — even if good conformations are made during decoy generation, it is often the case these decoys are not selected as final predictions due to poor ranking.Further developments in this area, along with continuing improvements to the accuracy of framework and canonical CDR modelling, would be of great benefit.The type of algorithms that were used in the second Antibody Modelling Assessment, considered to be the state-of-the-art, imply that there is a general movement away from purely knowledge-based loop modelling approaches when it comes to H3 structure prediction."Only one of the 
six algorithms examined could be classified as a knowledge-based approach, and this, along with the results shown in Table 1, are an indication that H3 structures are too diverse to be consistently modelled accurately at the current time using only previously-observed structures.By using more restrictive selection parameters, the performance of a knowledge-based algorithm can be improved, however coverage must be compromised and some other method must be used to model loops for which no close structural match can be found in the PDB.Using an ab initio method alone, on the other hand, means that any useful structural information that is available is ignored.The next logical step, then, is a hybrid method which takes advantage of both approaches.The development of such methods has already begun, with the prediction software of Accelrys Tools and KotaiAntibodyBuilder being assessed during AMA-II, the latest Rosetta algorithm, which uses knowledge of H3 structures to constrain the torso of the loop into a kinked conformation , and the more recent Sphinx algorithm .Further investigations into how the two approaches may be integrated should lead to more accurate, and hence more useful, antibody models.
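Because decoy ranking is repeatedly identified as the weak point, the skeleton of the filter-then-rank step is worth seeing in code. The sketch below is a generic illustration and not the implementation of any program discussed here: it discards flagged decoys and ranks the remainder by the RMSD of their anchor geometry to the target after Kabsch superposition, in the spirit of knowledge-based selection such as FREAD's. The coordinates, clash flags and atom bookkeeping are placeholder assumptions.

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD between two (N, 3) coordinate arrays after optimal superposition
    (Kabsch algorithm). P and Q must contain matched atoms in the same order."""
    P = P - P.mean(axis=0)
    Q = Q - Q.mean(axis=0)
    V, S, Wt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(V @ Wt))        # avoid an improper rotation (reflection)
    R = V @ np.diag([1.0, 1.0, d]) @ Wt
    return float(np.sqrt(np.mean(np.sum((P @ R - Q) ** 2, axis=1))))

def rank_decoys(decoy_anchor_coords, target_anchor_coords, clash_ok):
    """Filter out clashing decoys, then rank the rest by anchor-geometry RMSD
    (lowest first), mimicking the filter-then-rank structure described above."""
    scored = [
        (kabsch_rmsd(anchors, target_anchor_coords), i)
        for i, anchors in enumerate(decoy_anchor_coords)
        if clash_ok[i]
    ]
    return sorted(scored)                     # (rmsd, decoy index) pairs, best first

# Toy usage with random coordinates standing in for real backbone atoms
# (eight atoms, e.g. N, CA, C, O of two anchor residues).
rng = np.random.default_rng(0)
target = rng.normal(size=(8, 3))
decoys = [target + rng.normal(scale=s, size=(8, 3)) for s in (0.2, 0.8, 1.5)]
print(rank_decoys(decoys, target, clash_ok=[True, True, False]))
```

In a real pipeline the anchor RMSD would typically be replaced or complemented by a force field or statistical potential, but the overall structure of the step (filter out physically implausible decoys, score the rest, return the best-ranked candidate) stays the same.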
Antibodies are proteins of the immune system that are able to bind to a huge variety of different substances, making them attractive candidates for therapeutic applications. Antibody structures have the potential to be useful during drug development, allowing the implementation of rational design procedures. The most challenging part of the antibody structure to experimentally determine or model is the H3 loop, which in addition is often the most important region in an antibody's binding site. This review summarises the approaches used so far in the pursuit of accurate computational H3 structure prediction.
18
Business car owners are less physically active than other adults: A cross-sectional study
There is broad consensus that physical activity (PA) has a positive influence on health. Not only vigorous-intensity activities but also moderate-intensity activities are regarded as effective, provided that duration and frequency are sufficient. In addition, recent research showed that sedentary behavior (SB) -- i.e. any waking activity characterized by an energy expenditure ≤ 1.5 METs in a sitting or reclining posture -- is a health risk in itself. Although high amounts of moderate-intensity physical activity seem to decrease the risk of death associated with prolonged sitting, there is by now a growing concern about the health risks of SB. Considering both PA and SB is particularly relevant for the situation in the Netherlands, as European comparison studies showed that the Dutch population not only had the highest proportion of respondents who participated in moderate physical activity, but also reported the highest mean sitting time on weekdays compared to respondents from the other European countries. Active transport contributes to daily PA and has positive health effects. Research showed that 26% of Dutch commuter cyclists met the international PA guideline for adults merely by cycling to work. This PA guideline requires moderate-intensity aerobic physical activity for a minimum of 30 min on five days each week, or vigorous-intensity aerobic physical activity for a minimum of 20 min on three days each week, or a combination of both moderate- and vigorous-intensity activities. In more recent research on Dutch mobility data, the physical load of the mode of transport was converted into MET-minutes. The authors calculated that 38% of the Dutch adult population complied with the MET-hour standard solely through active travel -- either with or without the additional use of public transport. This standard was derived from the requirement of engaging in PA on a weekly basis for at least 150 min at an intensity of 4 METs. The latter study on Dutch mobility also revealed that the mere availability of a personal car was associated with less PA during transport, thereby complementing other research showing that transport by car was associated with less total daily PA. In the Netherlands, in 2015, 11.8% of all passenger cars were owned by a company, of which 615,000 cars were being leased. About 8% of Dutch households had access to a business car. Car ownership was positively associated with income, age, household size and male gender. Business car owners -- i.e.
main users of a car that has been made available by their company for an agreed period of time -- had a higher income than private car owners. Besides, compared to private car owners, business car owners were younger, more frequently male, more highly educated, lived in bigger households (HH) and had to cover more distance for commuting. Business car owners have to pay extra income tax for the private use of the business car and sometimes also accept a fixed salary reduction to compensate the company for a more expensive car than standard and/or for fuel costs for private trips. The consequence is that, after having accepted those fixed costs, the variable costs for the actual use of the business car are usually very low. Literature on the relation between the availability of a business car, active commuting and PA or SB is scarce. A study in the United Kingdom suggested that access to a business car was negatively associated with active commuting, but no measurement of PA or SB was reported in that study. Other preliminary evidence comes from a pilot study that used the same monitoring survey and method as the current study, but with slightly older data collected over a shorter period. In that pilot study, business car owners showed a lower rate of compliance with the Dutch moderate-to-vigorous physical activity (MVPA) guideline compared to private car owners: 26% and 46%, respectively. The Dutch MVPA guideline requires at least 30 min of at least moderate-intensity PA on at least five days per week, during both summer and winter. We explored whether business car owners comply less with the Dutch MVPA guideline and are more sedentary than private car owners and/or persons without a car in the HH. We hypothesized that business car owners -- and possibly also those with a business car available in the HH -- have a higher risk of less daily PA than private car owners, since the actual use of a business car is cheap, and almost free when the employer also pays the fuel expenses for private trips, which is common in the Netherlands. The almost free actual car use might result in more frequent, or habitual, use of the business car, including short-distance trips which otherwise possibly would have been covered by bicycle or on foot. Because habitual behavior might be more important than reasoned behavior in the choice of mode of transport, the availability of a business car could work as an incentive or ‘nudge’ in the direction of less PA. Besides, symbolic and affective motives are associated with car use, and possibly these motives play an even bigger role in business car users, because they often drive rather large, late-model cars. In addition, we hypothesized that the expected reduction in PA of business car owners as a result of less active transport would not, or not fully, be compensated by more PA during work or leisure time. The reasoning behind this assumption is that business car ownership is positively associated with income, and higher income is usually associated with a physically less demanding job. Although sports participation was presumed to be more prevalent in business car owners -- as sports participation in the Netherlands is positively associated with income -- we expected that the frequency of sports participation of business car owners would be insufficient to fully compensate for their reduced PA during transport and working hours. Furthermore, since business cars in the Netherlands cover more than twice the distance of private cars, it is likely that business car owners spend more time sitting
during transport than private car owners.Because we also expected that business car owners have less physically demanding jobs, we hypothesized that their total ST during workdays exceeds the ST of private car owners, co-users of a car and persons without a car in the HH.Finally, these associations with reduced PA and increased ST might also be present in co-users of a business car in the HH, so this relation was explored in this study too.Data of the Dutch monitor ‘Injuries and Physical Activity in the Netherlands’ were used.OBiN was a continuous survey among the Dutch population that ran until 2015 to monitor injuries, sports injuries, sports participation and PA.Yearly, a net sample of approximately 10,000 respondents, randomly selected from a panel, responded to the questionnaire.Data sampling within the age group of 18–65 years old was online or with computer aided telephonic interviewing, according to the preference of the panel member.PA was assessed using questions that were specially developed for OBiN in order to estimate the proportion of respondents meeting the Dutch PA guidelines: ‘For the questions below, please think about physical activities, such as walking, cycling, gardening, sports or exercise at work or at school.This involves all activities with intensities that are at least equal to walking at a firm pace or cycling: 1) On how many days per week do you engage in such activities for at least 30 minutes per day during the summer?,Please report the average number of days for a regular week’, 2)’And on how many days per week do you engage in such activities for at least 30 minutes per day during the winter?,Please report the average number of days in a regular week’.The results of these questions were compared with the data of an a combined heart rate monitor and accelerometer, which showed in 46% of the participants full agreement in assessing them in three categories of compliance with the Dutch MVPA guideline -- i.e. 
‘norm-active’, ‘semi-active’ and ‘inactive’ -- and in 0.5% of the participants disagreement of more than one category. Furthermore, respondents were asked whether they had engaged in sports activities during the past 12 months and, if so, how many times a week or how many times a year. Respondents were considered a ‘sports participant’ if they engaged in sports activities at least 40 times a year, regardless of the type of sport. A random subsample of approximately 25% of the participants was asked about their ST during work/school days: 1) ‘Can you estimate the number of hours that you spend sitting on a regular school-/workday at school/work, including transport to and from school/work?’, and 2) ‘Can you estimate the number of hours that you spend sitting/lying after school/work time on a regular school-/workday, including the evening but excluding sleep time?’. Total ST on a regular school-/workday was calculated as the sum of ST at school/work and ST after school/work. The 25%-subsample was also asked about the type of their job: ‘Do you need to walk a lot at your work, or are you mainly standing or sitting?’ (answer options: 1) ‘much walking’, 2) ‘standing’, 3) ‘sitting’). Finally, questions on age, gender, highest level of education, employment status and complaints of chronic disease were included. For the purpose of this study, from October 2011 until September 2012 the following questions were added to the OBiN survey to be answered by all respondents: 1) ‘How many cars are present in your household, including a possible business car?’, 2) ‘Is that car a business car?’, 3) ‘Are you the main user of the business car in your household?’, 4) ‘Are you the main user of the private car in your household?’, and 5) ‘On average, how often do you use a car?’. Questions 1, 3 and 4 were derived from the Dutch national transport survey. Respondents who indicated that they were the main user of a car in the HH were coded as ‘owner’, the others as ‘co-user’ or ‘no car in HH’. During the sampling period, 6185 panel members aged 18 to 64 years responded to the online questionnaire, of whom 4660 completed the questions to assess compliance with the MVPA guideline, the availability and ‘ownership’ of a car in the HH, car use frequency and sports participation. In the 25%-subsample, 741 out of 1201 respondents completed the questions to assess ST on work/school days and the question about their type of job. To analyze the data in the total sample as well as in the 25%-subsample, six mutually exclusive groups of respondents were formed: 1) business car owners, 2) co-users of a business car in the HH, 3) private car owners with also a business car in the HH, 4) private car owners, 5) co-users of a private car in the HH, and 6) respondents without a car in the HH. Group 3 was formed because it was not clear beforehand whether to classify these respondents as ‘co-users of a business car’ or as ‘private car owners’. Differences in personal characteristics, car use frequency and ‘sports participant’ between the six groups were analyzed with chi-square tests and Kruskal-Wallis tests. Both chi-square and Kruskal-Wallis tests were also used to assess differences in personal characteristics between respondents belonging to the 25%-subsample and those who were not. To assess differences between the six groups in complying with the MVPA guideline, a multiple linear regression analysis was performed on the total sample, with compliance with the MVPA guideline as the dependent variable. This analysis was repeated several times
with different groups as the reference category to test the found differences on statistical significance.The categorical variables were dummy-coded before including them in the analysis.Personal characteristics, car use frequency and sports participant were included in the model as independent variables to perform adjusted analysis.A separate multiple linear regression analysis was performed to assess the difference in complying with the MVPA guideline between business car owners and all other respondents.This analysis was repeated in male and female respondents separately.Although complying with the MVPA guideline is a binary variable, we used linear regression because it presents absolute differences in percent points, which facilitates the interpretation for policy and practice.The use of linear regression is justified if the absolute percentage of the dependent variable in the analyzed groups is between approximately 20% and 80%.This assumption was checked and the results of these linear analyses were checked with multiple logistic regression.To assess differences between the six groups in ST during school/workdays, a multiple linear regression analysis was performed using the 25%-subsample.The independent variable ‘type of job’ was included in the model to adjust for any effect of ‘type of job’.The variable ‘paid job’ was removed, because this was used as a selection criterion for the 25%-subsample.A separate multiple linear regression analysis was performed to assess the difference in ST during school/workdays between business car owners and all other respondents in the 25%-subsample.The 25%-subsample was also used in a multiple linear regression analysis to assess the difference in compliance with the MVPA guideline of business car owners versus all other respondents of the 25%-subsample, to adjust for type of job.For the statistical calculations SPSS version 22 was used.Significance level was set at p<0.05.In the total sample, almost all characteristics significantly differed between the six groups of car ownership.The percentage of males in the group ‘business car owner’ was much higher than in the other groups, both in the total sample and the 25%-subsample.In the 25%-subsample we also found statistically significant differences between groups for the personal characteristics.Subsequently, we compared the 25%-subsample with the remaining respondents.We found that the 25%-subsample the contained a higher percentage of business car owners, business car co-users and respondents with no car in the HH compared to the remaining respondents, whereas the percentage of private car co-users was lower in the 25%-subsample.The 25%-subsample significantly differed from the remaining respondents: they were younger, included more men, had a higher education and reported a higher level of car use frequency.Table 2 shows that business car owners reported the lowest compliance with the MVPA guideline in the total sample.The difference with the other groups remained significant after adjusting for confounding variables, i.e. 
personal characteristics, car use frequency and sports participant.In a separate linear regression analysis, the group ‘private car owner & business car in the HH’ did comply less with the MVPA guideline compared to ‘business car co-users’, ‘private car co-users’ and respondents without a car in the HH, but only the difference with ‘business car co-users’ remained significant after adjusting for confounding variables.In the total sample, the difference in compliance with the MVPA guideline of business car owners compared to all other respondents was −15.8 percent points, which was statistically significant.The difference changed after adjustment for personal characteristics, car use frequency and sport participant to −11.6 percent points, which was still statistically significant.In men, 27.4% of the business car owners complied with the MVPA guideline, whereas 44.4% of all other male respondents complied with the MVPA guideline.After adjusting for all other studied variables except type of job, the difference changed to −14.4 percent points, which was still statistically significant.In women, 35.2% of the business car owners and 44.8% of all other female respondents complied with the MVPA guideline.This is a 9.6 percent points lower compliance, compared to all other female respondents,t.After adjusting for all other studied variables except type of job, the difference remained non-significant.Table 3 reports the results of the multiple linear regression, using the 25%-subsample, including also the assessment of the influence of the independent variable type of job on the difference in compliance with the MVPA guideline of business car owners compared to all other respondents.The crude difference in the 25%-subsample was −16.1 percent points.After adjusting for personal characteristics, car use frequency, sports participant, and also including type of job, the difference changed to −10.6 percent points, which was no longer statistically significant.The results of this linear regression indicated that car use frequency and type of job could partly explain the lower compliance with the MVPA guideline in business car owners, but personal characteristics and the variable sports participant could not.In the 25%-subsample the number of female business car owners was too low for reliable further analyses by gender.Table 2 also reports the differences between the six groups in ST on work/school days, as measured in the 25%-subsample.Business car owners reported significantly more ST during work/school days compared to private car owners and private car co-users.The group ‘private car owner & business car in the HH’ did not report more ST during work/school days compared to respondents without a business car in the HH.ST during work/school days differed significantly between business car owners and all other respondents combined.After adjusting for personal characteristics, the difference was no longer significant.In the 25%-subsample, the analyses by gender were limited due to the low number of female business car owners.Male business car owners reported 10.6 h ST during work/school days and all other male respondents 9.6 h.Female business car owners reported 2.4 h more ST during work/school days than the other female respondents in the 25%-sample.This is the first study to report on the relation between the availability of a business car and compliance with the MVPA guideline and SB.Only 28.8% of business car owners complied with the MVPA guideline, which is significantly lower compared to 
private car owners, adult co-users of a car in the HH and adult respondents with no car in the HH.Adult household members of business car owners also tended to comply less with the MVPA guideline, but only those who owned a private car themselves.Furthermore, business car owners reported about 1.5 h more ST during work/school days compared to all other respondents, but after adjusting for covariates, this difference was not significant.The ST of HH members of business car owners did not significantly differ from the other groups.The lower compliance with the MVPA guideline of business car users is relevant for public health reasons, since there are more than 900,000 business cars in the Netherlands in a population of approximately 17 million inhabitants, and government policy in the Netherlands aimed at a 5 percent point increase in compliance with the MVPA guideline in the population.As they also reported longer ST during work/school days, business car owners form a potential target group for future interventions to stimulate PA and to reduce SB.Although not unexpected, it is remarkable that over 80% of the business car owners were men.Nevertheless, female business car owners also tended to have a lower compliance with the MVPA guideline and to report more ST during work/school days compared to all other female respondents.However, because the number of female business car owners was too low for further analysis, this conclusion should be confirmed by future research.The findings in this study support the hypothesis that business car ownership is associated with lower compliance with the MVPA guideline compared to ownership of a private car.The finding that a relevant part of the lower compliance with the MVPA guideline of business car owners appeared to be associated with their high car use frequency, and that their lower compliance with the MVPA guideline could not be explained by differences in personal characteristics, further supports the hypothesis.The finding that the differences in age, gender, level of education, paid job and complaints of chronic disease hardly influenced the compliance with the MVPA guideline in business car owners is remarkable, as these variables are associated with compliance with the MVPA guideline in monitoring data in the Netherlands.Our hypothesis that the expected lower compliance with the MVPA guideline of business car owners would not be compensated by more PA at work was supported in this study, as they reported a high prevalence of jobs with much sitting.This accounted for approximately 1.5 percent points of their lower compliance with the MVPA guideline.Although business car owners, as expected, tended to report a higher rate of sports participation than other respondents, adjusting for sports participation led to an increase of their compliance with the MVPA guideline of no more than approximately one percent point.This indicates that their higher frequency of sports participation hardly compensates for their reduced compliance with the MVPA guideline resulting from other activities.Although we did not measure time spent on transport, the significantly lower compliance with the MVPA guideline of business car owners compared to private car owners, and the association of high car use frequency with low compliance with the MVPA guideline, support the hypothesis that the availability of a business car, i.e. 
easy access and mostly free of charge use of a car, is a risk factor for reduced PA.Unexpectedly, the high ST of business car owners during workdays was not associated with their high car use frequency.An explanation for this finding might be that the amount of ST while traveling for work is relatively low compared to the total ST spent on workdays.Another explanation might be that frequency of car use, as assessed in the questionnaire, insufficiently correlates with time spent sitting in a car.The finding that co-users of a business car, who don’t own a private car, reported the highest rate of compliance with the MVPA guideline does not support our hypothesis.It indicates that the mere availability of a business car in the HH is not a relevant risk factor in itself for low compliance with the MVPA guideline.The rather high number of representative respondents of the Dutch population is a strength of this study.By adding questions on car availability and car use to a continuous national survey, a high number of respondents representative of the Dutch population could be included in this study.However, using this already existing monitoring survey also had a limitation, because there was no room for more detailed questions on e.g. mode of travel, trip purpose, transport time, environment and land use.Besides, questions on ST and type of job were answered by a smaller group of respondents.In several answering categories to these questions, the number of respondents was too low to be able to perform reliable analyses.Furthermore, the data were collected using a questionnaire, which is subject to bias.Finally, as this is a cross sectional study, no statements can be made on causality in the found relationships.Although research on specific measures to reduce this risk of a lower compliance to the MVPA guideline and/or increased SB in business car owners is lacking, results from research on transport policies, on practices to increase PA and on perceived barriers and facilitators for cycling or walking could be informative to develop effective interventions for this risk group.First of all, the possible health risk of a reduced compliance to the MVPA guideline and an increase of SB in business car owners could be addressed by informing employers as well as employees.As a result, employers might consider this risk when deciding on travel arrangements for their employees, e.g. to offer them also a free public transport debit card for business trips that partly or completely can be made by train.Besides, increased awareness of the health risk in employees is very important if you want them to change their attitude concerning the use of a business car.Furthermore, business car owners might be advised to engage more frequently in sports and fitness activities, and also on workdays, to compensate their lower PA resulting from active transport and work, especially since a recent study showed that high levels of moderate intensity PA can possibly -- at least to some extent -- attenuate the negative health effects of prolonged sitting.As more than 80% of the business car owners reported to have a job where they mainly sit during their working hours, interventions in the organization and culture at the work place could be also considered, e.g. 
less use of e-mail to colleagues nearby, alternating between sitting tasks and tasks that require walking or standing for some time, as well as the use of furniture that allows standing for a period to interrupt sitting.Policy makers responsible for transport and for fiscal arrangements involving transport should be informed about the possible enhanced health risk of business car owners as a result of lower compliance with the MVPA guideline.Finally, car lease companies should also be informed, as they could consider a more differentiated offer which makes it easier for the client to choose, when possible, other modes of transport than solely the car.The results of this Dutch study might be applicable to other countries, regions or cities, as fiscal incentives for business car use are prevalent in other countries as well.They might also be relevant for countries where the prevalence of business cars is low, but where other financial incentives for car ownership and use are present.The findings might also be relevant in regions or cities with a low bike mode share in daily transport, provided that there is a realistic opportunity for cycling.Although the distances and the infrastructure for cycling and walking may not be comparable with the situation in the Netherlands, cities in many countries nowadays develop policies to promote cycling.In regions which are less suited for cycling, a shift from transport by car to public transport can also be favorable for daily PA, as shown in other research.Before advising comprehensive measures to decrease the availability and use of business cars, more research is needed to confirm the current results, preferably with a longitudinal design and using an objective instrument to measure PA and SB, and to assess whether business car ownership is associated with reduced active transport.In addition, besides the known determinants of the choice for a mode of transport in commuting and other trips, insight into the determinants of PA and SB of business car owners is needed.Qualitative research among business car owners could possibly identify other, more specific determinants.Finally, it would be worthwhile to include questions in surveys determining the compliance with the MVPA guideline as well as in transport surveys addressing the following issues: the availability of a business car or other company arrangements for car use, the mode of commuting in relation to the availability of private and business cars, and the number of bicycles in the HH.To summarize, the results of this study suggest that business car owners have a significantly higher risk of not complying with the MVPA guideline than owners of a private car, adult co-users of a car and adults with no car in the HH.In addition, they tend to spend more hours sitting during workdays than other adults.This may, based upon evidence from health research on PA and SB, negatively affect the health of business car users.Hence, we should consider informing business car owners, employers and car lease companies about the health risks of reduced compliance with the MVPA guideline associated with business car ownership.
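The modelling approach described in the methods above (a linear probability model with dummy-coded car-ownership groups, adjusted for personal characteristics, car use frequency and sports participation, and checked against logistic regression) can be sketched as follows. This is a minimal illustration only: the original analyses were run in SPSS version 22, and all file, column and group names below are hypothetical.

```python
# Hedged sketch of the analysis described in the methods; not the authors' SPSS syntax.
# All file names, column names and group labels are assumptions for illustration.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey.csv")  # one row per respondent (hypothetical file)

# Linear probability model: coefficients * 100 give absolute differences in percent points.
formula = (
    "mvpa_compliant ~ C(car_group, Treatment('private_car_owner')) "
    "+ age + C(sex) + C(education) + car_use_frequency + sports_participation"
)
linear = smf.ols(formula, data=df).fit()
print(linear.params * 100)  # adjusted group differences in percent points

# Check of the linear results with multiple logistic regression, as described in the text.
logistic = smf.logit(formula, data=df).fit()
print(logistic.summary())
```

The 20%-80% assumption mentioned in the methods can be checked beforehand by tabulating the group-wise compliance rates, e.g. df.groupby("car_group")["mvpa_compliant"].mean().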
Active transport contributes to increased daily physical activity (PA). Car ownership is associated with less frequent active transport and less PA. For business car ownership this relation is unknown. Therefore, we explored whether business car owners and their adult household members comply less with the Dutch moderate to vigorous physical activity (MVPA) guideline and are more sedentary than private car owners and persons without a car. From October 2011 to September 2012 questions about use and availability of cars in the household were included in the survey Injuries and Physical Activity in the Netherlands. Multiple linear regression was used to compare six mutual exclusive groups of ownership and availability of (business and/or private) cars in the household. Business car owners complied less (15.8 percent points) with the MVPA guideline than the other respondents. They also reported 1.5 h more sitting time during workdays than the other respondents, but after adjusting for covariates, this difference was no longer significant. We concluded that owners of a business car in the Netherlands are at higher risk of not complying with the MVPA guideline and tend to spend more hours sitting during workdays than other adults. Further research in this group, e.g. with objective instruments to measure physical activity and sedentary behavior, is recommended. Policy makers on transport and fiscal arrangements, employers, employees, occupational health professionals and car lease companies should be aware of this possible health risk.
19
Construction of a versatile SNP array for pyramiding useful genes of rice
By 2050, the world population is expected to surpass 9 billion.To address the monumental challenge of feeding 9 billion, a 70% increase in global agricultural production has to be reached (http://faostat.fao.org/default.aspx).Rice is one of the most important food crops, providing up to 76% of the caloric intake in Southeast Asia and up to 23% of the caloric needs worldwide .Increasing rice production, therefore, would play a key role in efforts to secure the world food supply.Superior rice varieties with high yield, good eating quality, pest and disease resistance and other good agronomic characteristics have been developed by traditional breeding.However, there is still a continuing demand for new varieties that would further improve rice production.To this end, gene pyramiding has been an efficient breeding strategy that allows incorporation of multiple genes of agronomic importance into a single variety .Currently, a number of useful genes or quantitative trait loci that are related to agronomical traits have been identified.Genes controlling yield-related traits such as panicle or seed shape , as well as biotic stress resistance genes that can be used to rationally manage rice pests and diseases have been identified and cloned.DNA markers are now available to precisely select useful alleles for these traits and pyramid them in rice.With the development of DNA techniques, marker-assisted selection has become an indispensable component of breeding.MAS not only eliminated the extensive trait evaluation involved in gene pyramiding but also reduced the breeding duration for a variety.DNA markers such as restriction fragment length polymorphism, simple sequence repeat, and single nucleotide polymorphism have been used in MAS, with SSRs being the most common marker of choice due to their abundance in the genome, robustness, reproducibility, and low cost.SNPs, on the other hand, have not been used as extensively in breeding programs because of the difficulty in their detection.Thus far, the application of SNPs in marker-aided breeding has entailed laborious and complicated techniques such as allele-specific PCR or restriction-enzyme based methods like cleaved amplified polymorphic sequence .In spite of this, the utilization of SNPs as molecular markers for breeding is becoming a real possibility.SNPs make up the largest amount of DNA polymorphism in the eukaryotic genome and hence can be used more rationally in marker-based breeding .In rice, 1.7 million SNPs have been detected by comparative analysis of the draft genomic sequences of cv.Nipponbare and 93-11 .A vast amount of information on 160,000 high-quality rice SNPs is also now available in OryzaSNP and in newer databases that serve as repositories for the vast amount of information generated from the 3000 rice genomes project .As for SNP detection, modern SNP genotyping techniques are enabling automated, multi-locus allele calling in many samples at a time .In recent years, there has been a shift from the use of the microarray platform towards the use of Illumina’s Golden Gate technology for SNP detection .The Golden Gate technology is based on allele-specific extension and ligation, and can genotype 1536 SNPs on 96 samples in its original format .This gives the advantage of an efficient, high-throughput marker system with lower cost per data point, unlike the microarray-based method, which might be advantageous in detecting a large number of SNPs but is too expensive for breeding purposes.The highly automated Golden Gate-based method 
also requires only a minimum quantity of DNA, allowing selection in the seedling stage before transplanting.More importantly, the Golden Gate system offers the advantage of haplotype-based selection because it can simultaneously detect appropriate numbers of multiple SNPs for MAS.In the present study, we developed a MAS system based on SNPs.As a first step, the whole genome SNP information for 31 rice lines that were selected for the authors' breeding project was collected using the Affymetrix SNP microarray .The SNPs within the genomic regions that are linked to 16 target genes were then converted to Illumina's Golden Gate Veracode oligo pool assay.The quality of SNP detection and the application of the SNP array for haplotype-based MAS were confirmed using 24 parental lines out of the 31 varieties, and early breeding populations.Thirty-one rice varieties that were selected for the authors’ breeding project, Wonder Rice Initiative for Food Security and Health (WISH), were used in this study.WISH is a breeding program that aims to improve the yield and biotic stress resistance of existing rice varieties that are preferentially grown by farmers for their inherent adaptation to a wide range of environments.The 31 varieties contain the donors of 16 target genes and potential recipient varieties, which are leading varieties, high-biomass varieties, or varieties with abiotic stress tolerances, and 1 variety with an unidentified insect resistance gene.A total of 16 target genes controlling yield-related traits and pest resistance are listed in Table 2.The map positions of the cloned genes were assigned using the Nipponbare genomic sequence as reference .The genomic positions of the genes that are not yet cloned were determined based on available public information or data obtained by the authors.Alleles for GW2 and GS3 were discriminated by using a PCR-based method.All of the 31 varieties carry the small allele of GW2, and 7 varieties carry the long allele of GS3.Functional alleles of qSW5 were not known for all varieties.To reduce redundancy, GW2, GS3 and qSW5 are not indicated in Table 1.Rice genomic DNA was extracted from young, green leaf tissue using the DNeasy Plant Mini Kit.Hybridization and signal generation using the GeneChip Rice 44k SNP Genotyping Array were conducted following the methods of .Fluorescence intensity was detected using the GeneChip Scanner 3000 7G.“.CEL” files containing signal intensities were obtained.The genotypes of the 31 varieties were called using the ALCHEMY program following the methods of .To select the SNP markers to be converted into the Golden Gate assay, graphical genotypes based on the microarray data were first made visible using the Flapjack program .The haplotype information from approximately 200 kb upstream and downstream regions of the target genes was then collected.SNPs with low quality, i.e. 
low call rate, were removed and haplotypes that were as unique as possible to the useful alleles of the target genes were visually selected, taking into consideration the distance to the gene positions and the balance of the minor allele frequencies.Up to a total of 22 SNP loci were chosen for each target gene.The positions of the selected SNP sites were corrected based on the reference sequence available at IRGSP1, and the flanking sequences on the microarray for each SNP site were verified using the information available from the Rice Diversity project website .Additional flanking sequences were obtained from the reference Nipponbare genome and were used for designing the Golden Gate OPA.Functional nucleotide polymorphisms (FNPs) for the genes GW2, GS3 and pi21 were directly used to design the OPA.The assay design tool provided by Illumina was used to design the custom OPA.A total of 143 SNPs comprise the final version of the custom OPA.To verify the precision of the custom OPA, the 24 varieties which covered all of the 16 genes and LG10, the donor of GW2, were genotyped with the Golden Gate assay using the Illumina BeadXpress instrument following the manufacturer’s instructions.The same DNA samples used in the microarray analysis were used.Genotypes were called using the Genome Studio Genotyping Module version 1.8.4.SNPs with low dispersion in genotype clusters were removed.To confirm the feasibility of the custom Golden Gate OPA, 3 F2 populations from the crosses ST12 x IRBB4/5/13/21, T65BPH25/26 x ST12 and ST6 x T65GRH2/4/6 were used.DNAs of 12, 15 and 15 plants, respectively, from each of the 3 F2 populations were extracted and genotyped using the custom SNP array.Using the same DNA samples, genotypes at the GN1indel and RM5493 loci of all samples were determined by PCR and electrophoresis of PCR amplicons on 4% agarose gel.GN1indel is located at 5275477-5275606 bp within the coding region of Grain Number 1a on rice chromosome 1, whereas RM5493 is approximately 750 kbp downstream of the Wealthy Farmer’s Panicle locus on chromosome 8.Concordance between the genotypes of samples generated using the custom SNP array and the two PCR markers was assessed to verify the accuracy of the SNP array.Whole genome SNP analysis of the 31 parental varieties that were previously selected for the authors’ breeding program was conducted using the Rice 44k SNP Genotyping Array, which can detect 1 SNP per 5 kb of genomic DNA .Because Affymetrix’s native BRLMM-P base-calling program was not applicable to rice and not appropriate for this small population size, the authors employed the ALCHEMY program .Using the same parameters for the informatics, a comparable number of high-quality SNPs were obtained in this study.Linkage disequilibrium of SNPs within the subspecies groups of Oryza sativa has been reported to be greater than 500 kb in temperate japonica, 150 kb in tropical japonica, and 75 kb in indica .The rice genome is approximately 389 Mb and linkage maps typically contain approximately 1500 cM .Therefore, 1 cM in rice is considered to be approximately 260 kb.Based on this consideration, the authors defined the 200 kb region upstream and downstream of the target gene as the haplotype block to allow selection for target alleles with sufficient precision (a schematic sketch of this window-based filtering is given at the end of this article).For example, a total of 10 SNP markers representing the haplotype of the WFP-ST12 allele were selected.In the same manner, the SNP markers for 4 other cloned genes, Gn1a, qSW5, APO1 and Xa21, were selected by graphical genotyping.Because the causal mutations for 
GW2, GS3, and pi21 are already known , these sites were directly used to design the custom array.For genes that are not yet cloned, linkage map information was used.For example, Ovc is located near the RFLP marker R1954 on rice chromosome 6 .Therefore, the haplotype surrounding R1954 was used for the custom array.SNP sites of the remaining genes were considered based on publicly available literature and additional mapping information obtained by the authors.Using linkage, the accuracy of SNP selection was estimated to be higher than 95%.Selected SNPs from the haplotype blocks of each of the 16 target genes were regarded as first-step candidates for SNP markers.ADT scoring was conducted using the first-step candidate markers, and markers with a low ADT score were replaced with other possible SNPs.A total of 143 sites including 142 SNPs and 1 InDel were converted to the Golden Gate custom OPA.In the SNP microarray analysis, genotype calls using ALCHEMY gave an output in an AA/BB format.The AA indicates an allele that is identical with that of the reference Nipponbare genome.In the present study, however, the data obtained from Nipponbare contained BB.This may be due to diversity within the variety, probably because the sample was maintained at Nagoya University.The genotype calls using the Genome Studio software were converted to the same format before the results from the microarray and the Golden Gate array were compared.The call rate of the parent varieties by the custom OPA was 98.5%.The comparison of haplotypes obtained for the WFP gene is shown in Fig. 2.In the 24 varieties tested, and when considering the 139 SNPs designed from the Affymetrix SNP microarray, the rate of genotype concordance between the SNP microarray and the Golden Gate array was 95.2%.This reduced rate is because of the presence of low-quality SNPs in the custom OPA: 23 out of the 139 SNPs contained missing data greater than 10% among the 24 varieties.This is most notable in the SNPs covering the Xa4 gene, where 7 out of the 8 SNPs showed consistency in fewer than 16 out of the 24 varieties.When the SNPs for Xa4 were removed, the consistency improved to 97.6%.The 2 FNPs for GW2 and the one FNP for GS3 were precisely detected and corresponded to the genotypes obtained by PCR.The FNP for pi21 was not successful in the custom OPA.On the whole, the results indicate that the genotype data obtained from the custom Golden Gate OPA are consistent with those of the SNP microarray.The custom OPA enabled haplotype-based or FNP-based MAS for a total of 14 genes out of the 16 target genes.Because Gn1a and WFP significantly change the panicle shape and are suitable for haplotype-based selection, since their causal mutations are not known, three F2 populations segregating these 2 genes were selected and used for the verification of the SNP array in actual segregating populations.The genotypes of the 3 F2 populations were summarized in Fig. 
3.The 4 SNP markers, id1004195 to id1004310, co-segregated with GN1indel.One out of the 15 F2 plants from T65BPH25/26 x ST12 showed recombination between id1004146 and id1004198.One recombination was observed between id1004230 and id1004310 in the ST6 x IRBB4/5/13/21 F2 population.Three out of the 27 plants possessed recombination between the likely WFP genotypes and RM5493, and 1 recombination was observed within the SNP markers id8006925 and id8006944.We thus conclude that the SNP markers detected using the custom OPA showed a good co-segregation to PCR-based markers and sufficient for the use of MAS.MAS is a common technique used in rice breeding programs.To this day, PCR-based marker systems are the most commonly used for MAS.However, the PCR-based markers, in combination with electrophoresis have the disadvantage of low throughput.Only several markers at most, could be detected in one PCR reaction and conventional electrophoresis lane.This limits the application of the marker system for “foreground” selection only of a few target genes.The practical use of multiplex marker has long been awaited, but its realization has been prevented by the lack of technology that would cut both costs and processing time.Recently, “breeders” SNP arrays have been developed for rice .These arrays are designed to cover a wide range of cross combinations in rice however, they are only suitable for “background” selection and are not targeted for specific genes.Unlike the previously reported arrays , the SNP array developed in this study contains a SNP set targeting specific genes of interest, and first provide a platform for foreground MAS for multiple genes.The 95.2% match between the genotypic data obtained using microarray and the custom Golden Gate array is comparable with the previously reported 82% match .SNP genotyping using segregating F2 populations confirmed the accuracy of the custom array.The approach used in the present study can be used and applied to future breeding projects that require a new set of SNPs.The sequence diversity of rice is being intensively analyzed by various projects.Therefore, the basic SNP information that is similar to those generated in this study using microarray will be available through public databases in the future.Elimination of additional experimentation for SNP discovery will allow faster design and fabrication of custom arrays.In this study, functional nucleotide polymorphisms for GW2, GS3 and pi21 were directly included in the custom array.The FNPs for GW2 and GS3 can be detected in the custom array.However, FNPs for most of the target genes, even in the cloned genes, have not been identified as in the case of WFP.WFP was cloned as a gene controlling the number of primary branches per panicle, but the FNP responsible for increased number of primary branching remains unknown .In this case, it is difficult to find a DNA marker that would segregate with the useful allele.The haplotype selection approach is useful in such a case because the useful allele can be monitored using a set of SNP markers encompassing the target loci.Multiplexed SNP markers are considered to be suitable for this purpose, enabling a versatile SNP array work in various cross combinations.The SNP array constructed in the present study is useful in constructing a set of NILs that are suitable for trait evaluation because the SNPs can be used to select useful alleles in a wide range of genetic backgrounds.The tightly linked set of SNPs can also be used for dissecting “linkage drag”.In the 
present study, some recombinants within the haplotype regions were detected.This indicates that the haplotypes around the target genes can be utilized for the fine genetic dissection of regions near the target genes.A typical case of a linkage between a useful gene and a harmful gene was reported in .If a large segregating population is available, the SNP array can be readily used to construct an NIL with a very small introgressed chromosome segment from the donor parent.The simultaneous detection of useful alleles potentially allows a dramatic decrease in the labor and time required to develop pyramid lines.Development of DNA markers has allowed gene pyramiding.In the initial concept of gene pyramiding, near-isogenic lines for each of the target genes must be generated before the construction of pyramiding lines .This requires laborious, repeated selections and crossings to generate NILs.Additional effort is also necessary to combine multiple genes into one line.The SNP array in this study has the potential to cut the time required to pyramid multiple genes into a single variety by direct crossing of gene donors and MAS in the progeny.For example, a multi-parent advanced generation inter-cross population can be developed using relatively simple ways of crossing.From a MAGIC population, candidate lines with multiple useful genes can be selected using the SNP array.The low frequency of the desirable genotype can be overcome by the use of a large population.In addition, this method of direct pyramiding of useful genes will also contribute to increasing the genetic diversity in breeding materials.However, SNP detection is still too expensive for most rice breeders.Currently, the BeadXpress platform can process up to 96 samples at a time, and the cost is still too high for use in a breeding project.These limitations are potentially problematic when handling a large segregating population.The disadvantage of the GoldenGate system lies in its high initial cost and low flexibility.Although the cost for 1 genotype per sample is comparable to PCR-based methods, a single purchase of an OPA is quite expensive and not affordable for most local breeders.Nevertheless, the potential of SNP markers for automated and simultaneous detection remains advantageous and their use is expected to expand.It is expected that the above limitation will be overcome by the use of next-generation sequencers (NGS).However, targeted SNP detection with NGS still requires a complicated sample preparation method .On the other hand, one of the NGS-based genotyping methods, genotyping by sequencing (GBS), is becoming common.GBS enables multiplexing in both samples and markers but is not suitable for targeting useful SNPs.It is still necessary for geneticists and breeders to choose a suitable genotyping platform for their breeding project.Wide use of the GoldenGate array might be limited because of the cost problem, so the SNP array will be used for monitoring whether newly developed lines possess some of the useful alleles or not, which is informative for further gene pyramiding and MAS.To enhance rice production by new varieties, development of actual plant materials such as NILs is required to verify the effects of yield-improving genes.It should be considered that the effects of yield-enhancing genes such as Gn1a , WFP or APO1 are not yet confirmed in other genetic backgrounds or in actual farmers' fields.Therefore, these potential yield-improving genes should be tested in various genetic backgrounds.The “New Plant Type” approach was based on the 
concept of increasing the “sink size”, and the aforementioned yield-enhancing genes are considered to be associated with sink size.Furthermore, a gene for increasing rice “source” ability has recently been reported .Currently, scientists have a good technology for MAS, but efforts on actual material development are limited.The breeding project Wonder Rice Initiative for Food Security and Health was launched by the authors' group as an effort to provide pre-varieties to rice scientists and breeders worldwide.In this project, the authors envision incorporating multiple useful genes into recipient varieties and generating pre-varieties for distribution.The anticipated pre-varieties comprise two types of materials: breeding materials with uniform genetic backgrounds, which provide more convenience to rice breeders because they can evaluate the effect of the incorporated genes at their own breeding sites, and single lines carrying many useful genes, which can be used conveniently because multiple genes can be incorporated into target varieties with a single cross; in addition, the SNP information for the useful genes will be available.The versatile SNP array developed in this study will contribute greatly to facilitating these breeding activities.Both plant materials and SNP arrays will be made available to help improve varieties that are adapted to various regions in the world.
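The window-based marker selection described above (haplotype blocks of 200 kb upstream and downstream of each target gene, filtered on call rate and minor allele frequency before up to 22 loci per gene are retained) could be automated in its filtering step roughly as in the sketch below. This is a sketch under assumed table layouts; the actual selection in the study was done visually with Flapjack, and all file names, column names and thresholds here are hypothetical.

```python
# Hedged sketch of the haplotype-block filtering step; not the pipeline used in the study.
import pandas as pd

WINDOW = 200_000  # bp up- and downstream of each target gene, as defined in the text
MAX_LOCI = 22     # up to 22 SNP loci were retained per target gene

snps = pd.read_csv("microarray_calls.csv")   # assumed columns: snp_id, chrom, pos, call_rate, maf
genes = pd.read_csv("target_genes.csv")      # assumed columns: gene, chrom, start, end

blocks = []
for g in genes.itertuples():
    block = snps[
        (snps.chrom == g.chrom)
        & (snps.pos >= g.start - WINDOW)
        & (snps.pos <= g.end + WINDOW)
    ]
    # keep only high-quality, informative SNPs (thresholds are illustrative)
    block = block[(block.call_rate >= 0.90) & (block.maf >= 0.05)]
    blocks.append(block.assign(target_gene=g.gene).head(MAX_LOCI))

candidates = pd.concat(blocks, ignore_index=True)
candidates.to_csv("golden_gate_candidates.csv", index=False)
```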
DNA marker-assisted selection (MAS) has become an indispensable component of breeding. Single nucleotide polymorphisms (SNPs) are the most frequent polymorphisms in the rice genome. However, SNP markers are not readily employed in MAS because of limitations in genotyping platforms. Here the authors report a Golden Gate SNP array that targets specific genes controlling yield-related traits and biotic stress resistance in rice. As a first step, the SNP genotypes were surveyed in 31 parental varieties using the Affymetrix Rice 44K SNP microarray. The haplotype information for 16 target genes was then converted to the Golden Gate platform with 143-plex markers. Haplotypes for the 14 useful alleles are unique and can discriminate them from all other varieties. The genotyping consistency between the Affymetrix microarray and the Golden Gate array was 92.8%, and the accuracy of the Golden Gate array was confirmed in 3 F2 segregating populations. The concept of haplotype-based selection using the constructed SNP array was thus proven.
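The platform-concordance check reported in the article (per-SNP comparison of the microarray and Golden Gate calls, with OPA sites flagged when more than 10% of calls are missing) could look roughly like the sketch below. File layout and column names are assumptions; both call sets are assumed to have been exported to the same AA/BB coding.

```python
# Hedged sketch of the genotype concordance calculation between the two platforms.
import pandas as pd

array_calls = pd.read_csv("affymetrix_calls.csv", index_col="snp_id")        # hypothetical export
goldengate_calls = pd.read_csv("goldengate_calls.csv", index_col="snp_id")   # hypothetical export

snps = array_calls.index.intersection(goldengate_calls.index)
varieties = array_calls.columns.intersection(goldengate_calls.columns)

a = array_calls.loc[snps, varieties]
b = goldengate_calls.loc[snps, varieties]

both_called = a.notna() & b.notna()  # ignore positions missing on either platform
concordance = ((a == b) & both_called).values.sum() / both_called.values.sum()
print(f"genotype concordance: {concordance:.1%}")

# per-SNP missing-data rate in the Golden Gate calls, used to flag low-quality OPA sites
missing_rate = b.isna().mean(axis=1)
print(missing_rate[missing_rate > 0.10])
```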
20
A new gamma spectroscopy methodology based on probabilistic uncertainty estimation and conservative approach
The gamma spectroscopy technique is commonly used to assess the activity of gamma emitters for the disposal of radioactive waste which needs to be conservatively evaluated, i.e. reasonably overestimating the radionuclide activity results.A framework has been previously developed to quantify and reduce uncertainties of the efficiency curves generated with the ISOCS software which originate from the geometry model.The aim of the present paper is to study the influence of the geometry parameters on the determination of the efficiency calibration uncertainties for the radiological characterization of material.This study is also applicable to other activated material and therefore, contributes to the basis which is necessary for a scientifically sound uncertainty- and associated risk evaluation of the spectroscopy workflow used in gamma spectroscopy laboratories."Gamma spectroscopy is today's standard technique to quantify the residual activity of gamma emitters in various items, ranging from small scale samples in a laboratory to large items such as waste containers.In addition to the activity values of the radionuclide inventory, an uncertainty estimation is also needed to fully understand the quality of the measurement, including the uncertainty of the efficiency calibration of the instrument.Based on the hypothesis of having appropriate measurement conditions the uncertainty of the efficiency calibration is the dominant component to the total propagated uncertainty for many types of measurements such as complex geometries and low energy gamma rays conditions.To properly interpret the quality of a gamma spectroscopy measurement, the uncertainty estimation of the geometry model including the activity distribution, object dimensions, material type and composition as well as material densities is necessary.Any deviation of the modelled calibration geometry from the actual one contributes to the total uncertainty.In the following sections, we will analyse the effect of varying geometry model parameters on the efficiency calibration curves and, as a result, on the final estimation of the activity."The first section presents an overview of the tools used at CERN's gamma spectroscopy laboratory. 
"Then, a sensitivity and uncertainty analysis is performed on a sample originating from historical waste of CERN's Large Electron Positron collider.We study the impact of varying the aforementioned parameters on the assessed activity values.Finally, we investigate a curious change of trend on the efficiency curves that is observed when varying the dimensions of the object."Since 2016, nearly 1300 tons of CERN's historical waste have been considered as candidates for clearance in Switzerland due to their very low activation level.Consequently, four large-scale projects have been initiated to carry out this process for various types of equipment or material.One of these projects, called CLELIA, aimed to eliminate 149 tons of the old LEP aluminium/lead vacuum chambers.The vacuum chambers had been stored since 2000, the date of the decommissioning of LEP to allow for reusing the tunnel and associated infrastructures for the well-known Large Hadron Collider.For material coming from controlled or supervised areas, Switzerland provides the possibility of clearance from regulatory control if three criteria are met.The first criterion is based on surface contamination for which the values must be below surface contamination limits defined in the Swiss Radiation Protection Ordinance, ORaP.At CERN the measurements are performed via direct measurement using a certified mobile contamination monitor with a plastic scintillation detector to measure α and β/γ contamination.In addition, smear tests are performed and controlled by a low-background α/β counting system.The second criterion concerns the ambient dose equivalent rate H*.In order to be considered for clearance, the material must show an H* value below 0.1 μSv/h at a distance of 10 cm after subtraction of natural background."For the clearance of CERN's historical waste, the measurements are performed using a certified 6150 AD61 portable Geiger Muller counter coupled to a γ/X probe, a plastic scintillator coated with ZnS and associated with a photomultiplier.This probe allows us to measure very low H* values in the range of 10 nSv/h.Equation: Summation rule for a mixture of radionuclides.When the activity result is below the MDA, then the activity value is assumed to be equal to the MDA value in order to evaluate the summation rule conservatively.In order to fulfil this requirement, samples are measured by gamma spectroscopy performed at the CERN radio-analytical laboratory using High Purity Germanium coaxial detectors together with the ISOCS calibration software for absolute efficiency calibration allowing for producing accurate quantitative gamma assays of almost any sample type and size.However, samples taken from historical waste can have quite complex geometries.In order to satisfy the need of ensuring conservative specific activity measurements when characterizing material coming from controlled or supervised areas, a specific sample with a complex geometry taken from the CLELIA project was selected as a candidate for this study.This sample consisted of the extremities of supports of a so-called “non-evaporable getter” used in the final pumping stage to reach the vacuum level required to obtain a beam lifetime of 20 h.These devices, made of copper and stainless steel, were located inside most of the LEP vacuum chambers.According to the current working procedures of the gamma spectroscopy laboratory, the samples are measured with the detector facing the larger surface of the sample.The efficiency calibration is performed using ISOCS/LabSOCS.A 
more detailed description is provided in the next section.For complex geometries, as shown in Fig. 1, the procedure is to model them with a box or cylindrical envelop.Hence, the modelled volume is larger than the actual physical volume of the sample.The next section aims at determining the uncertainties related to the envelop geometry as well as the other parameters that affect the efficiency calibration curve such as source distribution, material composition, etc.The present systematic study is based on the sample described above and composed of three extremities taken from three LEP vacuum chamber getter strip supports that had been identified as candidates for clearance within the CLELIA project.The physical characteristics of the sample are shown in Table 1 and the corresponding ISOCS/LabSOCS model is shown in Fig. 2."The gamma spectroscopy measurements were carried out at CERN's radio-analytical laboratory.The sample was measured with a MIRION HPGe detector 10 GX5019."The detector's relative efficiency at 1332 keV is 57% and the Full Width at Half Maximum is of 1.25 keV at 122 keV and 1.93 keV at 1.33 MeV.The gamma radiation detector utilizes a high purity germanium crystal for high resolution and high efficiency gamma radiation detection.To create the reference ISOCS calibration file, one needs to know the physical parameters of the object, such as the dimensions of the source of radiation and its container, as well as the material composition of the sample itself and its container.The model based on these reference parameters will be denoted as “Reference model” in the following discussion.The sample shown in Fig. 2, was modelled using the ISOCS/LabSOCS 3D geometry composer with the “Simplified Box” template, of which the parameters are shown in Fig. 3.The gamma spectroscopy measurement was performed on May 28th, 2018 with a live time of 10 000 s.The acquired spectrum is shown in Fig. 4 and the two peaks of Co-60 are highlighted in the spectrum.Consistent activity values were derived from the two gamma lines at 1173 keV and 1332 keV and Fig. 
5 illustrates the resulting activity values.Equation: Activity result of the sample using the efficiency calibration of the reference model.The uncertainties are reported at 2 sigma.The activity uncertainties are composed of the following contributors: live-time correction due to the detector's dead time, systematic uncertainties of the peak area fitting, geometry model uncertainties due to speculative parameters, and ISOCS inherent uncertainties including detector validation.Hence, in the following sections of the document, we will focus on the relative change of the efficiency curve as a result of varying the geometry parameters.Due to the large range of variable sample geometries that are commonly encountered, CERN's gamma spectroscopy laboratory uses ISOCS and LabSOCS from MIRION for creating efficiency calibration curves.The ISOCS/LabSOCS software overcomes the limitations of traditional efficiency calibration techniques, and allows for practical modelling and accurate assay of almost any object which needs to be measured via gamma spectroscopy.The creation of the ISOCS/LabSOCS efficiency calibration curves is performed by creating a geometric model of the sample to be measured.The geometric parameters are, for example, the dimensions, the composition and the activity distribution inside the measured sample.These parameters are not always precisely known.The ISOCS Uncertainty Estimator tool, developed by MIRION, allows for the stochastic perturbation of these parameters to quantify the effect on the efficiency calibration curves.By creating a set of perturbed models, IUE generates the associated efficiency calibration curves for the sample to be measured.Consequently, uncertainty and sensitivity analysis can be done with IUE.An example of IUE's interface is provided in Fig. 6.The output of IUE is a collection of model efficiencies and parameters in four separate files: ".GIS, .ECC, .UGS, and UEC".In order to fully exploit the results, we developed a Data Analyzer framework named "GURU".For this study, it is necessary to associate the model parameters with the efficiency values.When gamma spectroscopy measurements are performed on samples, the knowledge of the geometry's description, including dimensions, position with respect to the detector, material composition, hot-spots or relative concentration of activity, is often uncertain, especially for the last two parameters.GURU offers the possibility to find a model that matches the actual scenario best, based on the compilation of different gamma spectroscopy results.The size of the sample given as distance d1.2 in Fig. 
7 was varied randomly using a uniform distribution between 200 mm and 300 mm.The associated reference dimension was 250 mm.The mass of the sample is kept constant during the variation of the models.Hence, a corresponding material density is calculated for each model.For our case, the results show that around 65 keV, overestimating the length d1.2 of the sample causes an underestimation of the efficiency and hence an overestimation of the activities.Consequently, overestimating the length d1.2 leads to more conservative activity results for radionuclides whose gamma line emissions are above the value listed here."An activity uncertainty caused by the variation of the parameter d1.2 can also be calculated by associating the standard deviation of the efficiency curves values as an additional uncertainty to the reference model's uncertainties.The envelop efficiency can vary within ±8%.For our case, around 65 keV an inversion of the efficiency trend can be seen as the sign is changing:Below 65 keV, a negative perturbation of the width induces a negative perturbation of the efficiency.Moreover, at 60 keV, a positive perturbation of the width in the range of induces an increase of efficiency leading to a sensitivity maximized around +5%.Then, a perturbation of more than 5% of the width induces a decrease in the efficiency,Above 65 keV, a positive perturbation of the width induces a negative perturbation of the efficiency.Similarly to the previous case, the sensitivity at 70 keV reaches a maximum for a perturbation of −5% of the width.Then, for variations below −5% of the width, the efficiency decreases.These effects will need further investigations as they could depend on many correlated parameters such as:the detector field of view, or,the competition between the effects of the sample auto-attenuation and the change of the geometrical efficiency due to the increase of the width of the sample and thus, a resulting change of the solid angle between the sample and the detector.All the effects of the various perturbations are summarized in Table 8.The maximum standard deviation and the maximum values of the efficiency curves’ envelop, across the energy range of interest, are given in percent.From these values, one can deduce that the most impacting modelling parameters are:the source distribution, i.e., the representation of the heterogeneous distribution of the activity concentration inside the sample,the sample dimensions, and,the material type and composition.Equation: Calculation of standard deviation and envelops from the independent variations of all the parameters.Correlations are not considered.Moreover, the bias identified in Fig. 
18 is mainly due to the material type and source distribution variations.For instance, the depth and height variations cause a null bias approximately as they compensate together.The distance d1.4 was varied uniformly between 20 mm and 100 mm with the reference value equalling 60 mm.The mass of the sample is kept constant during the variation of the models.Hence, a corresponding apparent density is calculated for each model.For all energies, the results show that overestimating the distance d1.4 leads to a more conservative activity result.An uncertainty caused by the variation of the parameter d1.4 can also be calculated by associating the standard deviation of the efficiency curves values as an additional uncertainty to the reference model uncertainties.The distance d5.1 was varied uniformly from 140 mm to 180 mm with the reference value being 160 mm.The mass of the sample is kept constant during the variation of the models.Hence, a corresponding material density is calculated for each model.The results show that above around 100 keV, overestimating the distance d5.1 of the sample causes an underestimation of the efficiency and hence an overestimation of the activities.Hence, overestimating the length d5.1 leads to a more conservative activity results for radionuclides whose gamma line emissions are above the value listed here.An uncertainty caused by the variation of the parameter d5.1 can also be calculated by associating the standard deviation of the efficiency curves values as an additional uncertainty to the reference model uncertainties.The same effect as in paragraph 4.1 is observed for the efficiency trend in the range from 80 to 150 keV as shown in Fig. 11.Note that in this case, the inflexion point is centred around 100 keV for the height perturbation.We note that the energy value of the inflexion point increases when more attenuation material is present in the sample.This can easily be seen in the case of the perturbation of the width.The material composition of the sample was varied uniformly to include the following materials: 90%SS-10%Cu, 80%SS-20%Cu, 70%SS-30%Cu, 60%SS-40%Cu, 50%SS-50%Cu.The reference material is made of 50% of stainless steel and 50% of copper.The dependence of the efficiency curve on variations of the material composition is shown in Table 5 and Fig. 13.From these results, we conclude that the material composition has a more significant effect at low energy and is insignificant at higher energies because of similar behaviour of mass attenuation effects.In this section we study the example of an activity that is heterogeneously distributed in the sample.The hot-spot parameters were varied uniformly as shown in Fig. 14.The volume of the reference hot-spot and its position inside the sample are chosen arbitrarily.Without choosing an arbitrary hot-spot, IUE is not able to construct perturbed models containing hot-spots.The activity concentration of the hot-spot was varied from 1 to 2.5 times with respect to the activity of the remaining sample."As it can be seen from Fig. 
14, the uncertainty caused by the variation of the activity concentration induces an additional contribution compared to the reference model's uncertainties.The uncertainty is higher for low energies as we observe variations between +40% and −20% in the efficiency variations below 100 keV and between +15% and −5% above 100 keV.The source to detector distance d9.1 was uniformly varied within ±1 cm.For all energies, the results show that overestimating the distance d9.1 leads to a more conservative activity result."An uncertainty caused by the variation of the parameter d9.1 can also be calculated by associating the standard deviation of the efficiency curve values as an additional 0.5% to the reference model's uncertainties that is consistent across all energy values. "A lateral shift of the detector's position with respect to the source was modelled by uniformly varying this quantity in a range of ±3 cm.We see in Fig. 16 that moving the detector laterally reduces the efficiency at all energies which tends to overestimate the corresponding activities.Because of the symmetrically varied offset no trend is observed as can be seen in the right graph of Fig. 16.The source to detector lateral shift was uniformly varied within ±3 cm.Similar conclusions can be drawn as in the X-axis lateral shift.In this section the uncertainty is studied when varying several parameters simultaneously such as the volume, the material type, the definition of the hot-spot and the distance between the source and the detector.In this section we allowed all the ISOCS parameters to vary at the same time in the same parameter ranges of the previous section.As it can be seen from Fig. 18, the uncertainty caused by the variation of all the parameters induces an additional contribution compared to the uncertainties of the reference model.The standard deviations of the relative activity difference are given in Table 7.As mentioned before Figs. 
9 and 11 show an inflexion point of the relative sensitivity of the efficiency with respect to the width and height of the sample.This change of the efficiency trend implies a non-monotonous behaviour of the perturbed efficiencies.In this paragraph we discuss the result of an analysis performed to understand the origin behind this behaviour and its dependence on energy and other model parameters.We suspect that in the case of perturbations of the width and height the inflexion point comes from a competition between the attenuation and the change of the solid angle covered by the sample with respect to the detector:In the calculations, the sample mass is kept fixed."Hence, an increase of the length implies a decrease of the sample's density.This decrease of density leads to a decrease of the attenuation within the sample."An increase of the sample's length will in addition cause a decrease of the solid angle covered by the sample with respect to the detector.These conclusions are verified by comparing two similar perturbations done at fixed density and fixed mass parameters.We can see in these figures that the inflexion point is not present anymore when the density has been fixed.The aim of this section is to present the dependency of the inflexion point on various parameters.In addition we will also describe the contribution of each parameter.We have previously observed in section 4.1 and 4.3 that the inflexion point can vary from 65 keV in section 4.1 to 100 keV in section 4.3.In what follows, we have performed width perturbations for different densities and source-to-detector distances to illustrate the effects of these parameters on the inflexion point.The effect of density variation on the inflexion point are studied with a fixed sample mass.Except for the sample mass all the other geometry parameters are kept identical to the reference model.We tested four different reference model densities and constructed sensitivity plots as shown in Fig. 20.We observe that when the density increases, the inflexion point moves toward higher energies."The inflexion point's energy values are shown in Fig. 20 for each assumed density value of the sample.In the two top graphs of Fig. 20, representing a density between 0.001 and 0.1 g/cm3, the inflexion point is around 45 keV.At higher densities, from 1 to 5 g/cm3, the inflexion point can vary from 100 to 2000 keV."In fact, increasing the sample's density tends to increase the attenuation effects contribution to the efficiency.The effects of the source-to-detector distance variation on the inflexion point are also studied with a fixed sample mass.Except for the source-to-detector distance, all the other geometry parameters are kept identical to the reference model.Here, we also investigated four different distances and constructed the sensitivity plots shown in Fig. 21.We observe that when the distance increases, the inflexion point tends to move toward the high energies.The inflexion point energy values are shown in Fig. 21 for each source-to-detector distance value.In the top left graph of Fig. 
21, at distance below 10 mm, the inflexion point is below 60 keV.However, for distances of 100, 250 and 500 mm, we find inflexion point energy values centred around 90, 300 and 3000 keV respectively.Hence, an increase of the source-to-detector distance leads to an increase of the attenuation effects’ contribution to the efficiency as the impact of a varying solid angle becomes more negligible.A framework has been developed and used to perform both sensitivity and uncertainty analyses of an enclosing geometry modelling method used for gamma spectroscopy.The framework is based on the MIRION ISOCS/LabSOCS tool as well as the IUE one.A dedicated tool named “GURU” was developed for further data processing and analysis.The objective was to standardize the analysis procedure for future studies at CERN.We performed a sensitivity analysis to assess the impact of the different geometric parameters on the measured activities.We showed that the activity distribution within the sample is the most impacting parameter.We have studied the effects of varying different geometry parameters on the quantification of activities by gamma spectroscopy in order to evaluate if an envelop geometry is conservative in terms of determined activity values.We have observed the following for this specific sample:"Overestimating the sample's envelop volume causes an overestimation of the activity for energies above a threshold, depending on the distance between source and detector and the activity of the sample.Note that increasing the density and the source-to-detector distance implies an underestimation of the activities for higher energies,The distribution of the activity concentration, material types, and dimensions are the most impacting parameters on the efficiency curves,When varying all the parameters simultaneously while fixing the sample mass, the study presents a distribution of the efficiencies relative difference with respect to the reference model with the following:A bias of approximately -10% between the reference model and all the perturbed models.Hence the reference model underestimates the activities by 10%.A standard deviation of 12% needs to be taken into account due to uncertain geometry parameters.It is worth noting though that this uncertainty estimate is penalizing since we have utilized large perturbation ranges for the parameters with an assumed uniform distribution.The uncertainties due to the geometry parameters variations could be added in quadrature to the intrinsic efficiency calibration curves in order to propagate them correctly to the activity results with the gamma spectroscopy acquisition and analysis software."Finally, we investigated the reasons why an overestimation of the sample's envelop geometry does not impact the efficiency monotonously over the energy range.The main effect is caused bythe density dilution inside the sample.The energy at which this sensitivity changes is mainly driven by the density and the distance of the source to the detector.
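The perturbation analysis described above (relative differences of the IUE-generated efficiency curves with respect to the reference model, with the resulting standard deviation added in quadrature to the intrinsic calibration uncertainty) can be summarized numerically as in the sketch below. This is not the GURU tool itself; the input layout, the intrinsic 8% term and all file names are assumptions for illustration.

```python
# Hedged sketch of post-processing IUE efficiency curves; not the GURU implementation.
import numpy as np
import pandas as pd

# assumed layout: one row per model ("reference" plus perturbed models), one column per energy (keV)
curves = pd.read_csv("perturbed_efficiencies.csv", index_col="model")
reference = curves.loc["reference"]
perturbed = curves.drop(index="reference")

rel_diff = (perturbed - reference) / reference   # relative efficiency difference per energy
bias = rel_diff.mean()                           # systematic offset of the envelope model
geometry_sigma = rel_diff.std()                  # 1-sigma geometry-parameter contribution

intrinsic_sigma = pd.Series(0.08, index=reference.index)       # assumed ISOCS/detector term
total_sigma = np.sqrt(geometry_sigma**2 + intrinsic_sigma**2)  # combined in quadrature

print(pd.DataFrame({"bias": bias, "geometry": geometry_sigma, "total": total_sigma}))
```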
The gamma spectroscopy technique is commonly used in many applications to evaluate the activity of gamma emitters in a given sample. This assessment of activity is of particular interest for the disposal of radioactive waste or for clearance purposes. However, for these specific applications, one needs to show that the evaluated activities are reasonably conservative. This paper shows an application of a methodology developed to quantify the efficiency calibration curve uncertainties originating from a test case sample and its associated geometry modelling. Therefore, the effects of enclosing geometries on the activity measurement results are discussed. The purpose is to provide an example of uncertainty analysis for an approach that could be applied to other studies in which a conservative estimation of the activity is required.
21
Estimation of own and cross price elasticities of alcohol demand in the UK-A pseudo-panel approach using the Living Costs and Food Survey 2001-2009
The consumption of alcohol and the related health and social harms are an issue of extensive policy debate in the UK and many other countries. Price-based policy interventions, such as minimum unit pricing and increases in taxation, have been actively considered by the UK and Scottish governments, who aim to reduce harmful alcohol consumption and consequently various alcohol-related harms among the population. The estimation of price elasticities of alcohol demand is essential for the appraisal of such price-based policy interventions, because they link the prices of alcohol, which these interventions directly affect, and the demand for alcohol, which such interventions aim to reduce. It is important to estimate elasticities for different beverage types and different trade sectors for policy appraisals, because differential consumer preferences mean elasticities may vary across these categories and because prices and taxes differ between beverage types and sectors. Since changes in the price of one beverage type/sector could affect demand for others, it is also important to estimate both own-price and cross-price elasticities. That is, we aim to estimate own-price elasticities to quantify the percentage change in the demand for one type of alcohol due to a 1% change in the price of this type of alcohol, and cross-price elasticities to quantify the percentage change in demand for one type of alcohol due to a 1% change in the price of another type of alcohol. The cross-price elasticities estimated also allow us to identify whether two types of alcohol of interest are substitutes or complements. Previous meta-analyses have focused on differential elasticities by beverage type and demonstrate that beer, wine and spirits have different own-price elasticities, with beer appearing to be less elastic than wine and spirits. Cross-price elasticities, especially between off- and on-trade, are less widely studied. Previous studies suggested that different beverage types can be either substitutes or complements, whilst off-trade purchasing and on-trade purchasing were typically substitutes, albeit with some exceptions. Few UK studies have investigated cross-price elasticities between off- and on-trade alcohol. Huang et al. examined own- and cross-price elasticities for 4 beverage categories using aggregate time series data in the UK from 1970 to 2002. Collis et al. used a Tobit approach to model own- and cross-price elasticities for 10 beverage categories using household-level repeated cross-sectional data in the UK from 2001/2 to 2006. When modelling the effects of minimum unit pricing for alcohol, Purshouse et al.
used the same cross-sectional data to estimate own- and cross-price elasticities for 16 beverage categories using an iterative three-stage least squares regression on a system of 17 simultaneous equations. A recent study examined long-run own- and cross-price elasticities specifically for off- and on-trade beer using aggregate time series data from 1982 to 2010. The key methodological limitation of these studies is the use of either national aggregate time series data, which has the problem of small numbers of observations and a lack of granularity, or cross-sectional data, which potentially has severe endogeneity problems. The ideal data source would be longitudinal panel data where individuals or households have repeated observations on both purchases and prices paid over time. Such individual-level panel data would have the advantage that individuals themselves can be used as controls to account for unobserved heterogeneity between individuals, and stronger causal inferences can be made. However, individual-level panel data is generally more difficult and costly to obtain than cross-sectional or aggregate time series data. Compared to repeated cross-sectional data, it also suffers more from nonresponse and attrition and normally has a smaller sample size and shorter time series. One solution to the lack of UK individual-level panel data is to use repeated cross-sectional data to construct a pseudo-panel. A pseudo-panel is constructed so that population subgroups rather than individuals become the unit of analysis. Subgroups are defined by a set of characteristics which do not change, or remain broadly constant, over time. It is assumed that although the individuals within groups change between waves of cross-sectional surveys, the group itself can be viewed as a consistent panel 'member' over time. Different ways to define the subgroups of the pseudo-panel can be tested, for example having larger numbers of groups, each with a smaller sample size but greater within-group homogeneity, or smaller numbers of groups, each with a bigger sample size but more within-group heterogeneity. Standard techniques for analysing panel data are then applied. The pseudo-panel approach has been applied in many empirical studies estimating elasticities of demand for various goods; however, it has not been used to estimate elasticities of alcohol demand. This study aims to apply the pseudo-panel approach using the LCF data from 2001/2 to 2009 to estimate the own- and cross-price elasticities of 10 categories of beverage in the UK. The key research questions are: (1) What are the own- and cross-price elasticities for different types of alcohol in the UK? (2) How do the estimates compare with previous estimates from the literature? (3) How robust are these estimates to different model specifications and alternative constructions of the pseudo-panel? The LCF, previously known as the Expenditure and Food Survey, is a national UK survey sponsored by the Office for National Statistics and the Department for Environment, Food and Rural Affairs. The LCF is a cross-sectional survey of private households, collecting information on purchasing at both the household and individual level. Data on the purchasing of non-durable goods including alcohol is collected via a confidential two-week personal diary for individuals aged 16 and over. In the UK, around 12,000 households per year are selected and the response rate is typically just over 50%. At the time of the analysis, LCF data was available for the 9 years from 2001 to 2009 covering 107,763
individuals in 57,646 households in the UK.We obtained the datasets from the UK Data Archive at the University of Essex and detailed data sources are listed in Appendix 1.Individual-level quantities of alcohol purchased are not available in the standard version of the dataset.However, via a special data request to DEFRA, we obtained anonymised individual-level diary data on both expenditure and quantity for 25 types of alcohol, e.g., off-trade lagers and continental beers.For this analysis, the 25 types of alcohol were grouped into 10 categories.The spending during the diary period and the corresponding purchase level were derived for each of the 10 categories of alcohol for each individual.Alcohol units were calculated by multiplying the recorded volume of product and the alcohol by volume for each of the 25 beverage types.For each individual, mean pence per unit paid was calculated for each beverage type by dividing the total spending by the total units purchased.Outliers were defined as individuals who pay extremely high or low PPU for any of the 10 types of alcohol and were excluded from the analysis.It is important that the subgroups in a pseudo-panel are defined by characteristics that are time-invariant such as the year of birth, gender and ethnicity.A trade-off also needs to be considered when deciding the number of subgroups in a pseudo-panel: a larger C increases the heterogeneity of the pseudo-panel by increasing the variations between subgroups, but also decreases the average number of individuals per subgroup resulting in less precise estimates of the subgroup means.Given a fixed total number N of individuals in the repeated cross-sectional dataset over time periods T, by definition, N = C × nc × T or N = C × nc × T*, where T* represents the mean number of repeated observations per subgroups.A large nc is important for the necessary asymptotic theory to be applicable to the pseudo-panel approach and previous empirical applications of the pseudo-panel approach normally have nc over 100.In the base case, a pseudo-panel with 72 subgroups was defined by 12 birth cohorts, gender and 3 socioeconomic groups – higher, middle and lower.The resulting average number of individuals per subgroup, or nc, is 140 with N = 90,652, C = 72 and T = 9.Table 1 summarises the characteristics of the subgroups.Subgroup observations with less than 30 individuals were excluded from the analysis to ensure robust estimates of subgroup mean statistics.For example, for the panel member of lower-income males who were born between 1940 and 1944, 5 out of 9 observations were excluded.Three alternative ways to construct subgroups were tested in sensitivity analysis: 96 subgroups defined by birth cohorts, gender and 4 socioeconomic groups, 48 subgroups defined by birth cohorts, gender and 2 regions in the UK, and 96 subgroups defined by birth cohorts, gender and 4 regions in the UK.The monthly retail price index in the UK was used to derive real term prices of alcohol and income, with December 2009 chosen as the base period.The income variable used in this study is the household gross weekly income which has been consistently collected in the LCF from 2001/2 to 2009.Alcohol consumption or purchasing estimated from self-reported survey data generally suffers from underreporting.Compared to the UK sales clearance data, the coverage of the LCF ranges from 55% to 66% over the period 2001 to 2009.We estimated beverage specific coverage rates for each year and applied these factors to adjust the alcohol purchase 
quantities for each individual in the LCF. For each observation of each subgroup, the mean units purchased of each of the 10 types of alcohol, denoted by Cijt, was used as the dependent variable, where i and j represent the subgroup and the type of alcohol respectively, and t represents the time period. The main independent variables are the mean PPUs for the 10 beverage types, which are specific to each subgroup and time period, denoted by Pijt, and the subgroup's mean income, denoted Incomeit. Four other time-variant independent variables were also tested, namely the proportions of individuals having children, being married, being unemployed, and smoking, denoted by KIDit, MRDit, UNEit, and SMKit respectively. Year dummies were included to control for the annual trend and any potentially omitted independent variables that change linearly over time. The square of the mean age of the subgroup was also tested to account for a potentially non-linear relationship between alcohol purchase and age. REMs (random effects models) assume no correlation between the unobserved individual effect and the independent variables, i.e., Corr(individual effect, Pijt) = 0 and Corr(individual effect, Incomeit) = 0, whereas FEMs (fixed effects models) allow for arbitrary correlation between the individual effect and the independent variables. In this study, the individual effect refers to the specific effect for each defined subgroup in the pseudo-panel. It has been argued that FEMs are the natural choice for pseudo-panel data when subgroup averages are based on a large number of individuals. The Hausman test was used to test whether the underlying correlation structure favoured the assumption of either FEMs or REMs. OLS models do not account for the longitudinal nature of the data and were tested only for comparative purposes. Models were fitted separately for each type of alcohol. All models were fitted using the STATA/SE 12.1 software. To account for the different sizes of the subgroups, weighted FEMs and OLS models were applied using the mean number of individuals within a subgroup, or nc, as weights. Hausman tests indicate that correlation exists between the independent variables and unobserved individual effects for off-trade beer and wine, and for all five on-trade beverages, at the 0.05 significance level. On this basis, we reject the null hypothesis and conclude that the FEMs are more appropriate for modelling these data. The choice of FEMs also agrees with previous literature. Table 2 summarises the estimated coefficients, standard errors, and statistical tests for the FEMs for the 10 alcohol categories, and Appendices 5-1 to 5-10 present and compare the results for FEMs, REMs and OLS models. F-tests suggested that non-PPU/income independent variables are jointly significant for the majority of FEMs tested. The final chosen base case models were FEMs controlling for prices, income, year dummies, age squared, and the proportions of individuals having children, being married, being unemployed and smoking (a schematic form of this specification is sketched below, at the end of the discussion). Correlation among the 10 price independent variables was a concern and, if present, may bias the model estimates. The correlation matrix was calculated and it shows only weak to moderate correlations. The comparison of results from FEMs, REMs and OLS models suggests that different model specifications give broadly similar estimates, both in terms of the positive/negative signs and their statistical significance. For example, the estimated own-price elasticities for off-trade beer range from −0.980 to −1.105 for the three model specifications, with all estimates statistically significant. Estimated own- and cross-price elasticities for the 10 types of alcohol are presented in Table 3
using the base case models.The estimated own-price elasticities are all negative and 8 out of 10 are statistically significant; off-trade spirits and on-trade RTDs being the exception.The estimates range from −0.08 to −1.27.In the off-trade a wide range of elasticities was seen with beer being most elastic after cider, followed by RTDs, wine and spirits.In the on-trade, elasticities are generally more similar across beverage types, with spirits being most elastic, followed by wine, beer, cider and RTDs.For wine and spirits, the estimated own-price elasticities in the off-trade are smaller than in the on-trade.The opposite is observed for beer, cider and RTDs.The estimated cross-price elasticities were a mix of positive and negative signs and only 6 out of 90 were statistically significant, among which 5 out of 6 have positive signs.F-Tests showed cross-price effects are jointly significant for the demand for on-trade wine and spirits, using a significance level of 0.05, and for on-trade beer, using a significance level of 0.1.The magnitude of the estimated cross-price elasticities was much smaller than that of the own-price elasticities.If we only focus on central estimates, most of the estimated cross-price elasticities of on-trade demand with respect to off-trade prices are positive, which appears to indicate some level of overall substitution effect, i.e., if prices fall in the supermarkets people appear to spend more in the pubs and bars.Using the base case FEMs, three alternative methods for creating subgroups were tested.Appendix 6 compares the estimated own-price elasticities using these methods and shows that these are broadly similar.For example, the own-price elasticity for off-trade beer was estimated to be −0.98 in the base case, −1.03 for the 96 subgroups defined by 4 social groups, −1.12 for the 48 subgroups defined by 2 regions, and −1.11 for the 96 subgroups defined by 4 regions.This suggests that the estimated elasticities are reasonably robust to different subgroup definitions.This is the first study to utilise a pseudo-panel approach to estimate price elasticities of demand for alcohol.The final base case FEMs enables estimation of own- and cross-price elasticities for 10 different beverage categories.This granularity is essential for detailed analysis of pricing policies which can affect the various beverage categories differentially.The estimated elasticities are not directly comparable with most previous estimates because the data used is from recent UK population surveys, and because the beverage categories included are more detailed than most previous studies which tend not to separate cider and RTDs, or consider off- vs. 
on-trade differences. Nevertheless, the estimated own-price elasticities are broadly in line with earlier estimates. Three recent meta-analyses estimated that the simple means of reported elasticities are −0.45 to −0.83 for beer, −0.65 to −1.11 for wine and −0.73 to −1.09 for spirits, while the standard deviations of individual estimates for the 3 beverage types are 0.46, 0.51 and 0.37 for beer, wine and spirits respectively, which demonstrates significant variation in estimates. The simple averages of the beer, wine and spirits own-price elasticities estimated in this study are −0.88, −0.63 and −0.49, which are all within one standard deviation of any of the three mean estimates from the meta-analyses. In the on-trade, a similar pattern is observed in this study as in previous meta-analyses, in that beer appears to be less elastic than wine or spirits. However, this pattern is not observed in the off-trade, where it was found that beer is more elastic than wine and spirits. Overall, the estimated own-price elasticities are broadly in line with historical estimates, and most modelled beverage types are found to have significant negative elasticities, suggesting the pseudo-panel approach is a valid technique for deriving alcohol elasticities. It is more challenging to compare the estimated cross-price elasticities with previous estimates, especially when the beverages are separated by off- and on-trade, because there are few existing studies for comparison. Out of our 90 estimated cross-price elasticities, only 6 are statistically significant, which might be attributable to chance effects. However, the estimation of cross-price elasticities is still useful because the estimation of own-price elasticities is improved by controlling for cross-price effects, and because cross-price effects can be jointly statistically significant, as has been found in our study for on-trade wine and spirits. The estimated cross-price elasticities appear plausible regarding the expected signs and magnitudes, and they enable quantified estimates of cross-price effects when appraising policy interventions. There are several advantages to the pseudo-panel approach. Previous analyses applying cross-sectional models to cross-sectional data are likely to have substantial problems with endogeneity. Those time-invariant variables that are omitted from the model and are correlated with alcohol prices will be uncontrolled for in such studies. For example, quality preferences are likely to vary considerably between individuals, affecting both price and quantity purchased, and cross-sectional methods would wrongly attribute these variations to prices. The FEM used in our base case substantially reduces endogeneity problems because all time-invariant independent variables, observed or not, are controlled for at the defined subgroup level. Another potential problem of using cross-sectional data relates to the observation interval. It has been observed that the length of the observation interval may have a significant impact on the magnitude of the resulting elasticity estimates, even for genuine panel data methods, and it has been suggested that this could be due to inventory behaviour. The LCF data has an observation interval of two weeks; however, the pseudo-panel approach solves this issue through the use of subgroup average purchase quantities, rather than individual purchase quantities, thus smoothing out the short-term irregular purchases that constitute inventory behaviour. The use of subgroup average purchase quantities also avoids the problem of excess zero
alcohol purchasing observations in cross-sectional data. Nevertheless, panel models cannot remove all endogeneity problems. Price variables could be endogenous due to simultaneity, because not only is the purchase level dependent on the price, but the price could also potentially be dependent on the purchase level. It has been found that a heavy drinker who spends a bigger proportion of their income on alcohol tends to choose 'lower quality' beverages with a low PPU, while a lighter drinker with a similar income tends to choose 'higher quality' beverages with a higher PPU, perhaps for better taste or a more convenient container size. The LCF data does not provide brand or packaging data; therefore, the panel models have not controlled for brand and packaging preferences, which may change over time. The split of off- and on-trade beverages and the separation of cider and RTDs in this analysis may alleviate the problem to some extent, but the issue remains a concern. In this study, we used self-reported prices, i.e., the prices paid by individuals. In theory, elasticities are defined as the change in demand due to a change in price, where the price implicitly means the price faced, rather than the price paid. As far as we know, no survey has attempted to collect primary data on prices faced. However, given current data and evidence, we are clear that the pseudo-panel approach is a substantial advance over, and a better alternative to, cross-sectional methods. When constructing subgroups in pseudo-panels, we assumed that the socioeconomic status and the region in which people live do not generally change over time. While the validity of these assumptions may be questioned, we think they reasonably hold given the limited time period of the data and the large size of the regions. Furthermore, the similarity of the results and conclusions obtained from the base case and sensitivity analyses is reassuring. Models tested in this study are static, without the inclusion of lagged dependent variables. It has been suggested that the inclusion of lagged dependent variables may compromise the explanatory power of other independent variables and that a significant lagged effect of the dependent variable may be due to omitted variables or measurement error bias rather than a true lagged effect. The key implication of this study for decision makers is that they can utilise these elasticities to examine the effects of price-based interventions on alcohol purchasing and alcohol-related harms in the UK. The estimated elasticities allow detailed estimation of beverage-specific demand changes due to beverage-specific price changes. This is appealing for appraising interventions which have a differential price impact on different beverage types, such as minimum unit pricing which, by setting a floor to the retail price, will mostly affect cheap alcohol. The majority of the cheap alcohol sold in the UK is off-trade beer, cider, wine and spirits. The estimated own-price elasticities indicate a substantial decrease in demand for these beverage types if their prices are increased, e.g., through minimum unit pricing and/or targeted excise duty increases. Given the strong associations between alcohol consumption and a range of alcohol-related harms, the decrease in demand is likely to translate into reduced mortality, morbidity and wider social harms such as crime, absence from work and harms to family members. The pseudo-panel method could also be used to explore elasticities for population subgroups. We have performed exploratory analyses to estimate separate
elasticities for population subgroups with regard to purchase level and socioeconomic status.We split the overall LCF dataset into individuals who are moderate purchasers and non-moderate purchasers.Then FEMs are applied to the two datasets separately.The estimated elasticities are presented in Appendix 7-1 and 7-2.For the socioeconomic analysis, we split the LCF dataset into individuals with lower socioeconomic status and those with middle or higher socioeconomic status.Then FEMs with 24 panel members and with 48 panel members were used for the low and higher socioeconomic groups respectively.The estimated elasticities are presented in Appendix 7-3 and 7-4.These subgroup analyses are exploratory in nature as the sample size for panel members is smaller and because the heterogeneity among panel members is reduced due to the smaller panel size.Therefore, caution is required when interpreting and applying these elasticities.The pseudo-panel approach is generalisable and could easily be applied to different countries or settings where large sample repeated cross-sectional data is available.The estimated elasticities are UK-specific and some caution should be exercised if considering applying them to a different context.The method could also be extended to a wider set of products which affect public health, for example tobacco or foods high in fat, salt and sugar.Future research to link prices faced and prices paid would be valuable if datasets could be obtained or constructed.Large scale and long-term individual-level longitudinal data would be hugely beneficial for better estimates of price elasticities.If possible, such data could also include information regarding the branding and packaging information so that the issues around potential price endogeneity could be addressed in more detail.In conclusion, this study has developed and implemented a pseudo-panel approach to estimate price elasticities of alcohol demand using repeated cross-sectional data.This approach enables longitudinal aspects of the data available to be taken into account, where previous detailed beverage specific estimates of price elasticities have tended to come from cross-sectional analyses with their associated caveats.The resulting estimates of own- and cross-price elasticities appear plausible and robust and could be used for appraising the estimated impact of price-based interventions in the UK.The Living Cost and Food Survey and The Expenditure and Food Survey are Crown Copyright.Neither the Office for National Statistics, Social Survey Division, nor the Data Archive, University of Essex bears any responsibility for the analysis or interpretation of the data described in this paper.
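As a complement to the model description above, a schematic form of the subgroup-level demand specification and of the elasticity definitions can be written as follows. This is an illustrative sketch only: the notation follows the variables named in the text (Cijt, Pijt, Incomeit and the other controls), but the exact functional form estimated by the authors (for example, levels versus logarithms) is not reproduced here, and the log-log form is used purely so that the coefficients can be read directly as elasticities.

\[
e_{jk} \;=\; \frac{\partial C_j}{\partial P_k}\cdot\frac{P_k}{C_j} \;\approx\; \frac{\%\ \text{change in demand for beverage } j}{\%\ \text{change in price of beverage } k},
\]
\[
\ln C_{ijt} \;=\; \alpha_{ij} \;+\; \sum_{k=1}^{10}\beta_{jk}\,\ln P_{ikt} \;+\; \gamma_{j}\,\ln \mathrm{Income}_{it} \;+\; \boldsymbol{\delta}_{j}'\mathbf{z}_{it} \;+\; \lambda_{jt} \;+\; u_{ijt},
\]

where i indexes the pseudo-panel subgroup, j the beverage category being modelled, k the ten price variables and t the year; \(\alpha_{ij}\) is the subgroup fixed effect that absorbs all time-invariant subgroup characteristics, \(\mathbf{z}_{it}\) collects the other time-varying controls (age squared and the proportions having children, married, unemployed and smoking), \(\lambda_{jt}\) are the year dummies and \(u_{ijt}\) is the error term. Under this log-log form, \(\beta_{jj}\) is the own-price elasticity and \(\beta_{jk}\) (k ≠ j) the cross-price elasticity of demand for beverage j with respect to the price of beverage k; if the model is instead estimated in levels, the elasticities would be evaluated at sample means.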
The estimation of price elasticities of alcohol demand is valuable for the appraisal of price-based policy interventions such as minimum unit pricing and taxation. This study applies a pseudo-panel approach to the cross-sectional Living Costs and Food Survey 2001/2-2009 to estimate the own- and cross-price elasticities of off- and on-trade beer, cider, wine, spirits and ready-to-drinks in the UK. A pseudo-panel with 72 subgroups defined by birth year, gender and socioeconomic status is constructed. Estimated own-price elasticities from the base case fixed effects models are all negative and mostly statistically significant (p < 0.05). Off-trade cider and beer are most elastic (-1.27 and -0.98) and off-trade spirits and on-trade ready-to-drinks are least elastic (-0.08 and -0.19). Estimated cross-price elasticities are smaller in magnitude with a mix of positive and negative signs. The results appear plausible and robust and could be used for appraising the estimated impact of price-based interventions in the UK. © 2014 The Authors.
22
Tectonostratigraphy and the petroleum systems in the Northern sector of the North Falkland Basin, South Atlantic
The Falkland Islands offshore designated area for exploration covers approximately 460,000 km2 and has received relatively little attention in terms of hydrocarbon exploration. It is composed of four main sedimentary basins of Mesozoic to Cenozoic age, namely the North Falkland, Falkland Plateau, South Falkland and Malvinas basins, which lie north, east, south and south-west of the islands respectively. The most extensively explored, and so far most successful, of these basins in terms of hydrocarbon prospectivity is the North Falkland Basin (NFB). More specifically, the Eastern Graben of the NFB has been the main focus of hydrocarbon exploration since the 1990s. Commercial interest in the hydrocarbon potential of the NFB has grown considerably with a number of successful exploration campaigns between 2010 and 2015. Initial exploration of the NFB between 1998 and 1999 focused on targeting late post-rift sandstones draped over structural highs in the central parts of the NFB, an exploration strategy influenced by North Sea-style tilted fault block plays. Despite encountering an excellent, organic-rich, Lower Cretaceous-aged lacustrine source rock during drilling, this campaign did not encounter economic resources of hydrocarbons. However, the presence of oil and gas shows in several wells indicated a number of elements of a working petroleum system, including a mature source rock, reservoir-potential sandstones and a competent seal. The quantity of oil expelled from the source rock into the NFB is estimated to be approximately 60 billion barrels. Subsequently, exploration concepts shifted to basin margin-derived early post-rift sandstones. In particular, the reservoir concepts of Richards et al. described basin-margin attached fans prograding into lacustrine waters, ranging from alluvial fan and fan delta to deep-lacustrine fan systems, forming at various palaeo-water depths. Seismically bright amplitude anomalies, identified on 3D seismic data and described by Richards et al., indicated various potential sediment entry points. The 2010–2011 exploration campaign was successful in discovering commercial quantities of hydrocarbons in the NFB and proved the basin margin-derived reservoir concept. This campaign targeted easterly-derived turbidite fan deposits, which form a stacked, margin-fringing succession within the Lower Cretaceous packages of the early post-rift, along the Eastern Flank of the North Falkland Basin's Eastern Graben. The major success of the Sea Lion discovery was a turning point for exploration within the basin. Following Sea Lion, a number of analogous targets were drilled between 2010 and 2011 within the same play, leading to the discovery of hydrocarbons within the Casper and Beverley fans. More recently, three wells were drilled in the NFB, leading to further discoveries in the early post-rift, such as the Zebedee and Isobel Deep Fans in 2015. These discoveries not only extended the spatial and stratigraphic extent of the petroleum system, they also highlighted the potential for future significant discoveries in the North Falkland Basin. One area that has remained underexplored since the initial campaign in 1998 is the Northern sector of the North Falkland Basin (NNFB), which is essentially an extension of the main NFB and likely contains a succession of early post-rift lacustrine sediments similar to those mapped in the Eastern Graben of the NFB to the south. The stratigraphy of the NNFB also contains a presumably older syn-rift succession, which is structurally complex and remains completely
unexplored. This study addresses the following key questions: (1) What is the structural configuration of the NNFB? (2) What are the main controls on the structural configuration? (3) How do the basin configuration and fill compare and contrast with the Eastern Graben towards the south? (4) What is the nature of the tectonostratigraphy of the grabens in this area? (5) What are the likely petroleum systems and plays in the NNFB? The NFB, described as a failed rift system, comprises a series of depocentres following two dominant structural trends: N-S oriented faulting is predominant in the northern area, whilst significant WNW-ESE oriented faults control the Southern North Falkland Basin. Rifting of the NFB is likely to have initiated in the late Jurassic or early Cretaceous. This rifting phase was followed by a thermal sag phase that began in the Berriasian-Valanginian. The environment of deposition throughout this sag phase is thought to have been predominantly continental and deep lacustrine until Albian-Cenomanian times, when the basin began to develop increasingly marine conditions. The main depocentre of the NFB, referred to here as the Eastern Graben, is orientated N-S and is approximately 30 km wide and 250 km long. A shallower depocentre is present towards the west, termed here the Western Graben, and is separated from the main Eastern Graben by an intra-graben high known as the Orca Ridge. In this Eastern Graben, the basin displays an asymmetric half-graben geometry which is downthrown to the east. The footwall to the main basin-bounding faults is composed of a Devonian-Permian platform. In addition, there are a number of subsidiary depocentres immediately east of the Eastern Graben, all of which follow a similar N-S trend. The Southern North Falkland Basin represents an area intersected by a series of en-echelon WNW-ESE faults, which are easily identifiable on seismic data and gravity data. The WNW-ESE faults are typically offset by the main N-S faults, suggesting two significant and distinct phases of extension, potentially associated with separate phases of rifting. The older, WNW-ESE faults are similar in orientation to the trend of the Palaeozoic thrust sheets developed to the NW of the islands. The WNW-ESE faults were possibly formed by reactivation of the onshore structures. Although no well data exist in this part of the basin, the timing of this reactivation and basin development is thought to be coeval with the initial development of South Africa's Outeniqua Basin during the Kimmeridgian. A tectonostratigraphic model for the NFB was presented by Richards and Hillier. The eight tectonostratigraphic units identified are: pre-rift/basement; early syn-rift; late syn-rift; transitional/sag; early post-rift; middle post-rift; late post-rift; and a post-uplift sag phase. The post-rift succession is further divided into a number of sub-units, including LC2, LC3 and LC4 in the early post-rift; LC5, LC6 and LC7 in the middle post-rift; and L/UC1 and UC1 in the late post-rift, where 'LC' is Lower Cretaceous and 'UC' is Upper Cretaceous. Previous seismic interpretation studies of the NFB have discussed different stratigraphic schemes. The pre-rift has been encountered in one well in the basin, which targeted an intra-basin high. At this well location, the pre-rift comprises Devonian to Jurassic lithologies and, as a consequence of limited data, remains poorly understood. The earliest phase of rifting initiated in the late Jurassic and lasted until the Lower Cretaceous. During this time, an early to late syn-rift succession,
comprising conglomerates, sandstones, organic-rich mudstones and reworked tuffs, was deposited in a fluvial to lacustrine environment. Subsequently, the basin experienced a transitional-sag phase in which a succession of organic-rich lacustrine claystones was deposited. A succession of early post-rift sediments was deposited during the early Cretaceous, resulting in a laterally and vertically extensive lacustrine mudstone and sandstone succession. Sediments were transported into the basin through fluvial-deltaic systems prograding from the northern-most extent of the basin, along the Eastern Graben axis, towards the south. Concomitant with this, sands were also transported into the Eastern Graben from the flanks, along feeder systems that fed a series of turbidite fans, creating a complex, heterogeneous succession of sandstones and interbedded mudstone facies. In particular, the easterly-derived sandstones represent the main reservoir lithologies identified in the NFB to date. During the Lower Cretaceous the basin fill began to develop as a thick succession of middle to late post-rift sediments was deposited. Here the sediment fill is characterised by a transition from a lacustrine-dominated succession to terrestrial-fluvial systems. In the late post-rift the basin experienced the first significant marine transgression. The resulting sediments comprised claystone interbedded with sandstone, deposited in a restricted, marginal marine or lagoonal environment. Following the late post-rift the region underwent significant uplift, during which up to 800 m of overburden is thought to have been removed from parts of the basin. The post-uplift sediments consist of a dominant succession of claystone with interbedded sandstone, deposited in a fully developed marine basin environment. This study uses 1,250 km of 2D seismic reflection data collected and processed by Veritas in 2000 on behalf of Lasmo plc, located north of operated blocks PL001, PL032 and PL033. The seismic data are post-stack time migrated, display a positive polarity and are zero-phased. These seismic lines have a line spacing of 2.5–5 km in an N-S orientation and 2.5–10 km in an E-W orientation. Overall, the 2D seismic data are of reasonable quality down to 3–3.5 s two-way travel time (TWTT). Beyond this, the signal-to-noise ratio deteriorates significantly and the reflections become chaotic. In addition to the seismic reflection data, major structures and basins were identified using Bouguer gravity data from a global marine dataset. To date, there are no wells within the study area; however, a seismic correlation has been made from the "FALK2000" 2D survey southwards into the "Company Composite" 3D seismic survey, which consists of several merged 3D seismic datasets acquired by Shell in 1997, Desire in 2004 and Rockhopper in 2007 and 2011. This profile intersects the nearest well to the study area and wells near the Sea Lion discovery. In these more southerly areas, geological understanding is more mature and the stratigraphy is better constrained. These tie-lines enabled seismic well picks to be interpreted across into the study area, providing some stratigraphic control on the interpretation. Seismic data were interpreted using seismic and stratigraphic concepts. TWTT surface maps were produced from the seismic interpretation, gridded at 100 m increments. Fault polygons were created through extrapolation of 2D fault segments using a standard triangulation gridding algorithm. In the NNFB, six regionally significant
seismic reflections were identified within the 2D seismic data, defining six tectonostratigraphic units. These units have been defined by extrapolation of seismic data from the main Eastern Graben of the NFB. The seismic reflectors defining these units are: top basement; top early syn-rift; top late syn-rift; top transitional/sag; top early post-rift; and top late post-rift. The deepest reflector that can be mapped on a regional scale, the top basement, forms an unconformable surface that is present across the entire seismic survey. In the northern sector of the NFB, top basement is less clearly imaged below 3 s TWTT, while further south, in the main Eastern Graben, the basin deepens, with the "TB" reflector found around 4 s TWTT. It often presents as a very bright amplitude on basement highs such as the Eastern Flank, whilst in deeper parts of the seismic data there are small, discontinuous, high amplitude reflections beneath the "TB" reflector. It is possible these features could represent either igneous intrusive bodies or Devono-Carboniferous metasediments deposited prior to the basin rifting event. Two dyke swarms have been identified onshore Falkland Islands, with ages of 188–178 Ma and 135–121 Ma. The seismic reflector marking the top of the early syn-rift is challenging to distinguish laterally. Internally, the early syn-rift often displays high amplitude reflectors, which are divergent and mound-like in appearance. In some places this unit thins onto pre-rift basement highs; in other cases these high amplitude reflections are discontinuous and have a chaotic appearance. The top late syn-rift is marked by an undulating, high amplitude seismic reflector, which separates the underlying, slightly transparent late syn-rift package from the overlying transitional package. In places, the late syn-rift onlaps onto the underlying early syn-rift unit. The internal character of the late syn-rift is relatively transparent throughout, with discrete, alternating, high and low amplitude packages observed. The top of the transitional/sag unit is marked by a high amplitude, laterally continuous reflector, which onlaps against the basin margins of the Eastern Graben, as well as the Eastern Flank. The transitional/sag interval is characterised by a relatively uniform sediment thickness, which only ever thins out onto the basement highs. Internally, it contains isolated, chaotic, high amplitude events. The top early post-rift reflector is a prominent, high amplitude reflector that is laterally continuous across this seismic survey and defines the top of the early post-rift unit. Internally, this unit contains high amplitude, sheet-like reflectors at the base, along with clinoform-like geometries forming at the top of the package, which appear to downlap onto the sheet-like reflectors beneath. These clinoforms prograde from the north towards the south. The top late post-rift (TLPR) seismic reflector is laterally continuous across the seismic survey. The late post-rift unit internally consists of laterally continuous, seismically transparent intervals at the base, developing into alternating, high and low amplitude reflectors towards the top. The seismic package above the TLPR represents the post-uplift sag sequence, which continues to the seabed. The package is generally transparent, containing sub-parallel to parallel, low amplitude reflectors, although within the package there are a few high amplitude, laterally continuous reflectors, which represent unconformable surfaces within the succession. This can be shown by
downlap terminations of divergent reflectors onto these surfaces. A number of two-way travel time structural maps were produced from the interpretation of the 2D seismic data in order to understand the structural evolution of the NNFB. A map of top basement shows four N-S orientated structural lows, defined from west to east as the Western Graben Splay, Eastern Graben, Eastern Graben Splay and Phyllis Graben. The Western Graben Splay, Eastern Graben and Eastern Graben Splay together form the northern continuation of the main graben of the NFB. The Western High is considered to be a northward extension of the Orca Ridge to the south, and therefore the Western Graben Splay is likely to be a northward extension of the Western Graben of the NFB. The Western High separates the Western Graben Splay from the Eastern Graben. The Eastern Flank forms the main structural high in the eastern part of this area; a spur of this high, termed here the Intra-Basin High, separates the Eastern Graben and Eastern Graben Splay. The Eastern Flank forms the main structural high on the eastern side of the Eastern Graben and continues southwards to the Sea Lion discovery area. In the southern part of the NNFB, both the Eastern Graben and Eastern Graben Splay have half-graben geometries and deepen towards the east against the main bounding faults. A series of NW-SE and NE-SW orientated faults is present across the NNFB and defines the structural orientation of these grabens. The Phyllis Graben, located directly to the east of the Eastern Flank, is composed of a series of half-grabens that are predominantly orientated N-S. This graben also displays an asymmetrical profile, deepening towards the north-west. It is possible that the Phyllis Graben continues north of the study area, developing into a geographically larger suite of grabens, shown as N-S oriented 'gravity-lows' in Bouguer gravity data. These grabens have a comparable gravity signature to that of the Eastern Graben. By the end of the early syn-rift, all four of the main structural lows had developed. At this stage, the Eastern Graben was the deepest and spatially largest of the four depocentres. In the Eastern Graben Splay, the early syn-rift interval deepens towards the east and south against the Eastern Flank, while the Western Graben Splay deepens to the south. In contrast, the early syn-rift of the Phyllis Graben deepens towards the north-west. In general, the early syn-rift interval maintains a relatively consistent thickness in the study area, but thickens southwards towards the Sea Lion discovery. In some areas, the early syn-rift shallows up against the main bounding faults, particularly in the northern part of the Eastern Graben and Eastern Graben Splay. The late syn-rift interval follows a similar structural pattern to the underlying early syn-rift, with increased deepening in the central and southern parts of the Eastern Graben in the NNFB. In the southern part of the NNFB, the late syn-rift onlaps the underlying early syn-rift interval against the Intra-Basin High. By the end of the transitional/sag phase, the sedimentary cover was significantly more extensive, with overstepping of the Intra-Basin High between the Eastern Graben and Eastern Graben Splay and partially over the Western High. Structural depth increases southwards and towards the centre in the Eastern Graben and Eastern Graben Splay, whilst in the Phyllis Graben the basin depth increases northwards. Fault trends remain consistent, with NW-SE and NE-SW trends observed at the top of the late
syn-rift. A network of fault terraces is present at the southern extent between the Eastern Graben and Eastern Graben Splay. During the early post-rift, the basin continued to fill with sediments, primarily within the Eastern Graben and the Eastern Graben Splay. The Western Graben Splay displays a similar amount of deepening to that exhibited during the transitional/sag phase, whilst the Phyllis Graben has experienced overall deepening. During the early post-rift, the Phyllis Graben appears structurally deeper than the Eastern Graben, the Western Graben and the Eastern Graben Splay. Furthermore, sediments have encroached further northwards onto the Intra-Basin High. The Western Graben Splay, Eastern Graben, Eastern Graben Splay and Phyllis Graben fault trends remain consistent, with NW-SE and NE-SW trends as seen during the transitional/sag phase. In the late post-rift, sediment cover is preserved across the Western and Eastern Flanks, as well as the Intra-Basin High. Here, the late post-rift sediments deepen northwards in the Eastern Graben and Phyllis Graben. By this phase, most of the faulting had terminated, with only a few NW-SE faults remaining active. This study has shown that faults that were active during the syn-rift phase remained consistently active until the early post-rift phase. The NW-SE faults observed in the NNFB may represent similar structures to those observed in the Southern North Falkland Basin, which were interpreted as reactivated thrust faults similar to those seen onshore. The NE-SW orientated faults are likely to have formed due to the initial E-W extension associated with the opening of the South Atlantic during the late Jurassic-early Cretaceous. These faults form a component of the fault architecture along with the NW-SE faults, defining the margins of the N-S trending depocentres, namely the Eastern Graben, Eastern Graben Splay, Western Graben and Phyllis Graben. It is likely that the initial rifting of the NNFB occurred contemporaneously with that of the central part of the NFB to the south. During this initial rifting, an early syn-rift phase led to the development of accommodation space within the centre of each of these grabens. This rifting continued into the late syn-rift, with accommodation space increasing, particularly within the Eastern Graben. The presence of structural highs such as the Western Flank, Western High, Intra-Basin High and Eastern Flank, as well as consistent fault trends in the early and late syn-rift, suggests these faults remained tectonically active throughout the syn-rift. As rifting halted and the NNFB entered the transitional/sag phase, the Western High and Western Flank became inactive and an overstepping succession was deposited across these highs. It is likely that during this time the amount of accommodation space developed at the edge of the Intra-Basin High started to be outpaced by sediment input, as evidenced by the partial flooding and deposition of sediments over the high. However, the Eastern Flank continued to remain a topographical high at this stage. The consistent presence of the NW-SE and NE-SW fault trends illustrates that these faults remained active throughout the syn-rift and into the transitional/sag phase. During the early post-rift, the Eastern Graben and Eastern Graben Splay formed a single connected depocentre deepening southwards and remained isolated from the Phyllis Graben by the Eastern Flank. Clinoforms observed within the early post-rift of the Eastern Graben suggest a prograding deltaic system drained into the
basin from the north. The Phyllis Graben seems to have developed at a steady rate throughout the syn-rift and transitional/sag phase, as seen by the gradual structural deepening of the basin northwards. During the early post-rift the Phyllis Graben appears to have experienced more subsidence than the Eastern Graben, while in the late post-rift both depocentres had a consistent depth. By the late post-rift, most of this tectonic activity had ceased, with only a few NW-SE faults remaining active, having either exploited crustal weaknesses derived from mid post-rift faults or through differential compaction. At this stage, the Eastern Flank was covered with sediment, as the Eastern Graben and Phyllis Graben amalgamated into a single, large depocentre. No well data are available in the NNFB and consequently source rock intervals to the south have been used to provide analogous data. In the North Falkland Basin, the main source rock intervals are organic-rich claystones within the transitional/sag and early post-rift tectonostratigraphic units. These claystones were deposited in an anoxic, lacustrine environment during the Berriasian to Aptian, and are thought to be responsible for charging the reservoirs of the Sea Lion discovery. The source rocks in the early post-rift comprise Type I and II kerogens, and total organic carbon (TOC) generally increases from the transitional/sag unit into the overlying early post-rift. Basin modelling has suggested the main phase of oil generation from the early post-rift source rock took place during the late Cretaceous, between 70 and 100 Ma. Analysis of the 2010–2011 wells characterised the recovered oil samples as "a dark, waxy, lacustrine oil with an API ranging from approximately 24-29° sourced from various oil families". Fig. 8 illustrates that the transitional and early post-rift units, which contain this main source rock interval, remain at the same depth across the main NFB and into the NNFB. Therefore, it is possible that hydrocarbons have been generated in this part of the basin. Continuous sub-parallel/parallel, low frequency, high amplitude reflectors in these units are likely to represent deep lacustrine organic-rich claystone source rocks. In contrast, discontinuous reflectors are likely to represent shallow lacustrine sediments, which consist of organic-lean claystone units interbedded with sandstone units. In addition, a secondary source rock interval is likely within claystone-dominated units within a fluvial succession of the late syn-rift, deposited during the Tithonian to Berriasian. Rock-Eval pyrolysis studies, completed on wells 14/05-1A and 14/10-1, suggest the presence of Type II source rocks in the late syn-rift with 4.5% average TOC. Basin modelling suggested an oil window between 2,800 and 3,500 m across the NFB. In the central part of the NFB, the syn-rift package reaches depths of >4,000 m; vitrinite reflectance data suggest any source rock encountered here is likely to be within the gas window. However, as the syn-rift is shallower in the NNFB, there is potential for it to be oil-prone in this area. Fluvial sandstones have the potential to act as reservoirs within the early syn-rift. These sandstones have been encountered in the nearest well, 14/05-1A, with several zones of net thickness reaching up to 40 m and porosities ranging from 4.4 to 7.5%. Greater potential is likely in fluvial sandstones of the late syn-rift, where thicker successions have been encountered, with up to 125 m of net sandstone and porosities ranging between 27.8 and 30.4%. In the NFB, the best
understood reservoir intervals are contained within the early post-rift unit; these sandstones were first identified as the primary reservoir target during the drilling campaign in 1998 and were later confirmed during the Sea Lion discovery in 2010. In the Sea Lion Fan, these reservoirs consist of well-sorted, fine- to medium-grained, high-density turbidite sandstones deposited in a deep-lacustrine turbidite fan setting. The fans are composed of overlapping lobes fed into the basin from the east. Reservoir quality within the sandstones in the Sea Lion Fan is generally good, with porosity and permeability values averaging 22% and 185 mD, respectively. On the 2D seismic data, geometries comparable to those of the Sea Lion complex are observed. The Northern Lead forms a discrete, 5–7 km long, high amplitude seismic event, which was deposited near to the base of the early post-rift unit. In both the Sea Lion complex and the Northern Lead, seismic reflectors are significantly stronger than those of the surrounding sediments and display a "mound"-like topography within an otherwise flat-lying package of reflectors, suggesting depositional relief. Laterally, both display a reduction in seismic amplitudes towards the edge of each feature and are found at the base of southerly prograding clinoforms representing delta foresets. In addition, a series of high amplitude, sheet-like seismic reflectors can also be seen in the early post-rift, which may represent hydrocarbon-filled lacustrine turbidite sandstones. The first phase of exploration drilling in 1998 focused on targeting structural, four-way dip closures, such as that targeted by well 14/09-1, which was drilled on the crest of a large, tilted fault block. One of the main reasons for failure was the ineffective top seal. In the NNFB there are a number of potential two-way and three-way dip closures identified in the hanging walls of faults in the early and late syn-rift intervals, which are yet to be tested. In the early post-rift, there is potential for stratigraphic traps containing delta-top and delta-front sandstones. These sediments are likely to be part of prograding deltaic systems, which can be observed as clinoform geometries in the seismic data. Furthermore, the more distal delta deposits, which are likely to be more mud-prone, provide lateral seal potential for these trapping geometries. To date, the most successful trapping geometries in the North Falkland Basin are complex, combined structural and stratigraphic traps, particularly within the early post-rift. Firstly, the stratigraphic component is provided by deep lacustrine turbidite fans that display abrupt lateral pinch-outs and up-dip sealing through the detachment of feeder systems facilitated by slope bypass. Secondly, the structural component is provided through the draping of turbidite sands along basin margin geometries and over the inversion-related high in the centre of the Eastern Graben. Finally, in places basin-margin faults aid up-dip sealing through the offsetting of turbidite fan feeder channels from the depositional lobes, providing an element of fault closure to some of these traps. The regional seal across the NFB is formed by a thick mudstone succession within the early post-rift. The early post-rift unit is laterally extensive across the grabens of the NNFB according to the correlated seismic package, therefore likely forming a regionally effective seal. The seal will be most effective in the centre of the basin depocentres, where the mudstone accumulations are likely to be
thicker. In the NNFB there is also potential for sealing mudstones within the middle post-rift and late post-rift units. Evidence for an active hydrocarbon system in the NFB is proven by the discoveries to the south. It is inferred in the NNFB from seismic anomalies present in the seismic data. In the NNFB, the early post-rift unit is intersected by major, deep-seated normal faults that penetrate through to the late post-rift, typically at the basin margins. Amplitude anomalies with a negative impedance contrast, or 'soft-kicks', are common, some of which can be interpreted as bright spots. The bright spots in the Eastern Graben of the NNFB appear to occur within the middle post-rift unit, brightening near faults, which may have acted as fluid conduits or traps for hydrocarbon migration. In addition, gas chimney features are observed cross-cutting seismic reflections. Paleo-pockmarks visible in the seismic data could indicate thermogenic or biogenic gas associated with the deeper source rock intervals. Stratigraphic packages within the early post-rift show brightening along reflections, which may indicate fluid-filled sandstone packages within a turbidite fan succession, similar to those of the Sea Lion Fan. The conceptual model for the petroleum system of the NNFB is summarised in Fig. 11. In this area, the best reservoirs are likely to be sand-rich turbidite fans within early post-rift structural-stratigraphic traps, and the fluvial sandstones of the syn-rift in structural traps. The most organic-rich sediments are likely to be found in the centre of the graben, in the transitional/sag unit, whilst a secondary organic-rich interval may be present along the hanging-wall margins, which generally represented the deepest sections of the grabens during deposition of the late syn-rift unit. Moreover, both source rock intervals, in the syn-rift and the transitional/sag unit, are likely to be mature in the NNFB as they are situated within the estimated oil window. Finally, there is hydrocarbon migration potential along major faults, through and above the source/seal interval, into the overlying, thick delta-front sandstones.
The North Falkland Basin represents one of the frontier areas for hydrocarbon exploration in the South Atlantic. This study presents the results of new subsurface mapping using 2D seismic data in the north of the Falkland Islands offshore area, which has delineated a series of discrete grabens northwards of the main North Falkland Basin, referred collectively to as the Northern sector of the North Falkland Basin (NNFB). Six regionally significant seismic reflectors are interpreted within this data, dividing the sedimentary fill into six tectonostratigraphic packages, including: early syn-rift; late syn-rift; transitional unit; early post-rift; middle to late post-rift; and a sag unit. Structural interpretation of the 2D seismic data has led to the definition of four north-south orientated depocentres, namely: (1) the Eastern Graben, largest of the depocentres; 20 km wide by 45 km long, reaching depths of 3 km; (2) the Eastern Graben Splay, a smaller depocentre; 10 km wide by 20 km long, reaching depths of 2–2.5 km; (3) the Western Graben Splay, the smallest depocentre; 5 km in width and 20 km long, with a basin depth of 2 km and (4) the newly defined Phyllis Graben, which is 13 km wide and 30 km long, with a basin depth of 3 km. A network of NW-SE and NE-SW trending faults controls the development of these grabens, separated by a Western, Eastern and Intra-Basin high. These grabens represent a northern continuation of the Northern Falkland Basin to the south. Hydrocarbon discoveries to the south of this study area (e.g. Sea Lion, Casper, Beverley, Zebedee, Isobel Deep, and Liz) confirm a working petroleum system adjacent to the Northern sector. This study has identified a number of seismic anomalies, including amplitude brightening events, which potentially correspond to an extension of this petroleum system, indicating active migration pathways. The main targets, in terms of hydrocarbon interest in the northern sector, are likely to be stratigraphically trapped hydrocarbon accumulations, contained within vertically-amalgamated turbidite fan sandstone reservoirs, deposited within the early post-rift. A second, yet to be tested, syn-rift play, in which the trapping geometries are structural and the reservoirs are fluvial sandstones is also identified.
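The relative scale of the four depocentres quoted above can be illustrated with a short back-of-envelope calculation. The sketch below is illustrative only: it assumes a simple rectangular plan view and treats the quoted maximum depth as uniform, neither of which is stated in the text, and the upper bound of the quoted depth range is used where a range is given.

```python
# Back-of-envelope comparison of the four NNFB depocentres using the
# approximate dimensions quoted in the abstract. The rectangular plan view
# and uniform depth are simplifying assumptions for illustration only.

depocentres = {
    # name: (width_km, length_km, depth_km)
    "Eastern Graben":       (20, 45, 3.0),
    "Eastern Graben Splay": (10, 20, 2.5),   # depth quoted as 2-2.5 km; upper bound used
    "Western Graben Splay": (5,  20, 2.0),
    "Phyllis Graben":       (13, 30, 3.0),
}

for name, (w, l, d) in depocentres.items():
    area_km2 = w * l                 # approximate plan-view area
    bulk_volume_km3 = area_km2 * d   # crude prism volume, an upper bound
    print(f"{name:22s} area ~{area_km2:5.0f} km^2, bulk volume ~{bulk_volume_km3:6.0f} km^3")
```

On these assumptions the Eastern Graben covers roughly nine times the plan-view area of the Western Graben Splay, underlining why it is described as the largest of the depocentres.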
23
Failing entrepreneurial governance: From economic crisis to fiscal crisis in the city of Dongguan, China
The global financial crisis in 2008 triggered a series of chain reactions across the world. The impact of globalization on the local economy may be seen in the city of Dongguan, the heartland of export-oriented economies in the Pearl River Delta. Under the export-driven economy, the Pearl River Delta experienced export-driven urbanization, or so-called 'exo-urbanization', which transformed rural areas through bottom-up and spontaneous urbanization. The remarkable economic development of China's rural areas has been attributed to a particular mode of 'entrepreneurial governance' based on the legacy of state socialism and the introduction of market mechanisms, known as 'local state corporatism'. The concept was initially developed by Oi, and it provides a powerful paradigm to explain Chinese rural takeoff. In large cities such as Beijing and Shanghai, the so-called 'state entrepreneurialism' became a powerful driver for the development of new towns and spatial restructuring. The entrepreneurial state seems to be very effective in the Yangtze River Delta and Shanghai, but its limitations have not been fully assessed, in particular in areas that experienced more bottom-up urbanization processes. While the process of economic restructuring had started before the global financial crisis, the crisis exacerbated the collapse of the rural economy, which had until then been based on rental incomes and was supported by the governance of local corporatism. Although the crisis had serious impacts on rural migrant workers, the implications for the Chinese city have so far not been fully examined. This paper examines village debt in Dongguan after the global financial crisis. The remainder of the paper is organized as follows. Section 'Theoretical frameworks: forms of entrepreneurial governance' reviews the relevant theoretical framework, especially the concept of entrepreneurial governance. Section 'Research methodology' introduces the methodological issues of this study. Section 'The development stage of entrepreneurial governance in Dongguan' discusses the development of entrepreneurial governance in Dongguan. Section 'The creation of an economic crisis – declining rental incomes' discusses the local economic crisis after the global financial crisis. Section 'Public service spending under entrepreneurial governance' then examines public spending and village debts. Section 'From an economic crisis to fiscal crisis: village debt' discusses how the economic crisis has led to the fiscal crisis in Dongguan. Section 'Conclusion' concludes and reflects on the limits of entrepreneurial governance. In this section, the theoretical framework of entrepreneurial governance is presented. First, the concept is discussed with reference to the Western literature; then, the study of rural China and its unique form of governance is reviewed. There are different manifestations of local governance according to different national contexts: managerial, corporatist, pro-growth, and welfare governance models. The concept of entrepreneurial governance here refers to a specific form of network that combines the government with the local business community. Local governance takes different forms in different countries. Political economic studies focus on the transformation of the state and urban entrepreneurialism, and on the pro-growth development politics known as the 'growth machine'. In studies of Chinese urban and rural development, there are two different but related kinds: one on urban entrepreneurialism and the other on rural 'local
corporatism’.More recent development in local entrepreneurial governance has been highlighted by a wide ranging body of literature on urban China.But this body of literature is mainly developed in larger cities or the places with strong local states.This section will review these two bodies of literature.In contrast to the limited number of studies on rural areas, the growth of Chinese cities has been under intensive study.Explanation is more aligned with the development of entrepreneurial urban governance, as described above.For example, Duckett discussed the driver of real estate and how this changed the nature of the local state in China.Echoing the thesis of the change from managerialism to entrepreneurialism in Western capitalism, and the growth machine which describes an urban politics driven by locally bounded interests in property, researchers found the aggressive promotion of land and housing development and a politics characterized by growth-oriented coalitions between the local government and real estate developers.For example, Tao et al. and Ye and Wu discussed how local government under fiscal incentives uses land development to promote economic competitiveness.This explanation mainly resorts to the notion of a ‘land revenue regime’ in which the local state aims to gain revenue from land and infrastructure development to make up the fiscal deficit under the tax-sharing system.The evidence of empirical research on land dynamics and urban expansion is strong.In the Yangtze River Delta, entrepreneurial governance is seen in administrative annexation by the central city to expand its jurisdiction and foster economic competitiveness, while in the Pearl River Delta the city of Guangzhou strove to improve its image and develop mega-projects to attract investment.The export-oriented development model used in the PRD has confronted with serious problems since the implementation of industrial upgrading strategy designated by Guangdong Provincial government prior and after the 2008 global financial crisis.The failure of entrepreneurial governance has been be demonstrated by the emerging tension between various levels of governments, which has triggered the fiscal crisis in Dongguan.From the studies of rural China, a specific form of entrepreneurial governance has been identified.It must be noted, however, that rural development in China comprises different types.The Sunan model represents strong local collectivism, whereas Wenzhou has seen an entirely different model, which is more based on individual entrepreneurs and family businesses.The rural areas in the Pearl River Delta represent weaker local government, but stronger village governance based on family ties and clans.Overseas investment in Dongguan presents a characteristic of smaller investors than in Suzhou, reflecting the different development approaches: the former is more towards households and villages while the latter is organized by cities and counties.For rural China, the concept of local state corporatism initially developed by Oi provides a powerful paradigm of explanation.It describes the close relations between the local state and enterprises.In fact, the local state, according to this explanation, uses financial resources to set up enterprises that are under its control so that it can benefit from the profit of these enterprises.The explanation was largely developed from the context of Chinese rural areas.The explanation is based on fiscal decentralization, which incentivized the local state in economic 
development.The local state directly participated in local economic development.The local state at the county level uses its power to mobilize capital and resources to support rural industrialization.At the village level, the thesis of local state corporatism argues that because the village was not a layer of government it acted more like a firm in the local governance.The explanation points to the institutional source of rural industrialization in the role of the local state.In the Chinese context, the legitimacy of the state and its authoritarian tradition supported a strong state role.This state-centered explanation is, however, different from the market transition thesis.The latter believed that rural development benefited from market transition and therefore the role of the market was more important than that of the local state.The thesis of entrepreneurial governance in general and local corporatism in particular attributes China’s remarkable economic growth and urban transformation to the institutions of governance.For example, in rural areas, we see the development of specialized towns and tourism-driven growth.Entrepreneurial governance has been criticized for its negative impact on the livelihood of residents through demolition and displacement, environmental degradation, and excessive inter-city and inter-region competition and market disorder.In the West, entrepreneurial governance has been criticized as the threat to the democratic process of decision making.But as a development approach, entrepreneurial governance has been regarded as powerful in terms of economic achievement.The East Asian Miracle described by The World Bank remains a myth about the combination of entrepreneurial governance and developmental state in the promotion of economic growth.The framework of the ‘developmental state’ is extensively applied at the nation-state level.However, its operation at the local level is not studied in detail.The literature of entrepreneurial governance in China is more concerned with the city and its promotion strategy.The problem of entrepreneurial governance, especially at the lowest level of villages, has not been fully understood.The financial side has not been studied in the literature of entrepreneurial governance.In short, we have seen parallel development of two groups of literature: the one on economic decentralization and urban entrepreneurialism inspired by the Western studies, and the other initially developed in rural China as local corporatism but has been further developed with a focus on property rights and local development and incentives given to local government.It is thus very meaningful to return to rural China to re-examine this thesis and to interrogate the effectiveness of this approach of local entrepreneurialism.This research is part of a larger research project commissioned by the municipality of Dongguan to examine economic restructuring and rural financial changes in Dongguan.The main project lasted two years from 2012 to 2013, involving 12 Professors and Associated Professors, 11 doctoral students, 33 Master’s students, and 10 undergraduates.Extensive fieldwork was conducted at the city, town, village, and unit levels, including formal meetings, semi-structured interviews, and a questionnaire survey.In total the team visited 26 government departments, 12 towns and industrial parks, and 5 case villages.Data obtained from individual villages and units were compiled in the final report.This research project led to a major multi-volume report submitted 
to the government of Dongguan.Because the main project is an applied policy project that covers a wide range of policies, it is not possible to include detailed descriptive materials here.The current paper only reports key findings from the case villages in which the data were compiled from fieldwork in the villages.The focus of the paper is on the condition of village finance.The data are not typically available in statistical yearbooks, as the information is outside the scope of conventional city-based statistics.The paper aims to identify the scale of fiscal deficit and the composition of income and expenditure at the village level.The complex operation of village finance should be left to other studies in the future.Some quotations are used for the purpose of illustration rather than as systematic evidence.Dongguan represents a typical phenomenon of bottom-up urbanization in China.It shares some similarities with other regions such as southern Jiangsu, such as ‘urbanization from below’, but has a more decentralized pattern of growth.There are four levels of governance: the municipality, towns, villages and units.A unique feature of governance in Dongguan is the absence of district or county, typically between the municipality and street offices or towns.To some extent, this represents a streamlined structure of governance, with development focusing on towns, villages, and units.The local governments at the town and village levels participate directly in economic development and form close coalitions with enterprises at their respective levels.This entrepreneurial governance is quite versatile and flexible, and has played an important role in local economic growth.In this section, we review the development stages of entrepreneurial governance.Village local corporatism has experienced three stages of development.The first stage was from 1978 to 1995.In the first stage, Dongguan developed a village economy based on village enterprises that used spare village halls to attract inward investment.Due to geographical proximity to Hong Kong and extensive social ties, Dongguan pioneered the so-called ‘three supplies and one compensation’, which literally refers to supplies of raw materials, equipment, and design from overseas, while villages gain compensation for the use of local labor, land, and other logistical costs.This is essentially a low risk and low value-added processing economy, in which villages gain processing fees.In this period, the local economy grew at an annual growth rate of 25.6%.Dongguan was thus industrialized but the driving force of industrialization came from overseas investment.The second stage ran from 1996 to 2007.This stage saw significant development of the rental economy.The municipal government promulgated a series of regulations on land administration, rural asset management, and rental housing management.Instead of using spare village halls, villages managed to obtain loans from banks to develop factory buildings to rent out to investors.At the same time, villagers expanded their housing to accommodate rural migrant workers and received significant rental income from housing.The rental economy occupied a significant proportion of village income, typically 70%.Another feature of this period was the development of specialized towns, for example, Houjie town specialized in shoe making, Humen in cloth, Chang’an in metal models, Dalang in the wool textile industry, and Dalingshan in furniture.In contrast, township and village enterprises declined because of the 
problem of ambiguous property rights leading to ineffective management. The profit rate of rural collectives declined from 5.9% in 1985 to only 1.5% in 2006. Most TVEs went bankrupt. In this stage, the average annual growth rate was still 21.7%. The second stage saw the strengthening of local governance. The city government strengthened land and rental management. The town government strove to attract inward investment. Industrial clusters and industrial parks started to appear, while TVEs collapsed. The village and unit levels experienced a boom in land development, either renting out land to investors or borrowing loans to build standard factory buildings. The rental economy was the main feature of this period. The third stage runs from 2008 to the present. In this stage, the rental economy began to experience severe financial difficulties. Influenced by the global financial crisis, the annual growth rate of the Dongguan economy fell to 8.5%. The export-oriented economy, which was heavily dependent on the input of land, labor and energy, reached its limit. This triggered a process of economic restructuring. The provincial government accordingly proposed to relocate migrant workers and industries to inner regions of the province, under the policy of so-called 'double relocations'. In addition, the provincial government developed a policy of regenerating old villages, old factories and old urban neighborhoods, or the so-called 'three olds regeneration'. The city government proposed to abolish units, consolidate villages and convert them into urban administrative units, and reform rural collective shareholder companies. The city and town governments also raised the requirements for inward investment and promoted the development of industrial parks. The rental economy of villages and units was severely affected by the global financial crisis. With the collapse of factories and the decrease in the number of migrant workers, the demand for both industrial properties and housing declined. The rental economy encountered serious difficulties. Rents of factory buildings declined from 15 Yuan per square meter to about 6 Yuan per square meter, while the vacancy rate increased. This period is characterized by economic restructuring and changing governance. Village finances faced increasing debt and found it difficult to fund public services. The city government strove to up-scale village governance to the city and town levels. To sum up, the development of entrepreneurial governance in Dongguan has experienced three stages. The first stage was an initial industrialization with overseas investment; in this stage entrepreneurial governance supported low value-added industries. The second stage was the development of rental economies; with the collapse of indigenous rural industries, the villages had to rely on the rental economy, and entrepreneurial governance was embodied in the formation of village collectives. The third stage is an economic crisis leading to a fiscal crisis in rural villages after the global financial crisis. The transformation of export-oriented production plus the decline of rental income seriously damaged the finances of the rural collectives that were the backbone of entrepreneurial governance. In the next three sections, we will follow this process to examine village debt problems. As an export-oriented economy, Dongguan relied heavily on rental income from industrial properties leased to overseas investors and private rentals for rural migrants. In the more developed parts of Dongguan, rural industries have been developed. But in less developed areas, the economies depend almost exclusively on rental incomes. The income of rural collectives comes from the following sources: profit from village-owned enterprises, rental income from factory buildings, payments from contracting agricultural work to rural migrants from other regions, and housing rent from rural migrants. The major source is rental income from industrial properties. In 2011, property rents and management fees in Dongguan reached 9.672 billion Yuan, accounting for 70% of the rural collectives' income, while direct profit from rural industries was only 1.338 billion Yuan, accounting for 9% of the income of rural collectives. This study conducted a survey in four case villages. The result is presented in Table 1, which shows the distribution of village incomes. It can be seen that the income sources vary. Some more developed villages own village enterprises and thus have direct income from these enterprises, and rental income occupies a relatively lower proportion. For example, Nance village of Humen receives 38.1% of its income directly from production profits, 31.3% from rental income, and 3% from government subsidies. Nance's condition is better because it receives more income from village industries. For less developed villages, rental income is the primary source of income. For example, for Chenwu and Chiling in Houjie town, rental incomes accounted for 86% and 82%, respectively. The poorest villages, such as Xiasha village in Shipai, could not even gain sufficient rental income, and subsidies from upper-level governments became an important source, accounting for 27.5% of income. From the table, it can also be seen that incomes from management fees and operational incomes are limited. They usually cover part of an operation but cannot fund the development cost. In other words, the development of infrastructure and services relies on cross-subsidies from renting out land and properties to investors and housing to the rural migrant population. The villages in Dongguan began to experience an economic crisis because the rental economy was particularly vulnerable to economic restructuring and fluctuations. The rental economy accommodated smaller and more labor-intensive enterprises in the past, but recently these small enterprises have begun to move out under the pressure of rising labor costs and land prices. An official from the Bureau of Finance said, "Now the cost of building factories is higher, and the rent cannot even cover the operational cost, because many factories have relocated elsewhere and there is less demand for factory buildings". Over the past twenty years of economic development, overseas investors have gradually raised their demands for space and begun to require higher-quality space. The original development pattern was 'chaotic'; according to an official in the planning bureau, "After the development of factories, we built roads and electricity and water facilities, causing the mixture of villages and factories; the enterprises attracted by villages and units were quite low ranked." The rudimentary rural rental economy could not supply the new quality of space. Because of the lack of coordination, space under village management became fragmented, as many villages tried to develop their own rental housing and rental industrial properties. The village economy lacked the capacity to attract larger and more capital-intensive investment. Such a model of development depends upon small enterprises as the customers of their rental
properties.Before the global financial crisis, Dongguan had already experienced a declining rental economy.The global financial crisis has accelerated the relocation of small labor-intensive enterprises and reduced rental incomes.Moreover, under the ‘three supplies and one compensation’ model, the villages only obtain processing fees.However, when smaller enterprises grow into larger ones, or larger enterprises that are wholly foreign-owned arrive, they stop paying fees to the local village, because they now pay industrial and commercial taxes to the city.Hence, the processes of economic restructuring and structural upgrading both had negative impacts on village income.After the global financial crisis in 2008, Dongguan eventually saw the creation of a rural economic crisis.Although rental income has been declining in recent years in Dongguan, its public service spending has been increasing.In 2011, rural collectives spent 5.751 billion Yuan on public services, accounting for 58% of total expenditure.Other expenditures were on public security, sanitation, and education, which are the major items in public service spending.Public security spending was 984 million Yuan, accounting for 10% of the total expenditure.The Party Secretary of a village in Zhongtang Town said, “Our village has to bear the salaries of nearly 300 staff, including 150 security guards and 40 office workers”.It can be seen from these figures that the rural collectives bore significant public service spending.Rural collectives undertook the costs of road construction, schools, environmental protection, public security, education, sanitation, family planning, social security, and support for veterans and the families of deceased military persons.These costs accounted for 44% of the net income of rural collectives between 2004 and 2007.In Humen Town, the pressure on village finance is very high; according to the head of a village in the town, “I was the head of the enterprise office.In the 1990s, there were only three to four staff in the office.But now there are more than ten people.The payment for security is over 2 million Yuan, and the sanitation cost increased from about 100,000 to 900,000 Yuan.The cost is really high because under each branch of the upper government we have to deploy staff to cope with their requirement”.The problem is that the city has not developed a public welfare system along with economic development, but still relies on income drawn from the contributions of rural collectives.The increase in public spending, however, did not slow down.In 2011, spending on public services at the village and unit levels increased by 34.3% compared with 2008.However, net income grew by only 1.7% in the same period.Spending on public services increased from 56% of net income to 73% of net income, jeopardizing the upgrading of rural economies and their future capacity for providing public services to residents.To understand the significant role of village collectives in public service spending, one must examine the configuration of village governance.The organizational structure of village governance is characterized by the so-called ‘one group of cadres but three branches’, which consists of the “villagers’ committee”, the party division, and the village collectives.While the village collectives have some autonomy in economic decision-making, they have to bear the cost of social expenditure.According to the party secretary of BL town1, “The management of village collectives is actually under the villagers’ 
committee.So the head of the villagers’ committee needs to look after both enterprises and the communities.The shareholders’ committee actually could not monitor the operation of the enterprises effectively”.In order to respond to the requests of various government departments at the upper level, the village had to set up a dozen divisions following the structure of the upper government.This has increased administration costs.Ironically, the original purpose was to reduce the burden of welfare expenditure under entrepreneurial governance.Economic devolution and governance decentralization have led to the shift of the burden to the bottom level of government.Through the creation of entrepreneurial governance, the government downloaded the responsibility for welfare provision to the village.According to the Chinese Constitution, the villagers’ committee is not a level of government but rather a mass organization under the principles of “self-management, self-education and self-services.,But they have now become de facto the lowest administrative unit.Under entrepreneurial governance, the state and enterprises at the lowest level of governance form a close relationship.In the process of rapid urbanization, the villages have to take responsibility for developing infrastructure, providing services, and managing security to cope with the increasing number of migrants in the village.Interestingly, the increase in public service expenditure has also been driven by the recent ‘democratization’ of governance and village elections.In order to get elected, village cadres promise more dividends to villagers during elections.The result is increasing dividends, regardless of the performance of village collectives.The second largest item in village collectives’ expenditure was dividends for shareholders, which reached 3.563 billion Yuan, accounting for 38% of total expenditure.The administrative reorganization of village governance did not enhance the performance of rural enterprises but rather had a detrimental effect on their viability.In order to fulfill the promise of increasing dividends, some villages even borrow money.The village collective has company status and therefore can borrow money based on the expected rental income from the banks.The expenditure of the rural collective economy is mainly on local public welfare and the direct operational costs of rural enterprises.We have studied village finance in detail through fieldwork.To take the example of Nance village of Humen, about 35.5% of expenditure was for the construction of factory buildings and maintenance, while 34.2% was used for providing local welfare, including security guards, garbage collection, and school buses, and 7.7% was allocated as dividends for shareholders.Other costs included payments for social insurance, public security fees to the government, land use fees paid to the land administration bureau, and industrial and commercial taxes to the tax bureau.Table 2 shows the detailed composition of expenditure in selected villages interviewed in our study.From Table 2, it can be seen that some villages bear very high social expenditure; for example, for the poorer village, Xiasha in Shipai town, 55.7% of expenditure was on public services.The village has an income of 6.8 million Yuan, but spent 8.0 million Yuan in 2011, creating a deficit of 1.2 million Yuan.Compared with other villages, this is a relatively poor village, which bears a much higher burden of public service spending.Because of the close connection between the 
local state and enterprises, rural enterprises could not operate as fully independent agents and were subject to 'predatory' claims from the collective governance body. Our study reveals that entrepreneurial governance has not developed a proper funding source for public services and has thus laid the foundation for the 'predatory' behavior of the local state. The governance of the village cannot effectively resist the demand for increasing dividends and spending on services, regardless of the viability of the collectives. Just as large cities use land development as a source to fund infrastructure development and fill the fiscal gap, villages used their land assets and rental properties to borrow money from the banks, creating a hidden risk for public finance. There are four levels of governance in Dongguan: municipality, towns, villages and units; the debt issue in Dongguan mainly occurs at the village and unit levels. This is because the municipality of Dongguan is relatively weak, lacks the capacity for organizing large development projects, and therefore does not incur large quantities of borrowing. This was quite different from the cities of Suzhou, Kunshan and Wuxi in southern Jiangsu, where the municipalities organize large-scale land development to accommodate foreign investment. The Sino-Singapore Industrial Park and Suzhou New District are typical mega urban projects. On the other hand, Dongguan managed to download responsibility for social services to the villages and thereby reduced its financial burden. As a result, villages bear an excessive burden of social services and incur debts when income declines. To understand the structure of villages and units, it is useful to distinguish these two levels and their corresponding economic entities. In Dongguan, there were 557 'joint economic co-operatives' (JECs) at the village level in 2011. As suggested by the name, these JECs are joined with 2526 'economic co-operatives' (ECs) at the unit level. The unit is a natural village representing a physical cluster of rural households. In the socialist period, this was the smallest level, corresponding to the 'production unit'. At the unit level, there is no formal administrative structure. Several natural villages can form an 'administrative village', which bears some administrative functions to manage social services. In Guangdong, the administrative village is often simply referred to as the 'village'. In this paper, JECs and ECs are also referred to as 'rural collectives' because they represent the collective economy of the rural area. The financial problems discussed in this paper mainly refer to these levels. Although the town is the lowest administrative level and the village is not a level of government according to the Chinese Constitution, the village is a de facto layer of government in reality because of its degree of formalization and its administrative duties. As discussed earlier, despite rapid growth in the 1980s and 1990s, village debt has become a serious problem in Dongguan. The problem started in 2000. Total debt at the village and unit levels increased from 1.31 billion Yuan in 2000 to 2.80 billion Yuan in 2011. Accompanying the fast growth of village assets was the accumulation of debt. In the period from 2000 to 2011, the average annual rate of debt increase was 7.1%. The rapid growth of village debt was in fact already occurring in 2004, at a rate of 15%. But in the same year village assets also increased by 15.5%. As a result, the debt issue did not present as a serious problem at that
time.In other words, the rapid growth of the rural economy in the 2000s was driven by village debt.This is a quite striking finding of our field research.At the village level alone, about 60% of rural collectives saw their expenditure exceeding their income.The deficit increased by 11.4% to 868 million Yuan in 2011.This figure represents an increase of 5.14 times from 2000.In terms of the spatial distribution of debt, the pattern is uneven.The poorer villages incur more serious debt problems, and as discussed earlier they are more dependent upon rental income without their own industrial capacities.Fig. 3 shows the ratio of village debt to its asset value.The figure shows an aggregated level of debt ratio by the towns.It can be seen that the central area of Dongguan shows a lower debt ratio because it is relatively well developed.The higher debt ratio is seen in the towns adjacent to Shenzhen, such as Zhangmutou and Qingxi, perhaps reflecting an earlier stage of industrialization.During that time, investors were smaller, and when the land was used up development was pushed more towards the inner region.The figure also shows that village debt is a widespread problem, not limited to one or two towns in Dongguan.To investigate the situation of debt at the unit level, we selected Nance village in Humen town for an in-depth study.The village is actually a better-off case in Dongguan.From Table 3, we can see that the asset values of units were all positive.In 2005, three units in Nance village saw their debt exceeding their assets, causing negative equity.But the debt issue became more serious.By 2010, all the units in the village of Nance had negative asset values.The unit of Chongyuan had the most serious debt problem, with debt exceeding assets by more than 3.3 million Yuan.The lowest debt was in Sanjiang, but it still owed 370,000 Yuan in 2011.In sum, the figure and table both show that village debt has become a serious problem and is widespread across villages in the municipal region of Dongguan.This paper reveals an alarming problem—village debt—in a fast-growing industrializing region of China.Against the background of remarkable economic growth is the accumulation of debt.The crisis experienced in Dongguan after the global financial crisis in 2008 initially appeared as an economic crisis, but then became a fiscal crisis.The crisis has its structural causes in entrepreneurial governance.The fiscal crisis at the village level is less well known and has not been fully understood in the literature of Chinese urban development.In the literature of entrepreneurial governance, the critique is mainly about the negative social impact, the dominance of capital in the direction of development and the weakening of democratic politics.This paper reveals the problematic side of entrepreneurial governance as a development approach.Under entrepreneurial governance, rural collectives borrowed money from the bank for rental development as well as the provision of public services.The enterprise plays a significant role in local governance.The tight association of enterprises and the state in rural village governance is a feature of this entrepreneurial governance, while in more advanced western economies entrepreneurial governance is less directly seen as the combination of state and market but rather as public and private partnership.However, although the borrower is not strictly a level of government, because of this unique combination of rural collectives and government, failing rural enterprises 
transformed an economic crisis into a fiscal crisis, which will have severe impacts on public services if these enterprises go bankrupt. Village debt exposes the limits of 'entrepreneurial governance' because the state does not provide public services through public finance. In essence, under entrepreneurial governance, the government downloaded responsibility for welfare provision to villages, leaving them to fund their own public services. While policy makers attributed economic success to entrepreneurial governance, this paper shows that there is a severe local fiscal crisis. One original finding from this study is that the entrepreneurial local state not only restrains the provision of social welfare but is also not very successful as a development model, because it is susceptible to the impacts and financial risks of economic restructuring. This study has important policy implications. The government of Guangdong did not attribute the fiscal crisis to entrepreneurial governance but rather, pragmatically, to the 'burden of welfare'. The former provincial party secretary Wang Yang referred to the financial crisis of Dongguan as a replication of the Greek crisis in China. He said, "[If it] continues to borrow money to provide promised 'welfare', it will become Dongguan's Greece". But examining village debt shows that Dongguan did not replicate a welfare state model. The development of the Pearl River Delta to some extent exemplifies the Chinese market-oriented approach in which the local state formed close relations with market agents. The welfare system has not been developed despite phenomenal economic growth. The landscape of the Pearl River Delta after three decades of economic development essentially remained the same: 'manufacturing towns' characterized by local capitalism and the deprivation of social welfare. The crisis is not due to the welfare cost of migrant workers. As shown in this paper, the privileged 'rural rentier class' managed to capture the dividends of rural collectives. This study shows that the fiscal crisis is derived from entrepreneurial governance, represented in rural areas in the early development stage as 'local state corporatism' and later as more widespread urban entrepreneurialism. This study suggests that village debt is precisely an outcome of entrepreneurial governance rather than of 'excessive welfare'. The tension between different levels of government has intensified this crisis. Therefore, to tackle the fiscal crisis, the government has to develop proper public finances, moving beyond dependence on rental economies and the use of rural collectives' income to fund public services.
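The headline debt figures reported above can be cross-checked with a few lines of arithmetic. The sketch below is a minimal illustration, assuming the reported 1.31 and 2.80 billion Yuan totals are directly comparable values for 2000 and 2011 and that the quoted growth rates compound annually; the variable names are ours, not the authors'.

```python
# Cross-check of the village-debt and spending figures reported in the text.

debt_2000 = 1.31e9   # Yuan, village- and unit-level debt in 2000
debt_2011 = 2.80e9   # Yuan, village- and unit-level debt in 2011
years = 2011 - 2000

cagr = (debt_2011 / debt_2000) ** (1 / years) - 1
print(f"Implied average annual debt growth: {cagr:.1%}")   # ~7.1%, matching the text

# Public-service spending as a share of net income (2008 -> 2011):
# spending grew by 34.3% while net income grew by only 1.7%.
share_2008 = 0.56
share_2011 = share_2008 * 1.343 / 1.017
print(f"Implied 2011 spending share of net income: {share_2011:.0%}")  # ~74%, close to the reported 73%
```

The small gap between the implied 74% and the reported 73% is consistent with rounding in the quoted 56% baseline, so the figures in the text hang together.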
This paper analyzes the recent fiscal crisis among villages in the city of Dongguan. The city has been an exemplar of export-oriented growth in China. Rapid economic development has been attributed to local state entrepreneurial governance based on a close relationship between the local state and enterprises. However, this development approach has led to a severe fiscal crisis, especially at the village level, due to declining rental incomes, ineffective village governance and a heavy burden of public service expenditure following the global financial crisis. This paper examines the configuration of local governance and how an economic crisis has evolved into a public finance crisis in the city. Until now the limits of entrepreneurial governance have been understood only with regard to negative social impacts. This paper reveals the limits of a developmental approach.
24
DNA demethylation during Chrysanthemum floral transition following short-day treatment
The authors declare that there is no conflict of interest. DNA methylation is a common epigenetic phenomenon involving the transfer of a methyl group from S-adenosylmethionine to a specific location on the adenine purine ring or cytosine pyrimidine ring of DNA, catalyzed by a methyltransferase. One of the most important mechanisms is the methylation of the C-5 carbon of cytosine in genomic DNA, producing 5-methylcytosine. As an important epigenetic modification, the functional loss of DNA methylation can have an adverse effect on plant growth, because of the key role of DNA methylation in the growth and development of plants, genome maintenance, somaclonal variation, foreign gene defense, and endogenous gene expression. The transition from vegetative growth to reproductive growth is an important event in plant development. The regulatory pathways include the vernalization, photoperiod, autonomous, and gibberellin pathways. The photoperiod is an important inductive factor influencing flowering time. Many plants exhibit photoperiodism and respond to changes in the length of light and dark periods. Depending on their response to night length, plants can be divided into long-day plants, short-day (SD) plants, and intermediate-day plants. Mature leaves sense the change in day length and produce a substance that stimulates flowering, ultimately initiating flower bud differentiation and regulating flowering after long-distance transport from the leaves to the shoot tips. Thus, the gene expression of mature leaves is crucial for flowering. The Flowering Locus T (FT) protein is an important component of "florigen," which was first identified in Arabidopsis. This protein integrates the signals of different developmental pathways, including the photoperiod, vernalization, and autonomous pathways. The signals of these pathways are key to floral development. In recent years, the relationship between DNA methylation and the regulation of flowering, and that between DNA methylation and the photoperiod, have been elucidated. Flowering processes induced by photoperiodic changes have been shown to be accompanied by changes in DNA methylation in plants such as purple perilla and Silene armeria. The DNA methylation rate of individual flowering plants is considerably lower than that of nonflowering plants within the same cluster of Bambusa multiplex canes. The use of zebularine, a DNA methylation inhibitor, initiated flowering in the SD plant Petunia hybrida without any inductive SD treatment. This indicated that changes in DNA methylation are closely related to the expression of flowering genes during the photoperiod-induced flowering process. The use of 5-azacytidine initiates flowering in the SD plant Chrysanthemum 9–16 d before the control. As mature leaves play a key role in flowering, the DNA methylation changes in leaves during SD-induced flowering must be investigated. Chrysanthemum, one of the most economically important ornamental flowers and a typical SD plant, is often induced to bloom by SD treatment for commercial purposes. In this study, methylation-sensitive amplification polymorphism (MSAP) and high-performance liquid chromatography (HPLC) were used to detect the variation in DNA methylation of mature leaves during SD-induced flowering, using two Chrysanthemum cultivars with different flowering times as the study materials. The early-flowering Chrysanthemum cultivar "He Hua Xian Zi" and the late-flowering cultivar "Qiu Shui Chang Liu" were used in this study. The plants were cultivated at the Chrysanthemum Institute of Kaifeng
City from 2014 to 2015. Cuttings were taken on 18 May 2014, and the rooted cuttings were planted on 7 July. The plants were grown individually in pots in a medium composed of refuse soil, grass carbon, and chicken manure. The plants were watered daily with tap water and given an inorganic nutrient solution once weekly. SD treatment, consisting of 7-h light and 17-h dark periods, was applied from 1 August to 13 September. At the time of treatment, the plants were about 10 cm tall. Plants growing under natural conditions were chosen as the control group, and 20 pots were used per treatment. The natural day length gradually shortened over the course of the experiment, but it remained longer than the treatment photoperiod throughout. The control group was grown under natural climatic conditions, whereas the SD treatment was applied in a shading shed. The stages of the flowering process were defined as follows: (1) the date of capitulum bud appearance, on which the capitulum bud was first visible to the naked eye; (2) the time for the capitulum bud to appear, that is, the number of days from planting to the date of capitulum bud appearance; (3) the date of early blooming, on which the first whorl of ray florets was first visible; (4) the time to early blooming, that is, the number of days from planting to the date of early blooming; and (5) the capitulum development time, that is, the number of days from capitulum bud appearance to early blooming. Genomic DNA was extracted from mature leaf samples collected every 7 d at 12:00 from 1 August to 18 October. Healthy leaves were chosen for DNA extraction, according to the method proposed by Wang et al. The DNA methylation level was determined using MSAP and HPLC. MSAP was performed as proposed by Xiong et al. with some modifications. The DNA samples were digested sequentially with EcoRI + MspI and EcoRI + HpaII. The digestion reaction was performed in a volume of 20 μL comprising 300 ng of the DNA template and 10 U of the restriction enzyme. This mixture was then incubated at 37°C for 7 h. The ligation reaction was performed in a volume of 30 μL consisting of 20 μL of the enzyme digestion products, 2 U of T4 ligase, 5 pmol of the EcoRI adapter, and 50 pmol of the HpaII/MspI adapter. This mixture was incubated overnight at 16°C. The diluted digestion–ligation mixture was amplified using HpaII/MspI and EcoRI pre-selective primers with the following protocol: 94°C for 2 min; 26 cycles of 94°C for 30 s, 56°C for 1 min, and 72°C for 1 min; followed by 72°C for 10 min. The pre-selective polymerase chain reaction (PCR) products were diluted tenfold and were then amplified using HpaII/MspI and EcoRI selective primers. The primers were synthesized by SANGON. The selective PCR cycling parameters were as follows: 94°C for 5 min; 13 touchdown cycles of 94°C for 30 s, 67.5°C for 1 min, and 72°C for 1 min, with the annealing temperature decreased by 0.7°C per cycle; then 23 cycles of 94°C for 30 s, 56°C for 1 min, and 72°C for 1 min, with a final extension of 10 min at 72°C. Each PCR reaction was replicated once. Further, two aliquots of each reaction were electrophoresed independently on denaturing polyacrylamide gels for 2 h at 65 W. After silver staining, reproducible and clear bands were scored. For HPLC digestion of DNA, the method described by Johnston et al.
was followed. Each 10-μL DNA sample was incubated in an ice bath for 2 min and then immediately placed in a boiling water bath for 5 min. Nuclease P1, 4 μL of ZnSO4, and 3 μL of ultrapure water were added. This mixture was then incubated overnight at 37°C. Then, 0.75 μL of alkaline phosphatase and 1.25 μL of Tris–HCl were added, and the solution was incubated at 37°C for 2 h. After centrifugation at 1205 ×g for 3 min at room temperature, the supernatant was transferred to another centrifuge tube and then filtered through a 0.45-μm microporous membrane. It was then analyzed with a Waters 1515 HPLC pump. The chromatographic conditions were as follows: a flow rate of 0.5 ml/min; a pH of 3.88; a column temperature of 30°C; an ultraviolet detector; a sample quantity of 10 μL; a wavelength of 280 nm; a mobile phase containing 7.0 mol/L heptyl alkyl sulfonate, 0.2% triethylamine, and 10% methanol; and a C18 column. The data were analyzed using SPSS 19 software. After SD treatment for 28 and 43 d, the plant heights of the early-flowering cultivar "He Hua Xian Zi" were significantly shorter than those of the control group. For the late-flowering cultivar "Qiu Shui Chang Liu," no significant difference in plant height from the control group was noted after SD treatment for 28 d, although plants treated with SD for 43 d were significantly shorter than the control. After SD treatment, the timings of capitulum bud appearance and early blooming of the early-flowering cultivar "He Hua Xian Zi" in the control group and the SD-treated group were 60 and 88 d, and 48 and 67 d after planting, respectively. The corresponding timings for the late-flowering cultivar "Qiu Shui Chang Liu" in the control and SD-treated groups were 77 and 105 d, and 59 and 86 d after planting, respectively. Therefore, after SD treatment, the timings of capitulum bud appearance and early blooming of the two cultivars were significantly advanced by 11 and 12 d, and 18 and 19 d, respectively. The timings of capitulum bud appearance and early blooming in the SD-treated group were earlier in the early-flowering cultivar than in the late-flowering cultivar. The period of capitulum bud development was significantly shortened, by 9 d, in the SD-treated early-flowering cultivar, whereas this period was not significantly affected by SD treatment in the late-flowering cultivar. The timing of capitulum bud development in the control group did not differ between cultivars. However, the period of capitulum bud development for the early-flowering cultivar was 8 d shorter than that for the late-flowering cultivar. Six pairs of MSAP primers were used to detect the variation in DNA methylation during the flowering period. The banding patterns can be divided into four classes: type I bands were present in both profiles, type II bands were present in EcoRI/MspI profiles alone, type III bands were present in EcoRI/HpaII profiles only, and type IV bands were absent from both profiles. Reduced DNA methylation was noted in both cultivars during the floral transition process as measured by MSAP. This finding was consistent with the results of the HPLC analysis. The total DNA methylation percentage of the two cultivars as detected by HPLC was slightly higher than that detected by MSAP. With the gradual shortening of the natural day length during the experiment, the DNA methylation levels of the control group also showed a gradual decline. Over the entire floral transition period, the DNA methylation percentage of the SD-treated
group was lower than that of the control group.The range of variation in DNA methylation of the early-flowering cultivar was larger than that of the late-flowering cultivar.For the early-flowering cultivar, the DNA methylation rates were 42.2–51.3% before the capitulum bud appeared and 30.5–44.5% after.For the late-flowering cultivar, the corresponding DNA methylation rates were 43.5–56% and 37.2–44.9%.The DNA methylation percentage of the two cultivars after SD treatment was significantly lower than that of the control.The six primer combinations used generated 149 type I, 72 type II, and 58 type III fragments in the control early-flowering cultivar, and 168, 72, and 58 in the SD-treated early-flowering cultivar, respectively.The equivalent fragments in the control late-flowering cultivar were 145, 74, and 53, and 158, 64, and 48 in the SD-treated late-flowering cultivar, respectively.The mean number of fragments produced by each pair of primers in the control group was similar to that reported by Wang .The DNA methylation rate of the SD-treated early-flowering cultivar decreased by 17.48% compared with the control at the early flowering stage.The DNA methylation rate of the SD-treated late-flowering cultivar decreased by 11.32% compared with the control at the early flowering stage.It may be necessary that DNA methylation decrease to a certain critical level for flower induction.DNA methylation decreased to a greater extent in the early-flowering cultivar than in the late-flowering cultivar, which may promote the expression of flowering genes, ultimately resulting in early flowering.Some studies have shown that the effective onset of photoperiodic regulation of the floral transition depends on the end of the juvenile stage .Juveniles are unresponsive to the photoperiod and induction of flowering, but plants become responsive once they attain maturity, leading to flower bud differentiation.The process of flower bud differentiation in flowering plants is divided into two general stages: the inflorescence differentiation stage and the floret differentiation stage.These stages can be analyzed via nine periods.Generally, cymules can be distinguished in the final stage of floret primordia development.In chamomile, the cymule is distinguishable usually on the 23rd day of SD treatment .In the present study, the capitulum bud appeared on the 23rd day of treatment for the early-flowering cultivar, but on the 33rd day for the late-flowering cultivar.The latter might require a certain SD treatment period to transition from the juvenile to mature stage, and in turn undergo flower bud differentiation.During the flowering process in chamomile, the CIFT gene is increasingly expressed once flower bud differentiation has been initiated .In the present study, the significant reduction in DNA methylation percentage begins at the initial flower bud differentiation stage.This implied that the reduction in DNA methylation is associated with floral bud differentiation.During plant growth and developmental processes, changes in DNA methylation play a key role in blooming, regulation of gene expression for vital functions, genomic defense, cell differentiation, and development .Hypermethylation of the promoter and coding region of a gene can inhibit the binding of transcription factor complexes.This in turn inhibits gene expression, resulting in gene silencing; furthermore, demethylation promotes gene expression.Methylation of the FT promoter causes gene silencing and late flowering in Arabidopsis .The present 
study showed decreased DNA methylation of mature leaves during Chrysanthemum flower development induced by SD. This is consistent with previous findings of promoted flowering in Chrysanthemum via a reduction in the DNA methylation level on applying a DNA methylation inhibitor. DNA demethylation may promote and enhance the expression of the FT gene. The signal is transmitted to the meristem, which may in turn induce the expression of more flowering genes in the meristem. Thus, the floral transition is initiated. In our recent study, plants treated with 5-azacytidine showed high levels of FT gene expression in leaves, which confirms our hypothesis. Further research into this mechanism is needed. The early- and late-flowering cultivars showed different ranges of variation in DNA methylation, which may contribute to their different initial flowering times. DNA methylation patterns are determined by both DNA methyltransferases and demethylases. It remains to be elucidated whether the reduction in DNA methylation level is due to a decrease in DNA methyltransferase expression or an increase in demethylase expression during photoperiod-induced flowering in Chrysanthemum. Thus, the expression of DNA methyltransferases and demethylases during this process has been investigated in some studies. For instance, in a recent study, Okello et al. showed the significant effect of light on plant cell division, replication, and multiplication. The photoperiod may affect the expression of DNA methyltransferase genes by inducing cyclin expression and a reduction in DNA methylation. In addition, in the plant genome, the CAG, CTG, and CCG sites are often methylated. However, the MSAP method can only detect the methylation status of CG sites and part of the CCG sites; it cannot detect double-stranded internal and external cytosine methylation. This accounts for the slightly higher total genomic methylation level detected by HPLC than that measured by MSAP in this study. This work was financially supported by the National Natural Science Foundation of China.
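For readers unfamiliar with MSAP scoring, the band-type logic described in the methods above (types I–IV derived from the EcoRI/HpaII and EcoRI/MspI profiles) can be expressed in a few lines of code. The sketch below is illustrative only: the function names are ours, and the methylation percentage uses one common convention, (type II + type III) / (type I + type II + type III); the paper does not state the exact formula it applied, so the number produced here is not claimed to reproduce its reported values.

```python
# Illustrative MSAP band scoring following the four band types described in the text:
#   type I   - band present in both EcoRI/HpaII and EcoRI/MspI profiles
#   type II  - band present in the EcoRI/MspI profile only
#   type III - band present in the EcoRI/HpaII profile only
#   type IV  - band absent from both profiles
# The methylation-percentage formula below is one common convention, not
# necessarily the exact calculation used by the authors.

def band_type(present_hpaii: bool, present_mspi: bool) -> str:
    if present_hpaii and present_mspi:
        return "I"
    if present_mspi:
        return "II"
    if present_hpaii:
        return "III"
    return "IV"

def methylation_percent(n_type1: int, n_type2: int, n_type3: int) -> float:
    """Methylated fragments (types II + III) over all scored fragments (I + II + III)."""
    return 100.0 * (n_type2 + n_type3) / (n_type1 + n_type2 + n_type3)

# Example with the fragment counts quoted for the control early-flowering cultivar
# (149 type I, 72 type II, 58 type III).
print(f"{methylation_percent(149, 72, 58):.1f}%")
```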
Background: Analytical techniques such as methylation-sensitive amplification polymorphism and high-performance liquid chromatography were used to detect variation in DNA methylation of mature Chrysanthemum leaves during the floral transition induced by short-day (SD) treatment. Results: For both early- and late-flowering cultivars, the time from the date of planting to the appearance of the capitulum bud and early blooming were significantly shorter than those of the control. The capitulum development of the early-flowering cultivar was significantly accelerated compared to the control, unlike the late-flowering cultivar. The DNA methylation percentage of leaves was significantly altered during flower development. For the early-flowering cultivar, DNA methylation was 42.2–51.3% before the capitulum bud appeared and 30.5–44.5% after. The respective DNA methylation percentages for the late-flowering cultivar were 43.5–56% and 37.2–44.9%. Conclusions: The DNA methylation percentage of Chrysanthemum leaves decreased significantly during floral development. The decline in DNA methylation was elevated in the early-flowering cultivar compared with the late-flowering cultivar.
25
Development and Validation of a Novel Recurrence Risk Stratification for Initial Non-muscle Invasive Bladder Cancer in Asia
Bladder cancer is the fourth most common malignancy in the West.In Asia, the incidence of bladder cancer is three to four times lower than in Western countries.However, in population-based cancer registries covering 21% of the world's population, only 8% of the registered patients are from Asia.At initial diagnosis, about 85% of patients have non-muscle-invasive bladder cancer, which is managed by transurethral resection of the bladder with or without intravesical therapy.Although the prognosis of NMIBC is generally favourable, 50–80% of patients have intravesical recurrence following TUR-Bt.Adequate risk classification allows clinicians to estimate not only the clinical behaviour of the tumour but also the magnitude of benefit and the need for adjuvant therapy.Accordingly, several risk classifications that combine various parameters to estimate the prognosis of NMIBC patients have been reported.However, these classifications have disadvantages in the clinical setting.For instance, the EORTC risk tables involve complex calculations and an imbalanced prevalence of the individual risk groups.Furthermore, a risk group stratification that takes into account the risk without Bacillus Calmette-Guerin instillation or intravesical instillation of chemotherapy has not been reported.In the present study, we applied the EORTC risk group stratification to predict recurrence and progression in a Japanese cohort.In addition, we developed a novel risk classification of recurrence to easily estimate an NMIBC patient's probability of recurrence after TUR-Bt based on a set of routinely assessed clinical and pathological factors, and validated this novel classification using a separate validation cohort.In this multicentre retrospective cohort study, we analyzed data from patients with NMIBC who underwent initial TUR-Bt at four Juntendo University Hospitals and Teikyo University Hospital between 2000 and 2013.To achieve adequate pathological staging, the complete resection aimed to include the muscle layer of the bladder wall.Random biopsies were taken from normal-appearing mucosal areas in patients with positive urine cytology and without abnormality of the upper and lower urothelial tract.Patients with any of the following were excluded from the analysis: non-urothelial carcinoma histology; follow-up period < 3 months; history of muscle-invasive or metastatic bladder cancer; history of carcinoma of the urethra, prostate, or upper urinary tract; history of local radiation therapy to the pelvis; history of any kind of chemotherapy; or history of previous BCG therapy.The TNM classification was assessed based on the 2002 TNM classification of the International Union Against Cancer.The tumour grade was classified according to the World Health Organization system.This study adhered to the Declaration of Helsinki.The clinicopathological data, including age, sex, pathological T category, pathological grade, tumour size, number of tumours, presence of concomitant CIS, and intravesical therapy, were obtained from each hospital and merged.Each of these variables and its weight adhered to the EORTC scoring system.Standard cystoscopy and urinary cytological examination, computed tomography with contrast medium if possible, and magnetic resonance imaging and MR urography with contrast medium if possible, were performed every three months for five years after TUR-Bt, and subsequently every six months after five years.No patient underwent fluorescence cystoscopy.
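The EORTC-style scoring referred to above assigns each clinicopathological factor a point value and sums these points into a total score, which then determines the risk group. A minimal sketch of such a mapping is given below; the point values, cut-offs and factor names are illustrative placeholders rather than the published EORTC weights.

# Illustrative EORTC-style recurrence scoring. The weights and cut-offs
# below are placeholders for illustration, not the published EORTC values.
RECURRENCE_POINTS = {
    "number_of_tumours": {"single": 0, "2-7": 3, ">=8": 6},
    "tumour_size": {"<3cm": 0, ">=3cm": 3},
    "t_category": {"Ta": 0, "T1": 1},
    "concomitant_cis": {"no": 0, "yes": 1},
    "grade": {"G1": 0, "G2": 1, "G3": 2},
}

def recurrence_score(patient):
    # patient: dict mapping each factor to its observed category
    return sum(RECURRENCE_POINTS[factor][category]
               for factor, category in patient.items())

def recurrence_risk_group(score):
    # Placeholder cut-offs for the four recurrence risk groups
    if score == 0:
        return "low"
    if score <= 4:
        return "intermediate-low"
    if score <= 9:
        return "intermediate-high"
    return "high"

example = {"number_of_tumours": "2-7", "tumour_size": ">=3cm",
           "t_category": "Ta", "concomitant_cis": "no", "grade": "G2"}
print(recurrence_risk_group(recurrence_score(example)))

The novel classification developed later in the study follows the same principle of summed weighted scores, using the factors identified by the Cox regression analysis.Visible recurrences or suspicious lesions were removed by TUR-Bt and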
biopsy.All recurrences were confirmed by histopathology, and progression was defined as the development of muscle-invasive tumour or metastatic disease.Progression was also regarded as recurrence.Patients without an event were censored at the last date of follow-up.RFS and PFS were defined as the period between the initial TUR-Bt and recurrence or progression, respectively.Patients who died from causes other than urothelial tumour were censored at the time of death.To evaluate the EORTC risk group stratification for predicting recurrence and progression in JT cohort, a total recurrence score for each patient was calculated based on the six clinicopathological factors according to the EORTC scoring system for recurrence and progression.Patients were then divided into four risk groups for recurrence and progression.Univariate and multivariate Cox proportional hazards regression models were used to assess the impact of various clinicopathological factors including age, sex, number of tumours, tumour size, pT, grade, concurrent CIS, BCG instillation, and intravesical instillation of chemotherapy on time to recurrence in JT cohort.We developed the novel risk classification system for recurrence in NMIBC patients using the independent recurrence prognostic factors based on Cox proportional hazards regression analysis in the JT cohort.Patients were subdivided into low, intermediate and high risk groups according to their total score.Validation was done on an external data set of 641 patients from Kyorin University Hospital.Inclusion and exclusion criteria of the validation set were the same as the JT cohort.RFS rates were calculated by the Kaplan–Meier method and the difference between each group was evaluated using the log–rank test.Calibration of predictions on the novel risk score was evaluated by comparing the predicted probability at 3 years with the Kaplan–Meier survival probability using the training data.Similar analysis was performed using the external validation data.The performance of the predictions was assessed by plotting actual survival against mean of the predicted risks.All statistical analyses were performed using the JMP Pro-11® and SAS version 9.2.P-values < 0.05 were considered significant and all reported P values were two-sided.This study was conducted in accordance with ethical principles of the Declaration of Helsinki.We registered this study in UMIN clinical trial registry.This analysis was based on 1085 patients with NMIBC treated between 2000 and 2013 at the Juntendo and Teikyo University Hospitals.Excluding recurrent cases, there were 856 patients with initial NMIBC in the JT cohort.The baseline clinical and pathological characteristics of these patients are presented in Table 1.All patients except one were Japanese.The non-Japanese patient was a Caucasian.Median follow-up periods were 31 months.Median age was 71 years old.A 2nd TUR was performed in 134 patients because of T1 or high grade cancer.Based on this, 53 patients were diagnosed with urothelial cancer.Immediate and adjuvant intravesical instillation of chemotherapy were performed in 59 and 56 patients, respectively.Two hundred twenty patients were treated by intravesical instillation of BCG.BCG maintenance therapy was performed in only 21 patients.According to the EORTC recurrence risk classification, the intermediate risk group had predominantly higher number of patients compared with the low and high risk groups.In terms of EORTC progression risk classification, 191 patients were categorized as low-, 341 
with intermediate-, and 324 high risk.Radical cystectomy was performed in nine patients.Four patients died of bladder cancer and one died of an unrelated disease.During the observation period of this study, 342 of the 856 patients experienced intravesical recurrence.Overall, RFS rates of these patients were 60.3% at 2 years, 54.5% at 3 years, and 50.2% at 5 years.The median time to recurrence was 63.0 months.The RFS rates at 5 years were 64.7% for the low risk group, 50.4% for the intermediate-low risk group, 48.5% for the intermediate-high risk group and 44.1% for high risk group.There were no significant differences in RFS rates between groups according to the EORTC recurrence risk classification.Thirty-five patients had disease progression.Overall, PFS rates of the 856 patients were 95.6% at 2 years, 95.0% at 3 years and 94.1% at 5 years.Median PFS rates were incomputable because of the small number of patients with progression.The differences in PFS rates between patients in intermediate and high-low risk group were statistically significant.However, there were no significant difference for the low risk group vs intermediate risk, and high-low risk group vs high-high risk group.Univariate and multivariate Cox proportional hazards regression analysis revealed that the number of tumours, tumour size, BCG instillation, and intravesical instillation of chemotherapy had significant influence on time to recurrence.Other clinical factors including age, sex, pT, grade, concomitant CIS were not statistically significant prognostic factors for recurrence.We could not make an analysis of progression because of the incomputable median PFS.We developed a novel risk classification model for recurrence that classified patients into three groups by using weighted scores of clinicopathological factors identified by a univariate Cox proportional hazards regression analysis in the JT cohort.We showed the 3-year recurrence probability in the JT and validation sets in Table 3.The patients were then divided into three risk groups for recurrence based on their total scores.Calibration of the predictions was evaluated by comparing the predicted probability at 3 years with the Kaplan–Meier survival probability using the training data.The predictions were assessed for calibration accuracy by plotting actual survival against predicted risk.The predicted survival rate from the risk score was well correlated with the actual observation of 5-year survival in the training data.In this novel recurrence risk classification, 280 cases were classified as low risk, 344 as intermediate risk, and 232 as high risk.The RFS rates were 80.2%, 74.1%, 68.4% for the low risk group; 54.8%, 49.5%, 45.8% for intermediate risk; 42.1%, 36.3%, 33.7% for high risk.There were significant differences in 5-year RFS rates between low risk and intermediate risk and intermediate risk and high risk.We included 641 patients who were treated at Kyorin University Hospital as external validation cohort.The baseline clinical and pathological characteristics of the validation cohort are presented in Table 1.Although median age and male-to-female ratio in the validation cohort were similar to the JT cohort, other clinical background including pT, tumour size, number of tumours and pathological characteristics such as grade and concomitant CIS were distinctly different from the JT cohort.In addition, there were fewer 2nd TUR and BCG induction therapy in the validation cohort.On the other hand, adjuvant intravesical instillation of chemotherapy was 
more frequent in the validation cohort.Radical cystectomy was performed in 43 patients.Twenty patients died of cancer and 13 patients died of unrelated disease.Overall, the RFS rates of the 641 patients in the validation cohort were 61.1%, 56.2%, and 50.2% at 2, 3, and 5 years, respectively.The PFS rates for these patients were 91.7%, 91.0%, and 89.1%, respectively.Although the PFS rate in the validation cohort was significantly lower than in the JT cohort, there were no significant differences in the RFS rate between the cohorts.According to this novel recurrence-risk classification, 202 cases, 159 cases, and 280 cases in the validation cohort were classified into the low, intermediate, and high risk groups, respectively.There were significant differences in the 5-year RFS rates between the low risk group and the intermediate risk group and between the intermediate risk and high risk groups.We also evaluated the calibration by comparing the predicted probability at 3 years with the Kaplan–Meier survival probability using the external validation data.Using the validation data set, the predictions were assessed for calibration accuracy by plotting actual survival against predicted risk.The predicted survival rate from the risk score was reasonably well correlated with the actual observation of 3-year survival in the external validation data set.Although the EAU guideline on NMIBC appears to be a useful decision-making clinical tool, one of the issues with the EORTC risk table is the disproportionate prevalence of the individual risk groups.In this study, 87.8% of all patients in the JT cohort were classified into the intermediate risk group according to the EORTC recurrence risk classification.Xu et al. and Sakano et al. showed similar results, with 78.0% and 92.5% of patients classified as intermediate risk, respectively.The low frequency of low risk patients could depend in part on the lower rate of G1 tumours in the present study compared with the EORTC trials.Because other Asian studies have also reported lower rates of G1 tumours, there might be a racial difference in the grade distribution of bladder cancer between Asian and Caucasian populations, similar to the difference between Caucasians and African-Americans.Although some earlier studies reported significant differences in RFS and PFS rates between risk groups, other studies, including ours, found that both recurrence and progression were poorly discriminated by the EORTC tables.Also, in another Japanese cohort study, no significant differences in the RFS rates were found between the low risk and intermediate-low risk groups or between the intermediate-high risk and high risk groups.Regarding PFS rates, we could find no significant differences between the low risk and intermediate risk groups or between the high-low risk and high-high risk groups.We stress that our patient population differed significantly from the population analyzed by the EORTC risk group in terms of geographic location, ethnic background, treatment algorithm and malignant potential.These differences may explain why the EORTC table does not work well in Asian populations.These results underline the need for improving current predictive tools for Asians.In the EORTC series, only 171 patients were treated with BCG.Subsequently, the Spanish Urological Club for Oncological Treatment developed a scoring model that predicted disease recurrence and progression in 1062 patients with NMIBC treated with BCG from four CUETO trials.Although both the EORTC risk tables and the CUETO scoring model were externally validated and recommended by international guidelines, several studies reported that disease recurrence and
progression in NMIBC patients were poorly discriminated by both models.At present, the standard adjuvant therapy in patients with NMIBC is the bladder instillation of BCG or chemotherapy.Therefore, it is very important for patients and physician to decide whether or not to receive the adjuvant instillation therapy.The EORTC risk table is, however, of little use for deciding this.We originally developed the novel risk classification to predict recurrence and progression for Japanese patients with NMIBC to compensate for the shortcomings of the EORTC risk classification.We demonstrated clear and significant differences in RFS rates between the groups.In addition, unlike in EORTC risk classifications, each risk group had almost equal proportion of patients.This three-tiered risk group stratification made it possible to determine recurrence risk and choose the better adjuvant treatment for individual Japanese patients.In addition, we performed the external validation study to confirm the usefulness of this novel recurrence-risk group stratification in Japanese patients.Even though there were clear differences in clinicopathological backgrounds between the original cohort and the validation set, we found an even distribution of patients and significant differences between groups.In addition, the scoring items of this novel risk classification system do not include pathological factors such as pathological T classification, concurrent CIS and malignant grade because multivariate analysis showed no significant differences.Therefore, theoretically, we could use this classification before TUR-Bt to predict prognosis.Furthermore, in contrast to the existing risk classifications, this novel risk classification system is characterized by the scoring items including adjuvant bladder instillation therapy of BCG and chemotherapeutic drugs on the first time.At present, the adjuvant therapy for NMIBC is almost BCG instillation or intravesical instillation of chemotherapy and many guidelines recommended these therapies.As a matter of course, this novel classification is not a tool to determine the indication for adjuvant bladder instillation therapy.However, using this novel classification we can evaluate the recurrence-risk classification with or without these adjuvant intravesical instillation therapies.The limitation of this study is that it was a retrospective analysis.Particularly, our patient cohort included patients treated from 2000 to 2013, which was before immediate post-resection intravesical chemotherapy and maintenance intravesical therapy were widely-accepted practices in Japan.Immediate intravesical instillation of chemotherapy and BCG maintenance intravesical therapy were performed in just 236 and 33 patients, respectively.Also, only 140 patients have had a 2nd TUR performed.Therefore, when BCG and 2nd TUR become widely accepted in Asia as standard therapy, clinical outcome could be different from this present study.Additional factors not included in the EORTC model or in our novel classification such as smoking, micropapillary histology finding, and the depth of invasion into lamina propria could be added to a prognostic model to enhance its usefulness in Asia.Furthermore, in this Japanese study, overall 4.1% patients had disease progression.Compared with 10.7% in EORTC study, this rate is obviously low.In this Japanese cohort, the rate of G1 bladder cancer in JT set and in validation set are clearly lower compared with EORTC study set.Adversely, the rate of G3 bladder cancer in JT set and 
in the validation set is clearly higher than in the EORTC study set.Despite the high rate of high-grade NMIBC, the number of progression events observed in this Japanese study was very low compared with Caucasian studies.We cannot rule out the possibility that bladder cancer in Asian patients has a quite different biology from that in Western cohorts.Therefore, the usefulness of our novel classification for Caucasian NMIBC patients requires further consideration.In conclusion, the number of tumours, tumour size, BCG instillation, and intravesical instillation of chemotherapy were found to be independent predictors of time to recurrence after TUR-Bt in Japanese patients with NMIBC.Our novel and simple prognostic classification may not only predict the recurrence risk but also greatly help to identify indications for adjuvant intravesical therapy.Given that, compared with advanced bladder cancer, few breakthrough drugs are available for NMIBC, further studies with more patients in more diversified cohorts are required to validate this risk classification and to enhance the effectiveness of existing treatments for Asian patients with bladder cancer.TI and SM analyzed and interpreted data and drafted the initial manuscript.FS, SY, KK, KT, KS, TO, MN, HI, TO, YW, YS, AT, RY, and KN collected data for this study.MT contributed to the data analysis plan and statistical methods used.SH supervised this study.All authors contributed intellectual input to the study design and interpretation of results, and all authors reviewed the manuscript prior to submission.SH approved the final manuscript for submission.
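The survival analyses reported above (Kaplan–Meier estimates of RFS by risk group, log-rank comparisons between groups, and Cox proportional hazards regression for the candidate prognostic factors) were performed in JMP and SAS; a minimal sketch of equivalent steps in Python with the lifelines package is shown below, using hypothetical column names.

import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import multivariate_logrank_test

# Hypothetical columns: months to recurrence, recurrence indicator,
# risk group label, and numeric codings of the four candidate predictors.
df = pd.read_csv("nmibc_cohort.csv")

# Kaplan-Meier RFS curve per risk group, read off at 2, 3 and 5 years.
kmf = KaplanMeierFitter()
for group, sub in df.groupby("risk_group"):
    kmf.fit(sub["months_to_recurrence"], sub["recurred"], label=group)
    print(group, kmf.survival_function_at_times([24, 36, 60]))

# Log-rank test across the risk groups.
result = multivariate_logrank_test(df["months_to_recurrence"],
                                   df["risk_group"], df["recurred"])
print("log-rank p =", result.p_value)

# Cox proportional hazards model for the candidate predictors.
cph = CoxPHFitter()
cph.fit(df[["months_to_recurrence", "recurred", "n_tumours",
            "tumour_size_cm", "bcg", "intravesical_chemo"]],
        duration_col="months_to_recurrence", event_col="recurred")
cph.print_summary()

Calibration can then be checked by comparing the predicted 3-year recurrence-free probability with the Kaplan–Meier estimate in each risk group, as was done for both the training and the validation cohorts.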
Background Some risk classifications to determine prognosis of patients with non-muscle invasive bladder cancer (NMIBC) have disadvantages in the clinical setting. We investigated whether the EORTC (European Organization for Research and Treatment of Cancer) risk stratification is useful to predict recurrence and progression in Japanese patients with NMIBC. In addition, we developed and validated a novel, and simple risk classification of recurrence. Methods The analysis was based on 1085 patients with NMIBC at six hospitals. Excluding recurrent cases, we included 856 patients with initial NMIBC for the analysis. The Kaplan–Meier method with the log-rank test were used to calculate recurrence-free survival (RFS) rate and progression-free survival (PFS) rate according to the EORTC risk classifications. We developed a novel risk classification system for recurrence in NMIBC patients using the independent recurrence prognostic factors based on Cox proportional hazards regression analysis. External validation was done on an external data set of 641 patients from Kyorin University Hospital. Findings There were no significant differences in RFS and PFS rates between the groups according to EORTC risk classification. We constructed a novel risk model predicting recurrence that classified patients into three groups using four independent prognostic factors to predict tumour recurrence based on Cox proportional hazards regression analysis. According to the novel recurrence risk classification, there was a significant difference in 5-year RFS rate between the low (68.4%), intermediate (45.8%) and high (33.7%) risk groups (P < 0.001). Interpretation As the EORTC risk group stratification may not be applicable to Asian patients with NMIBC, our novel classification model can be a simple and useful prognostic tool to stratify recurrence risk in patients with NMIBC. Funding None.
26
Future prediction with automatically extracted morphosemantic patterns
“Prediction is very difficult, especially about the future.,“The Press is a most valuable institution, if you only know how to use it.,– Arthur Conan Doyle.When an average person is in a need to make a prediction about how an event will potentially unfold, a good choice would be to first ask for advice of an expert, who, based on her expertise would present their views.Such views are usually based on some kind of expert knowledge, experience and are expressed using sentences referring to the future, containing important information supported by long years of experience and research.In the following paper we present our research into confirming whether it is possible to automatize this process.We develop an expert system for the support of future prediction.We base our approach on the assumption that especially future referring sentences should be useful in the process of predicting the future outcomes of an event, when they are efficiently extracted from a credible source.Below are some examples of sentences concerning energy problems that were published in newspapers1:Science and Technology Agency, the Ministry of International Trade and Industry, and Agency of Natural Resources and Energy discussed the necessity of a new system, and concluded to set up a new responsible council.The comparison of sources of energy available already in 1992 and predicted to be available in 2020, we can expect that coal and oil use will decrease substantially.The company aims to put “Space Solar Photovoltaics” in practical use and begin generating electricity from solar energy till the start of the 21st century.All of the above sentences, regardless of when they were written, refer to an event to occur in either closer of farther future.In the first example we read that a country will engage in construction of a new energy system.Interestingly, although the sentence is constructed in the past tense the sentence itself refers to the future.The second example presents a prediction that when a new energy source is developed the use of hitherto energy sources such as coal or oil will drop.The last example predicts that solar energy will be put into practical use.When we read the contents of a newspaper article, we can infer how other events would unfold if the presented view is correct.From the above sentences, we can reason that the current energy sources will be exchanged into a new, renewable energy sources.As mentioned above, the sentences which refer to the future need to contain the information that relates to the predicted event that is to occur in the future.However, the linguistically expressed future reference in the sentence can sometimes formally refer to the past.More generally, such future reference does not necessarily have to be on the level of surface nor grammar.To first understand the phenomenon of future reference expressions we investigate future referring sentences assuming that the future referring information in such sentences does not depend merely on the surface or grammar, but consists of a mixture of patterns representable by combined grammatical as well as semantic information.We conceive that the process of future prediction depends on various information.Previous research investigated the possibility of predicting future events by applying only surface information or causal relations.Until now, there has been no in-depth study of what actual patterns are encoded in future referring sentences.To fill in this research gap, we first thoroughly analyze the expressions and patterns 
that frequently appear in future reference sentences.Based on this survey we propose our original method for extraction of such patterns and apply these patterns in practice to both support future prediction in humans and propose a prototype method to perform future prediction automatically.The contributions of this research are the following.Firstly, we proposed the first known wide scale application of morphosemantic structure in text classification, and experimentally confirmed its usefulness.In the experiments we also confirmed that future reference sentences represent a uniform linguistic group.Moreover, and we found out that future reference sentences are a viable source for extracting valuable and frequent sophisticated sentence patterns, useful in the task of extraction of new future reference sentences, previously unseen for the training dataset.However, two of the most valuable contributions of the research presented here are the following.Firstly, we were able to show experimentally, that such future reference sentences, when automatically extracted in a short time with the proposed method from a large scale newspaper corpus, can be applied even by laypeople in determining the outcome of events with the same or higher accuracy, than when the person collecting information for a year and trying to guess the outcome of such events manually.Secondly, we were also able to propose a prototype method for automatic reasoning about the outcome of events, and experimentally show, that a proposed method was consistently nearly twice as effective in predicting the future than even the best humans, which can be considered as the most impressive contribution of the whole research.The paper is organized in the following way.In Section 2 we describe previous research related to the prediction of future events.Section 3 presents our investigation into future reference expressions.Section 4 describes in detail the proposed method for automatic extraction of morphosemantic patterns and their application in extraction of future reference sentences.Section 5 describes the experiments in which we evaluate and optimize the method.In Section 6 we verify the performance of the method on the validation set consisting of completely new data unrelated to both training and test data, and compare the method to a previously developed state-of-the-art method.In Section 7, we verify the usefulness of our method in the support of future prediction in humans and propose a prototype method for automatic prediction of unfolding of future events.Finally, we conclude the paper in Section 8 and present some of the possible directions for improvement and application of the developed method.Although there have been previous research, related to the task of predicting an unfolding of events, such as the study by Girju who focused on automatic detection of causal relations with the application in question answering, only a few studies focused particularly on predicting the unfolding of upcoming events.Despite that, the legitimacy of practical utilization of future-related information has been analyzed by the following researchers.For instance, Baeza-Yates investigated around half a million sentences containing future events, which they extracted from one day of Google News, and concluded that pre-scheduled events happen in relatively consummate likelihood and that there is a high connection between the reliability level of the event and its temporal distance.Hence the information about the future event is of a high 
significance for foreseeing future outcomes.Moreover, as indicated by the investigation of Kanhabua, Blanco, and Matthews, who examined daily news articles, 33% of all articles contain future reference.In another study, Kanazawa, Jatowt, Oyama, and Tanaka obtained implications for future information from the Web utilizing explicit expressions.Alonso, Strötgen, Baeza-Yates, and Gertz have shown that time data incorporated into an online report, such as news article, is viable for improving information retrieval applications.Kanazawa, Jatowt, and Tanaka extracted unreferenced future time expressions from a vast accumulation of documents and proposed a technique for assessing the validity of the forecast based on online searching for the actual event related to the one being the object of prediction.Jatowt et al. used a keyword search on the Web to study how future news written in English, Polish and Japanese relate to each other.Popescu and Strapparava, when studying the distribution of terms within the Google Books corpus, noticed that it significantly changes with time and is correlated with words referring to emotions and sentiment.As for the research focused on the retrieval of future-related information, the previously-mentioned Kanhabua et al. proposed a model for ranking predictions according to their relevance.With regards to the prediction of the likelihood of an event to happen and its relation to the actual event, Jatowt and Yeung proposed an algorithm for clustering of information extracted from a Web corpus with an application in recognizing future occurring events, and quantifying the likelihood of the event to occur.In an alternate study, Jatowt, Kanazawa, Oyama, and Tanaka utilized the rate of occurrence of news articles over time to predict repeating events and proposed a technique for supporting human users in examination of future events, by incorporating a method summarizing future-related information contained in news documents.Aramaki, Maskawa, and Morita applied an SVM classifier to classify influenza-related information on Twitter with future application in prediction of the spread of the disease.Kanazawa et al. proposed a system estimating the legitimacy of a forecast with cosine similarity calculated between news articles being the object of prediction and events that happened in reality.Radinsky, Davidovich, and Markovitch proposed a system for predicting future events in news.Their system was applying numerous ontologies and causal relations to calculate similarity measure between documents.Unfortunately, their system, depending heavily on simple causality expressions, and not on particular future-related patterns, could not adapt to, e.g., past-referring sentences containing causality patterns.As for more recent research, Nakajima, Ptaszynski, Honma, and Masui presented their initial study on how future reference is represented linguistically, by looking at sophisticated sentence patterns consisting of semantic role annotations and morphological information.They performed their research for the Japanese language, and confirmed that in this future reference can be considered as a consistent separate linguistic entity.Later, Nie, Choi, Shepard, and Wolff performed similar research to Nakajima et al. and confirmed their results, this time for the English language.Al-Hajj and Sabra also applied the approach of Nakajima et al. 
to analyze future reference expressions in Arabic.However, they limited their study to simple one-word patterns.Yarrabelly and Karlapalem have also shown that applying dependency relations in extraction of one form of future reference, namely, predictive statements, is also a viable method, at least when applied for news articles written in English language.Finally, the most recent work Hürriyetoğlu, Oostdijk, and van den Bosch, applies separate words and simple temporal expressions supported with several heuristic temporal logic rules and tweet historical context to predict time-to-event on Twitter.This method is interesting due to its applicability in live tracking of the unfolding of events.Unfortunately, in this preliminary study Hürriyetoğlu et al. only focused on estimating TTE for events such as concerts and football matches, which, despite being a good test bed for evaluating their application, are not much of a challenge for prediction due to the fixed and preplanned character of such events.However, when only the TTE estimation method is considered, their study is a valuable contribution, and could be applied to estimating TTE of more challenging events, such as political meetings, or major market fluctuations for which the outcome is not known in advance.The above discoveries lead us to the possibility that by utilizing future-referring expressions from news articles, we could improve the process of predicting the future within the framework of everyday tasks performed by users in a daily manner.For instance, when we analyze the following usual future-related sentence taken from a daily newspaper: “The method for applying gas contained in underground waters to create electricity is uncommon and the corporation will offer it worldwide, including Europe, China, and other countries.,and correctly estimate the believability of such sentence, we could support predicting of future unfolding events and apply this ability in stock investments, corporate administration, trend prediction, risk prevention, etc.Moreover, as indicated in previous research, such method could be used to analyse Social Networking Services, to help mitigate natural disasters or disease outbreaks.Techniques utilizing time-related expressions, for example, “year”, “hour”, or “tomorrow”, have been utilized to extract future-related data and documents of high relevance.It has likewise been shown that it is helpful to estimate future unfolding of events by utilizing data occurring in widely accessible contents.Unfortunately, albeit all studies quoted above have utilized explicit future-related expressions, none have applied more refined, implicit patterns.Thus, a technique utilizing such patterns would mitigate the problem of predicting the future from a novel point of view and could largely contribute to the general study of future data extraction.The main purpose of this research was to develop a support method for making predictions in real life about events to happen in the future by applying a more sophisticated approach than simple time-related expressions or information retrieval from chronologically arranged data.We propose and evaluate a method for automatic extraction of all possible sentence patterns which refer to the future.We define such patterns as words, phrases and more sophisticated constructions with disjointed elements extracted from future referring sentences generalized using the combined semantic representations and morpho-syntactic information.As a preliminary step in our research, we 
investigated words and phrases used in cases of referring to a change in time generally or to the future specifically.The investigation was performed on articles found in the following daily newspapers: the Nihon Keizai Shimbun2, the Asahi Shimbun3, and the Hokkaido Shimbun4.We read through a number of articles from those newspapers both in their paper version and online.From the articles we gathered we manually extracted 270 representative sentences which referred to the future.Next, from those sentences we manually annotated and extracted future expressions.There were 70 unique time-related and 141 unique not time-related words and phrases, but still referring to the future.Several examples of the analyzed word and phrase samples are represented in Table 1.Linguistics differentiates two general kinds of future-related expressions.The first one contains explicit expressions such as numerical values.The second one conveys the future-relatedness through grammatical information, such as phrases “will ”, “the middle of a month”, “in the near future”, or, especially in Japanese, particles -ni, -made, or -madeni.However, many of the 270 extracted sentences did not contain the usual time- or future-related expressions.Among all the expressions we extracted from the sentences, 55% appeared two or more times, while 45% appeared only once.The reason for this twofold split could be that the same future expressions come in many variations, while some function as future-related expressions only in specific contexts.However, we can assume that those which appear the most often could be said to have a more general tendency of being utilized as future-related phrases.Thus, considering sentences and their various representations as sets of patterns which appear in a corpus we ought to be able to extract from those sentences new future-related patterns.For instance, a sentence represented with semantic roles would allow the extraction of new semantic patterns appearing in future reference sentences.In the fol- lowing section we explain the method for the extraction of such patterns and check its viability in the task of automatic classification of future-related sentences.In this section we present the proposed method for automatic extraction of morphosemantic patterns from sentences.The method is divided into two steps, in which the sentences are represented using a combination of semantic role labeling and morphological information, and frequent combinations of such patterns are extracted using a system for automatic pattern extraction.In the first step of the proposed method, all sentences from the training datasets, were represented in morphosemantic structure.In the second step, morphosemantic patterns were extracted from all such sentences.The concept of morphosemantics and morphosemantic structure appears widely in linguistic and structural linguistic studies.In one of such study, Levin and Hovav, although mostly limiting the scope of their research to verbs, distinguished MS an important basic type of morphological operation on words, which changes the semantic representation of a word, also referred to as the Lexical Conceptual Structure.In another study, Kroeger used the morphosemantic approach to study one of the suffixes in the Indonesian language.Fellbaum, Osherson, and Clark, on the other hand, improved links connecting WordNet synsets by utilizing morphosemantic patterns.A more recent study by Raffaelli reported on applying morphosemantic patterns to study a lexicon in Croatian, a language 
known for its richness in morphology and semantics.A similar reason motivated us in this study, where we utilize morphosemantic structure to analyze datasets in Japanese.Applying a single-layered structure limits the range of analyzed information encrypted in the language.Additional motivation for us was the fact that MoPs have not yet been applied in practice to tackle real-world tasks.The model of morphosemantic structure was produced by applying a semantic role labeling system supported with a morphological analyzer.We describe the whole process of morphosemantic analysis in detail below.Firstly, the whole dataset with all the sentences is processed with a semantic role labeling system.SRL annotates labels on expressions according to the role they play within the context of a sentence.This can be illustrated by a simple example: a sentence “Mary killed John.,is annotated with the following labels: Mary = Actor, kill=Action, John = Patient.Therefore the semantic structure of this sentence is “--”.We performed the semantic role labeling for Japanese with ASA5, a system for semantic role labeling and semantic generalization of sentences utilizing a hand-crafted thesaurus.Some examples of labels provided by ASA can be found in Table 2.Two examples of SRL done with ASA can be found in Table 3.One drawback to using ASA is that it does not provide semantic labels for all words.The words and phrases that are usually omitted include those not in the thesaurus, function words, or grammatical particles, often not directly influencing the sentence’s semantic structure, although adding to the overall meaning in general.Such cases were dealt with by applying MeCab6, a morphological analyzer for Japanese, combined with ASA to provide morphological labels.Unfortunately, MeCab in its basic settings analyzes all words in sentences separately.This causes a problem where a compound word could be divided.For example, “Japan health policy” should be perceived as one morphosemantic concept, but in basic settings its morphological representation takes the form of “Noun Noun Noun”.Thus we added a set of linguistic rules as a post-processing procedure to additionally specify compound words when only morphological information is annotated.Finally, in cases when no semantic labels are provided, the procedure is organized in the following way:Below we present one example of how a sentence is analyzed with the proposed morphosemantic information labeling method.Sentence: Nihon unagi ga zetsumetsu kigushu ni shitei sare, kanzen yōshoku ni yoru unagi no ryōsan ni kitai ga takamatte iru.English: The expectations rise towards mass production of eel in aquacultures following specifying Japanese eel as an endangered species.After all sentences have been annotated with morphosemantic structure, SPEC, or Sentence Pattern Extraction arChitecturte developed by Ptaszynski, Rzepka, Araki, and Momouchi was used.The system allows automatic extraction of frequent sentence patterns characteristic for a certain corpus.In the process of extracting the initial combinations, the system places a wildcard between all non-subsequent elements.Then SPEC calculates occurrence frequencies of all patterns generated this way and retains only frequent ones.The system also used the pattern occurrences to calculate weight of the patterns.In the experimental phase the weight is calculated in three ways.Two features are crucial in the process of calculating the weight.Firstly, a pattern can be considered as the more characteristic for a certain 
corpus the longer the pattern is, and secondly, the more often it occurs in the corpus.Therefore during the experiment the optimal weight settings are verified by,awarding length and occurrence,awarding none.The list of frequent patterns generated this way is then additionally modified.In the process of binary classification of sentences into two classes, there will be patterns that occur only on just one side or on both sides.Thus the list of all generated patterns can be modified by,using all patterns, both unambiguous and ambiguous,not using ambiguous patterns,not using only the ambiguous patterns appearing in equal numbers on both sides.Furthermore, since pattern lists developed this way will contain both the combinatorial patterns as well as n-grams which are used more commonly, the classification can also be performed on:all patterns, or,All versions of the above-mentioned parameters are verified in the evaluation experiment to choose the best model.The outline of the whole method was depicted in Fig. 1.This section describes the experiment verifying whether the morphosemantic pattern extraction method is effectively applicable in the classification of future-referring sentences.First, we collected 1000 sentences at random from the multiple newspaper corpus described in Section 3.Three people then manually decided whether these sentences referred to the future or not.The agreement coefficient between the annotators was 0.456.We divided the sentences into three groups, namely those for which there was a perfect agreement for future reference sentence annotation for all three annotators, ambiguous sentences and other sentences.The 1000 collected sentences contained in total 130 sentences annotated as future reference sentences by all three annotators, 330 ambiguous sentences and 540 other sentences.From this dataset we used the 130 future-referring and additional 130 non-future referring sentences selected randomly from the subset of 540 sentences.From this sentence collection we created two experiment subsets.The first one contained 100 sentences, with 50 future reference sentences and 50 sentences not referring to the future.The second one contained 260 sentences, similarly with equal sentence sample distribution.Both subsets were preprocessed and represented in morphosemantic structure applying the method presented in Section 4.1.From sentences preprocessed this way we extracted pattern lists using the procedure described in Section 4.2.We summarized the results using standard measures of Precision, Recall and balanced F-score.Unfortunately, although the number of sentences on each side was the same, typically training collection is biased toward one of the sides, as the sentences of one kind could be longer in average.Therefore, there the situation where more combinatorial patterns of a certain type appears on one of the sides, will always occur making this side statistically stronger in the classification process.Therefore, applying a simple “rule of thumb” could hinder the results.Thus we also applied threshold optimization to specify the threshold that the classifier achieved the optimal scores for.In experiment fourteen different variations of the classifier were compared.This gave an overall number of 280 experiment runs.We used a number of various criteria in evaluation.Firstly, we specified the algorithm version that achieved the highest scores for each threshold.Secondly, we specified the version that obtained the highest break-even point of Precision and Recall.We also 
verified the statistical significance of the results using a paired t-test.Figs. 4, 2, 5 and 3 represent the results of the experiment in F-score for all tested versions of the classifier on set50 and set130, trained on n-grams or patterns, respectively.A more detailed comparison can be done by looking at Figs. 6 and 8, showing Precision and Recall for set50 for two classifier versions, namely, all_patterns and all_n-grams, respectively.The proposed method based on combinatorial sentence patterns achieved advantageous results compared to n-grams most of the time.This suggests a considerably more prevalent presence of frequent combinatorial patterns in future-referring sentences compared to n-grams.The analysis of differences between pattern lists and weight calculation schemes showed that discarding 0-patterns did not impact the final results in a significant way.A much more noticeable distinction was observed for cases in which all ambiguous patterns were discarded, leaving only the most frequent patterns characteristic for each side.Additionally, modifying pattern weight by awarding the length of patterns usually yielded higher scores.The highest scores of F = 0.71 with P = 0.56 and R = 0.98 were obtained for the classifier version using the pattern list with length awarded in weight calculation and 0-patterns discarded.The most visible advantage of using patterns over n-grams always appeared in the Recall.This suggests that when the model is trained only on n-grams many valuable patterns are omitted.Precision did not indicate significant changes and usually oscillated within 0.55–0.60.There were several threshold points where n-grams obtained comparable or higher Precision, which suggests the range of 0.55–0.60 as the optimal maximum achievable for the proposed morphosemantic pattern-based approach.In the future we plan to find an improvement to the method raising the Precision score while retaining Recall.Next, the results were also compared separately for the cases of modifying pattern lists by discarding 0-patterns or all ambiguous patterns, and for the various weight calculation schemes.The highest obtained F-score for patterns was 0.71.For n-grams the highest obtained F-score was 0.70.The difference was not large; however, because of higher Recall in most cases, patterns usually obtained a higher F-score for the same threshold point.All the above results imply that the pattern-based method achieves higher results in general.As the next step in the analysis, we compared the results obtained on the two datasets, namely, set50 and set130.The comparison results are shown in Table 4.For set50, the plateau for F-score achieved by the pattern-based method was reached at about 0.67–0.71, and 0.67–0.70 for n-grams.The same plateau for set130 reached about 0.67–0.70 for patterns and 0.67–0.69 for n-grams.The threshold point where the results were usually the highest was close to 0.0, although minimally biased toward 1.This suggests that the training set was in general balanced, with a slight bias toward future-related sentences.Apart from the automatic classification results, we were also interested in the actual patterns that influenced those results.In this section we present a detailed analysis of the results to facilitate better understanding of the usefulness of the future-referring morphosemantic patterns.We extracted the most frequently used unique morphosemantic patterns from the experiment based on set50.
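How a combinatorial pattern is matched against a morphosemantically labelled sentence, and how the weighted matches are turned into a classification score and a record of which patterns fired, can be sketched as follows. This is a simplified illustration only: the wildcard notation, the sign-based weighting and the function names are assumptions, not the exact SPEC implementation.

import re

# A pattern is a sequence of morphosemantic labels; '*' stands for a gap
# between non-subsequent elements (a simplified stand-in for SPEC's wildcard).
def pattern_matches(pattern, labels):
    regex = ".*".join(re.escape(part) for part in pattern.split("*"))
    return re.search(regex, " ".join(labels)) is not None

def classify(labels, weighted_patterns, threshold=0.0):
    # weighted_patterns: {pattern: weight}; positive weights for patterns
    # frequent on the future-referring side, negative for the other side.
    used = [p for p in weighted_patterns if pattern_matches(p, labels)]
    score = sum(weighted_patterns[p] for p in used)
    label = "future" if score >= threshold else "non-future"
    return label, score, used

Sweeping the threshold over the range of observed scores and recomputing Precision, Recall and F-score at each point corresponds to the threshold optimization discussed above, while the list of used patterns feeds the analysis of the most useful patterns.Each time the classifier used a pattern from the pattern list, the used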
pattern was extracted and added to the separate list of most useful patterns.This extraction was performed for each test in the 10-fold cross validation.By taking the patterns extracted this way from all tests and leaving only the frequent ones, we obtained a refined list of most valuable patterns.This way we obtained 1,131 future-referring patterns and relatively much fewer number of 87 patterns not referring to the future.Some examples for both kinds of patterns are shown in Table 5.We investigated those patterns together with the sentences they were used in.The following two example sentences contain the pattern ∗∗.Iryō, bōsai, enerugī nado de IT no katsuyō wo susumeru tame no senryaku-an wo, seifu no IT senryaku honbu ga 5gatsu gejun ni mo matomeru. ,Tonneru kaitsū ni yori, 1-nichi 50 man-nin wo hakobu koto ga kanō ni naru mitōshi de, seifu wa jūtai kanwa ni tsunagaru to shite iru. ,The following examples contain a slightly different pattern, namely, ∗∗.Nesage jisshi wa shinki kanyū-ryō, kihon ryōkin ga 12gatsu tsuitachi kara, tsūwa ryōkin ga 1996nen 3gatsu tsuitachi kara no yotei. ,Kin’yū seisaku wo susumeru ue de no kakuran yōin to shite keishi dekinai, to no mondai ishiki no araware to wa ie, kin’yū-kai ni hamon wo hirogesōda. ,In the above examples the patterns that were matched comprise the ones we studied manually in Section 3.These include time-related expressions and future reference expressions.Next, we examined sentences containing non-future patterns.The following example sentence contains the pattern ∗∗.20man-ji no chōhen shōsetsu kara 2 moji dake wo kopī shite shōbai ni tsukatte mo ihō to wa ienai. ,The following sentence contains the pattern ∗∗,Nagata-ku wa Hanshin Daishinsai de ōkina gai wo uketa chiiki de, koko de wa Betonamu no hito ga kazu ōku hataraite iru. ,Finally, the following example sentence contains the pattern ∗∗,Sakunen 6gatsu, Kaifu ga Jimintō to tamoto wo wakatte aite jin’ei ni kumi shita toki mo, rinen to meibun ga hakkiri shinakatta., their ideas and causes were unclear.) 
,Example 5 contains a phrase to wa ienai, which is labeled in morphosemantic structure as – also a frequent label in future-referring sentences.However, just by this fact the sentence is not yet classified as future-related.Example 6 contains a phrase –shiteiru which is labeled .This phrase appears also in future reference sentences.However, in future-related sentences it is usually labeled as due to the type of verb it accompanies.In Example 7, although it contains time-related expressions, the use of sophisticated patterns taking into account wider context, allows correct disambiguation of such cases.Furthermore, since this pattern does not appear on the list of future reference patterns, although it contains time-related expression, it suggests that the presence of time-related information alone does not influence the classification.Instead, other elements of a pattern, such as appropriate tense, etc., together with time-related expressions constitute the pattern as being distinctive for future reference sentences.There were many future reference patterns with high occurrence, which means the sentences in test sets contained many of those patterns.Therefore we can conclude that the concept of “the future” in general has high linguistic expressiveness.For non-future reference patterns, the occurrence frequency of patterns is low, which means that although they could appear in a large variety, each of the patterns was used only once, thus they were not included in the list of most useful patterns).The high variety of patterns suggests that there were no particularly distinctive patterns for sentences not referring to the future, which is an expected result as “non-future-related” is not a linguistically consistent concept, and could mean both related to the present, past or not time-related at all.After the evaluation we performed a series of additional experiments to validate the method in practice.At first we performed an estimation of the effectiveness of morphosemantic patterns in the task of future reference sentence extraction.Firstly, we collected an additional new validation set, unrelated to the initial datasets.From the Mainichi Shinbun daily newspaper articles from one year we extracted 170 sentences from articles appearing on first three pages of each edition, and articles from the “economy” and “international events” sections under the topic “energy.”.We manually annotated these sentences as either future or non-future-related with five annotators: one expert annotator and four laypeople.Each sentence was annotated by one expert- and two layperson-annotators.We decided to leave the sentences for which there was an agreement between at least one layperson annotator and the expert.As a Result 59% were left as the validation set.Next, we classified these newly obtained sentences using the most frequent patterns generated in previous experiment.In particular, we performed pattern matching on the new sentences with the following sets:the first 10 patterns,adding 5 patterns of the length more than three elements to set A,subtracting 5 patterns from set A,using only the first 10 patterns containing more than three elements,Once the performance level reached a plateau, increasing the number of patterns made little difference.In the future, we will investigate in more detail how the plateau fluctuates according to the size of the validation set and the number of patterns used for classification.The performance of pattern set C was poor because only a few patterns were used.The 
Precision of pattern set D is slightly higher than that of the other sets.This indicates it could be more effective to use frequent morphosemantic future reference patterns containing more than three elements, even when the number of applied patterns is small.From the above, we conclude that it would be more effective to use patterns consisting of a few elements if the focus of the extraction was on Recall, whereas it would be more effective to use patterns consisting of three or more elements if the focus of the extraction was on Precision.The scores obtained in this experiment were generally lower than those in the evaluation experiment.However, we were able to extract future reference sentences with approximately 40% of Precision using only ten patterns, a score not far below the one achieved in the evaluation experiment.This suggests that the performance could be also further improved when morphosemantic patterns are narrowed to those appearing in specific genre of events.We also compared our experimental results with those reported by Jatowt and Yeung.In their experiment they extracted future reference sentences with 10 words and phrases unambiguously referring to the future, such as temporal expressions like “will,” “may,” “is likely to”, etc.We translated those phrases into Japanese and applied them to the new validation dataset of 170 sentences.The results were P = 0.50, R = 0.05, and F = 0.10.Although the Precision seems higher than the one described in Section 6.1, our method extracted much more future referring sentences correctly with equal number of 10 morphosemantic patterns.This indicates that the proposed method is valid.The reason for the low score obtained by the method of Jatowt and Yeung on our validation dataset, despite its showing a better performance reported previously by Jatowt and Au Yeung, could be explained by the differences in the approaches.Jatowt and Yeung used future-related words and phrases well known in linguistics, and searched for future sentences on the Internet which contains sufficient amount of data for extraction with even minimal number of seed words.We on the other hand trained our method automatically without providing any linguistic knowledge on a corpus from which we automatically extracted sophisticated morphosemantic patterns.The results were summarized in Table 6.Finally, we verified the performance of a fully optimized model.The results of evaluation experiment described in Section 5, in which we compared 14 different versions of the classifier on two initial datasets, indicated that the model with the highest overall performance was the one using pattern list containing all patterns with weights modified by awarding pattern length.Therefore we re-trained the above model using all sentences from set130 and verified the performance by classifying the validation set of 100 sentences.For the evaluation metrics we used standard Precision, Recall and F-score.The scores of sentences oscillated from −0.01 to 2.27.The higher the score, the stronger was the morphosemantic similarity to the training data.We also verified the performance for each threshold, beginning from 0.0 and checked every 0.2, up until 2.2.The overall performance is represented in Fig. 
The highest reached Precision was 0.89, at R = 0.13 with F = 0.22. The highest reached F-score was 0.78, with Precision = 0.65 and Recall = 0.98 around the threshold of 0.4. Finally, the break-even point was at 0.76, which indicates that the proposed method, trained on automatically extracted morphosemantic future reference patterns, is sufficiently capable of classifying future reference sentences. In the experiment described in Section 6.1 the obtained result was not high. However, the number of future reference patterns used in that experiment was only the most useful 5 to 15. Obtaining this result using less than one percent of the whole coverage of the method makes it effective for extracting future reference sentences. Moreover, the results of the experiment indicate that the performance shows a tendency of constant improvement as the number of patterns in the future reference sentence extraction procedure is increased. This is also confirmed in the experiment with an increased number of patterns, in which the classification performance greatly improved, with an F-score reaching 76% at the break-even point. An important point is that after the high BEP, the F-score does not show a noticeable decrease in performance. If the BEP is low, the increase in Recall throughout the threshold range usually causes major changes in the balanced F-score. If only a few strong patterns are used, the Precision might become high, but the extraction will not be exhaustive. With the increased number of patterns, both Precision and Recall reached 76%, around the threshold of 1.0. Furthermore, when a lower threshold, around 0, is reached, Recall gets close to 100% while the F-score is retained around 75–76%, and Precision, although it decreases, does not decrease significantly. The result clearly shows that future reference sentences are extracted exhaustively using the presented method based on morphosemantic future reference patterns. In other words, such patterns are helpful in the extraction of future reference sentences. Therefore we state that it is a valid method.
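The threshold sweep and break-even point reported above can be reproduced with a small routine such as the one below; the scores and binary gold labels are placeholders, while the 0.0–2.2 range and 0.2 step follow the description in the text.

```python
def sweep(scores, labels):
    """`scores`: FRS-resemblance score per sentence; `labels`: 1 if future-referring."""
    thresholds = [round(0.2 * i, 1) for i in range(12)]   # 0.0, 0.2, ..., 2.2
    results = []
    for t in thresholds:
        pred = [s >= t for s in scores]
        tp = sum(1 for p, l in zip(pred, labels) if p and l)
        prec = tp / sum(pred) if any(pred) else 0.0
        rec = tp / sum(labels) if any(labels) else 0.0
        f = 2 * prec * rec / (prec + rec) if (prec + rec) else 0.0
        results.append((t, prec, rec, f))
    bep = min(results, key=lambda r: abs(r[1] - r[2]))    # threshold where P and R are closest
    return results, bep
```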
Apart from the automatic classification results, we were also interested in the actual patterns that influenced the results. In Fig. 13 we present a detailed analysis of two sentences which obtained high scores in the experiment, with the first four patterns mapped onto the sentences to facilitate better understanding of the future-referring morphosemantic patterns. We conducted two experiments to confirm whether future reference sentences extracted with the proposed method are effective for actual future trend prediction. When predicting future trends, people synthesize multiple sources of information. This includes their own knowledge, experience, experts’ opinions regarding the future, past examples, and news from the Web, radio, television, and newspapers. Such a large number of information sources potentially gives access to an incalculably vast amount of knowledge. In practice it is difficult to follow all of it; however, even if knowledge and expertise are in short supply, it is possible to acquire a vast amount of information through an Internet search. Unfortunately, a simple keyword search retrieves millions of pages, and it can be extremely difficult to find the information one needs. Searching through related sites often brings up similar information. This is because search engines rank results according to statistics such as access frequencies. On the other hand, professionals such as data scientists carry out predictions of future trends based on statistical analysis and processing of numerical data. By applying data-mining techniques it should be possible to blend the expert’s experience with knowledge found on the Internet to predict future trends automatically, or semi-automatically. The most important factor in prediction activities is to efficiently and effectively obtain the data actually useful in such trend prediction. In previous sections we presented our method for classification of future reference sentences and evaluated it in a closed environment. We confirmed the validity of the method and its performance in comparison with the state-of-the-art method. The final purpose of the method is to support everyday predictions regarding specific events described in newspapers. In the following sections we first evaluate the performance of our method with regard to this specific function, namely, as a future prediction support tool. However, it would be desirable to implement the method not only as a tool for everyday future prediction support for laypeople, but also as a part of a larger framework for a fully automatic prediction of the unfolding of future events. Therefore we created a prototype method for fully automatic future trend prediction and compared its performance with human performance, for both laypeople and experts. As a measure for practical evaluation of future prediction support systems, Jatowt and Yeung as well as Kanazawa et al.
indicate that the validity of the prediction could be estimated by searching for a real-world event corresponding to the one predicted automatically.However previous research did not propose how to objectively select such events.Therefore to assure maximum possible objectivity of the evaluation we needed to find such data.In the experiment for supporting future trend prediction we used the fully optimized model of future reference sentences trained on morphosemantic patterns described generally in Section 4 and specifically in Section 6.3.The model was applied to extract new FRS concerning a specific topic, from the available newspaper data.Such sentences were further called future prediction support sentences.Future prediction was performed by a group of thirty laypeople, who were told to read the FPSS and reply to questions asking them to predict the future in 1–2 years from now, or from the starting point of prediction.The questions were taken from the Future Prediction Competence Test, released by the Language Responsibility Assurance Association8, a nonprofit organization focused on supporting people of increased public responsibility and people responsible for making decisions influencing civic life.Such people often need to perform public speeches in which they reveal details or opinions regarding future events.In such situations they are obliged to express some contents, while restraining from revealing other contents.Thus the association helps to prepare people’s public speeches and responsibility-bound presentations.The Future Prediction Competence Test is an examination that measures prediction abilities in humans regarding specific events that are to happen in 1–2 years in the future.It was created in 2006 and from that time it has been performed six times.The test consists of various questions, including multiple choice questions,essay questions,and questions that must be answered using numbers.The questions are scored after those particular events have come to pass.The questions for the experiment to benchmark our future prediction support method were selected from the 4th of the past six future prediction tests, as it had the largest total number of questions, and respondents, which would assure the highest possible objectivity of the evaluation.Implemented in 2009, the 4th Future Prediction Competence Test contained questions regarding predictions for 2010 and 2011, and the scoring was performed in 2011.Respondents were to choose to answer at least 15 questions from a total of 25 questions in six areas, namely, politics, economics, international events, science and technology, society, and leisure.The test contained a large number of multiple choice questions and several questions requiring predicting specific numbers.There was also a small number of questions requiring a written explanation of the reasoning for the prediction.When participating in the Test, respondents could browse through any and all available materials, and were free to seek the opinions of others in answering the questions, but the submission deadline was fixed and set at December 31st, 2009.The scoring is set at 90 total points on prediction questions and 30 total points for descriptive questions, for a total of 120 points.The prediction support method we developed in this study is intended to provide future prediction support sentences related to a given question, thus helping participants in making a decision on which answer to choose for each question.Therefore for its evaluation we limited the 
questions to multiple-choice questions. Questions with two or more choices were selected from the 4th Future Prediction Competence Test and applied as questions for the experiment. Six examples of such questions – used in this research – are represented in Fig. 14. In this section we describe the data preparation for the experiment. Firstly, a total of 7 multiple-choice questions were selected from the 4th FPCT test. Laypeople participants read the FPSS presented to them and were given some time to respond. However, in contrast to the original settings of the FPCT test, we did not give the participants one or two years to answer, but required them to answer on site. The FPSS for each question given to the laypeople participants were gathered in the following way. First, we extracted all sentences related to the questions on the basis of topic keywords from the entire 2009 year of the Mainichi Newspaper. For example, for Q3 from Fig. 14 these would be “US Army” or “Afghanistan.” Those sentences were then analyzed by the proposed method using the fully optimized model trained on MoPs, and sorted in descending order of the FRS probability score. This way, the sentences that appeared at the top of the list were the most likely to be future reference sentences. We retained only those FPSS with scores over 0.0 and presented the highest 30 of them to the subjects. We decided to present the FPSS to the subjects in the order they appeared in the newspapers instead of in descending order of scores, so that the subjects could get a better image of how the events unfolded, which would make the prediction more natural. We also decided to limit the number of sentences for the subjects to read to thirty, so that the subjects did not become bored or tired too quickly. However, we stored the rest of the sentences in case the subjects insisted on further reading. Moreover, there were also situations in which the list of initial sentences extracted with topic keywords contained fewer than thirty sentences. In such situations we gave the subjects all sentences which had a probability of being future reference sentences. A minimal sketch of this selection procedure is given below.
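The sketch below summarizes the selection procedure just described; the field names, the frs_score callable and the substring keyword test are assumptions made for illustration, not the authors' actual implementation.

```python
def select_fpss(corpus, keywords, frs_score, top_n=30):
    """`corpus`: dicts with 'text' and 'date'; `frs_score`: scorer of the trained FRS model."""
    related = [s for s in corpus if any(k in s["text"] for k in keywords)]
    scored = sorted(((frs_score(s["text"]), s) for s in related),
                    key=lambda pair: pair[0], reverse=True)
    kept = [s for score, s in scored if score > 0.0][:top_n]   # positive scores only, top 30
    return sorted(kept, key=lambda s: s["date"])               # presented in newspaper order
```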
As an example, some of the FPSS for Question 3 are presented below. Other newspapers are also carrying out the Mainichi Newspaper’s three-part feature reportage on trilateral coordination between Japan, Korea, and the US regarding North Korean nuclear arms, cooperation between Japan and Korea on reconstruction aid to Afghanistan, and the establishment of regular meetings or “shuttle diplomacy” between the respective leaders of these countries. Additionally, it revealed their intention to finish the Iraq War through the gradual withdrawal of US combat troops stationed there, and to put full force into the War on Terror in Afghanistan. Substantial negotiations toward realizing the campaign pledge to reduce the number of stationed US forces “within 16 months of inauguration” have begun, aiming for an early formulation of a comprehensive plan that includes sending more U.S. troops to Afghanistan, a key battleground in the War on Terror. Ahmad Saif, an engineer in Baghdad, rejoiced that President Obama had reemphasized the need to focus on the War on Terror in Afghanistan, increasing the likelihood of an early withdrawal of U.S. troops from Iraq. At a cabinet-level meeting between the Finance and Foreign Ministers of each country, in addition to the steps to be taken on the deterioration of public order in Afghanistan caused by formerly dominant Taliban forces, the agenda featured discussion on water resource development policies in response to the ongoing drought, and negotiations over assistance measures. At the conference, a US-Japan joint investigation into strategies regarding Afghanistan was agreed upon, and a special envoy will be dispatched to the US to settle the details. On the 6th, the Russian Ministry of Foreign Affairs made an announcement suggesting that both countries share a stance on the condition in Afghanistan and the War on Terror, and that they are “mildly optimistic” about the results of the Foreign Ministers’ talk. The questions were answered directly after reading each question and the provided FPSS. Additionally, the respondents were asked to report the ID numbers of those FPSS they referred to in their answer, or the FPSS that was the most informative and useful in their opinion. In the evaluation of participants’ choices we retained the scoring schema as applied in the original FPCT. Namely, questions 1, 2, and 7 were allocated 3 points. Moreover, in questions 2–5 the participants were allowed to choose from one up to three answers: a primary, secondary and tertiary candidate, allocated 3 points, 2 points and 1 point, respectively, if selected correctly. Additionally, to make the evaluation more strict and objective, for comparison we also used a different scoring, based strictly on one point per question, given only for the correct answer.
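A minimal sketch of the two scoring schemes is given below. The split between single-answer questions and questions allowing up to three ranked candidates is passed in explicitly, since the grouping follows the test description above rather than anything fixed in code.

```python
CANDIDATE_POINTS = [3, 2, 1]   # primary / secondary / tertiary candidate

def weighted_score(answers, correct, multi_candidate_qs):
    """`answers[q]` is an ordered list of chosen candidates, primary first."""
    total = 0
    for q, choices in answers.items():
        if q in multi_candidate_qs:                    # up to three ranked candidates
            for rank, choice in enumerate(choices[:3]):
                if choice == correct[q]:
                    total += CANDIDATE_POINTS[rank]
        elif choices and choices[0] == correct[q]:     # single answer worth 3 points
            total += 3
    return total

def strict_score(answers, correct):
    """One point per question, awarded only for a correct primary answer."""
    return sum(1 for q, choices in answers.items()
               if choices and choices[0] == correct[q])
```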
The results obtained by the subjects in the future prediction task, when supported only with the proposed method, in comparison with the original results of the Future Prediction Competence Test, are represented in Fig. 15. At first, the scoring was performed in accordance with the future prediction test scoring procedure, wherein each question is worth up to 3 points, for a total of 21 possible points. Apart from the experiment with 30 respondents, we analyzed the original responses of the participants in the 4th Future Prediction Competence Test. The total possible score was 120 points. The test was taken by 11 people. From the total of 120 points, prediction questions accounted for 90 points, while essay questions accounted for 30 points. The comparison was based on the prediction questions, with a maximum score of 90 points. In the performed experiment, the average score of our participants was 35.71%. In comparison, the average score of the test participants was 33.4%. These results are similar, which shows that even though the events for prediction in our experiment were in fact from the past, the experiment participants performed similarly to the original test participants. Therefore it can be said that the participants did not use knowledge about the predicted events and based their judgments on the provided FPSS. Furthermore, in comparison with the original test results, an improvement of approximately 2 percentage points was noticed. This can be considered as the contribution of our system. Additionally, the highest score in our experiment was 61.9%, while the lowest was 14.29%. In comparison with the 4th Future Prediction Competence Test, these results indicate an improvement of 0.8 percentage points for the highest score range and 7.62 percentage points for the lowest range. The accuracy of the results is shown in Table 7. The results indicate that the most significant contribution of the system for human-based future prediction support is in time efficiency. By performing all the search, read-through and extraction automatically, the system is time and workload efficient. One does not need to read through all the newspapers alone; the system provides the user with pinpoint sentences with the highest relevance to the predicted event. Therefore, if one can make the prediction at a similar, if not slightly higher, level with almost no workload, the contribution of the system is in fact considerably high. When it comes to the results calculated according to the strict scoring, although we assumed they would be lower, since the weighted scoring gives a higher chance of obtaining at least some points, in practice they were higher. In the performed experiment, the average score of our participants was 42.9%. In comparison, the average score of the test participants was 33.4%. The highest score in our experiment was 85.7%, while the lowest was 14.29%. In comparison with the participants of the 4th Future Prediction Competence Test, these results indicate an improvement of about 8 percentage points for the lowest score range to as much as 25 percentage points for the highest range. The accuracy of the results is shown in Fig.
15.Moreover, the Future Prediction Competence Test has an established ranking system based on the number of points a participant received.On the 4th Future Prediction Competence Test, a final score covering over 60% of all points gives the participant a title of the 1st Class Future Prediction Competence Expert; participants with scores within 51–60% are given the title of 2nd Class; participants with 41–50% are given the title of 3rd Class.This refers to the level of competence a participant is said to have when it comes to the prediction of future unfolding events.On the 4th Future Prediction Competence Test, 2 people earned 1st Class, none earned 2nd, and 2 people earned 3rd Class.In comparison, experiment participants making predictions with the use of FPSS produced significantly more accurate results, if their scores were calculated at the time of test submission: 6 people earned 1st Class, 6 people earned 2nd, and 4 earned 3rd Class in Future Prediction Competence.Hence, supporting future prediction with future reference sentences extracted for specific topics can be considered as greatly more efficient than collecting available information by oneself throughout a year.In this section, we further discuss the effectiveness of FRS for future trend prediction while comparing in detail experiment results with the results of the Future Prediction Competence Test.As shown in Fig. 15, when we looked at the accuracy of the 4th Future Prediction Competence Test, the average was 33.4%, which demonstrates that when people have every means at their disposal, they still only accurately predict the future about one third of the time.This was confirmed for all of the officially announced results of FPCT.These results are supported by Kurokawa and Kakeya who analyzed trends in the answer results of the 1st Future Prediction Competence Test and verified whether the idea of such collective intelligence is useful or not in the context of future prediction.The accuracy rate at that time was 33.17 %.Moreover, Kakeya et al. concluded that the collective intelligence is not possible when it comes to future prediction.This indicates that predicting future trends is not an easy task for people, even when they have plenty of time and access to all available resources.On the other hand, the results for participants who used the proposed method was, depending on which scoring was applied, either approximately 36% or 43%, which shows some improvement.Furthermore, a consideration of the certification breakdown from 1st Class to 3rd Class shows that only one third of all Future Prediction Competence Test participants received a certification, while over half of our experiment subjects using FPSS, would receive the certification.This indicates that when predicting future trends, FRS can greatly reduce time and effort spent gathering information and achieve above-average predictive accuracy.Therefore, we can conclude that using the FRS to support future trend prediction is both effective and efficient.Next, we analyzed the FPSS most often referred to by experiment participants as useful in choosing an answer.As an example, Fig. 
16 shows a graph of the most useful FPSS for Question 3 according to the experiment participants. The gray bars indicate the number of times a sentence was referred to by successful respondents, while the white bars indicate the number of times a sentence was referred to by respondents who failed the task of prediction. The contribution of these statements to choosing correct answers can be analyzed by focusing on the gray bars. It is possible that differences in prediction accuracy depend on which of the 30 FPSS statements were referred to. Taking Question 3 as an example, we analyzed both the content of the sentences that were referred to only in incorrect answers and of those that contributed to correct responses. The values on the horizontal axis of Fig. 16 correspond to FPSS numbers. In our experiment, 83.33% of responses to Question 3 were accurate. The examples of FPSS extracted for this question presented in the previous section indicate that although all sentences contained the keyword “Afghanistan,” some sentences also contained references to US Army troops, whereas others contained the word “Afghanistan” but did not refer to the troops. Therefore, in order to improve the prediction accuracy, it is necessary to devise a better keyword setting for selecting FPSS from newspaper corpora. In this section, we describe and validate a final step in our present research, namely, a prototype method for the fully automatic prediction of the unfolding of future events. In the validation experiment we aimed to perform the previous task – described in Section 7.1 – fully automatically. We developed the prototype method for automatic future prediction to analyze the questions from the Future Prediction Competence Test used in the previous experiment. Although the method can be applied to analyze any content, in this research we limited the input to the existing data to make the evaluation possible and objective. The method consists of the following steps: (1) building an optimized model for Future Reference Sentence extraction; (2) extracting topic keywords from FPCT questions about the future unfolding of events; (3) applying the optimized FRS extraction model and the topic keywords to extract FRS related to the questions from a limited corpus; (4) training a new event-topic-specific FRS model on the extracted topic-related FRS, using the method for Automatic Extraction of Future Reference Sentence Patterns; and (5) analyzing all answers to each question and choosing the one with the highest score as the correct answer. A general flow of the prototype method for automatic prediction of the unfolding of future events is represented in Fig. 17, and a minimal code sketch of this pipeline is given below.
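The sketch below strings these five steps together. Every name and interface in it (extract_keywords, extract_patterns, train_frs_model, the score method, the 0.0 cutoff) is a hypothetical placeholder standing in for components described in earlier sections, not the authors' actual API.

```python
def predict_answer(question, corpus, general_frs_model,
                   extract_keywords, extract_patterns, train_frs_model):
    # Step 1 is assumed done: `general_frs_model` is the optimized FRS extraction model.
    # Step 2: topic keywords from the question text.
    keywords = extract_keywords(question["text"])
    # Step 3: extract question-related future reference sentences from the limited corpus.
    related = [s for s in corpus if any(k in s for k in keywords)]
    frs = [s for s in related if general_frs_model.score(s) > 0.0]
    # Step 4: train a new event-topic-specific FRS model on the extracted sentences.
    topic_model = train_frs_model(extract_patterns(frs))
    # Step 5: score every candidate answer; the highest-scoring one is selected.
    return max(question["answers"], key=topic_model.score)
```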
Next, we evaluated the performance of the prototype method for automatic future prediction. In the original evaluation task, laypeople were to read the automatically extracted Future Prediction Support Sentences related to questions from the Future Prediction Competence Test and then select the answers to the questions they considered correct, using only the provided FPSS. The method for automatic prediction takes the human out of the loop in the prediction task. Therefore, in practice the method automatically reads through the limited corpus and provides an automatic inference regarding answers to the FPCT questions based only on the automatically learned information. In the evaluation, as the reference corpus for learning we applied the same newspaper corpus as in Section 5, but limited it to one year, namely 2009, which presumably contained news articles related to the questions. For each of the questions we used the extracted topic keywords with the fully optimized general FRS model to extract FRS related to that question. Next, the newly obtained FRS were used as training data to train a new model for each question. Finally, the newly created topic-oriented FRS-based model was used to analyze the answers for each question, and the answer with the highest score was selected as the correct answer. Moreover, in order to analyze the influence of FRS on the accuracy rate of correct answers, we developed two versions of the prototype method: one trained on thirty or fewer FRS, and one trained on all FRS which scored over 0.98. To put the developed prototype method in the same position as human participants, in the evaluation of the prototype method we adopted the same weighted scoring schema as in the future prediction support experiment. Namely, for questions 1, 2, and 7, if the prototype method answered correctly, it was given 3 points for each question. Furthermore, for questions 2–5, if the correct answer was selected by the prototype method as either the first, second or third candidate, it was assigned 3 points, 2 points or 1 point, respectively. An example of the scoring of answers for the questions, for both versions of the prototype method, is represented in Table 10. The results of the prototype method for each question are shown in Table 11. For each question the answer with the highest score was selected as the correct one. The version of the method using thirty FRS obtained an accuracy rate of 57.14% for correct answers. This is an improvement of over 20 percentage points over the results obtained by human participants in both the experiment and the original FPCT. Additionally, although the scores assigned by the prototype method to each answer differed between the two versions, there was no difference in the final ratio of correct answers between the version using thirty FRS or fewer and the one using all FRS with an FRS-resemblance score over 0.98. Considering that the number of FRS used in training did not influence the results, it could be more efficient to use the version of the method that uses fewer sentences for training. In this experiment, we automated the task of reading future reference sentences and responding to future prediction questions. The experiment results showed an improvement of over 20 percentage points of the developed prototype method over the human participants who took part in the prediction support experiment. Moreover, the result was also 23.7 percentage points higher than the average results of participants of the original 4th Future Prediction
Competence Test.In fact, with Accuracy on the level of 57.14% it was very close to the highest result obtained by participants of original test and of the future prediction support experiment.Therefore we can clearly say that the prototype method was nearly as good in predicting the unfolding of future events as the test takers with an excellent score, and it was almost twice as good as test takers with an average score, both using all available resources to prepare their answers over a year, and using our support method and making the prediction at the time of the experiment.The final results are compared in Table 12.In addition, when the correct answer was allowed until the third candidate, 5 out of 7 questions could be considered correct, which gives a 71.43% score for Accuracy, being over twice as high as an average human and over 10 percentage-points higher than the best scoring human, which can be considered a success.Furthermore, if the tendencies of correct answer rates for each question were compared between the prototype method and the future prediction support experiment, the tendencies of correct and incorrect answers were very similar.In particular, both human participants and the proposed prototype method failed in questions Q4 and Q5.This suggests that the process of inference done automatically resembles, and further exceeds human performance.Although we acknowledge that there were many limitations imposed by the controlled character of the experiment, the final result was more than satisfactory.Therefore we plan to perform additional experiments on other real world events to obtain a clearer image of the capabilities of our method, most desirably on events that in reality will unfold in the future from the time of the prediction.In this paper we presented two original methods, namely, a method for extracting references to future events from news articles, based on automatically extracted morphosemantic patterns, and a prototype method for the automatic prediction of future unfolding of events using the first method.The first method firstly represents news articles in morphosemantic structure using semantic role labeling supported with part-of-speech tagging and compound word clustering.Next, it extracts all possible morphosemantic patterns from the corpus including sophisticated patterns with disjointed elements.After the method was trained on patterns distinguishable specifically for either future- or non-future-related sentences we performed a text classification experiment in which we compared 14 different classifier versions to choose the optimal settings.The optimized method was further validated on completely new data unrelated to either training or test sets, and compared to the state-of-the-art method.The proposed method using morphosemantic patterns outperformed the state-of-the-art method and when optimized reached the final score of high Precision and Recall with the break even point and plateau balanced on 76% level.Detailed analysis of the automatically extracted future reference sentences also showed that our method, taking advantage of morphosemantic structures in language, is capable of correctly extracting the future reference sentences both in the case of explicit expressions of future reference information as well as in more difficult cases of implicit and context dependent information.Furthermore, comparison with a previous method using a global Web search indicated that it is more effective to train the method specifically on newspaper corpus rather 
than other kinds of textual data.This indicates that newspaper articles can be considered as sufficiently reliable source of future-referring information.Next, we conducted a validation experiment to determine whether the developed method for future reference sentence extraction could be effectively applied in supporting future trend predictions.We drew questions from the official Future Prediction Competence Test and, using topic keywords from those questions, gathered newspaper articles from the entire applicable year.Then we extracted future prediction support sentences from those articles, and had thirty laypeople read those sentences and make predictions regarding unfolding future events.The results yielded an average improvement of 10 percentage points over the results of the original Future Prediction Competence Test.However, the original test allowed respondents to prepare their answers for over a year and use any available information, as well as seek the opinions of others, including experts.On the other hand, the subjects of our experiment replied immediately after reading the provided support material, which consisted of only thirty FPSS.Therefore, although further confirmation experiments are needed, we can say that within the scope of the present experiment, the significance of obtained results for prediction support has been sufficiently demonstrated.In the experiment, only separate future reference sentences were extracted from whole articles.One would expect that to make an accurate prediction, a human respondent would need to read the whole article.However, the experiment showed that if FRS are extracted with accurately set topic keywords they yield very detailed information sufficient to make the prediction.After confirming the usability of FRS in supporting future predictions in human respondents, we designed a prototype method for fully automatic future prediction.The method was designed to answer questions from the official Future Prediction Competence Test.Similarly to human-based experiment it used topic keywords from those questions, and gathered FRS from newspaper articles from one entire applicable year.Those topic-related FRS from news articles, were then used in training a question-specific FRS model.Finally, the model was used to score answers for each question according to their probability of being the correct answer – the most probable unfolding of each event basing on the automatically obtained knowledge.The results of the proposed prototype fully automatic method for prediction of future unfolding of events showed that the method exceeded even the best humans in the prediction task and the average human over two times.Although additional experiments need to be performed to confirm this, the results provide a strong suggestion that full automation of future prediction is possible.In the future we plan to increase the size of the experimental datasets to evaluate the FRS extraction method even more thoroughly.We also plan to approach the data from different points of view to increase the Precision of the classifier while not decreasing the Recall.We will verify in detail which patterns influence the results positively and which hinder the results.This knowledge will allow us to determine a more general morphosemantic model of future referring sentences.As presented in this paper, such a model could be useful in estimating probable unfolding of events, and would contribute to the task of trend prediction in general.Also, carrying out a chronological analysis 
of FRS and the addition of sentiment analysis could lead to the discovery of additional new knowledge.We also plan to take part in the next Future Prediction Competence Test to prove that it is possible for a fully automatic method to obtain an official certificate for being an expert system for future prediction.We will also verify to what extent the method trained on newspaper articles could be applied to classify other kinds of corpora, such as blogs or tweets.We plan to apply the future reference sentence classification method to real world tasks by finding new content and sorting them in chronological order, which would allow the support of useful future predictions in everyday life.The authors declared that there is no conflict of interest.
In the following paper, we investigated the usefulness of future reference sentence patterns in the prediction of the unfolding of future events. To obtain such patterns we first collected sentences that have any reference to the future from newspapers and Web news. Based on this collection, we developed a novel method for automatic extraction of frequent patterns from such sentences. The extracted patterns, consisting of multilayer semantic information and morphological information, were implemented in the formation of a general model of linguistically expressed future. To fully assess the performance of the proposed method we performed a number of evaluation experiments. In the first experiment, we evaluated the automatic extraction of future reference sentence patterns with the proposed extraction algorithm. In the second set of experiments, we estimated the effectiveness of those patterns and applied them to automatically classify sentences into future referring and other. The final model was then tested for performance in retrieving a new set of future reference sentences from a large news corpus. The obtained results confirmed that the proposed method outperformed state-of-the-art method in fully automatic retrieval of future reference sentences. Lastly, we applied the method in practice to confirm its usefulness in two tasks. The first is to support human readers in the everyday prediction of unfolding future events. In the second task, we developed a fully automatic prototype method for future prediction and tested its performance using the tasks included in the official Future Prediction Competence Test. The results indicate that the prototype system outperforms natural human foreseeing capability.
27
Business model innovation: How the international retailers rebuild their core business logic in a new host country
Internationalization of the firm can be understood as an innovation decision process.Moreover, the extant research suggests that organizational learning, innovation and internationalization are linked together in a complex way.Reflecting on the significant advancement in international business literature since the Uppsala internationalization process model first proposed by Johanson and Vahlne, Forsgren called on further research to address how various “counteracting forces affect the shape and direction of the internationalization process”, such as relationships among players inside and outside the business network.We argue that research into business model innovation in the host country for MNEs will provide such new insights into internationalization process as business model innovation not only deals with views from the ‘supply side’ but also the ‘demand side’.This is important as noted by Rask “An international perspective on business model innovation is rare in the literature but is a common phenomenon in business.,Rask further suggested that “internationalization through business model innovation involves the creation and reinvention of the business itself”, which is an important part of internationalization process as described in the revised Uppsala model.Even though the literature has attempted to identify typologies of business models, very few are about business model innovation, especially in the international context.What becomes clear is that to understand business model innovation and the activity driven strategies from home to host country, there is a need to explore local contextual factors that impact on the business model and furthermore how these contextual variations affect the changes of the business model.Dunford et al. suggested that business model literature so far has paid much less attention to the specific details of processes whereby new business models are “discovered, adjusted and fine-tuned” in one given host country.Most of the studies have been cross sectional.There is call for a sharper focus such as within particular industries, for example, the retail industry and service-based companies.The business model literature suggests that the process of business model innovation can be considered as an on-going learning process with the need to consider what is known as double-loop learning.Given the close relationship between organizational learning, innovation and internationalization, we believe using organizational learning as a framework to classify the different patterns of business model innovation in the international context provides a promising angle.The business model in practice is a complex phenomenon, understanding which in the international business context is more nuanced and challenging.Therefore, more research especially those by multi-case studies are needed to investigate the best practice of business model innovation in the international context as called by Anwar; Landau, Karna, and Sailer, and Delios.A critical issue is that the transference of business models from home to host country does not itself guarantee success.Business model innovation arises from not only the interactions and shared learning between home and host organizations but a multiple set of actors from consumers, stakeholders and competitors.These interactions create patterns of strategic behavior.It, therefore, becomes important to mobilize two conjoined theoretical perspectives: internationalization, and organizational learning to address one neglected question.What 
are the different patterns of business model innovation which enables international retailers to rebuild their core business logic in new host country?,We focus on the retail industry in one given host country for three reasons: First, as we discussed previously, the studies of business model innovation in the international context need to concentrate on a small number of industry segments and countries in order to control industry/segments or country-specific differences.Second, in contrast to the international manufacturing firms, the international retailers are more embedded into the local business environment and their practices encompass a broader range of activities as they seek to develop new ways for interacting with the local customers and the local suppliers.Thus, the retail setting provides a compelling context for observing how the firms conduct business model innovation in a new business environment.Third, the existing studies regarding international retailers’ market operations often focus on the analysis of retail format or concept transfer strategies and adaption decisions.However, this study adopts the concept of business model innovation which aims at consciously renewing a firm’s core business logic rather than limiting its scope of innovation on single retail concepts or formats.Thus, the choice of the retail industry setting enables this research to make contribution to international retailing.China is the largest, the fastest growing, and the most heavily engaged country in international trade and investment and in retail.On the basis of comparing and contrasting the business model changes of 15 international retailers from various home countries to one single host country, our study provides an in-depth understanding of the business model innovation in the context of international retailing."By looking at the firm's capabilities in rebuilding core logic in the setting of a host country, we reveal six routes of retail business model innovation.Utilizing the lens of organizational learning theory, we identify three patterns of resource deployment by international companies in the process of developing business model innovations.Our study therefore provides insights and guidance for multinational companies in general, international retailers in particular, as for how to successfully adapt their business model from home country to host country.In the remainder of the paper, first we review briefly the concepts from previous research regarding retail business model and organizational learning.Then we provide a detailed account of our qualitative research design and methodology.Subsequently, we report our findings and link them with previous research.The paper concludes with the theoretical and managerial implications and suggestions for further research.We draw our literature review from three research streams: retail business model, internationalization of retail firms, and organizational learning.Research into business models has grown exponentially in the last two decades, but there is no consensus on what is a business model which often causes confusion among scholars as well as managers.In an extensive review of over 40 definitions from 216 articles, Massa et al. 
summarized three distinct interpretations regarding the role of business model: business model as attributes of real firms have a direct real impact on business operations, that is, the core logic with which an organization achieve its goals; business model as cognitive/linguistic schema, that is, the dominant logic capturing how a firm believed to operate; and business model as formal conceptual representations or descriptions of how an organization functions, that is, business model as a scaled-down simplified formal conceptual representation.In an earlier review, Zott et al. considered the common ground of various business model conceptualization and characterized business model as “as a new unit of analysis, as a system level concept, centered on activities, and focusing on value.,Zott and Amit suggest that the business model is “a theoretically anchored robust construct for strategic analysis.,In this paper, we consider business model as a concise representation of a firm’s underlying core logic for creating value for its stakeholders.This core logic suggests that a properly crafted business model helps articulate and make explicit key assumptions about cause-and-effect relationships and the internal consistency of strategic choices.The strategic choices are related not only to the structure of the value chain, but also regarding the choices of customer and value proposition to these customers.The consequences describe whether the firm can create and deliver value to customers and to itself.Drawing from these core ideas, we define a retail business model as a representation of a firm’s underlying core logic and strategic choices of target client, shopper value and the structure of retail value chain for creating and delivering value to the firm itself and its customers.In other words, we follow the first interpretation of business model conceptualization as identified by Massa et al.This interpretation of business model is especially relevant to research into internationalization of firms as it “sought to shed light on the role of business models in competitive dynamics and performance”, explicitly deal with “organizations themselves and their network of partners”, and “defines also the role a company chooses to play within its network.” In their revised Uppsala model, Johanson and Vahlne regard reciprocal commitment between the firm and its counterpart as essential requirement of successful internationalization.Since the Uppsala internationalization process model first proposed by Johanson and Vahlne, significant changes in business practices and theoretical advances have been made in the international business literature.Johanson and Vahlne revised the Uppsala model based on the view that the business environment faced by an internationalizing firm could be considered as a web of relationships, a network, rather than as a neoclassical market with many independent suppliers and customers.Coviello and Munro suggest that “from this network-driven behavior, cognitive development also occurs”.Departing from this revised Uppsala model, Blankenburg-Holm et al. argue that internationalization process can be understood as a transition from the position of being outsiders to become insiders in the foreign market business network through opportunity recognition and exploitation.Moreover, internationalization of the firm can also be understood as an innovation decision process.Therefore, Chiva et al. 
suggest researches in international business field should link the concepts of organizational learning, innovation and internationalization to better understand the knowledge-based economy in the age of globalization”.The internationalization literature claims that there is a positive and reciprocal relationship between internationalization and innovation and argues it from different dimensions.First, innovation facilitates internationalization because it confers market power and competitive advantage on the firms to compete in foreign markets.Second, innovation motivates internationalization because innovative firms tend to expand their markets in foreign countries for increasing their sales volumes or their pay-off from uncertain innovations.Third, internationalization improves the firms’ ability to innovate because it enriches the firms’ sources of knowledge and allows the firms to get new ideas from foreign markets.Therefore, it seems relevant to conduct a study in the international business field from an innovation perspective.There are different types innovation.However, business model innovation is increasingly emerging as an important aspect in international business studies.At an abstract level, Business model innovation has been defined as the “process of defining a new, or modifying the firm’s extant activity system” or “the discovery of a fundamentally different business model in an existing business”.Sorescu et al. define retail business model innovation as “a change beyond current practice in one or more elements of a retailing business model and their interdependencies, thereby modifying the retailer’s organizing logic for value creation and appropriation”.Combining these ideas and our definition of retail business model, we define a retail business model innovation as a change beyond current strategic choices of a retailing business model, thereby modifying the retailer’s underlying core logic for creating and delivering value to itself and its customers.Retail firms’ internationalization in contrast to that of manufacturing firms has its own specific characteristics.In general, international retailers are more embedded in the local context than production-based international firms.For example, international retailers have direct contact with consumers in the host country, which makes retailing highly culture specific.As for sourcing and supply chain activities, food retailers in particular still source the vast majority of their products from the local suppliers, which makes retailers’ competitive advantages more vulnerable to local business practices and existing relationships.Furthermore, international retailers need to sink capital into physical assets such as the store network and the infrastructure of distribution and logistics, which connects them intricately to the real-estate and land-use planning system of the host country.In response to these challenges, interest in business model innovation for international retailing studies seems particularly pertinent as it provides a useful tool capable of simultaneously considering all relevant internal and external factors.Understanding the mechanism of retailers’ internationalization is therefore from the perspective of rebuilding the firms’ core logic in host country rather than limiting its scope of transfer or innovation on single retail concepts or formats.We argue that research into business model innovation in the host country for MNEs will provide new insights on this point as business model describes the position 
of the firm within the value network linking suppliers and customers and business model innovation therefore useful for the firm to map and seize the opportunities for value creation across the network.Building from the above arguments and from an organizational learning perspective of internationalization literature, we propose an analytical framework for classifying the international firms’ business model innovations in the host country.In general, the literature posits close relationships between organizational learning, innovation and internationalization.In particular, research suggests that the process of business model innovation can be considered as an organizational learning process.Therefore, it seems relevant from an organizational learning perspective to identify the axes by which business model innovations could be classified.Jansen, Van Den Bosch, and Volberda discuss that innovation could be incremental or radical, that is, based on exploitative or explorative organizational learning.Exploitation refers to “refinement, efficiency, selection and implementation”.Exploitation is operational efficiency-oriented arising from the incremental improvement of existing organizational routines to enable the firm to realize economies of scale, and consistency through the application of standardized practices across all its units.For example, international retailers’ global integration is essential for them to reduce costs, optimize return on investments and protect their established reputation.However, exploration refers to “search, variation, experimentation and innovation”.Exploration is the development of new routines to capitalize on novel environmental conditions, but more time consuming, entails uncertain results, and has a longer time horizon than refining current knowledge and extending current competencies.For example, international retailers may innovate on different dimensions: retail formats, branding, assortment, customer experience, information technology, new media, handling of payment and order fulfilment for addressing the challenges raised from host countries which are different from those of their home country.As explorative learning and exploitative learning are relatively contradictory but also interdependent, Li suggests that such duality of exploitative and explorative learning is essential for international business especially for cross-border learning.Using such a lens therefore enable us to scrutinize business model change patterns and thereby business model innovation when MNEs move to host country from home country.Although both exploration and exploitation are essential for organizations, they compete for scarce resources.The resources which foreign subsidiaries of the international firms draw upon may be located in the home base of the firm and/or the subsidiary’s host country environment.Even though the valuable, rare and inimitable resources from parent company are deemed important sources of competitive advantages for its foreign subsidiary, the ultimate successful deployment of these resources requires not only the parent’s transference of resources but also the subsidiary’s absorptive capability and motivation to effectively utilize these resources.The parent company generally is in the position to select, reconfigure and integrate existing resources to address changing environments.However, the subsidiary’s motivation to learn from its parent and its ability to integrate locally the transferred resources is important.The development of governance 
mechanisms to assist in the transference and use of resources while ensuring protection against misappropriation and misuse by subsidiaries seems a fundamental premise for the international firms.Furthermore, subsidiaries also have to learn to use their local-based, external resources as they may face constraints both in term of quantity and type of resources required to build its competitive advantage in the host market.Therefore, the effective exploitation and exploration of local resources require the subsidiary’s dynamic capability to combine external local resources and internal resources together.The strategy for the international firm to decentralize is seen to be important in ensuring that both the flexibility and responsibility of the subsidiary is not eroded.To sum up, drawing from an organizational learning perspective and the internationalization literature, an analytical framework for classifying the international firms’ business model innovations in host country could therefore be built with two dimensions: organizational learning capability and source of resources:.We apply this analytical framework in our case studies of international retailers in China.The results of this study allow us to identify the major patterns of international retailers’ business model innovations as well as the concrete routes to rebuild the firms’ core logic in the host country under each pattern.We report the details in the next sections.To evaluate the research questions we used a multi-case inductive study approach, based on a comparative analysis of 15 companies.Given the complexity of describing business models the multi-case study approach enabled the collection of data from multiple sources instead of self-reported events.Furthermore, it enabled the examination of each firm’s behavior over a reasonable period time rather than being a simple cross sectional check.The use of multiple case studies allows a replication logic which tends to yield better grounded theories and more generalizable results than those from a single case study.In our research, we followed the procedure recommended by Eisenhardt and Yin.The unit of analysis is the business model of an organization at the level of strategic business unit which is defined here as a given retail format in a given foreign market.For example, Carrefour hypermarket China and Carrefour hypermarket France are two different SBUs.Carrefour hypermarket China is also different from Carrefour supermarket China.If the international retailer operates multiple retail formats in China, we focused only on its dominant retail format in that market while we were carrying out this research.For example, we realized our interviews with the managers of Carrefour China in 2005 when it operated 70 hypermarkets but only 8 supermarkets in China.We therefore only chose to study Carrefour hypermarket being the major retail format exploited by Carrefour in China.Fifteen international retailers from a total of 152 registered in China were selected.Suitable firms were chosen using the following criteria as shown in Table 1.First, a high level of retail sector representativeness was required.Nearly all these firms were listed in top 100 retailers in China.Second, these firms were richly diversified in terms of sector of activity, entry mode, order of entry and country of origin, enabling our theory build.For each company, we collect the data through documentation and semi-structured interviews.To gain contextual and company-based information we also used a wide range of 
databases such as EBSCO, Euromonitor, Factiva, Xerfi 700doc, S&P Capital IQ as well as the firms’ websites, existing case analysis of individual companies), archival data from the investigated companies, as well as other sources such as retail business portals.We then identified managers who were involved in both strategic decision-making and strategy implementation for the interviews.These included Chief Executive Officers, Chief Operating Officers, and General Managers and Directors operating in different functions.A total of 18 face-to-face digitally recorded semi-structured interviews of top managers were conducted from the 15 selected firms.The interviews commenced by initially identifying each manager’s understanding of the Chinese retail industry, their firm’s business strategy; their role in the firm and how their firm sought to achieve a competitive advantage in Chinese markets.Using open-ended questioning we were able to identify: each organization’s strengths and weaknesses, the methods used to establish the local consumers’ requirements, the retail solution for host country consumers and the implications of these solutions to the organizational core logic of each firm.We then explored how the organizations adapted their home country business model to the specificities of the Chinese market and what resources were transferred.To enable responses to be easily compared interviews with uniform and structured close-ended questions were carried out in order to establish insights into firm performance, such as, sales turnover, number of stores, inventory turnover, etc.Potential informant bias was addressed in several ways.First, the interview guide was developed by initially piloting the questionnaire with two experts: an academic, expert in the retail field, and a retail industry expert.Second, the use of open-ended questioning focusing on the firm’s strategic activities rather than on general descriptors, helped to limit recall bias, following the “party line”, and to this extent enhanced the level of openness and accuracy.Third, to prevent causal and temporal inferences we triangulated the data enabling a systematic examination of data from multiple sources which raised more questions, enabling verification, and therefore reducing the likelihood of data and researcher bias.For example, in addition to the information gathered from the managerial interviews, we collected more objective data from other sources such as S&P Capital IQ.Finally, as part of the verification of the data process we also carried out post-interview e-mail and telephone follow-ups to reiterate and clarify the data obtained, and to ensure that it was a true reflection of what the interviewees intended to express.We conducted a five-stage data analysis.First, we assembled the data from the multiple sources in chronological order for each study unit.The data was validated by undertaking parameter comparisons for each of the companies enabling the data-set build −up for each study unit.Second, we undertook the coding process using Nvivo9 and assigned labels to the each retailer’s strategic choices from three dimensions: target clients, shopper value propositions and retail value chain.With respect to establishing the retail value, we identified and summarized the key strategic activities for each study unit using twelve categories: conception of products, production of private label products, procurement, logistic, retail concept, HR management, financing and accounting, organization/structure, information system, 
Fifth, based on the above results, we classified the routes of business model innovation.Five firms were dropped as they did not succeed in rebuilding the core logic of the business models they implemented in China.For the remaining 10 firms, we conducted a cross-case analysis using a constant comparison method.The objective of these cross-case analyses, with no a priori hypotheses, was to build our theory.Each change was assigned a label; for example, we assigned the label “Alliance with local stakeholders” to changes in the procurement policy: “more than 70% products sold at local stores are made in China and establishing the long relationship with more than 300 Chinese suppliers”.Then, the next unit of data was analyzed and compared with the first.If similar, it was assigned the same label.If not, it was coded as a different concept.This step enabled us to identify six routes of change for international retailers to rebuild the core logic of their business model in a host country.We present these in detail in our research findings.We now present the different routes of retail business model innovation which the international retailers adopted to rebuild the core logic of their business models in China.We then move on to discuss the critical activities which drove the firms to create and capture value, which can be organized into three patterns.Table 3 shows the incidence of the patterns used to rebuild core logic: legitimatizing brand image in the local market, new innovation for the local market, alliance with local stakeholders, sharing resources within the group, transferring knowledge from headquarters and imitating the local competitors.Two firms pursued the route of legitimatizing their brand image in the local market to rebuild core logic.
Both firms reviewed their target segments and provided the local consumers with new shopping values to meet local market demands rather than offering the shopping values proposed in the home market.“In Japan we target clients living or working within 300 m walking distance from the store.However, in China we refocused towards new target clients: students and urban white-collar groups, who were time restricted, less price sensitive, open minded to a new concept of retailing and inclined towards the western lifestyle”. They emphasized symbolic values and integrated them into their brand identity in the local market.For example, 7-Eleven proposed the value of a modern lifestyle and Zara made European brands affordable to Chinese consumers.“For our clients, shopping in our stores is not only about buying but also a way to access modern life.” The brand image successfully established by these two firms enabled them to obtain a price premium in the local market, setting a higher price level than their competitors in the host market and in the home market.As a result, they were able to create value both for the local consumers and for the firms themselves, the value-based logic being rebuilt after the transfer of the business model from home to host country.“We set higher prices in China than in Europe.Here, we target only young people 18–35 years old living in big cities.They are highly sensitive to fashion but most of them cannot afford international first-line brands.I think our competitive advantage is that we are a European brand and are able to provide them with the latest designs, similar to international first-line brands but much cheaper”. Following the legitimatization of the brand image in the local market, the firms sought firstly to achieve a social fit with the local market by proposing an appropriate shopper value.To enable this, the firms invested heavily in market research to gain insights into consumers’ expectations, social norms and the industrial environment of the local market.“Although our first store has just opened in China, we had prepared our entry into China for a long time.In fact, our first market research office was opened in Shanghai 10 years ago.” Second, the firms sought a fit between the shopper value proposition and the retail value chain they built.The firms reformed their retail value chains in order to create and deliver new shopper values to the local consumers and thus to achieve competitive benefits.“We opened our stores in privileged commercial zones in Shanghai and Beijing, spending a lot on store decoration.Our transport cost is also relatively high because sometimes we deliver our new collections to the Chinese market by air instead of by road as in Europe.We introduce our new collections at the same time in Europe and in China.The cost escalation in China was partially compensated by the price premium we obtained, but we still needed to reduce cost.We improved our efficiency again through centralization of design and logistics.We also did some local sourcing.Today, 35 percent of products sold in our Chinese stores are sourced in Asia.” Seven of the firms in the sample pursued the route of developing new innovation for the local market to rebuild their core logic.For example, companies such as 7-Eleven, Decathlon, Carrefour, Metro and Sephora developed specific products and services for the local markets.“We respect local consumer habits and started to sell living fish in our stores like in other traditional wet markets in China”.
As these innovations brought real added value to the local consumers, the firms were able to command a price premium in the local market and maintain a value-based logic.Therefore, these new innovations for the local market became important sources of competitive advantage for the firms.“We can stand out in the market, not because of an advantage on price but because of our services.In the past thirteen years, we have continued to introduce new services in China.Our success depended not only on our advanced technologies and management systems transferred from our group, but also on our local experience which enabled us to provide the innovative, convenient services which the local consumers really needed.” Apart from innovations for developing new products and services, three out of the seven firms focused on process innovation to reduce the costs of inventory, operations, etc.As a result, these innovations enabled the firms to rebuild an efficiency-based logic.“In January 1995 we opened our first shop in Shanghai.In the past decade of our development in China, we used our successful experiences in France.But more importantly, we fully studied the special characteristics of the Chinese market and innovated a new way to develop our store network.Different from the stand-alone stores in France, we opened our stores integrated within big department stores, with a relatively low cost of network development.Quite quickly, Etam became a well-known brand in China.” Following the route of new innovations for the local market, the recruitment and promotion of local talent was seen to be critical for the firms.As local staff understand the local market better than expatriate employees, attracting local talent and creating favorable working conditions for them helped many of the firms to achieve a superior insight into the local market.The firms were able to use a more qualitative knowledge of local consumer preferences and capture the value by way of new product/service or process innovations.“Our key successful resources are our ten years of local experience.Every year, we recruit young talents from renowned universities in China and provide them with adequate training and promotion opportunities.” Four firms pursued the route of alliance with local stakeholders to rebuild core logic.All these firms found the right local partners or local suppliers in order to achieve “embeddedness” in the local supply systems and the local cultures of consumption.“We found our local partner Shanghai Zhonglu Group which helped us to learn the Chinese market and open our first retail store in Shanghai in 2003.” Moreover, the Chinese subsidiaries of these firms played the role of a hub to facilitate transactions between Chinese suppliers and global sourcing for their whole group.“We started our business in China in 1992 by sourcing products here for our stores in other countries.We have established four supply centers in Shanghai, Guangzhou, Shenzhen and Tianjin.We are working closely with more than 300 Chinese suppliers.In 2003, we exported 300 million US dollars of products from China to the other countries where our stores are located.” The large volume of global purchasing with suppliers in the host country enabled the firms to achieve a stronger bargaining position and effectively reduce the cost of purchasing.As a result, the firms became more efficient and were able to rebuild their core logic in that local market.“We continued to reduce the cost of purchasing.Now, more than 70% 
products sold at our stores in China are supplied by the local suppliers which also supply our global market.” Regarding the pattern of alliance with local stakeholders, efficiency was created not only at the individual firm level but across the entire network of suppliers and customers.Although increasing the volume of transactions is certainly important, what is seen to be more critical for the firms is to enable the local suppliers to integrate quickly into the global value chain of the relevant activity.“Local suppliers are motivated to do business with us as they intend to integrate into our global sourcing system or even into the whole global value chain of sports products.We organized training and seminars for them in order to help them meet our international product standards.Today, one third of the products in Decathlon’s global market are from China.Imports from China are growing within our group at double-digit annual rates.” Six firms, in rebuilding their core logic, sought to share resources within the group.“The purchasing of Auchan China was integrated into Auchan’s global sourcing.” Apart from centralizing procurement, Zara integrated its logistics management across countries.“Each Zara store, whether a franchisee or a company-owned store, should use the standard information system developed by our group in order to share data in real time.Our distribution center is located in Spain.Every day it is able to distribute 2.5 million products to diversified destinations across the world.” The centralization of procurement and/or logistics enabled the firms to benefit from economies of scale and improve the efficiency of their supply chains.The costs of purchasing and inventory were effectively reduced.As a result, the firms were able to rebuild an efficiency-based logic.Furthermore, the firms in China established privileged relationships with some key suppliers by sharing the source of supply with their group.The existence of such relationships often made it quite difficult for local competitors to access the same suppliers.This strategy enabled the firms to create value for the local consumers, even achieve a price premium in the market, and rebuild their value-based logic.“We share the source of international purchasing.For example, nearly all the famous brands in the category of cosmetic products collaborate with us.They have already done business with our stores in south-east Asia.Now, for our stores in China, they continue to supply us.” Following the route of sharing resources within the group, creating an appropriate organizational structure was critical for the firms.Most firms managed their activities in a centralized way in order to facilitate the integration and coordination of activities across different functions and business units within the group.However, when internationalizing, decentralization was seen to be important to maintain flexibility and responsiveness, especially in highly fragmented markets such as the Chinese market.Therefore, finding a fine balance between centralization and decentralization was a significant challenge for international retailers.“The negotiations with suppliers are organized at several levels in our group: international, national and local.Even though we have our global sourcing centers, we also have our local purchasing teams.We think it is a good solution to provide the products which the local consumers specifically need.” Eight firms pursued the pattern of transferring knowledge from 
their headquarters to rebuild the core logic.The systems, best practices and retail technologies transferred from headquarters to subsidiaries in the host country helped the firms implement their activities in a more efficient way than their local peers, thus enabling the rebuilding of an efficiency-based logic.“The adoption of the same supply chain management system in China as in Germany helped us effectively reduce the cost of inventory and quickly respond to the needs of local consumers.” The firms also transferred their store concepts from headquarters to the host market.These new store concepts quickly filled the gap between Chinese consumers’ changing needs and traditional local retailers’ inability to meet them.There was a clear shift in the shopping habits of the local consumers, who sought a more modern lifestyle and shopping experience from these new concept stores.“Our concept is totally new in the Chinese market.The consumers in our store can touch the products and freely test them.The products are displayed in terms of three categories instead of by brand.That may help the consumers to compare the different brands within one category of products.Moreover, we give professional advice to the consumers.The merchandising, atmosphere and the rich color in store… The consumers enjoy their shopping.” Furthermore, the transfer of knowledge on quality control of products effectively helped the firms to build a reputable brand image in the local market.This value was appreciated by the local consumers, especially middle-class consumers who were sensitive to the quality of products rather than the price.Therefore, the transfer of knowledge from headquarters to the host market enabled the firms to create added value for the local consumers and a price premium in the local market.As a result, this helped the firms to rebuild their value-based logic.“We strictly respect the rules and standards related to supply chain management for fresh food.The variety, origin of production and producer are strictly selected.Then the production process is inspected and controlled.The products that conform to the Carrefour quality system are allowed to be sold in any Carrefour store in the world.We differentiate ourselves from other competitors here by safe and fresh foods.” The experiences and best practices transferred from home to host country were seen to be invaluable.For example, home firm employees conducted training, internally and in the field, and also set up internship programs for the local staff.Tacit expertise could be shared at a distance through the virtual platforms built by the group.“The transfer of company culture and experiences in Carrefour is traditionally by oral means, by the expatriates, especially the Taiwanese, and also by the training internship program.” The knowledge transferred was shared both formally and informally.“This exchange of best practices had been done everywhere.In Sephora we did many business trips.We learnt on site by visiting our stores in different countries.” Two firms pursued the pattern of imitating the local competitors to rebuild core logic.The firms learnt from the local competitors some supply and logistics solutions which were especially efficient and well suited to the special characteristics of the local markets.“At the level of logistics, we adopted flexible and diversified ways, even using tricycle motors to deliver the merchandise to the stores.Given the poor logistics network in China when we started the business here, 
most of our products were delivered directly by the suppliers to each of our stores instead of being collected by our distribution center and then dispatched to each store.This solution, which effectively reduced the cost of logistics under the conditions in China at that time, was similar to what our local competitors were doing, but different from what we did in Europe.” The complexity of the local markets and the large psychic distance between the home and the host markets brought about a high level of uncertainty for the firms.They had to learn not only from their own experiences in other countries but also from their local peers’ experiences in order to reduce the uncertainty.The “embeddedness” of the international firm in the local business culture helped the firm to find the most efficient way to organize its supply and logistics systems in the local markets, in an attempt to rebuild an efficiency-based logic.In imitating the local competitors, a strong understanding of the local business culture was seen to be essential.“It was these very interesting experiences that showed me that we are in another world.If you come to China with preconceived ideas after having been successful in Europe or the U.S., you make mistake after mistake.” Some firms sought to undertake experiments to understand how the local business culture worked.Two firms undertook research in Taiwan.Such pilots or experimentation enabled the firms to gain an insight into how to balance standardization and adaptation when setting up in a new host country.“We discovered Chinese culture and the way to work with the Chinese suppliers in Taiwan.” The findings of this study suggest six routes of retail business model innovation as well as their critical activities.These six routes can be categorized into three patterns in terms of the organizational learning capability and the resources which they deploy.These three patterns of retail business model innovation are presented in Fig. 
3.In pattern 1, international retailers exploit established resources generated in their home market, such as brand reputation, privileged relationships with key suppliers, international distribution centers, retail systems and technologies, and best practices.The emphasis of this pattern is placed on the extension of the firms’ existing products and technologies in order to capitalize upon the firms’ established knowledge base.In pattern 2, international retailers exploit existing resources in the environment of the host markets where their subsidiaries operate.The firms may acquire these resources via alliance with local stakeholders or by imitating local competitors.The integration of these resources into the firms’ adaptive processes aids the customization of existing products and technologies to local market needs.The emphasis of this pattern is placed on the embeddedness of the international firms in the local business environment in order to adapt to local needs and reduce uncertainty.In pattern 3, international retailers develop new products, services or business processes based on stimuli and resources in the host markets where their subsidiaries operate.The subsidiaries themselves may emerge as centers of excellence within the firm for particular products and technologies.The autonomy of the subsidiaries allows local talents to pursue innovative paths that are more compatible with their own education, training, and experience.The emphasis of this pattern is placed on the autonomous set of activities of the subsidiaries that are less closely aligned to the existing resource base of the firms.International retailers seem inclined to follow more than one pattern of business model innovation to rebuild their core logic in China, such as three patterns for the cases of 7-Eleven and Carrefour and two patterns for the rest of the firms listed in Table 3.Internationalization of the firm can be understood as an innovation decision process.Existing research suggests that organizational learning, innovation and internationalization are linked together in a complex way.Business model innovation provides a new source of sustained value creation for firms and is therefore a source of future competitive advantage.We argue that research into business model innovation in the host country of MNEs will provide new insights into the internationalization process, as business model innovation deals not only with views from the ‘supply side’ but also the ‘demand side’.However, on the question of whether there exist different patterns of business model innovation which enable international firms to rebuild the core logic of their business model in a host country, we note that few studies are available.Our study focuses on the retail industry and analyses the data collected from 15 international retailers in China.Our findings reveal six potential routes of retail business model innovation: legitimatizing brand image in the local market; sharing resources within the group; transferring knowledge from headquarters; alliance with local stakeholders; imitating local competitors; and new innovation for the local market.Apart from the new innovation for the local market, five of the six routes are exploitative business model innovations.From the organizational learning perspective, these six routes can be clustered into three patterns of resource deployment: extension of the existing knowledge base, embeddedness in the local environment, and autonomous exploration by subsidiaries.The first two patterns are 
exploitative learning in nature, whereas the third pattern is explorative.These patterns provide novel insights into the nature of the internationalization process as “multilateral rather than unilateral”, as in the original model of Johanson and Vahlne, and show that the process is “also inter-organizational and not just intra-organizational”.They also disclose the different ways in which MNEs recognize and exploit opportunities together with or against their local partners in the host market, therefore addressing the call by Forsgren regarding how various “counteracting forces affect the shape and direction of the internationalization process”.We extend the international business literature by providing an analytical framework for explaining how firms adapt and/or renew their business model in an international context.Although prior research has considered the product, mode and organizational changes made by firms while internationalizing, it has not clarified the interdependency between the changes of these elements or the consequences of the changes.To address this knowledge gap, we adopt the theoretical lens of organizational learning and internationalization to examine business model innovation, which enables us to systematically examine the changes in the firms’ varied dimensions between home and host market and the consequences which these changes may produce.Furthermore, we augment the study of the relationship between organizational learning, innovation and internationalization by adding the new dimension of resources.Our study of business model innovation in the host country of international retailers thus deepens our understanding of the complex and dynamic relationships of these four key ingredients for international firms.We advance the business model literature by proposing a novel analytical framework to study business model innovation in an international context.Prior research suggests that the process of business model innovation is a learning process.We develop this view by building a theoretical framework for research into business model innovation in an international context with two dimensions: organizational learning capability and source of resources.This framework enables us to better understand and analyze how international firms rebuild their business models in a new setting.We found that an international firm may simultaneously exploit and explore resources both from the home base of the parent company and from the host environment to rebuild the core logic of its business model.Furthermore, we extend the literature on international retailing by adding a view of retail business model innovation in a new setting.Prior research related to international retailing strategies often focuses on the conceptualization of international retailers’ operations in individual dimensions rather than on the business model as a whole.We augment these studies by adopting the perspective of retail business model innovation in the research of international retailing.This perspective enables us to rationalize the decision on changes/non-changes and the levels of change in each dimension in a holistic way, whilst examining whether these changes enable international retailers to rebuild the core logic of their business models in a new setting.This study provides guidance to international retailers that aim to change their business model to achieve success in host markets.According to our findings, international retailers may follow three patterns of retail business model innovation to achieve 
competitive advantages in the host country in terms of providing value both to the firms themselves and to the local consumers.If they follow the pattern of extension, they should focus on capitalizing on their existing resources generated in their home market.If they follow the pattern of embeddedness, the priority for them is to align with the business practices of the local suppliers and competitors.If they follow the pattern of autonomy, they should encourage local talents to develop new products, services and technologies adapted to the local consumers’ needs.Furthermore, six specific routes to change the components of international retailers’ business models in a new setting have been identified.International retailers may implement the key activities of these routes to rebuild the core logic of their business model in a host market, while considering the different circumstances of that market.Our study opens the door for future research.Firstly, our findings suggest that international firms may simultaneously exploit and explore resources both from the home base of the parent company and from the host country environment to rebuild the core logic of their business model in the host market.However, to survive and succeed, international firms need to maintain the balance between exploitation and exploration.It is therefore interesting to study further how to handle the tensions involved in keeping this balance and to consider which contextual factors determine the choice and degree of exploration and exploitation for international firms.Secondly, our study has focused on only one host country, namely China.Arguably the most prominent limitation of a single-country study is the issue of external validity or generalizability.Nevertheless, we see our study as having significant utility for analytical generalization.The exploration of the transfer of business models to a new host country provides a template for international retailers in considering the patterns of retail business model innovation, thereby enabling theory building and the groundwork for theory testing.Although this has the advantage of comparing business model innovation in a similar host environment, it is essential to extend the findings to a different host context.Therefore, further studies into different host countries are called for.
Although research into the business model has received increasing attention, few studies have so far been conducted on business model innovation in an international context. The purpose of the study is to identify different patterns of business model innovation which enable international retailers to rebuild their core business logic in new host countries. On the basis of comparing and contrasting the business model changes of 15 international retailers from various home countries to one single host country (China), our study provides an in-depth understanding of business model innovation in the context of international business. By looking at the firms’ capabilities in rebuilding their core logic in the setting of a host country, we reveal six routes of retail business model innovation. Utilizing the lens of organizational learning theory and internationalization, we identify three patterns of resource deployment by international companies in the process of developing business model innovations. Our study, therefore, provides insights and guidance for multinational companies in general, and international retailers in particular, as to how to successfully adapt their business model from home country to host country.
28
Modelling aeolian sand transport using a dynamic mass balancing approach
A central aspect of understanding desert landscape dynamics relies on the quantification of surface deposition and erosion.Numerous formulae have been developed to predict sediment flux and these generally rely on temporally- and spatially-averaged measures of erosivity such as wind velocity and shear velocity.No single sediment flux model has been found to be broadly applicable under the diversity of natural environmental conditions that exist, as most are derived theoretically or from wind-tunnel observations without rigorous field-testing.Large discrepancies arise between field-derived and modelled rates of sand flux because existing models employ mean parameter values to determine the aggregate behaviour of a landscape, when aeolian transport often appears to be dominated by extremes in erosivity and erodibility parameters.In particular, the use of u⁎ for estimating sand transport has been contested due to its dependence on a log-linear relationship between wind velocity and height, which collapses at high measurement frequencies.Some wind tunnel and field experiments have identified relationships between sediment transport and u⁎ at timescales on the order of 10⁰–10¹ s.Whilst this is incompatible with the traditional u⁎ approach – which requires wind shear to be calculated over periods of minutes to hours – suppressing the short-term variability of the transport system by using time-averaged series can provide reasonable relationships between wind velocity and sediment flux.Despite improvements in high-frequency anemometry and saltation impact sensors that allow for many of the relevant transport processes to be measured in the field, the relative importance of turbulence for aeolian sediment transport remains unclear.This is partly due to the spatially discrete nature of transport events such as sand streamers, and inertia in the saltation system causing changes in sediment transport to lag behind de/accelerations in flow.This lag has been notably explored through the concept of the saturation length Lsat, a single length characterising the distance over which sediment flux adapts to changes in the wind shear.The saturation time Tsat can be analytically derived from this, and is proportional to the reciprocal of the sum of localised erosion and deposition rates in a system.Evidently this makes it difficult for Tsat to be accurately quantified in field settings over relatively short timescales.Whilst Lsat has been shown to control the length of the smallest dunes evolving from flat sand beds and to be a key parameter for the realistic modelling of barchan dunes, it remains difficult to isolate the mechanism responsible for limiting saturation in aeolian systems.As a result of these complexities, no empirical, predictive model has explicitly incorporated the dynamic balance between the mass of sand moving into and dropping out of the wind flow.The aim of this study is to derive and test a new sediment transport model that accounts for the lag in flux that emerges when wind flow is analysed at sufficiently high frequencies.The model is fitted to empirical field data, and its performance at a range of measurement resolutions and averaging intervals is compared to two existing transport models, using two separate wind and sand flux datasets.A new aeolian sediment transport model is proposed that incorporates the effects of a lag between high-frequency flow turbulence and sediment flux.In the generalised model, referred to in this study as the ‘dynamic mass balance model’ (DMB), an increase in mass flux depends on above-threshold horizontal wind velocity.Whilst research in the fluvial domain has emphasized the importance of w for sediment transport, evidence from the aeolian literature suggests that the w component tends to be an order of magnitude lower than u in natural wind flow, and that wind-driven flux is more simply associated with a positive u component.This is supported by the findings presented in Section 4.1.1, which show that the w component is poorly correlated with flux data.Therefore, vertical wind velocity is not incorporated in our DMB model.In this way, the first square bracket in the model describes instantaneous sediment entrainment.This is switched ‘on’ or ‘off’ depending on whether the wind velocity is above threshold, using the threshold criterion H. Debate exists within the literature about the power to which U or u⁎ should be raised.Here, sand flux is proportional to the square of above-threshold wind velocity, as is the case for many existing transport models.The second square bracket in the model describes depositional processes.Parameter b1 controls the vertical settling of grains due to gravity, a fundamental attribute of sediment transport that is primarily controlled by the grain size.Parameter c1 controls the vertical settling of grains due to horizontal wind velocity reduction, which reflects the fact that the saturation length Lsat is strongly controlled by the relation between the gravitational constant and the average sand particle speed above the surface.
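A minimal numerical sketch of a dynamic mass-balance update of this kind is given below.The exact published formulation is not reproduced in full here, so the entrainment coefficient name (a1), the form of the deposition terms, the explicit time stepping and all parameter values are illustrative assumptions consistent with the description above, not the definitive implementation.

```python
import numpy as np

def dmb_flux(u, dt=0.1, u_t=6.4, a1=1e-4, b1=0.8, c1=0.2):
    """Illustrative dynamic mass-balance update (not the published code).

    u   : array of horizontal wind velocities (m/s) sampled every dt seconds
    u_t : threshold wind velocity (m/s); 6.4 m/s follows Experiment 1
    a1  : entrainment coefficient (assumed name and value)
    b1  : deposition rate representing gravitational settling
    c1  : extra deposition assumed to act when the flow drops below threshold
    """
    q = np.zeros_like(u, dtype=float)                     # mass flux, kg m-1 s-1
    for i in range(1, len(u)):
        excess = max(u[i] - u_t, 0.0)                     # H(u - u_t) switches entrainment on/off
        entrain = a1 * excess**2                          # entrainment ~ squared above-threshold velocity
        deposit = (b1 + c1 * (u[i] < u_t)) * q[i - 1]     # settling removes a fraction of the airborne mass
        # Carrying part of q forward to the next step is what produces the lagged
        # response of flux to de/accelerations in the flow.
        q[i] = q[i - 1] + dt * (entrain - deposit)
    return q

# Example: a step change in wind speed shows flux approaching equilibrium over several seconds.
u = np.r_[np.full(50, 5.0), np.full(150, 8.0)]            # 0.1 s steps: below then above threshold
print(dmb_flux(u)[-1])
```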
By inserting a variety of parameter values into the three model formulations, it is possible to investigate the sensitivity of model behaviour to each parameter.Fig. 1 displays the effect of varying different parameters on the shapes of sediment flux time series predicted by the three models.The models were initially run with estimates based on extensive preliminary testing, and with values ± 30% of this initial estimate.The theoretical threshold wind velocity was set at 6.4 m s−1, to reflect the threshold velocity derived from the dataset of Experiment 1.As shown in Fig. 1, the Radok and Dong models respond much more rapidly to sudden increases and decreases in wind velocity compared to the DMB model.The Radok and Dong parameters effectively alter the magnitude of simulated transport but do not alter the onset or rate of decline in saltation activity.In contrast, the DMB model clearly displays a lag in its sediment transport response, producing exponential increases or decreases in mass flux as opposed to the sudden step-changes evident with the comparison models.In cases of constant above-threshold wind velocity, mass flux predicted by the DMB model tends to an asymptote more rapidly at near-threshold wind velocities than at very high wind velocities.Mass flux approaches equilibrium within 5–8 s when wind velocity is comparatively close to the threshold, but takes longer to approach equilibration when the constant wind velocity is higher.
Using the example parameter values shown in Fig. 1, the relationship between approximate equilibration time and wind velocity over a realistic range of above-threshold wind velocities for the DMB model is given by: TE ≈ 1.5u − 5.This result is theoretically only applicable when flux is tending towards equilibrium, but equilibrium between wind flow and saltation at shorter timescales is not easily recognised in field data.Two field experiments were undertaken in order to provide collocated wind and sand transport data to fit and compare the performance of the three models described in Section 2.In this study, we differentiate the temporal resolution at which the data were measured from the interval over which the data are subsequently averaged for analysis purposes.Experiment 1 was conducted over a 10-min period to explore model behaviour in detail, and averaging intervals for this experiment did not exceed 10 s in order to provide enough data points to maintain statistical reliability.Experiment 2 was designed to provide longer wind and sediment transport datasets that allowed for longer averaging intervals to be calculated.Experiment 1 was conducted on 7th September 2005 in the Uniab River Valley along the arid northwest coast of Namibia.Experiment 2 was conducted at the same field site, on 26th August 2005.The field site was adjacent to a dune field and consisted of a large expanse of flat, sandy substrate, with a fetch of 500 m upwind of the experimental apparatus that was adequate for the development of an internal boundary layer.Weaver provides a more detailed description of the site.A sonic anemometer was deployed at the field site to obtain high-frequency time series measurements of wind velocity data at 0.30 m height.This measurement height accounts for the turbulent structures that are closest to the surface, whilst minimising interference of saltating sand grains with the sonic signal.For Experiment 1, the sonic anemometer sampled three-dimensional wind velocity at a frequency of 10 Hz over a period of 30 min, from which a representative 10-min period was selected.This conforms to the sampling period length recommended by van Boxel et al. for taking account of the largest turbulent eddies forming within the boundary layer.For Experiment 2, wind velocity was measured at a frequency of 10 Hz and sub-sampled to 1 Hz, over a period of 120 min.In both experiments, the anemometer was aligned parallel with the prevailing wind direction and level in the horizontal plane.In order to capture the high-frequency dynamics of saltation, 10 Hz measurements of grain impact counts were collected using a Saltation Flux Impact Responder.The Safire was deployed 1.2 m away from the base of the sonic anemometer array, with its sensing ring flush with the sand surface.Four vertically separated, wedge-shaped sand traps were also deployed, to calibrate saltation count with equivalent mass sand flux.The four sand traps were spaced 1.25 m apart to the side of and level with the sonic anemometer array, perpendicular to the dominant wind direction.The mass flux data collected over the experimental periods were averaged across the four traps.For Experiment 1, the sand traps were installed over the full 10-min experimental period, whilst for Experiment 2, the sand traps were installed for a 40-min period.
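The trap calibration implied here can be expressed as a simple linear scaling of Safire impact counts to mass flux.The sketch below assumes a zero-intercept least-squares fit between trap-derived flux and co-registered count totals; the numerical values and variable names are hypothetical illustrations, not data from the study.

```python
import numpy as np

# Hypothetical calibration data: mass flux derived from the wedge-shaped traps (kg m-1 s-1),
# averaged across the four traps, against Safire impact counts summed over the same periods.
trap_flux = np.array([0.004, 0.011, 0.019, 0.028])          # kg m-1 s-1 (illustrative values)
safire_counts = np.array([310.0, 840.0, 1450.0, 2150.0])    # counts per period (illustrative)

# Zero-intercept least-squares slope: flux ~ k * counts, reflecting the assumed linear relationship
# between saltation count and mass flux.
k = np.dot(safire_counts, trap_flux) / np.dot(safire_counts, safire_counts)

counts_10hz = np.array([3, 0, 7, 12, 5])                    # example 10 Hz impact counts
flux_10hz = k * counts_10hz                                 # equivalent mass flux series
print(k, flux_10hz)
```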
Fig. 3a displays the time series for 10 Hz horizontal wind velocity and mass flux over the duration of Experiment 1, and Table 1 provides a summary of descriptive characteristics.The mean values for wind velocity and mass flux clearly mask the true turbulent nature of the transport system, with wind oscillating at a range of frequencies, and saltation occurring in a typically intermittent manner.Peaks in above-threshold wind velocity were often accompanied by rapid responses in sediment flux, and mass flux maxima were of the same order of magnitude as those recorded in previous similar studies.The relationship between the instantaneous vertical component (w′) and mass flux was much weaker than for u′.Consequently, the relationship between −u′w′ and mass flux was very weak.This has previously been noted in other studies, because sweeps and outward ejections offset each other in −u′w′ calculations due to their opposing contributions to shear stress.Therefore, the horizontal flow component is likely to be the principal driver of entrainment and deposition, supporting the exclusion of the vertical flow component from the formulation of the DMB model.The three transport models were used to fit the observed mass fluxes to wind characteristics over the 10 min sampling interval of Experiment 1, using the original 10 Hz data and sub-sampled 1 Hz data.The optimisation for the fitting was performed using Sequential Least Squares Programming, a routine based on sequential quadratic programming that allows a function of several variables to be minimised.In order to identify areas within the multidimensional parameter spaces where optimal fitting would occur, 1000-run Monte Carlo simulations were performed for each model with randomised initial conditions.The best-fit parameters for the Dong and Radok models were insensitive to initial conditions.The two-dimensional parameter space of the DMB model was somewhat sensitive to initial conditions, with some of the parameters producing more than one optimal fit, which suggests that the numerical fitting routine may be finding local minima.Therefore, the optimisation routine for the DMB model was run by setting realistic constraints on each parameter's range and allowing all parameters to simultaneously vary within a confined solution space.This fitting methodology was also applied to the saltation count dataset of Experiment 2.Following Ravi et al., the wind velocity threshold for Experiment 1 was taken to be the minimum wind velocity producing saltation for > 1% of the 10 min sampling interval, which was determined as 6.4 m s−1.This corresponds well to the threshold of 6.7 m s−1 derived using the modified Time Fraction Equivalence Method.
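The constrained, multi-start fitting can be sketched as follows.This is an illustrative reconstruction only: the least-squares objective, the parameter bounds, the restart loop and the use of SciPy's SLSQP routine are assumptions consistent with the procedure described, and dmb_flux refers to the sketch shown earlier rather than the authors' code.

```python
import numpy as np
from scipy.optimize import minimize

def fit_dmb(u, q_obs, dt=0.1, u_t=6.4, n_starts=1000, seed=0):
    """Multi-start SLSQP fit of the sketched DMB parameters to observed flux.

    u, q_obs : collocated wind velocity and calibrated mass flux arrays (hypothetical names)
    """
    rng = np.random.default_rng(seed)
    bounds = [(1e-6, 1e-2), (0.01, 5.0), (0.0, 5.0)]          # (a1, b1, c1), assumed ranges

    def sse(params):
        a1, b1, c1 = params
        q_mod = dmb_flux(u, dt=dt, u_t=u_t, a1=a1, b1=b1, c1=c1)
        return np.sum((q_mod - q_obs) ** 2)                    # sum of squared flux residuals

    best = None
    for _ in range(n_starts):                                  # Monte Carlo restarts guard against local minima
        x0 = [rng.uniform(lo, hi) for lo, hi in bounds]
        res = minimize(sse, x0, method="SLSQP", bounds=bounds)
        if res.success and (best is None or res.fun < best.fun):
            best = res
    return best
```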
Fig. 4 shows the relationships between horizontal wind velocity and observed and predicted mass flux for Experiment 1.At 10 Hz, the DMB model produced the best correlation between observed and predicted flux, compared to the Radok and Dong models.The spread of the data produced by the DMB model closely matched the shape of the observed data, whereas the Radok and Dong models simulated completely deterministic transport responses following exponential curves.The predictive performance of all three models improved at the 1 Hz averaging interval, but again the DMB model produced the most realistic data spread and the best correlation compared to the Radok and Dong models.All the coefficients of determination presented were statistically significant at the 95% confidence level.Whilst the DMB model clearly replicates the spread of data and absolute values of mass flux better than the Radok and Dong models, it is also useful to compare changes in predicted mass flux over time.The three models produced time series that matched well with observed variations at both resolutions, although all models underestimated a peak in sand flux at t = 5 min and overestimated flux by up to 0.02 kg m−1 s−1 at t = 2.5 min.The DMB model tended to predict a less ‘peaky’ time series that more closely resembled the observed data than the Radok and Dong models, especially at the 1 Hz averaging interval.The lag that emerges naturally from the formulation of the DMB model allows it to reproduce the absence of sediment transport at lower velocities.Comparing the total modelled and observed saltation count over a given period allows for a different method of assessing model performance than simple correlation analysis.Under the assumption that the relationship between saltation count and mass flux is linear – as shown by Weaver and Davidson-Arnott et al., and as is implicit in Eq. – the grain impact data were used to directly fit the three transport models.Saltation counts were used instead of flux rate because they can be directly summed to produce a cumulative measure of sand transport intensity.Fig. 6 displays the difference between total modelled and observed saltation count over the duration of Experiment 1, over a range of time-averaging intervals using the original 10 Hz data.The DMB model performed well at all averaging intervals, overestimating total count by a maximum of 0.48% and underestimating by a maximum of 0.37%.This represents a significant improvement on the performance of existing models, with the Radok model consistently overestimating total count by up to 5.50% and the Dong model consistently underestimating count by up to 20.53%.Correlation between total modelled and observed count generally improved as the interval increased, with the averaging out of peaks and lags in the high-frequency saltation series suiting the interval-independent formulations of the Radok and Dong models in particular.Overall, the three models performed best at the 4 s averaging interval, predicting total saltation count most accurately with the highest coefficients of determination.For a complete assessment of model performance, it is necessary to identify any systematic biases the models produce.The frequency distribution of residuals for the fitted model curves in Experiment 1 is shown in Fig. 7.
At 10 Hz and 1 Hz, all three model fits produced residuals with an approximately symmetrical unimodal distribution, but strong positive skewness and high kurtosis values imply the existence of bias in each model.The residuals for the Radok and Dong models were positive for mass fluxes > 0.035 kg m−1 s−1, which suggests that both models systematically underestimated mass flux above this value.The DMB model systematically underestimated mass fluxes at higher levels in the 10 Hz data, around > 0.05 kg m−1 s−1, suggesting it may behave more robustly over a greater range of fluxes.For observed fluxes > 0.035 kg m−1 s−1, the DMB model on average underestimated mass flux by 46% and 37%, the Radok model underestimated by 73% and 48%, and the Dong model underestimated by 69% and 48%.These high-flux events occurred during < 1% of the total experimental period of Experiment 1, but were responsible for 11.9% of the total transported sand.Given the high-end biases identified in Fig. 7, it is important to separate the ability of the three models to simulate increasing and decreasing mass flux, as these two transport behaviours arise from differing entrainment and depositional processes.Specifically, above-threshold horizontal wind velocity likely controls the saltation system during periods of increasing mass flux, whilst the importance of vertical settling rates is likely greater during periods of decreasing mass flux.A robust local regression method, which uses weighted linear least squares and a second-degree polynomial model, was employed to smooth the 10 Hz and 1 Hz curves and categorize them into discrete periods of rising or falling mass flux.At 10 Hz, the DMB model simulated mass flux equally well during periods of decreasing and increasing transport.The Radok and Dong models simulated mass flux slightly more accurately during periods of decreasing transport.At 1 Hz, all three models predicted transport rates better during periods of decreasing mass flux than during periods of increasing mass flux, which suggests that depositional processes are slightly better parameterised than erosional processes.
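The rising/falling classification can be approximated as follows.This sketch substitutes a Savitzky–Golay filter (an unweighted local second-degree polynomial smoother) for the robust quadratic loess used in the study, so it is a simplified stand-in rather than the original procedure, and the window length is an assumed value.

```python
import numpy as np
from scipy.signal import savgol_filter

def rising_falling_segments(q, window=51, polyorder=2):
    """Label each sample of a mass-flux series as part of a rising (+1) or
    falling (-1) period, based on the slope of a locally smoothed curve."""
    q_smooth = savgol_filter(q, window_length=window, polyorder=polyorder)
    slope = np.gradient(q_smooth)
    return np.where(slope >= 0, 1, -1)

# Example: model skill can then be assessed separately on each class of period,
# e.g. comparing observed and modelled flux where labels == 1 versus labels == -1.
q = np.abs(np.sin(np.linspace(0, 6 * np.pi, 600))) \
    + 0.05 * np.random.default_rng(1).standard_normal(600)
labels = rising_falling_segments(q)
print((labels == 1).mean())   # fraction of samples classified as rising
```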
In all cases, the linear fits in Fig. 8 were found not to deviate from unity to any significant extent.The three models are therefore statistically robust, even over the relatively short time period of Experiment 1.Results from Experiment 1 demonstrate that, during a relatively short experimental period characterised by intermittent saltation, the DMB model predicted sand transport better than the Radok and Dong models, in terms of both mass flux and total saltation count.This is evident from: the more realistic spread of transport data produced by the DMB model at these resolutions; the less ‘peaky’ time series due to the lag naturally emerging in the DMB model; and the ability of the DMB model to predict total saltation count with far less error than the other two models, albeit with slightly lower R2 values for averaging intervals longer than 4 s.The three fitted models were tested against the data of Experiment 2, in order to examine their transferability over a longer time period – which allows for coarser time-averaging intervals to be investigated – and in a different wind/sediment transport regime to Experiment 1.The same fits derived for each model from the 1 Hz Experiment 1 mass flux dataset were used for predicting mass flux over Experiment 2.The wind velocity threshold was taken to be 6.4 m s−1, based on the data of Experiment 1.Mass flux equivalents were only available for 40 min of the total experimental period of Experiment 2, so, as for Experiment 1, the full 2-h saltation count dataset from the Safires was used to fit the three models directly.For the 40 min of grain impact data that were calibrated to provide mass flux equivalents, the relationships between horizontal wind velocity and observed and predicted mass flux are shown in Fig. 9.At 1 Hz measurement resolution, the DMB model produced the best correlation between observed and predicted flux and the most representative spread of data.Coefficients of determination improved at the 1 min averaging interval, although the Dong and DMB models underestimated flux at wind velocities < 7.5 m s−1.These results support findings from Experiment 1 that the DMB model predicts sand flux in a more realistic manner than the other two models, although its relative performance declines over a longer averaging interval.The observed wind velocity/saltation count time series for the complete duration of Experiment 2 is presented in Fig. 10a, alongside the model predictions.Wind velocity exceeded the threshold velocity for 98.4% of the total experimental period, and, unlike in Experiment 1, almost continuous sediment transport was recorded.The saltation count series predicted by the three models is shown in Fig. 10b, c, d.The Radok and Dong models produced similar saltation counts throughout the study period, sometimes peaking significantly at times when there was no corresponding peak in the observed data.In contrast, the DMB model predicted a less ‘peaky’ time series that more closely matched the observed data.The coefficient of determination between observed and modelled saltation count was highest for the DMB model compared to the Radok model and the Dong model.
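Comparisons of total counts at different averaging intervals, as reported next for Experiment 2, amount to resampling the series and summing the counts before computing the error and correlation metrics.A brief sketch of that bookkeeping, assuming pandas and hypothetical series standing in for the observed and modelled counts, is given below; in the study each model was driven by wind data at every averaging interval, so its totals change with the interval, unlike in this toy aggregation.

```python
import numpy as np
import pandas as pd

# Hypothetical 1 Hz observed and modelled saltation-count series standing in for Experiment 2.
idx = pd.date_range("2005-08-26 10:00", periods=7200, freq="1s")   # 2 h at 1 Hz
rng = np.random.default_rng(2)
obs = pd.Series(rng.poisson(5, 7200), index=idx)
mod = pd.Series(rng.poisson(5, 7200), index=idx)

for interval in ["10s", "1min", "10min"]:
    binned_obs = obs.resample(interval).sum()                       # counts can be summed directly
    binned_mod = mod.resample(interval).sum()
    err = 100 * (binned_mod.sum() - binned_obs.sum()) / binned_obs.sum()
    r2 = binned_obs.corr(binned_mod) ** 2
    print(f"{interval:>6}: total-count error = {err:+.2f}%, r2 = {r2:.2f}")
```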
Fig. 11 compares the total observed and predicted saltation count from each model over the duration of Experiment 2, at different averaging intervals.As for Experiment 1, the DMB model performed well across all averaging intervals, overestimating total count by a maximum of 0.84%, whilst the Radok and Dong models generally did not predict total count as closely.At an averaging interval of 10 min, the DMB model overestimated total count by 0.26%, the Radok model by 0.42% and the Dong model by 4.29%.The coefficients of determination were similar for all three models, even at greater averaging periods.Whilst the DMB model provided the most accurate overall predictions of total saltation count at temporal resolutions ranging from 10 Hz to 10 min over this significant experimental period of well-developed saltation, the difference in performance between the three models was starkest at averaging intervals of 10 s or shorter.For intervals longer than 10 s, the Radok and Dong models performed almost as well as the DMB model.We have shown that a sediment transport model based on dynamic mass balancing, which takes account of the turbulence-scale lag between wind flow and sediment flux, improves our capacity to predict sediment transport over a wide range of measurement resolutions and averaging intervals, from 10 Hz to 10 min.This is achieved by formulating transport in more physically realistic terms, with two main differences compared to existing models: the natural emergence of a temporal lag owing to its differential structure; and the explicit representation of both erosional and depositional components of sediment transport.First, by maintaining a fraction of the mass flux from one time interval to the next, the DMB model naturally leads to lags in the response of the transported sand mass to flow de/acceleration.This reflects the inherent inertia in the saltation system that has been observed in numerous wind tunnel and field studies.Indeed, sand in a natural transport system is not immediately deposited as soon as the wind velocity decreases, because of the momentum contained in individual saltating grains.The presence of a lag resulted in stronger coefficients of determination between observed and modelled sediment flux, a much more realistic spread of mass flux data, and less ‘peaky’ time series of flux and saltation count, over both short and longer experimental period lengths.Moreover, our DMB model predicted total saltation count to within 0.84% or closer for all experiments, whereas the Radok and Dong models over- or underestimated total count by up to 5.50% and 20.53% respectively.The DMB model tended to predict transport rates better during periods of decreasing mass flux, which suggests that deposition processes are slightly better represented than entrainment processes in its parameterisations.Given that the saturation time Tsat is proportional to the reciprocal of the sum of localised erosion and deposition rates in a system, further investigation of the model's behaviour is needed in order to quantify saturation limits using real field data.Second, our aim was to formulate a model that more explicitly accounts for turbulence-scale processes, as this has been a much sought-after tool for aeolian research.By accounting in some physical sense for both erosion and deposition in the saltation system, the DMB model provides a method for directly representing the movement of a saltating mass of sand.Crucially, the experiments presented here did not identify a temporal resolution that was too coarse for the DMB model to 
resolve the entrainment of grains to, and the settling of grains from, the saltation cloud.The DMB model performed well in a high-energy environment with both well-developed saltation conditions and partially developed saltation conditions.This is a strong advantage given that the relationship between saltation intensity and wind speed often breaks down in cases where transport is only partially developed.Nevertheless, there was evidence in Experiment 1 that the DMB model systematically underestimated high mass fluxes, although it did not suffer from this effect as strongly as the Radok and Dong models.Systematic underestimation may be a consequence of not explicitly considering splash entrainment, which arises in well-developed saltation conditions.At higher wind velocities, the additional energy contained in saltating particles could result in a power-law entrainment function that is inappropriate for lower wind velocities.Instrumentation bias at low wind velocities may have also artificially truncated the transport datasets, due to the momentum threshold in Safires for registering a sand grain impact.The introduction of a low-end bias such as this would evidently complicate the fitting of the models across the full range of observed wind velocities.However, since the distribution of wind velocities in many dryland areas broadly follows a Weibull distribution, the occurrence of large sand transport events remains proportionally low in most natural aeolian transport systems.The three models tested here should therefore be considered statistically robust predictors of sediment transport in the majority of wind and saltation conditions.Previous studies have proposed that suppressing the short-term variability of the transport system by using time-averaged series can provide relatively good relationships between wind velocity data and sediment flux.Over the short duration of Experiment 1, optimal model predictions were achieved when wind velocity and saltation data were averaged over a 4 s interval.This value is similar to previously reported response times of mass sand flux.Spies et al. 
showed that flux responds to positive changes in wind velocity on the order of 2–3 s, whilst the response to decelerating wind is about 1 s longer.Using cross-spectral analysis, Weaver revealed a periodicity of 1.5–5 s in the horizontal wind velocity that drives the majority of sediment transport.Therefore, it seems sensible that an averaging interval on this order should allow the models to capture the essential dynamics of the transport system without being affected by sub-second variability.Our DMB model significantly outperforms the Radok and Dong models at this scale of analysis.However, over the longer dataset of Experiment 2, the best results were achieved at 1-min averaging intervals.The ‘averaging out’ of peaks and lags in the transport system explains why the Radok and Dong models, which are interval-independent, were more suited to this resolution of analysis than at shorter resolutions.The DMB model still slightly outperformed the Radok and Dong models at the 1 min interval, possibly because longer periodicities in the sand transport system are accounted for by its differential formulation.On the basis of these results, we propose that in cases where the temporal scale of investigation is short, sand transport should be predicted using our DMB model run at a 4-s averaging interval.However, when the lengths of wind/transport datasets are longer, sand transport can be accurately predicted at 1–10 min averaging intervals, with only slight differences in model performance between the DMB, Radok and Dong models.This has ramifications for how wind and sand transport data are collected in the field, because it informs the appropriate resolution at which data should be measured and therefore the type of anemometry that is best suited to a given scale of investigation.Our recommendations also imply that turbulence-scale processes linked to the saltation lag, which are more explicitly accounted for in our DMB model compared with existing models, can be integrated into longer-term simulations of sand transport without substantially increasing computational expense.The approach we propose could prove to be significant for integrating turbulent transport processes into macro-scale landscape modelling of drylands.The new dynamic mass balance model presented in this study represents an improved method for predicting both high-frequency sand transport variability and coarser-scale total mass transport.The temporal lag emerging from the DMB model accounts for the momentum contained in individual saltating sand grains.This results in better predictions of sand transport than the models of Radok and Dong et al. 
in terms of accurately simulating mass flux distributions and time series of flux and saltation count.This has been shown to be the case over a wide range of measurement resolutions/averaging intervals.For all experiments and averaging intervals presented in this study, the DMB model predicted total saltation count to within 0.48% or closer, whereas the Radok and Dong models over- or underestimated total count by up to 5.50% and 20.53% respectively.The best correspondence between observed and modelled sand transport, in terms of accurately predicting total saltation count whilst maintaining high correlation with the observed data, was achieved when the DMB model was run using data averaged over 4 s or 1 min.We therefore propose that short-term, local-scale aeolian transport research should be conducted at a 4-s measurement resolution if the appropriate data collection methods are available, and employ our DMB model for predictive purposes.Over longer timescales, running our DMB model with averaging intervals of 1–10 min also yields slightly more accurate predictions of sand transport than the existing models analysed here.This could prove to be a revolutionary way of incorporating fine-scale transport dynamics into macro-scale studies of dryland landscape change.
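The mass-balance structure summarised in these conclusions — an entrainment term driven by the instantaneous wind and a depositional term that removes only part of the mass retained from the previous step, so that flux lags the flow — can be illustrated with a short numerical sketch. This is not the published DMB formulation or its fitted coefficients; the power-law entrainment, the relaxation time and every numerical value below are assumptions chosen purely to show how the lag emerges and why coarse averaging narrows the gap between lagged and interval-independent predictions.

```python
# Illustrative sketch only: a dynamic-mass-balance-style update (lagged) compared
# with an interval-independent power-law model (instantaneous). Coefficients,
# thresholds and the synthetic wind record are hypothetical, not fitted values.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
dt = 0.1                                  # 10 Hz measurement resolution (s)
t = np.arange(0, 600, dt)                 # a 10-min record, as in Experiment 1
u = 6 + 1.5 * np.sin(2 * np.pi * t / 30) + rng.normal(0, 0.8, t.size)  # synthetic wind (m/s)

def powerlaw_flux(u, a=0.8, u_t=5.0, n=3.0):
    """Interval-independent model: flux (arbitrary units) follows the wind instantly."""
    return a * np.clip(u - u_t, 0, None) ** n

def dmb_like_flux(u, dt, a=0.8, u_t=5.0, n=3.0, t_dep=2.0):
    """Mass-balance-style update: erosion adds mass, deposition removes only a
    fraction of the mass carried over from the previous step, producing a lag."""
    q = np.zeros_like(u)
    for i in range(1, u.size):
        erosion = a * max(u[i] - u_t, 0.0) ** n
        deposition = q[i - 1] / t_dep       # proportional settling of transported mass
        q[i] = max(q[i - 1] + dt * (erosion - deposition), 0.0)
    return q

flux = pd.DataFrame({"instantaneous": powerlaw_flux(u),
                     "lagged": dmb_like_flux(u, dt)},
                    index=pd.to_timedelta(t, unit="s"))

# Agreement between the two series improves as the averaging window widens.
for window in ["100ms", "4s", "1min"]:
    r = flux.resample(window).mean()
    print(window, round(r["instantaneous"].corr(r["lagged"]), 3))
```

Running the sketch shows the correlation between the lagged and instantaneous series rising as the averaging window widens, mirroring the convergence of the three models at intervals longer than about 10 s.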
Knowledge of the changing rate of sediment flux in space and time is essential for quantifying surface erosion and deposition in desert landscapes. Whilst many aeolian studies have relied on time-averaged parameters such as wind velocity (U) and wind shear velocity (u⁎) to determine sediment flux, there is increasing field evidence that high-frequency turbulence is an important driving force behind the entrainment and transport of sand. At this scale of analysis, inertia in the saltation system causes changes in sediment transport to lag behind de/accelerations in flow. However, saltation inertia has yet to be incorporated into a functional sand transport model that can be used for predictive purposes. In this study, we present a new transport model that dynamically balances the sand mass being transported in the wind flow. The ‘dynamic mass balance’ (DMB) model we present accounts for high-frequency variations in the horizontal (u) component of wind flow, as saltation is most strongly associated with the positive u component of the wind. The performance of the DMB model is tested by fitting it to two field-derived (Namibia's Skeleton Coast) datasets of wind velocity and sediment transport: (i) a 10-min (10 Hz measurement resolution) dataset; (ii) a 2-h (1 Hz measurement resolution) dataset. The DMB model is shown to outperform two existing models that rely on time-averaged wind velocity data (e.g. Radok, 1977; Dong et al., 2003), when predicting sand transport over the two experiments. For all measurement averaging intervals presented in this study (10 Hz–10 min), the DMB model predicted total saltation count to within at least 0.48%, whereas the Radok and Dong models over- or underestimated total count by up to 5.50% and 20.53% respectively. The DMB model also produced more realistic (less ‘peaky’) time series of sand flux than the other two models, and a more accurate distribution of sand flux data. The best predictions of total sand transport are achieved using our DMB model at a temporal resolution of 4 s in cases where the temporal scale of investigation is relatively short (on the order of minutes), and at a resolution of 1 min for longer wind and transport datasets (on the order of hours). The proposed new sand transport model could prove to be significant for integrating turbulence-scale transport processes into longer-term, macro-scale landscape modelling of drylands.
29
The motor repertoire in 3- to 5-month old infants with Down syndrome
Even though Down syndrome is the most common chromosomal cause of intellectual disability, with 20–22 individuals per 10,000 births affected, studies on early development are scarce.Infants with Down syndrome are known to be socially competent but show a delay in the acquisition of motor milestones and deficits in early gesture production.As early as the first months of life they scored lower than typically developing infants on both the Test of Infant Motor Performance and the Alberta Infant Motor Scale.They kicked less often and their arm movements were less accurate when reaching for objects of different sizes.Repeated assessments of their spontaneous general movements revealed a heterogeneous movement quality, although the fluency and complexity tended to improve between 1 and 6 months of age.Initially designed for infants with acquired brain injuries, the Prechtl assessment of general movements has recently also been applied to infants with genetic syndromes and infants later diagnosed with autism spectrum disorders.The assessment is based on visual Gestalt perception of normal vs. abnormal movements in the entire body.It is applied in foetuses, preterm infants, and newborn infants from term to 5 months post-term.The excellent predictive power of general movement assessments is mainly attributable to fidgety general movements, which occur from 3 to 5 months post-term age.Infants with normal fidgety movements are very likely to develop normally in neurological terms, whereas infants who never develop fidgety movements have a high risk for neurological impairment.Adding a detailed assessment of concurrent movements and postures to the assessment of fidgety movements, for example, showed a reduced motor optimality score to be associated with a limited activity in children who were later diagnosed with cerebral palsy, or with lower intelligent quotients during school age.We therefore assumed that determining the MOS by assessing fidgety movements as well as concurrent movement and postural patterns would enable us to systematically document the motor repertoire of infants with Down syndrome.The MOS makes it possible to quantitatively relate pre- and perinatal data to the motor repertoire of an infant and to data obtained from follow-up studies.The aims of our study were to describe movements and postures in 3- to 5-month-old infants with Down syndrome; to compare their MOS with the MOS of two matched samples, one of which was later diagnosed with cerebral palsy, while the other had a normal neurological outcome; and to analyse to what extent clinical risk factors during pregnancy, at delivery, and during the neonatal period were related to the motor performance in 3- to 5-month-old infants with Down syndrome.This exploratory study comprised a convenience sample of 47 infants with Down syndrome − 21 females and 26 males − who had been admitted to the Darcy Vargas Public Hospital, São Paulo; the Department of Physiotherapy and Rehabilitation at the Hacettepe University in Ankara; the Associação de Pais e Amigos dos Excepcionais at São Paulo University; the Rehabilitation Department of the Children’s Hospital of Fudan University in Shanghai; the Children’s Department at the City Hospital of Ostrava; the Clinic of Early Intervention at the University Hospital São Paolo; and the Medical University of Graz between June 2015 and May 2016.In order for the infants to be included in the study, the infants’ motor performance had to be recorded between 9 and 20 weeks post-term.The infants’ 
gestational ages at birth ranged from 29 to 41 weeks, with a birth weight range of 1440 g to 3680 g.Twenty-seven infants were born preterm, including two monozygotic twin pairs.Other clinical characteristics obtained from the medical histories are presented in Table 1.Three infants were diagnosed with mosaic Down syndrome, one with Robertsonian translocation.For comparison we used data of our international, MOS-based data bank, picking 47 individuals with a normal neurological outcome at 3–5 years of age, whose gestational age and age of video recording matched our study cases; and 47 individuals with a comparable gestational age at birth and post-term age at the time of the video recording who were diagnosed with cerebral palsy at 3–5 years of age.Since spontaneous movements are not related to ethnicity, we did not match the ethnic background.All parents gave their written informed consent.The ethical review boards of the various centres approved the study.Within the GenGM network we recorded 5-min videos of the spontaneous motility of each infant at a median post-term age of 14 weeks.The recordings were performed during periods of active wakefulness between feedings, with the infant partly dressed, lying in supine position.The videos were evaluated by at least two certified raters according to the Prechtl method of global and detailed general movement assessment.Scorers C.E. and P.B.M. were not familiar with the details of the participants’ clinical histories apart from the fact that they had Down syndrome.In case of disagreement, the raters re-evaluated the recordings until consensus was reached on a final score.Fidgety movements and the concurrent repertoire of movements and postures were assessed independently in separate runs of the video recordings.Using the score sheet for the assessment of motor repertoire at 3–5 months, we calculated the MOS, with a maximum value of 28 and a minimum value of 5.The score sheet comprises the following five sub-categories: fidgety movements, age-adequacy of motor repertoire, quality of movement patterns other than fidgety movements, posture, and overall quality of the motor repertoire.Fjørtoft and colleagues found a high inter-observer reliability for the MOS with intra-class correlation coefficients ranging from 0.80 to 0.94.Statistical analysis was performed using the SPSS package for Windows, version 23.0.The Pearson Chi-square test was used to evaluate associations between nominal data.To put the medians of non-normally distributed continuous data in relation to nominal data, we applied the Mann-Whitney-U test or, if there were more than two categories, the Kruskal-Wallis test.To assess the relative strength of the association between variables, we computed the following correlation coefficients: Cramer’s V coefficient was applied when at least one of the two variables was nominal.To assess the relation between two continuous variables, we applied the Pearson product-moment correlation coefficient.Throughout the analyses, p < 0.05 was considered to be statistically significant.Fourteen infants had normal fidgety movements; six infants displayed abnormal fidgety movements; 13 infants displayed no fidgety movements, which were therefore classified as absent; 14 infants showed fidgety-like movements whose amplitude was too great and whose pace was too slow.Abnormal and absent fidgety movements as well as fidgety-like movements were grouped as aberrant fidgety movements.Twelve infants displayed an age-adequate movement repertoire.The repertoire was 
found to be reduced in 20 infants, and age-inadequate in 15 infants.The quality of the various movement patterns was scored as predominantly normal in 39 infants and predominantly abnormal in three infants; five infants showed an equal number of normal and abnormal movements.On average the infants demonstrated three normal movement patterns and one abnormal movement pattern.The most frequent normal movement patterns included visual scanning, side-to-side movements of the head, foot-to-foot contact, hand-to-mouth contact, and kicking.Smiling, fiddling, hand regards, swipes, hand-to-hand contact, arching, and leg lifting were observed in fewer than ten individuals.Other movement patterns such as wiggling-oscillating arm movements, hand-to-knee contact, or rolling to the side were observed in fewer than five individuals.The most frequent abnormal movement pattern was long-lasting and/or repetitive tongue protrusion.In a few infants we observed hand-to-hand contact with no mutual manipulation, long lasting wiggling-oscillating arm movements, repetitive kicking, and monotonous side-to-side movements of the head.Posture was rated as predominantly normal in 22 infants and predominantly abnormal in 15 infants; ten infants showed an equal number of normal and abnormal postures.Infants with predominantly normal postural patterns were able to hold their head in midline, showed a symmetrical body posture and variable finger postures; a persistent asymmetric tonic neck response was absent in all individuals.On average the infants demonstrated three normal postural patterns and two abnormal postural patterns.The most common abnormal pattern was a lack of variable finger postures with just a few monotonous finger postures, finger spreading and/or predominant fisting.Twelve infants kept both arms predominantly extended, while seven infants kept their legs extended most of the time.Hyperextension of the neck and trunk was seen in three individuals.The following two postural atypicalities were observed, but are not captured by the MOS sheet: nine individuals showed an internal rotation and pronation of one or both wrists, and 22 infants showed an external rotation and abduction of the hips.Only three infants exhibited a normal, smooth and fluent overall movement character, while 44 infants displayed a monotonous, stiff, jerky and/or tremulous movement character.The median MOS was 13.The MOS of infants with Down syndrome was significantly lower than that of infants with a normal neurological outcome but significantly higher than that of infants later diagnosed with cerebral palsy.Similar results were obtained for fidgety movements, the age-adequacy of the motor repertoire, and the overall movement character.The quality of movement and postural patterns of infants with Down syndrome were similar to those of infants with a normal neurological outcome, while infants later diagnosed with cerebral palsy scored lower.None of the clinical characteristics was related to the MOS or fidgety movements.Preterm birth was not associated with the MOS or its subcategories, nor was any other clinical variable.We would particularly like to mention that congenital heart disease was not related to the motor performance at 3–5 months.Cranial ultrasound data were only available for a small proportion of the sample.Two infants with abnormal cranial ultrasound findings had abnormal fidgety-like movements.Among the six infants with normal cranial ultrasound findings three showed normal fidgety movements, one infant did not develop 
fidgety movements, and two infants had slow abnormal fidgety-like movements.Table 4 only lists significant associations between clinical variables and items of the MOS.A jerky movement character was associated with caesarean section and hyperbilirubinaemia, although the two clinical variables were not related to each other.Delivery by caesarean section was also related to a higher occurrence of a particular atypical posture at 3–5 months: in many cases external rotation and abduction of the hips led to a lack of movements to the midline.No difference was observed between twins and singletons.Neither the three infants with mosaic Down syndrome nor the one with Robertsonian translocation showed any sort of specific features in their motor performance at 3–5 months post term.Apart from one individual reported in the context of a larger sample of high-risk infants in Japan, this is the first study to use the MOS to describe fidgety movements and the concurrent motor repertoire in infants with Down syndrome.Mazzone et al. applied the detailed scoring for writhing general movements up to the age of 6 months rather than for fidgety and concurrent motor patterns, which occur from 3 months onwards.The motor coordination and performance of children, adolescents and adults with Down syndrome has been found to be highly heterogeneous.Interestingly, we already observed this heterogeneity at a much younger age: 30% of our 3- to 5-month-old infants with Down syndrome showed normal fidgety movements, 27.5% had no fidgety movements, and 42.5% showed abnormal fidgety movements.These highly diverse findings are reflected in the MOS, which range from 10 to the highest possible score of 28.Fifty percent of the infants scored between 10 and 13, which is significantly lower than infants with a normal neurological outcome and higher than infants who were later diagnosed with cerebral palsy.So far, two studies carried out in children with cerebral palsy in China, Italy, and the Netherlands demonstrated the lower the MOS at 3–5 months, the more severely limited their gross motor function.As we intend to monitor our sample of children with Down syndrome for at least 2 more years, we shall also assess the relation between the MOS and the motor, cognitive and language outcomes.The one infant with Down syndrome described by Yuge et al. 
displayed abnormal fidgety movements and an MOS of 13.Abnormal fidgety movements, exaggerated in amplitude, speed and jerkiness, were also observed in six infants in the present study.This is an exceptionally high rate of occurrence, as abnormal fidgety movements are usually rare.It has been a matter of debate whether or not infants with a low muscle tone are more likely to show abnormal fidgety movements.Although this may not be consentaneously defined, a low muscle tone is a general feature in infants with Down syndrome.A number of studies have documented an association between abnormal fidgety movements and coordination difficulties and/or disabilities in fine manipulative skills at school age; others describe an exceedingly high rate of abnormal fidgety movements in infants who were later diagnosed with autism spectrum disorders or Rett syndrome.However, most so-called abnormal fidgety movements in infants later diagnosed with autism spectrum disorders or Rett syndrome did not correspond with the category of abnormal fidgety movements described in infants with brain injuries, which were exaggerated in amplitude and speed.Several infants with a later diagnosis of autism spectrum disorders or Rett syndrome showed continual fidgety activity which was exaggerated in amplitude but too slow.We also observed this pattern in 14/47 infants with Down syndrome.It remains unclear whether this pattern is related to low muscle tone, another early atypicality in autism spectrum disorders and Rett syndrome, as video analysis does not allow for an assessment of active muscle strength or resistance to passive movements.Data on neurological examinations were not available in 44 of 47 individuals.In any case, the low rate of movements to the midline and the lack of kicking are in line with previous studies, where they have also been discussed as possible consequences of low muscle tone.The same is true of the frequent external rotation and abduction of the hip as well as the uni- or bilateral internal rotation and pronation of the wrist.Unfortunately we are unable to confirm this association for lack of information about our infants’ muscle tone.Several research groups have reported a significant impact of pre- and perinatal variables on the MOS at 3–5 months.For example, prenatal exposure to environmental pollutants or selective serotonin reuptake inhibitors, and perinatal hypoxic events resulted in a reduced MOS.No such association was established in our study.Neither gestational age nor birth weight or any other perinatal risk factor was found to be significantly related to the MOS or fidgety movements, notwithstanding the fact that 27 infants were born preterm.Nor was the percentage of aberrant fidgety movements and/or a lower MOS increased in the 22 infants with CHD, although toddlers with Down syndrome and CHD were reported to have a higher percentage of motor, language and cognitive deficits at 12–14 months than toddlers with Down syndrome and a normal heart structure.In our study, neither aberrant fidgety movements nor a reduced MOS were attributable to preterm birth or CHD.Only two variables were shown to affect movements and postures at 3–5 months: caesarean section and hyperbilirubinaemia were related to a jerky movement character, and external rotation and abduction of hips was more common among infants born by caesarean section.In their study on healthy full-term infants during the first week after birth, Ploegstra, Bos, and de Vries compared the general movements of low-risk neonates born by 
vaginal delivery with those of neonates after caesarean section and found no difference. So far, we cannot explain how external rotation and abduction of the hips is related to caesarean section. As for the jerky movement character, there have no doubt been more revealing findings: Groen, de Blécourt, Postema, and Hadders-Algra reported 11 out of 15 children to have shown a normal neurological outcome in spite of jerky movements at the age of 2–4 months. As various motor patterns such as abnormal fidgety or fidgety-like movements, lack of movements to the midline, and external rotation and abduction of the hips could be attributed to low muscle tone, we would also have needed to assess the infants' muscle tone. Yet, the assessment of muscle tone is anything but unequivocal. Definitions are not standardised and inter-scorer agreement is prone to be low except for extremes. Diverging experience and procedures in tonus assessment would no doubt have posed additional challenges in a multicentre approach. Training in and application of the Hammersmith Infant Neurological Examination were not available in most of the centres contributing to the study. This being a comparative study, one might consider our participants' varied ethnic backgrounds as problematic. However, as we are dealing with spontaneous, i.e. endogenously generated, movements, the respective care-giving practices are very unlikely to have had an impact on the early motor patterns in question. In fact, general movements have been assessed worldwide for more than 20 years with similar cross-cultural results. Nor did the various sensory stimulations affect the infants' fidgety movements, which demonstrates their robust, environment-independent character. Of course, the infants' different cultural backgrounds could be an issue with regard to the subcategory "posture". Infants raised in a hammock, for example, seem to be faster in acquiring the "head in midline" position and/or a "symmetric body posture", but none of our participants had experienced such exceptional practices, and this goes for all three groups. The significance of this exploratory study lies in its minute description of the motor repertoire of infants with Down syndrome aged 3–5 months. During this time window, fidgety movements are the predominant spontaneous movement pattern and an excellent marker for the neurological outcome. Infants with Down syndrome already show motor impairments at this early age, as evidenced by a significantly low MOS. Reassessing the same children as toddlers will show whether the high predictive power of fidgety movements found in infants with and without acquired brain injury also holds true for infants with a genetic disorder. A particularly important aspect that needs to be studied is the predictive power of the early spontaneous motor repertoire for the cognitive development of individuals with Down syndrome. But quite apart from this important aspect for clinicians and researchers, atypical postural features such as external rotation and abduction of the hips and/or internal rotation and pronation of the wrists call for earlier intervention.
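As a computational footnote to the statistical analysis described in the methods above, the group comparisons can be reproduced with standard library calls. The sketch below uses invented MOS values and an invented contingency table purely to show the tests named in the text (chi-square, Mann-Whitney U, Kruskal-Wallis, Cramér's V, Pearson's r); it is not the study's data or analysis code, which used SPSS.

```python
# Minimal sketch of the statistical comparisons described in the methods,
# using invented example data (not the study's dataset).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
mos_chd = rng.integers(10, 29, 22)        # hypothetical MOS, infants with CHD
mos_no_chd = rng.integers(10, 29, 25)     # hypothetical MOS, infants without CHD

# Mann-Whitney U test: non-normally distributed continuous outcome vs a binary factor
u_stat, p_mwu = stats.mannwhitneyu(mos_chd, mos_no_chd, alternative="two-sided")

# Kruskal-Wallis test when the nominal variable has more than two categories
p_kw = stats.kruskal(mos_chd[:8], mos_chd[8:16], mos_no_chd).pvalue

# Chi-square test and Cramer's V for two nominal variables
table = np.array([[9, 13],                # hypothetical 2x2 contingency table
                  [12, 13]])
chi2, p_chi, dof, _ = stats.chi2_contingency(table)
n = table.sum()
cramers_v = np.sqrt(chi2 / (n * (min(table.shape) - 1)))

# Pearson correlation between two continuous variables
r, p_r = stats.pearsonr(np.concatenate([mos_chd, mos_no_chd]),
                        rng.normal(14, 2, 47))   # e.g. recording age in weeks

print(f"Mann-Whitney p={p_mwu:.3f}, Kruskal-Wallis p={p_kw:.3f}, "
      f"chi2 p={p_chi:.3f}, Cramer's V={cramers_v:.2f}, Pearson r={r:.2f}")
```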
Background Even though Down syndrome is the most common chromosomal cause of intellectual disability, studies on early development are scarce. Aim To describe movements and postures in 3- to 5-month-old infants with Down syndrome and assess the relation between pre- and perinatal risk factors and the eventual motor performance. Methods and procedures Exploratory study; 47 infants with Down syndrome (26 males, 27 infants born preterm, 22 infants with congenital heart disease) were videoed at 10–19 weeks post-term (median = 14 weeks). We assessed their Motor Optimality Score (MOS) based on postures and movements (including fidgety movements) and compared it to that of 47 infants later diagnosed with cerebral palsy and 47 infants with a normal neurological outcome, matched for gestational and recording ages. Outcomes and results The MOS (median = 13, range 10–28) was significantly lower than in infants with a normal neurological outcome (median = 26), but higher than in infants later diagnosed with cerebral palsy (median = 6). Fourteen infants with Down syndrome showed normal fidgety movements, 13 no fidgety movements, and 20 exaggerated, too fast or too slow fidgety movements. A lack of movements to the midline and several atypical postures were observed. Neither preterm birth nor congenital heart disease was related to aberrant fidgety movements or reduced MOS. Conclusions and implications The heterogeneity in fidgety movements and MOS add to an understanding of the large variability of the early phenotype of Down syndrome. Studies on the predictive values of the early spontaneous motor repertoire, especially for the cognitive outcome, are warranted. What this paper adds The significance of this exploratory study lies in its minute description of the motor repertoire of infants with Down syndrome aged 3–5 months. Thirty percent of infants with Down syndrome showed age-specific normal fidgety movements. The rate of abnormal fidgety movements (large amplitude, high/slow speed) or a lack of fidgety movements was exceedingly high. The motor optimality score of infants with Down syndrome was lower than in infants with normal neurological outcome but higher than in infants who were later diagnosed with cerebral palsy. Neither preterm birth nor congenital heart disease were related to the motor performance at 3–5 months.
30
Influence of cultivars and processing methods on the cyanide contents of cassava (Manihot esculenta Crantz) and its traditional food products
Cassava is a staple food consumed worldwide, with an estimated 800 million consumers. It is cultivated mainly for its roots and leaves and has an acceptable production yield in poor soils with low nutrient availability. In Cameroon, cassava root is a very significant food product, with annual production estimated at 4.5 million tons and revenue estimated at 700 million US dollars. Cultivated in all five agro-ecological zones of the country, it constitutes 80% of edible roots in the forest zone. Cassava is the first source of starchy foods in the whole southern half of Cameroon, with 43% of the market share of roots and tubers, including 26% for derived products and 17% for fresh roots. Each Cameroonian household consumes approximately 75 kg of cassava per year, and "Chips", "Gari" and "Fufu" constitute the most consumed cassava-derived traditional foods. Despite these nutritional and commercial benefits, cassava contains toxic substances that limit its utility, the most important being cyanogens, which are responsible for the bitter taste of some cassava cultivars, hence named 'bitter'. This toxicity results from the enzymatic hydrolysis of molecules such as linamarin, lotaustralin and acetone cyanohydrin. Linamarin, the most important cyanogenic compound, is synthesized in the leaves through N-hydroxylation of valine and isoleucine and distributed to the roots. This compound is stored in the vacuoles of cassava cells and is known to be more concentrated in the leaves and root cortex than in the root parenchyma. Linamarin and linamarase react when cassava cells are mechanically damaged during harvesting and release acetone cyanohydrin, which decomposes to release cyanide, either through hydroxynitrile lyase or spontaneously when the pH is greater than 5. Hydrocyanic acid is the cause of many health problems, which explains the prevalence of several neurological diseases, including ataxic neuropathy, cretinism and xerophthalmia, in forest areas where cassava is the staple food. In addition, it causes thyroid disorders, goiter and stunting in children. An irreversible spastic paralysis and tropical ataxic neuropathy termed "konzo" is one of the diseases contracted as a result of acute or chronic exposure to cassava cyanogens. It is because of this high toxicity that the Nigerian Industrial Standards and the Codex Alimentarius Commission recommended a maximal residual cyanogen content of 10 ppm in cassava products. Variable toxicity levels of cassava have been reported in the literature, with total content depending on altitude, geographic location, period of harvesting, crop variety and seasonal conditions. With regard to seasonal conditions, the cyanide content of cassava tends to be higher during drought periods, owing to water stress on the plant, than during rainy periods. In Mozambique, more than 55% of fresh sweet roots became extremely toxic during drought periods. Similar observations were recorded in the Democratic Republic of Congo and other African countries. Splittstoesser and Tunya reported that cassava grown in wet areas contains relatively lower amounts of cyanide than cassava grown in arid ones. In general, cassava cultivars contain cyanogenic glucosides, although wide variation in cyanogen concentration exists among cultivars, ranging from 1 to 2000 mg/kg. Therefore, in order to be suitable for human consumption, cassava must be processed. Numerous processing techniques are found worldwide which not only increase palatability and extend shelf life, but also decrease the cyanogenic potential of cassava. These methods consist of 
different combinations of peeling, chopping, grating, soaking, drying, frying, boiling and fermenting.The most efficient methods being grating and crushing as they remove cyanide due to the intimate contact in the finely-divided wet parenchyma between linamarin and the hydrolyzing enzyme linamarase, which promotes rapid breakdown of linamarin to hydrogen cyanide gas that escapes into the air .This, in combination with wetting, fermentation and drying can reduce cyanide contents up to 99%.The most common agent of cyanide extraction is 0.1 M of phosphoric acid.The main method used to measure cyanide extracted from plant material is spectrophotometry through colored reactions .Other methods based on biosensor , HPLC , hydrolyze followed by distillation in a strong basic solution have also been developed.Nevertheless, the picrate paper method coupled with spectrophotometry was reported to be reliable, effective and easy to use .This method was therefore adopted for this study.Taking into consideration cyanide toxicity, and in order to inform on the risks related to the cyanogenic consumption of cassava, this study was aimed at determining the cyanide content of the most consumed varieties of cassava in Cameroon and to evaluate the effects of some traditional processing method on the residual content of processed products.Plant material was collected in the locality of Mbankomo.This site was chosen for its availability in most of the cassava varieties cultivated in Cameroon and also to avoid the influence of agroecology on variability of cyanide content.Twenty cassava cultivars of 12 months old consisting of 10 improved and 10 local varieties were studied.These cultivars were selected because they are the most consumed species.Local names were identified and morphological characterization of each cultivar was described.These characteristics were mainly the color of leaves, stems, petioles, skin roots .Preparation of parenchyma and cortex was performed by collecting, washing and peeling each cassava root cultivar.Parenchyma obtained after roots peeling was immediately ground fresh for cyanide quantification while cortex was sundried for 5 days.Roots of each cassava variety were processed into “Chips”, “Gari” and “Fufu”, according to the protocol described by USAID/CORAF/SONGHAI .Preparation of “Chips”: 1 kg of fresh cassava roots of each cultivar was washed, peeled and cut manually into small pieces containing 1 g of sample and 1 mL of phosphate buffer at pH 8.The bottle was hermetically closed and left to ambient temperature for 24 h.The change in color of picrate paper from yellow to chestnut – red indicated the release of cyanide contained in the sample and its absorption by picrate paper.Thus, the picrate paper was removed, placed in a test tube and 5.0 mL water was added.The absorbance of the solution was measured at 510 nm using a spectrophotometer.A calibration curve of absorbance and cyanide content was obtained from potassium cyanide solution used as standard at concentrations of 1, 10, 20, 30, 40, 50, 60, 70, 80, 90 and 100 µg.The regression model was a linear function f = ax + b).Data was analyzed using software SPSS for Windows versus 12.0.Samples were described using descriptive analysis.Analysis of variance was used to show the difference of cyanide content in various cultivars and compare the average contents in their processed products.Values of p < 0.05 were considered statistically significant.Cortex and parenchyma cyanide content of 20 cassava cultivars are illustrated in Fig. 
1. The obtained results revealed that the cassava cortex contained a higher cyanide content than the cassava parenchyma: the cyanide content of the cortex varied from 98.43±15.49 ppm to 155.44±12.11 ppm and from 117.80±11.32 ppm to 210.07±9.15 ppm for the improved and local varieties, respectively. The cyanide content of the cassava parenchyma ranged from 61.03±9.44 ppm to 118.04±7.16 ppm for improved cultivars and from 79.34±3.58 ppm to 181.33±0.48 ppm for local cultivars. Improved varieties such as 92/0326, 040, 95,109, 8034, 8017 and 8061, with a production yield of 35 ± 10 tons/ha, had lower cyanide contents than the other improved varieties. The lowest contents were obtained with the 8017 and 8034 cultivars, and the highest with 0110 and 961,414. Similarly, the local varieties Pola Rouge Beul, Macoumba, Man Mbong, Ngon Ezele, Owona Ekani and Mnom Ewondo, with a production yield of 20 ± 10 tons/ha, had lower cyanide contents than the other local varieties, whose cyanide contents varied from 104.53 to 181.33 ppm. The local cultivar Ngon Ezele presented the lowest cyanide content, and Madaga the highest. The results showed that the cyanide content was relatively higher in local cultivars than in improved ones; similar results were obtained by Akinfala when studying the nutritive value of the whole cassava plant as a replacement for maize in starter diets for broiler chicken. However, depending on the cassava genotype, some cultivars can have a more favourable cyanogenic potential than others belonging to the same group. The significant differences in the cyanide contents observed among the studied cultivars corroborate those of Adeniji, Faezah et al. and Siritunga et al., who linked the wide variability of cyanide content in cassava roots to cultivar differences, growing conditions, plant maturity, the nutritional status of the plants and the prevailing seasonal and climatic conditions at the time of harvest, as well as the impact of environmental pollution and the application of inorganic fertilizers. Also, significant variations in cyanide content among cassava roots of the same cultivar in different agro-ecological zones have been reported. In the same way, Ubwa et al. reported that the cyanide content of cultivars varied from one local government to another and also from one farm to another. In the parenchyma of the studied samples, cyanide contents were above the WHO recommendations. Given that a high cyanide content generally goes along with a high starch content, the high cyanide content observed in some improved cultivars such as Champion, 0110, 961,414 and Fonctionnaire, added to their high production yield, indicates better economic profitability for farmers. Cardoso et al. and CIAT have reported that the cyanide content of cassava varieties generally varies between 1 and 2000 ppm. The cyanide contents of the studied cultivars are in accordance with those found in other African producer countries such as Ivory Coast, Benin, Congo, Burundi, Ghana and Nigeria, where people quickly recognized the need to transform cassava into derived products, and in India, but they are highly cyanogenic compared to varieties cultivated in Amazonia and in Pacific countries. Concerning root parts, the cassava root cortex contains a higher cyanide content than the parenchyma. Therefore, peeling off the cassava cortex considerably reduces cyanide levels in the raw cassava root. Siritunga et al. 
explained that variation in cyanide content in different parts of cassava roots can be due to differential translocation of cyanogenic glycosides and their metabolizing enzymes in the different cellular compartments .Given that cortex is used in animal feed, it is very important to run a periodic check on the level of cortex toxicity in order to make recommendations concerning their incorporation in feed rations .The obtained results made it possible to classify semi-quantitatively roots of different cultivars following the guide of Nambisan who grouped cassava in three classes according to the total cyanide content: cassava is known as "inoffensive" when the cyanide content lies between 0 and 50 ppm, “moderately toxic” for contents between 50 and 100 ppm and “highly toxic” for contents >100 ppm.The studied cultivars could be classified into 2 classes:Moderately toxic cassava: Pola Rouge Beul, Macoumba, Man Mbong, NgonEzele, OwonaEkani,MnomEwondo, 92/0326, 040, 95,109, 8034, 8017 and 8061.Highly toxic cassava: Madaga, Mbida and Mbani, Minbourou, Le Blanc, Champion, 0110, 961,414 and Fonctionnaire.Results showed a wide variation existing among cassava leaf, stem and root of the studied cultivars.There is not clear link between morphological characteristic and cyanide content: varieties such as Minbourou, Le Blanc, Ngon Ezele and Owona Ekani had the same morphological characteristics concerning the root, stems and petioles color and were, respectively, grouped into different classes: high and low cyanide content.However, literature has established a significant correlation between cyanide potential of cassava roots and leaves.The cyanide content was higher in younger leaves compared to older ones, suggesting that cyanide potential of roots drops as plant ages .These characteristics, which include leaf morphology, stem color, branching habit and storage root shape and color, could not be clearly link to cyanide content, but may influence cassava yield and resistance to insect pests and diseases .A proper understanding of these variations in plant characteristics would assist the selection of cassava types with the desired traits and will contribute to improved crop establishment and increased yields.Fig. 2 illustrates means of cyanide contents of cassava parenchyma and that of traditional processed products, from all 20 cassava cultivars.The evaluation of cyanide content of cassava cultivars showed that the toxicity of cassava dropped significantly with processing and was dependent on the operating process used.After cutting of the parenchyma and sun drying for five days, the residual cyanide content of Chips reduced by 46.71% for improved and 48.37% for local cultivars.This decrease was more significant when the parenchyma underwent a dry fermentation with toxicity reduction of 81.13% and 79.14%, respectively, for improved and local cultivars."Fufu" which resulted from a wet fermentation showed highest reduction of cyanide content by 89.42% and 94.08% for local and improved cultivars respectively.The processing of “Chips” which primarily involved peeling, cutting and drying fairly contributed to reduce cassava toxicity.However, these results differed from those of Dufour who reported cyanide content reduction of up to 80% by sun drying.According to this author, the maximum degree of cyanide reduction of 75.2% was observed with bitter white cassava roots where cyanide content was between 500 and 667 ppm.At the same time, Montagnac et al. 
obtained detoxification rate of 71.3% of initial value by peeling and grating.The operations of peeling and parenchyma skin grating were the first substantial step in the process of lowering cassava toxicity because cyanogenic glycosides are distributed in large amounts in the skin .However, it was noted that the cultivars exhibited various profiles of cyanogen elimination after sundrying.This could be due to the fact of the cassava sections which did not have the same size, more especially as it has been reported that when the cassava sections are small and spread out finely for drying, the cyanide escapes more easily into the air .Sun drying of cut out chunks of parenchyma seemed to be a less effective means in the release of cyanide because it does not allow intimate contact between the hydrolysis enzyme and its substrate which promotes rapid breakdown of linamarin to hydrogen cyanide gas that escapes into the air.Parenchyma is generally cut longitudinally and much of plant cells remain intact, with the linamarin imprisoned in the cell, while the linamarase is localized on the cellular wall .When sun drying is insufficient, the enzyme can be denatured or trapped in the matrix of the dried cassava, preventing the conversion of cyanogenic glycosides into cyanide which is volatile .This process facilitated detoxification indicating that cutting cassava parenchyma in slight sections for one night would allow a good detoxification of the foodstuffs .“Gari” is a traditional cassava processed product.The mean residual cyanide content of Gari samples was about 20 ppm and was superior to the safety level of 10 ppm and Codex STAN176-1989 recommended total cyanide content of 2 mg/kg in “Gari”.Cyanide contents of the studied cassava samples remained lower when compared to the results of Djoulde et al. 
on “Gari” coming from 25 localities of Cameroon, and the total cyanide contents of “Gari” samples sold in Port-Harcourt markets in Nigeria which reach the value of 30 ppm .It is worth noting that these studies indicated no specification on the cassava cultivar used.Due to the increasing demand of “gari” in local markets, producers often shorten certain steps in the production process .Processing of “Gari” involves the following operating units: peeling, grating, dry fermentation, drying and roasting are very effective in the release of cyanide because of the intimate contact between the linamarin and the hydrolysis enzyme in the wet and finely ground parenchyma.This intimate contact promotes rapid break down of linamarin to hydrogen cyanide gas that escapes into the air.This process contributed to reduce the toxicity to approximately 80% irrespective of the cultivar.It is important to note that dry fermentation is less efficient than wet fermentation ."Yeoh and Sun and Agbor-Egbe and Mbome reported better detoxification rate by peeling and grating of the root's internal structure, followed by fermentation, drying and roasting.Generally, white Gari has higher cyanide content when compared to yellow Gari.Chiedozie and Paul reported that the addition of palm oil in “Gari” processing caused reduction in cyanide content.However, the relative low elimination rate obtained from “Gari” processing could be explained by the environmental and fermentation conditions.Cassava transformation into «Fufu» resulted in residual cyanide content of 9.8 ± 6.95 ppm and 6.54±7.22 ppm for improved and local cultivars respectively.These values are in conformity with WHO legislation who recommends the level of safety at 10 ppm.Cyanide elimination rates of the present study was similar to those reported by Iwuoha et al. who showed that soaking for five to six days at a pH of 4 to 4.5 reduced the cyanide up to 94.7%.Agbor-Egbe and Mbome obtained a reduction of cyanide content of 90% after soaking for 72 h. Soaked Cassava increases the process of cyanide elimination because the roots which are completely submerged in water can support bacterial growth allowing the production of linamarase .Moreover, since cyanide is water soluble and volatile, the operations of soaking, followed by manual crushing and then sun drying could have resulted in the lowest cyanide content observed in “Fufu” , although “Fufu” of higher cyanide contents have been sampled in certain villages of Cameroon and markets of Nigeria .This was usually due to high demand in local markets, causing the producers to neglect the various step of fermentation, which according to them, was regarded as a waste of time with little effect on cassava detoxification .However, a method of standardized transformation of bitter cassava roots with contents of cyanide > 400 ppm, consisting of peeling, scraping of the external layer of the parenchyma, fermentation and oven drying at 60 °C was tested successfully in Burundi .Furthermore, Toummou et al. 
reported, when comparing the effect of traditional and improved cassava processing on cassava-derived products, that the concentration of cyanogenic compounds in cassava chips from improved processing, obtained after grating cassava roots without the skin, is significantly lower than the levels obtained with traditional processing. The objectives of this work were to determine the cyanide content of 20 main cassava varieties collected in the locality of Ongot in Cameroon and to evaluate the impact of traditional processing of cassava-derived foodstuffs on this content. Accordingly, cassava varieties of this locality can be classified as "moderately" to "highly" toxic. The cassava varieties Pola Rouge Beul, Macoumba, Man Mbong, Ngon Ezele, Owona Ekani, Mnom Ewondo, 92/0326, 040, 95,109, 8034, 8017 and 8061 are moderately toxic and can be used directly for human consumption after peeling, scraping of the skin and boiling. Madaga, Mbida and Mbani, Minbourou, Le Blanc, Champion, 0110, 961,414 and Fonctionnaire are highly toxic and can be used for flour, starch and ethanol processing. This study highlighted that traditional processing reduces the cyanide content of the products and that the elimination rate depends on the process involved: Fufu, Gari or Chips. Wet fermentation is thus an effective way of detoxifying cassava. The level of detoxification achieved with Chips processing was not as high as expected. Thus, efforts remain to be made with Chips, which presented the lowest level of cyanide detoxification, and also with Gari, in order to reach the level recommended by the Codex Alimentarius. Reduction of the cyanogenic potential of cassava can be achieved at each processing step, resulting in an almost complete detoxification of the product. While all methods abate cyanide levels, their effectiveness depends on the processing steps and the sequence used, and is often time-dependent. Moreover, based on the results presented here, in order to increase detoxification, future investigations should consider scraping the skin of the parenchyma after peeling before continuing the transformation operations, since it was shown that cyanogenic glycosides are distributed in large amounts in the skin. Finally, the improved varieties 92/0326, 040, 95,109, 8034, 8017 and 8061 can be recommended to farmers because of their high production yield and lower cyanide content. Youchahou Njankouo Ndam: Formal analysis, Investigation, Writing - original draft. Pauline Mounjouenpou: Conceptualization, Methodology, Writing - review & editing, Supervision. Germain Kansci: Writing - review & editing. Marie Josiane Kenfack: Formal analysis, Investigation, Writing - original draft. Merlène Priscile Fotso Meguia: Investigation, Writing - original draft. Nina Sophie Natacha Ngono Eyenga: Investigation, Writing - original draft. Maximilienne Mikhaïl Akhobakoh: Investigation, Writing - original draft. Ascenssion Nyegue: Writing - review & editing.
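The picrate/spectrophotometric quantification and the downstream classification described above lend themselves to a short computational sketch: fit the linear calibration curve (f = ax + b) from the KCN standards, convert sample absorbances to cyanide contents, classify roots using the Nambisan thresholds (0–50 ppm inoffensive, 50–100 ppm moderately toxic, >100 ppm highly toxic) and compute the elimination rate after processing. The absorbance values below are invented for illustration; only the thresholds and the reduction formula follow the text.

```python
# Sketch of the cyanide quantification workflow described in the text.
# Calibration absorbances and sample readings are invented example numbers.
import numpy as np

# KCN standards (µg cyanide) and hypothetical absorbances at 510 nm
standards_ug = np.array([1, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100])
absorbance   = np.array([0.01, 0.09, 0.19, 0.28, 0.37, 0.47,
                         0.56, 0.66, 0.75, 0.84, 0.94])

# Linear calibration f = a*x + b, as stated in the methods
a, b = np.polyfit(standards_ug, absorbance, 1)

def cyanide_ppm(abs_sample, sample_mass_g=1.0):
    """Convert an absorbance reading to µg cyanide via the calibration curve,
    then to ppm (µg cyanide per g of sample)."""
    micrograms = (abs_sample - b) / a
    return micrograms / sample_mass_g

def toxicity_class(ppm):
    """Semi-quantitative classes after Nambisan."""
    if ppm <= 50:
        return "inoffensive"
    elif ppm <= 100:
        return "moderately toxic"
    return "highly toxic"

def elimination_rate(raw_ppm, processed_ppm):
    """Percentage reduction of cyanide achieved by processing."""
    return 100.0 * (raw_ppm - processed_ppm) / raw_ppm

# Example: a hypothetical parenchyma reading and its 'Fufu' product
raw = cyanide_ppm(0.85)
fufu = cyanide_ppm(0.07)
print(f"raw: {raw:.1f} ppm ({toxicity_class(raw)}), "
      f"Fufu: {fufu:.1f} ppm, reduction: {elimination_rate(raw, fufu):.1f}%")
```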
Cyanide is a toxic substance found in several plants roots amongst which cassava. The objectives of this study were to quantify cyanide contents in the roots of the main cassava varieties cultivated in Cameroon and evaluate the effect of some traditional cassava processing methods on their initial contents. Ten local and ten improved varieties of cassava samples were collected in the locality of Mbankomo, Ongot village, Centre region. These roots were processed into traditional foods: “Chips”, “Gari” and “Fufu”. The cyanide content was determined in the parenchyma, cortex, and cassava derived foods. Local varieties had cyanide contents varying from 79.34±3.58 to 181.33±0.48 ppm, while contents in improved varieties varied from 61.03±9.44 to 118.04±7.16 ppm. Cyanide content quantitative classification revealed that studied cassava varieties fell within the range “moderately” to “highly” toxic. Results showed no clear link between morphological characteristic and cyanide contents of the studied cassava varieties. Although cyanide contents were higher in the cortex varying from 117.80±11.32 to 210.07±9.15 ppm for local, and from 98.43±15.49 to 155.44±12.11 ppm for improved varieties. Processing of cassava into different traditional foods contributed to reduce cyanide content. Elimination rates were as a function of the process involved: 47%, 80% and 91%, respectively, for “Chips”, “Gari” and “Fufu”. Cassava processing reduce cyanide content, however extent of reduction varies from one product to another.
31
Light conditions affect rhythmic expression of aquaporin 5 and anoctamin 1 in rat submandibular glands
Circadian rhythms, which measure time on a scale of 24 h, regulate various physiological functions in mammals, such as the sleep cycle, blood pressure, hormone secretion, metabolism and salivary secretion. The rhythm is orchestrated by a master clock and several peripheral biological clocks. The master clock, which is located in the suprachiasmatic nucleus (SCN) of the hypothalamus, generates 24-hour circadian rhythms. In mammals, light resets the circadian timing of the master clock to synchronize it with environmental conditions. Peripheral clocks in organs are regulated by the master clock. At the same time, peripheral biological clocks are driven independently by clock genes. The intracellular clock mechanism of the clock genes is based on transcriptional and translational feedback loops, which are called transcription translation oscillating loops (TTLs). Among the clock genes, aryl hydrocarbon receptor nuclear translocator-like protein 1 (Bmal1), period circadian protein homolog 2 (Per2), cryptochrome circadian clock (Cry) and circadian locomotor output cycles kaput (Clock) are essential for TTLs. The key transcription factors CLOCK and BMAL1 form heterodimers which interact with the enhancer box sequences in the promoters of the Per and Cry genes, driving the positive limb of the TTLs. The PER and CRY proteins interact, translocate into the nucleus and inhibit the activity of the CLOCK-BMAL1 heterodimers, which promotes the transcriptional repression of the TTLs. The master and the peripheral clocks in most tissues are controlled by this intracellular feedback loop. Dysregulation of clock gene expression results in diverse pathological conditions, such as sleep disorders, mental illness, cancers, metabolic syndromes, cardiovascular disorders and tooth development disorders. In recent years, the role of the circadian clock in peripheral organs, such as the heart, kidney and liver, has been investigated. Multiple studies have suggested that the clock genes of the peripheral clocks regulate physiological function in these organs. However, little is known about their roles in the salivary glands. The most potent entraining signal of the circadian rhythm in mammals is light. Light induces a phase shift of the master clock in the SCN. Light-entraining information reaches the SCN via the retinohypothalamic tract, which is the principal retinal pathway. The SCN then relays this entraining information to the peripheral clocks through endocrine signals and neural circuits. The phase of submaxillary Per1 expression is controlled by light and food entrainment. Light can synchronize peripheral clocks in mice with a Syt10- and CamK2-driven deletion of Bmal1 in the SCN. These studies suggest that light conditioning affects peripheral clocks and physiological function in organs. Saliva plays an essential role in maintaining the integrity of the oral structures, in the prevention of oral disease and in controlling oral infection. Saliva is also important in preventing the development of bacterial plaque. The major salivary glands, the submandibular glands and the parotid and sublingual glands, normally contribute over 90% of the total volume of unstimulated saliva. The secretion of water and the transport of ions in SGs can be divided into two pathways, the transcellular and paracellular transport pathways, which are driven by changes in water channel gating and transmembrane osmosis. Aquaporin 5 (AQP5) and Anoctamin 1 (ANO1) play an important role in water secretion and ion transport. In driving salivary secretion, AQPs regulate transmembrane water movement in response to osmotic gradients. AQP5 is the 
major aquaporin expressed on the apical membrane of the intercalated ductal cells and acinar cells in SGs. ANO1 is a transmembrane protein which functions as a Ca2+-activated chloride channel (CaCC). ANO1 is localized on the apical membrane and controls the apical Cl− efflux in SGs. CaCCs are essential for the vectorial transport of electrolytes and water in the retina, airways, proximal kidney tubule epithelium, dorsal root ganglion sensory neurons and salivary glands. The salivary flow rate and the secretion rates of salivary substances such as Na+, Cl−, K+, HCO3− and α-amylase follow a circadian rhythm. The unstimulated salivary flow rate is known to be extremely low during sleep. Recent studies have shown a circadian rhythm of clock genes and amylase 1 mRNA in submandibular glands. The localization of clock proteins and of Bmal1 and Per2 mRNAs in the mucous acini and striated ducts was determined by in situ hybridization. These results suggest that clock genes play an important role in the circadian oscillation of salivary secretion. However, the rhythmic expression patterns of the clock genes, Aqp5 and Ano1 in SGs under different light conditions remain to be investigated. The purpose of this study was to reveal the effect of light conditioning on the peripheral clock in SGs. We examined the temporal rhythmic expression patterns of the clock genes, Aqp5 and Ano1 in rat SGs under light/dark (LD) and dark/dark (DD) conditions. Six-week-old male Wistar rats were used for this study. Only male rats were chosen to avoid the effect of sex-related hormonal differences. Rats were maintained for 2 weeks on a light/dark cycle of 12 h light and 12 h dark prior to all experiments, and food and water were available ad libitum. To determine the effects of light exposure, we kept the rats in constant darkness under a dark/dark cycle for 48 h before sampling. All experiments were performed in conformity with zeitgeber time (ZT), with 8:00 set as ZT0. This study was approved by the Ethics Committee of Tokyo Dental College after review by the Institutional Animal Care and Use Committee, and was carried out in accordance with the Guidelines for the Treatment of Experimental Animals at Tokyo Dental College. All animals were treated in accordance with the guidelines of the Council of the Physiological Society of Japan and the American Physiological Society. We isolated the glands before lights-on, at the transition from DD to LD. At this transition, rats were anesthetized and the SGs were extracted in the dark to avoid the effect of light stimulation. Total RNA from submandibular glands at ZT0, 6, 12, 18, 24, 30, 36, 42 and 48 h was isolated with RNAiso Plus. RNA concentration and quality were determined using a NanoDrop-2000 spectrophotometer. We used the same quantity of total RNA for all series of sqPCR analyses. Total RNA from SGs was subjected to real-time semi-quantitative RT-PCR analysis. The expression level of the internal reference gene was measured using the One Step SYBR® PrimeScript™ RT-PCR Kit II. Probes labeled with 6-carboxyfluorescein were used. The primers used were gene-specific primers for β-actin, Bmal1, Per2, Clock, Cry1, Ano1 and Aqp5. The comparative Ct method was used for the sqRT-PCR analysis. We assessed candidate gene expression relative to that of β-actin using the Thermal Cycler Dice real time system software version 5.11. The SG tissue was harvested at CT0, 6, 12, 18, 24, 30, 36, 42 and 48, and homogenized in ice-cold radioimmunoprecipitation assay lysis buffer. Protein sample concentrations were determined using the DC protein assay kit based on the Lowry method. For each sample, 10 μg protein was 
electrophoresed on 10% SDS-PAGE gel, transferred to Immobilon-P Transfer Membrane membrane and analyzed using the Mini Trans-Blot® Transfer Cell.PVDF membrane were blocked with 5% skimmed milk for 1h, and probed overnight at 4 °C with anti-ANO1, anti-Aquaporin 5 and anti-β ACTIN.Horse-radish peroxidase-conjugated polyclonal goat anti-rabbit immunoglobulins was used for 1 h at room temperature.Protein bands were visualized with the ECL chemiluminescence WB Detection Reagents, and documented using the Image Quant LAS-4000.Quantification of bands were performed by using Image Quant TL 7.0 software.All sqPCR data are displayed as the mean ± SD.Circadian rhythms during 48-h periods were statistically analyzed by one-way analysis of variance and p < 0.05 were considered significant differences, with the Bonferroni test for post hoc comparisons when significance was determined by analysis of variance.All western blot results were represented as mean ± SD from five independent experiments.P-values were calculated by one-way ANOVA and significant differences observed at p < 0.05.The Bonferroni test for post hoc comparisons was performed and p < 0.01 were considered significant differences.Rhythmicity was analyzed by CircWave version 1.4 and the significance of rhythmicity was evaluated at a 95% confidence level.We observed rhythmic mRNA expression patterns of Bmal1, Per2, Clock and Cry1 in SGs under both DD and LD conditions.We examined temporal relative expression of the clock genes mRNA in the SGs every 6 h from ZT0 to ZT48.Relative expression levels of Bmal1 mRNA were significantly higher at ZT0, ZT24 and ZT48 and were lower at ZT12 and ZT36 in the LD condition.Bmal1 mRNA showed significant rhythmic expression both LD and DD conditions.The peak times of Bmal1 expression in LD and DD overlapped.Temporal relative expression of Per2 mRNA showed significantly higher expression at ZT12 and ZT36 and lower expression at ZT0, ZT24 and ZT48 in the LD condition.Relative expression levels of Per2 mRNA showed similar results in the DD condition.Bmal1 expression showed antiphase with the expression pattern of Per2 with a 12 h phase difference.Clock mRNA did not show significant rhythmic expressions and a clear phase variation could not be observed in its expression peaks both LD and DD conditions.Cry1 mRNA showed significant upregulation at ZT24 and ZT48 and lower expression at ZT6 and ZT30.The phase of expression of Cry1 mRNAs deviated by 12 h from the phase of Per2 expression peaks in LD and DD conditions.The peak-to-peak periods of Bmal1, Per2 and Cry1 were maintained for 24 h, and peak times were consistent between LD and DD conditions.The expression levels of Bmal1, Per2 and Cry1 were considered rhythmic by CircWave in both LD and DD conditions.We observed temporal mRNA expression profiles of Aqp5 and Ano1 in the saliva.Temporal relative expression profiles of Aqp5 and Ano1 mRNAs in SGs were examined every 6 h from ZT0 to ZT48.The relative expression levels of Aqp5 mRNA were higher at ZT12 and ZT36, whereas they were lower at ZT0, ZT24, and ZT48 under the LD condition.Aqp5 mRNA showed significant rhythmic expression under both LD and DD conditions.The expression pattern of Aqp5 mRNA differed under the LD and DD conditions.Aqp5 mRNA expression was upregulated at ZT6 and ZT30, but was significantly downregulated at ZT0, ZT24, and ZT48 under the DD condition.The peak time of Aqp5 expression under the DD condition occurred 6 h earlier than that under the LD condition.The same phase was observed for Ano1 
mRNA expression, with ZT12 and ZT36 showing the highest, and ZT0, ZT24, and ZT48 showing the lowest expression under the LD condition.Ano1 showed rhythmicity, with a significantly higher expression observed at ZT6 and ZT30 and a lower expression observed at ZT0, ZT24, and ZT48.The peak times of Ano1 expression under the DD condition occurred 6 h earlier than those under the LD condition.The expression levels of Aqp5 and Ano1 were considered rhythmic through CircWave analysis under both LD and DD conditions.Western blot analysis revealed the circadian oscillation of AQP5 and ANO1 expression in the rat SGs under DD conditions.The expression of ß-ACTIN single band was shown in 42 kDa, and did not exhibit any circadian patterning during 48 h.A single band was detected for AQP5.AQP5 expression was normalized with ß-ACTIN, a constitutively expressed internal control, every 6 h for a 48 h period.The expression levels of AQP5 protein were higher at CT6 and CT30, whereas they were lower at CT0, CT24, and CT48.A wide single band was detected for ANO1.The circadian expression of ANO1 showed significant oscillation patterns peaking at CT6 and CT30.We demonstrated that Bmal1, Per2, Clock, and Cry1 show rhythmic circadian expression in SGs under LD and DD conditions.The phases and peaks of Bmal1 and Per2 expression profiles showed opposite rhythms, which were shifted by 12 h and repeated every 24 h.The Cry1 expression peaks were shifted compared to those of Per2 and Bmal1.It was reported that the temporal expression profile of the Clock in the SCN does not show any circadian rhythm.This is in agreement with our results.The phases and peaks in the expression profiles of genes examined by us were similar to those in other organs and SCN .Our results under the LD condition are consistent with those of previous studies .Clock gene mRNA expression phases are maintained in SGs under the DD condition.These results suggest that SGs have peripheral clock mechanisms with negative feedback loops.The 48 h in continuous darkness did not change clock gene mRNA expression phases in SGs.Our results suggest that the phases of clock genes in SGs might be not rapidly affected by the light condition.Aqp5 and Ano1 showed rhythmic circadian expression similar to that of Per2 under the LD condition.Expressions of Aqp5 and Ano1 followed a circadian rhythm pattern in SGs under the DD condition.The temporal expression pattern of Aqp5 showed similar patterns and peak times as that of Ano1 under both the LD and DD conditions.Our results suggest that exposure to light shifts the peak expression time of Ano1 and Aqp5 in the rat SGs.However, we did not conduct a flash exposure experiment in this study under DD condition, therefore it is unclear whether light stimulation can cause a shift in the peripheral clock of the SGs.We showed that AQP5 and ANO1 protein expression displayed rhythmic circadian oscillations.There was no time-lag between the peak time of protein and mRNA expression under DD condition.The result indicated that Aqp5 and Ano1 mRNA translated promptly without any delay, and Aqp5 and Ano1 Peak shift was maintained not only in mRNA but also in protein.Upregulation of Aqp5 and Ano1 mRNA expression drive changes in transmembrane osmosis and water channel gating in SGs .In Aqp5 knock-out mice, more than 60% saliva production was reduced and the tight junction proteins and water permeability was decreased expression of compared to the wild-type .The intracellular Ca2+ concentration, which additionally showed circadian rhythm 
were controlled by the activation of ANO1 .Ano1 disruption by siRNA transfection in mice significantly reduced the salivary flow rate induced by muscarinic-cholinergic stimulation .These results suggest that the circadian rhythm in water secretion may be controlled by water permeability, which is influenced by the circadian oscillation of Aqp5 and Ano1 expression.In the nocturnal period, rats increase their intake of water and food.The temporal expression peaks of Aqp5 and Ano1 correlated with this feeding and drinking behavior .In our previous studies, we examined DNA sequences to confirm the relationship between clock genes and Aqp5 and Ano1 expression.The enhancer box (E-box) binding sequence, which acts as the binding site of the BMAL1-CLOCK heterodimer in the promoter region, was found in rat Aqp5 and Ano1 .The existence of an E-box in the promoter region and the maintenance of rhythmic expression cycles under the DD condition are characteristics of clock-controlled genes .Our results suggest that Aqp5 and Ano1 are putative CCGs and target genes of the BMAL1-CLOCK heterodimer in SGs.Phase shifts of Aqp5 and Ano1 under the DD condition indicated that the light condition may be one of the synchronization factors in SGs.There are two main factors involved in the synchronization of peripheral clocks to environmental conditions: light and feeding.For example, timed food uptake can primarily reset the liver clock and thereby regulate liver metabolism .In the present study, all experiments were carried out with food and water available ad libitum.Therefore, light was considered to be the main factor affecting synchronization.Light from the retina reaches the SCN through the RHT and entrains the master clock.The master clock then transmits timing information to peripheral clocks along neuronal and endocrine pathways.We observed a difference in the peak expression times of Aqp5 and Ano1 mRNAs between the LD and DD conditions.However, no time shifts in clock gene peak expression were observed.These results are inconsistent with entrainment pathways through the SCN, because clock genes in the peripheral clock were not shifted under the DD condition.Therefore, it is presumed that there exist additional pathways through which peak times are synchronized independently of the SCN.Several reviews on the synchronization of the peripheral clocks have recently been published, describing that light can reach peripheral clocks via several routes.Additional pathways exist through which peripheral clocks are synchronized independently of the SCN clock .Only part of the Aqp5 and Ano1 waveforms was shifted.In addition to a mechanism that shifts the entire waveform (pathway 1), there may be a second pathway (pathway 2) in the SGs that provides rapid, partial adjustment to the ambient light environment.In conclusion, we show circadian rhythmic expression of Bmal1, Per2, Cry1, Aqp5 and Ano1 mRNAs under LD and DD conditions.We show different circadian rhythmic expression of Aqp5 and Ano1 between the LD and DD conditions.Maintenance of the rhythm of Aqp5 and Ano1 even in the absence of a light stimulus indicates that Aqp5 and Ano1 may be controlled by clock genes as CCGs.Clock genes may regulate the rhythmic expression of Ano1 and Aqp5 mRNA and may control osmotic gradients in SGs.Ryouichi Satou: Conceived and designed the experiments; Performed the experiments; Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data; Wrote the paper.Maki Kimura: Performed the experiments; Contributed reagents, materials, analysis tools or
data.Yoshiyuki Shibukawa, Naoki Sugihara: Conceived and designed the experiments.This work was supported by JSPS KAKENHI Grant Number JP19K18953.The authors declare no conflict of interest.Data associated with this study has been deposited at GenBank under the following accession numbers: β-actin NM_031144.3, Bmal1 NM_024362.2, Per2 NM_031678.1, Clock NM_021856.1, Cry1 NM_198750.2, Ano1 NM_001107564.1, Aqp5 NM_012779.1.No additional information is available for this paper.Supplementary content related to this article has been published online at https://doi.org/10.1016/j.heliyon.2019.e02792.
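As a companion to the Methods above, the relative-quantification and time-course statistics can be sketched in a few lines of code. This is an illustrative sketch only, not the authors' analysis script: the Ct values, gene names and group layout are hypothetical, and it assumes the conventional comparative Ct (2^−ΔΔCt) calculation with β-actin as the reference gene, followed by a one-way ANOVA across time points with Bonferroni-corrected pairwise comparisons, as described in the statistics section.

```python
import numpy as np
from itertools import combinations
from scipy import stats

def relative_expression(ct_target, ct_reference, calibrator_index=0):
    """Comparative Ct (2^-ddCt): expression of a target gene relative to a
    reference gene, normalised to a calibrator sample (here the first time point)."""
    d_ct = np.asarray(ct_target, float) - np.asarray(ct_reference, float)
    dd_ct = d_ct - d_ct[calibrator_index]
    return 2.0 ** (-dd_ct)

def time_course_anova(groups, alpha=0.05):
    """One-way ANOVA across time-point groups; if significant, Bonferroni-corrected
    pairwise t-tests between groups (each group = replicate values at one ZT/CT)."""
    f_stat, p_omnibus = stats.f_oneway(*groups)
    pairwise = {}
    if p_omnibus < alpha:
        pairs = list(combinations(range(len(groups)), 2))
        for i, j in pairs:
            _, p = stats.ttest_ind(groups[i], groups[j])
            pairwise[(i, j)] = min(p * len(pairs), 1.0)   # Bonferroni adjustment
    return f_stat, p_omnibus, pairwise

# Hypothetical Ct values for one gene and beta-actin at ZT0, 6, ..., 48
ct_gene  = [24.1, 23.2, 22.0, 23.5, 24.0, 23.1, 22.2, 23.4, 24.2]
ct_actin = [16.0, 16.1, 15.9, 16.0, 16.2, 16.0, 15.9, 16.1, 16.0]
print(relative_expression(ct_gene, ct_actin))

# Hypothetical replicate groups at three time points
rng = np.random.default_rng(0)
groups = [rng.normal(loc=m, scale=0.1, size=5) for m in (1.0, 1.8, 1.1)]
print(time_course_anova(groups))
```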
Circadian rhythms regulate various physiological functions and are, therefore, essential for health. Light helps regulate the master and peripheral clocks. The secretion rates of saliva and electrolytes follow a circadian rhythm as well. However, the relationship between the molecular mechanism of saliva water secretion and the peripheral circadian rhythm in salivary glands is not yet clear. The transmembrane proteins aquaporin5 (Aqp5) and anoctamin1 (Ano1) are essential for water transport in the submandibular glands (SGs). The purpose of this study was to reveal the effect of light conditioning on the peripheral clock in SGs. We examined temporal expression patterns among clock genes, Aqp5 and Ano1, in rat SGs under light/dark (LD) and dark/dark (DD) conditions. We observed circadian rhythmic expression of Bmal1, Per2, Cry1, Aqp5, and Ano1 mRNAs under both LD and DD conditions. The expression levels of Aqp5 and Ano1 peaked 6 h earlier under the DD condition than under the LD condition. Maintenance of the circadian rhythm of Aqp5 and Ano1 expression even under the DD condition indicates that Aqp5 and Ano1 may be controlled by clock genes; such genes are called clock-controlled genes (CCGs). Western blot analysis revealed the circadian oscillation and peak shift of AQP5 and ANO1expression under DD conditions. Clock genes may regulate the rhythmic expression of Ano1 and Aqp5 and may control osmic gradients in SGs.
32
Screening tool to evaluate the vulnerability of down-gradient receptors to groundwater contaminants from uncapped landfills
In 2010 there were 1908 operational municipal landfill facilities in the U.S. which received 135 million tons of waste and more than 10,000 closed landfills in the U.S.Problems associated with open and closed landfills include gas emissions, contaminated leachates, physical hazards, aesthetic issues, and others.These issues can extend beyond the landfill boundaries and affect surrounding urban, agricultural and undeveloped areas.The range of possible contaminants includes volatile organic chemicals; dissolved organic matter; inorganic macro-components such as calcium, magnesium, manganese and sulfate; heavy metals such as cadmium, chromium, lead and zinc; and xenobiotic organic compounds such as aromatic hydrocarbons, phenols and pesticides.Further, a closed landfill may be targeted for redevelopment for a variety of other purposes.For example, of 55 redeveloped landfill sites in Florida, 56.4% have been developed as recreational facilities, 27.3% for commercial use, 9.1% for residential development, and 7.3% for schools.Therefore, it is essential to determine whether landfill leachate contaminants will negatively affect the underlying land and surrounding areas.This requires information about the composition and concentrations of groundwater contaminants, an understanding of subsurface hydrologic conditions, and tools to unify these factors to provide assessments of the risks these contaminants pose to nearby receptors, such as streams, wetlands and existing or proposed residential areas.Landfill risk assessment is an evolving science, and there is not currently a universally accepted integrated risk assessment methodology that can be applied to landfill gas, leachate and degraded waste.Butt et al. reviewed 19 publications that prescribe landfill risk assessment procedures, and concluded that none addressed all issues related to risks associated with landfills and possible remedies.Nonetheless, regulatory guidelines have been issued that describe steps that should be taken when conducting landfill risk assessments.The USEPA Ecological Risk Assessment Guidance for Superfund and “Principles” follow-up document describe an eight-step process for performing landfill risk assessments:screening-level problem formulation and ecological effects evaluation,baseline risk assessment problem formulation,study design and data quality objective process,field verification of sampling design,site investigation and analysis phase,Steps 1 and 2 define a screening process, where a decision is made to either take no action or to proceed with a full risk assessment.The New Jersey Department of Environmental Protection published a risk assessment document based on this USEPA approach.Here, they defined an ecological evaluation as the preliminary screening phase, which is followed either by no further action or a full environmental risk assessment.The EE includes assembling information about the site and potential environmentally sensitive receptors, and determines whether contaminants of concern and a migration pathway are present.The purpose of this investigation was to develop and evaluate a screening method for assigning preliminary levels of concern for the potential input of contaminants from landfills to nearby human or ecological receptors; to document the attributes and limitations of the transport model used in the method; and to demonstrate the method.The intent was to create a method, based on an idealized modeling approach, whereby preliminary levels of concern can be applied to landfills quickly 
and efficiently based on minimal input data.The screening method provides a formalized implementation of the NJDEP EE.Water-quality data from landfill monitoring wells are used to identify contaminants of concern, concentrations are compared to regulatory levels, and groundwater contaminant transport is quantified at a screening level with the transport model.The level of concern is stated as “unknown” if water-quality and receptor data, number or location or wells, or values of parameters needed to simulate transport are insufficient.Otherwise, the level of concern is defined based on the model-simulated concentrations of contaminants at the receptors under steady-state conditions.If this screening process indicates that substantial concentrations of contaminants could reach receptors, then additional monitoring, modeling or remedial action may be appropriate.Analytical and numerically-based applications for predicting transient and steady-state contaminant transport are available, such as those listed by the Colorado School of Mines Integrated Groundwater Modeling Center and the USEPA Center for Subsurface Modeling Support.Data requirements, user expertise, and time and effort required to develop models are extensive for many of these applications, and therefore their use as rapid screening tools is not practical.However, models based on the approximate, analytical solution of Domenico are used in contaminant-transport screening applications such as Quick Domenico, Biochlor, Bioscreen, and Footprint.These models are run as Microsoft Excel spreadsheet applications, and provide rapid estimates of contaminant concentrations in plumes downgradient of sources.This approach was used in the screening method presented here.An improved, more capable version of the Quick Domenico spreadsheet implementation was developed, with additional features that were specifically required for rapid screening of any number of landfills with lengthy lists of potential contaminants and down-gradient receptors of various types and distances from the source, and for efficient archiving of all simulation inputs and results for future reference or modification.The screening method is illustrated by evaluating levels of concern posed by 30 closed, uncapped and unlined landfills in the Pinelands National Reserve in southern New Jersey.The PNR occupies more than one million acres in seven counties.This area is protected from unrestricted development to preserve its unique and fragile ecosystem.Most of the PNR is underlain by the Kirkwood-Cohansey aquifer system, which underlies 3000 square miles of the New Jersey Coastal Plain.The Cohansey Formation is an unconfined sand-and-gravel aquifer with discontinuous interbedded clays.The Kirkwood Formation is a fine to medium sand.Rhodehamel reported that the hydraulic conductivity of the Kirkwood-Cohansey aquifer system ranges from 90 to 250 ft/d, and values for specific locations were obtained from published regional groundwater flow models and the transmissivity ranges from 4000 to 8300 ft2/d.The storage coefficient ranges from 3 × 10−4 to 1.0 × 10−3.Site characterization and contaminant data were provided by landfill operation and closure documents and monitoring well reports provided by the NJDEP.Potential receptors were identified as streams, wetlands and residences near the landfills as determined by existing GIS coverages.Understanding the residual effects of these landfills on underlying groundwater and down-gradient streams, wetlands and residences is of interest to 
the New Jersey Pinelands Commission, which in conjunction with the NJDEP has regulatory responsibility over these landfills.It is anticipated that results from this screening method will assist the Commission with future land-use decisions and in identifying the need for implementation of engineering controls.A groundwater solute-transport model based on the analytical model of Domenico was used to simulate subsurface contaminant transport from landfills to down-gradient receptors.It assumes first-order decay, linear sorption–desorption, constant source strength, and steady groundwater flow.The approximate analytical solution of Domenico, unlike exact analytical and numerical solutions, is amenable for use as a spreadsheet application.There have been many analytical and numerical solutions to Eq.Simplifying assumptions are commonly made to facilitate solutions with lesser data requirements or simpler solution algorithms.The Domenico model is one such case, in which an approximate analytical solution to Eq. is achieved with readily obtainable or estimable porous media, water quality and hydrologic data.Development of the algorithms, assumptions, and limitations are described by Domenico and Robbins and Domenico.W and Z are the source width and thickness.The contaminant source is considered to be the footprint of the landfill, and is present at a constant strength.The initial condition is zero contaminant concentration outside the source.The integral term cannot be simplified and must be numerically integrated in the y and z directions.This solution, also presented by Wexler, includes a more rigorous accounting for dispersion, but involves greater computational complexity.Unlike the Domenico solution, this solution cannot be implemented conveniently for use as spreadsheet-based application in a rapid screening tool.The Domenico and Sagar models were compared via dimensionless analysis by Guyonnet and Neville.Type curves were used to assess the differences in model results under a wide range of parameter values.Discrepancies were negligible along the plume centerline for flow regimes dominated by advection and mechanical dispersion and increase with lateral distance from the centerline.Differences also increased with increases in the solute decay coefficient, but only for concentrations away from the plume centerline.Their analysis also showed that the two solutions converge with increasing Peclet number values.Srinivasan et al. determined that the Domenico solution is equivalent to an exact analytical solution in cases where longitudinal dispersivity is zero.In settings where longitudinal dispersivity is not zero the potential error in estimated down-gradient concentrations is likely to increase as the dispersivity increases.The authors also asserted that the longitudinal extent of a plume may be underestimated as a result of the residence time of particles along the plume centerline being over-predicted.This inaccuracy, however, is minor when a reasonable value for longitudinal dispersivity is used.West et al. evaluated the differences between solute concentrations predicted by the Domenico and exact-solution models.They observed concentration errors of 2.5% near the contaminant source for a 3-D simulation for a constant αx value of 10 m, and the error increased to −24% 1000 m down gradient of the source.Evaluations of Srinivasan et al. and West et al. 
have value in that they elucidate the limitations of an estimated solution to transport problems and express the magnitude of potential error that can be introduced.West et al. commented that an exact analytical solution is desirable when available.However, the utility of the Domenico solution and its family of applications lies in the ease of use, limited data and parameter-value requirements, and computational requirements that can be met with a spreadsheet application.Therefore, it is reasonable to use Domenico-based methods to estimate contaminant transport if the inherent limitations are considered, and only under conditions for which errors fall within acceptable levels for the intended application.The USEPA, which supports the use of several Domenico-based products acknowledges the stated limitations, and advocates the use of these methods in cases where transport is advection-dominated, and not dispersion-dominated.They further state that error is at a minimum when the Peclet Number is greater than 6.This would apply to highly permeable porous media, such as that encountered in Coastal Plain areas along the East Coast of the U.S, including much of Southern New Jersey.Therefore, a transport modeling tool which employs the Domenico estimated solution of solute transport is appropriate in settings where high hydraulic conductivities are encountered.The Domenico approach has been applied successfully to simulate groundwater-contaminant transport for other purposes, such as assessment of the role played by biodegradation of hydrocarbons in a contaminant plume that affected a tidal river, and as a screening model to estimate the maximum extent of benzene, toluene and xylene plumes.A Microsoft Excel-based implementation of the Domenico analytical model was developed by the Pennsylvania Department of Environmental Protection.The algorithms in QD include two features not included in the original Domenico model: a solute retardation factor which allows interaction between solutes and the organic carbon fraction of the aquifer material, and limitation of vertical dispersion to the downward direction.The latter is appropriate because liquid dispersive transport is not possible above the water table.A revised spreadsheet was prepared in this investigation.The simulation algorithms of QDM are identical to those of QD.Advanced features of QDM as compared to QD include:Up to 50 model simulations developed and archived on a single spreadsheet.Automatic calculation of several parameters that require user input in QD.Automatic calculation of time required to reach steady-state conditions.Inclusion of a library of regulatory levels for selected contaminants.All of these advanced features were necessary for this investigation, and all have value in future Domenico-based applications or revisions of existing applications.The first feature was necessary for this screening method for two reasons: a great many simulation scenarios are needed to screen many contaminants at many land-fill sites that are migrating toward many down-gradient receptors, and archiving all model scenarios such that they can be easily retrieved is necessary, as the use of this screening method is primarily regulatory.The second and third advanced feature were for convenience, as dispersivities, linear velocity and time to reach steady are calculable within the Domenico model, and users are freed from guessing at or approximating values for those parameters.The fourth advanced feature is essential for this application of the model, 
as contaminant concentrations relative to regulatory values are needed in the risk assessment.These features are not available in Quick Domenico or any of the other Domenico-based spreadsheet models that were evaluated, and this was the reason for developing QDM.The QDM spreadsheet template with documentation is available as a Supplementary Material.The QDM spreadsheet is divided into four sections:Section 1: User-entered parameter values.Section 2: Automatically calculated parameter values.Section 3: Model display area.Section 4: Simulation algorithm.Model parameters and corresponding spreadsheet cell ranges are shown in Table 1.All model input parameters for up to 50 simulations can be stored in QDM Sections 1 and 2, with the simulation number specified in column M.A simulation is run by entering the simulation number from Column M into cell B4.Then, all parameter values needed to run the model from Sections 1 and 2 for the specified simulation number are copied into the appropriate cells in Section 3 and are accessed by the algorithm.Simulation results are then displayed in Section 3.The final steady-state concentration at the receptor is shown in cell K4, and as a percent of regulatory value in K5.A graph of concentration along the plume centerline as a function of distance from the contaminant source also is displayed.A 5 × 10 grid of concentrations along the flow path is shown in Cells A32-K40.Additional water-quality monitoring data collected from wells located between the source and the receptor can be entered into Cells A42-K44.These data can be compared to values in the grid to assess the quality of model predictions.Twenty-two parameter values are required to simulate transport in the QDM spreadsheet.Six of these can be literature values, two are obtained from regional groundwater flow models or from available information about the aquifer, two are distances obtained from measurement or GIS applications, six are calculated automatically by the spreadsheet, one is from monitoring well data, source thickness is estimated or measured, and the overall model domain dimensions are specified by the user.Dimensions of the area simulated are input by the user, and must contain the entire source and the receptor.Distance between the edge of the source and the receptor is required, as attenuation of contaminant concentrations occurs along the flow path.Source width is defined here as the longest dimension across the landfill in any direction, and the groundwater-contaminant plume is assumed to originate from a continuous source of uniform concentration having that width.GIS applications can be used to obtain these values.Source thickness is set by the user.It is the thickness of the contaminated zone where the contaminant is expected to be at or near its maximum concentration.Ten ft is a reasonable default source thickness for landfills unless more specific information is available.Values of source thickness greater than 10 ft do not substantially affect down-gradient solute concentrations, but values less than 10 ft may cause underestimation of contaminant flux.The preferred sources of hydraulic conductivity and gradient values are local or regional groundwater-flow models, and no default values can be suggested.If a flow model is not available, literature sources describing the aquifer properties may provide these parameters.They also can be obtained from field measurements, such as aquifer tests of on-site monitoring wells, but care should be taken as the hydraulic properties proximal 
to the landfill may not be representative of the entire model flow path.Hydraulic gradients can be estimated from simultaneous water-level measurements from multiple wells near or on the landfill.Estimates of hydraulic conductivity based on characteristics such as effective particle diameter and void ratio have been shown to be similar to measured values for uniform sand and gravel, but are not suitable for typically heterogeneous field conditions, as hydraulic conductivity values for a given particle size fraction can vary by three or more orders of magnitude.The solution of the Domenico model is insensitive to values of effective porosity, soil bulk density and organic carbon fraction after steady-state conditions have been reached, and default values of 0.358, 1.7 and 0.001, respectively, are suggested.These values are typical for sandy soils, and are conservative.A higher organic carbon value may be used if soil data are available; for example, the average organic carbon fraction for the New Jersey Pinelands “c” horizon is 0.0053.This will only substantially affect the concentration profiles of contaminants with large Koc values.Contaminant properties are obtained from literature sources.Values of λ and Koc for a given contaminant are highly variable and depend upon porous media source, temperature, experimental conditions, and other factors.The State of Pennsylvania compiled an extensive list of λ and Koc values which includes most commonly-detected landfill contaminants.This list can be supplemented with λ and Koc values from other references or experimentally.Care should be taken to ensure that selected values were obtained under geochemical conditions similar to those at the source.Interaction with organic carbon in aquifer material attenuates and slows the transport rate of contaminants which have non-zero Koc values.In the absence of analytical data, a conservative value of 0.001 is recommended.Time to reach steady state, retardation rate, and contaminant transport velocity are calculated within the QDM spreadsheet.A procedure to assign levels of concern was developed, based upon simulated steady-state concentrations of contaminants at receptor locations relative to regulatory standards.The methodology was based on the ecological evaluation process of New Jersey’s Ecological Evaluation Technical Guidance Manual.Four levels of concern are defined, based on availability of groundwater data concentrations of contaminants of concern at receptor locations.Level-of-concern threshold are:Level of concern = unknown,Data are insufficient to characterize the presence of COCs.Level of concern = low,COCs do not reach receptors at concentrations greater than the Practical Quantitation Limit.Level of concern = moderate,COCs reach receptors at concentrations greater than the PQL but less than 50% of any relevant regulatory standard.Level of concern = high,COCs reach receptors at concentrations greater than or equal to 50% of one or more relevant regulatory standards and/or receptors are within or adjacent to the landfill perimeter.Levels of concern for contaminant/receptor combinations are tabulated for each landfill, and the highest level for any combination is applied to the landfill.The screening tool was used to assess levels of concern for selected contaminants migrating from thirty closed landfills in the New Jersey Pinelands for which historical water-quality data were available.Each of the landfills ceased accepting solid waste in the early 1980s and all lacked engineering controls 
necessary to minimize shallow groundwater contamination.Selection of contaminants to consider depends upon the cause and nature of the source.For an uncontrolled release of known contaminants, only the spilled contaminant and degradation products would be modeled.For a source of unknown composition and quantities, such as the landfills in this example, criteria must be developed to include all contaminants likely to migrate from the source.A minimum of two wells was monitored for each of the 30 landfills, which provided information about the variability of the underlying groundwater.The monitoring program required by the New Jersey Department of Environmental Protection required that all analytes on an approved list must be monitored quarterly or annually, depending upon the analyte.Two of the landfills did not meet these criteria and were categorized as having unknown levels of concern due to insufficient data.For each landfill, both historic and current water-quality data were obtained from the NJDEP.Regulatory levels included drinking-water and other health-based standards for residential receptors, and groundwater and surface-water standards for stream and wetland receptors.The highest average daily concentration of each contaminant at each landfill and lowest appropriate regulatory standards were determined for use in QDM spreadsheets.Distances between landfills and receptors were determined with a geographic information system.An example of a landfill and closest stream, wetlands and residential receptors is shown in Fig. 4.As the direction of groundwater flow proximal to the landfill is uncertain due to changes in subsurface conditions associated with construction and operation of the landfill, the closest receptor was selected regardless of direction to the landfill.Thus for screening purposes, all potential receptors are assumed to be “down-gradient”.This is a conservative approach, appropriate for this screening tool; however, a user may consider excluding receptors which have been shown by site-specific flow models or field observation not to be down-gradient of the landfill.A separate QDM spreadsheet was prepared for each landfill, and simulations were developed for each contaminant/receptor combination.Results from landfill LF26 were used to demonstrate the use of the spreadsheet.Fig. 2 shows the results of simulations for residential areas with benzene as the contaminant, and Fig. 3 shows a summary of levels of concern for all contaminants simulated for that landfill.Sensitivity of simulated contaminant concentrations to the values of five model parameters was evaluated.Values of model parameters used in each sensitivity analysis are shown in Table 3.The Domenico modeling approach mandates that simulated concentration of a contaminant increases until steady-state conditions are approached.It was previously shown that contaminant concentrations are at steady state conditions after the time calculated by Eq.Simulations terminated before that time can be considered transient.For this screening tool, QDM simulations should be continued until the time specified by Eq. 
is reached, and thereafter contaminant concentrations are insensitive to the time of simulation.For each QDM simulation, the Domenico model specifies that a single longitudinal dispersivity be used over the entire flow path.Because dispersivity increases with distance, a value calculated from the distance between the source and the receptor would overestimate dispersion and underestimate contaminant concentrations.Therefore, a distance less than the distance between the source and the receptor should be used.Sensitivity of contaminant concentrations reaching receptors to dispersivity was evaluated by simulating transport of a conservative species to receptors 200, 500, 1000, 2000, 3000 and 4000 ft directly within the assumed flow path.Dispersivity was calculated for 25%, 50%, 75%, and 100% of each of those distances using Eq., and concentrations at the receptors were determined with QDM.Simulated concentrations were not found to be sensitive to the distance used to calculate longitudinal dispersivity, as shown in Fig. 6 for distances of 200–4000 ft. Fifty percent of the total distance between the source and receptor was therefore selected as a reasonable intermediate distance value for calculating dispersivity, which is calculated within the QDM spreadsheet using Eq.The first-order reaction rate constant of a contaminant affects the concentration at receptors, as shown in Fig. 7.A source width of 1000 ft and maximum distance of 4000 ft are used in this example.Here, the concentration of the non-degrading species at the receptor remains near 100% of the source concentration up to distances of about 4000 ft, where it decreases by about 0.2%.Species with larger reaction rates attenuate to near-zero concentrations at much shorter distances, most notably ammonia, which would not be detectable at distances greater than 400 ft from the source.This analysis shows that concentrations of contaminants are highly sensitive to reaction rate.The large variability of literature values for this parameter leads to much uncertainty in QDM and other transport models that include reaction rates that have not been determined specifically for the field condition.It is, therefore, essential to select a literature value of λ that was determined under conditions comparable to the field conditions being modeled.The relations between source width, path length and down-gradient contaminant concentration also were explored.The source width for a range of path lengths was determined such that the concentration of chloride at the receptor was reduced by 5%.The path-length-to-source-width ratio increases from about 2.8 to 6.4 as the path length increases from 50 to 4000 ft.
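The sensitivity behaviour just described can be illustrated with a short stand-alone calculation. The sketch below is not the QDM spreadsheet itself: it implements the steady-state centreline form of the Domenico solution as it is commonly written in BIOSCREEN-type tools, and the dispersivity relationships (longitudinal dispersivity taken as 10% of half the source-receptor distance, with assumed transverse and vertical ratios) and the parameter values are illustrative assumptions, so absolute concentrations will differ from the QDM results reported here.

```python
import math

def domenico_centerline(x, c0, v, source_width, source_thickness,
                        lam=0.0, retardation=1.0, alpha_frac=0.5):
    """Steady-state concentration on the plume centreline (y = 0, z = 0) a distance x
    down-gradient of a constant planar source, Domenico-type approximation.

    x                source-to-receptor distance (ft)
    c0               source concentration (mg/L)
    v                groundwater seepage velocity (ft/d)
    lam              first-order decay constant (1/d)
    retardation      retardation factor R (dimensionless)
    alpha_frac       fraction of x used when estimating longitudinal dispersivity
    """
    vc = v / retardation                      # retarded contaminant velocity
    ax = 0.1 * (alpha_frac * x)               # assumed: alpha_x = 10% of the chosen distance
    ay, az = ax / 3.0, ax / 20.0              # assumed transverse/vertical dispersivity ratios
    decay = math.exp((x / (2.0 * ax)) * (1.0 - math.sqrt(1.0 + 4.0 * lam * ax / vc)))
    lateral = math.erf(source_width / (4.0 * math.sqrt(ay * x)))
    vertical = math.erf(source_thickness / (2.0 * math.sqrt(az * x)))  # downward dispersion only
    return c0 * decay * lateral * vertical

# Sensitivity to the decay constant for a 1000-ft-wide, 10-ft-thick source
for lam in (0.0, 0.001, 0.01):
    c = domenico_centerline(x=4000.0, c0=100.0, v=1.0,
                            source_width=1000.0, source_thickness=10.0, lam=lam)
    print(f"lambda = {lam:>5} 1/d -> C at 4000 ft = {c:.3g} mg/L")
```

With these assumptions, the conservative species (λ = 0) is attenuated only by the dispersion terms, whereas λ = 0.01 1/d drives the concentration to effectively zero at 4000 ft, mirroring the strong dependence on reaction rate described above.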
This is important because source width is conservatively defined in this screening tool as the largest dimension of the landfill footprint, for which the underlying groundwater is assumed to have uniform contaminant concentrations.Thus, a QDM simulation would predict that conservative species released from a very large sanitary landfill would reach receptors undiluted at unrealistic distances.Additional well sampling at the source could be used to reduce the source width and generate a more representative source width.The sensitivity of down-gradient concentrations of conservative contaminants to the Koc values was evaluated by simulating transport of a conservative contaminant at a source concentration of 100 mg/L to a receptor 300 ft down-gradient and varying Koc from 0 to 1000.Regardless of the Koc value, concentration at the receptor was 50.0 mg/L.However, time to equilibrium increased from 799 to 3282 days as Koc was increased.When λ was increased to 0.01, the contaminant concentration at the receptor decreases as Koc increases from 0 to 1000.Therefore, while differences in Koc values among conservative species have no effect on the down-gradient simulated concentrations, attenuation of reactive contaminants increases with increasing Koc values.This is because slowing the solute transport via adsorption to background organic material allows more time for degradation to occur as the solute moves down-gradient.Chloride was simulated for LF-26 as it was for all 30 landfills, as it is always detected and functions as a conservative tracer.The selection of other contaminants to simulate with QDM was based on frequencies of detection and concentrations of contaminants relative to regulatory standards reported in 17 years of on-site monitoring-well data.Although 47 chemical species were detected in monitoring-well data at Landfill LF-26, most were detected in less than 10% of the samples, and/or at concentrations substantially less than regulatory standards.Benzene, arsenic, nitrate and ammonia were frequently detected at several monitoring wells, at concentration greater than current regulatory values.QDM results indicate that chloride concentrations at all three receptors are equal to its concentration at the landfill.This is controlled by the relation between receptor concentration and ratios between path length and source width, as shown in the sensitivity analysis.A width-to-path-length ratio of at least 3–6 is required for down-gradient dispersive attenuation to occur.Here, source width of 2978 ft is larger than any of the three path lengths.Likewise, arsenic does not attenuate, and receptor concentrations are all equal to source concentrations.Benzene, ammonia and nitrogen have λ values greater than zero.Attenuation is shown in Fig. 10.Level of concern categories assigned to landfill LF-26 are shown in Fig. 
3.Arsenic and benzene exhibit high levels of concern for all receptors.Ammonia and nitrate have lower levels of concern for residential receptors than for streams and wetlands because they are governed by regulations with higher permissible concentrations, and the nearest residential receptor is farther from the landfill than the nearest stream or wetland.This example shows that, based on QDM simulations, there is a high level of concern and that all three receptor categories may be exposed to groundwater affected by Landfill LF-26 at concentrations that exceed applicable environmental and health standards.Locations of the 30 closed, unlined landfills in the New Jersey Pinelands National Reserve are shown in Fig. 1.The QDM spreadsheet was used to estimate contaminant concentrations at the stream, wetlands and residential receptors nearest to each landfill.With respect to permeability and solute transport, all 30 landfills were deemed appropriate for Domenico-based modeling, as Peclet numbers for all were greater than 6.A summary of screening-tool results for the 30 landfills evaluated in the New Jersey Pinelands is shown in Table 5.Monitoring-well data from 9 landfills indicate that groundwater immediately under the landfill footprints is free of contaminants at problematic concentrations.Therefore, for these landfills no further transport simulation is warranted, and the modeled level of concern is low.For 3 others, contaminants at concentrations greater than or equal to practical quantitation levels were detected in monitoring-well samples; however, QDM simulations indicated that concentrations at all receptors would be less than the PQLs.These landfills also are characterized as having low levels of concern.Eighteen other landfills are expected to have contaminants reaching receptors at concentrations greater than 50% of regulatory values, as determined by QDM.The most common contaminant that resulted in a landfill being categorized as a high level of concern was lead.Other problematic contaminants included barium, mercury and arsenic.Many landfills received high level of concern ratings because a nonreactive contaminant was present at high concentrations in landfill monitoring wells.In many cases the large source width relative to the distance between the source and the receptor resulted in little or no contaminant attenuation along the flow path, as expected based on the sensitivity analysis.A user of this screening method may choose to use a less conservative method of defining the source width, e.g.
a fraction of the landfill surface closest to the well from which the highest concentration of a contaminant was measured.For use in a screening tool it may be preferable to use the more conservative approach used here, as the composition of groundwater under the entire landfill is rarely known.The Quick Domenico Multi-scenario spreadsheet implementation is based on the original Quick Domenico model.In developing QDM, four features were added to QD: automatic calculation of time to steady state and dispersivity, expression of contaminant concentrations relative to regulatory standards, and inclusion of up to 50 simulations in a single spreadsheet.The multi-scenario feature enabled the assessment of sensitivity for key model parameters, as many hundreds of simulations were prepared in the process.Similarly, a user can develop any number of simulations, changing each parameter incrementally, to examine a range of scenarios that might describe the field conditions.Flow-field geometry is a controlling variable for concentrations of non-degrading contaminants such as metals.A path length of 3–6 times the source width is required for the contaminant to begin attenuating.The conservatively defined source width dictates that concentrations of non-reactive contaminants will be predicted to arrive at near-by receptors at the same concentration as at the monitoring well.Down-gradient concentration is highly sensitive to contamination degradation rate.Degrading contaminants such as benzene also are sensitive to Koc values, whereas down-gradient concentrations of conservative contaminants such as mercury are not affected by the Koc value.Groundwater sampled in observation wells at nine landfills in the New Jersey Pinelands did not contain regulated contaminants in substantial concentrations.For these, QDM simulations are not needed, and the screening tool defines them as unlikely to impact down-gradient receptors.Three others were found to have substantial concentrations of one or more contaminants in underlying groundwater, but not at receptors, and these also were defined as having low levels of concern.For the remaining eighteen landfills, QDM simulations indicate that a high level of concern that one or more contaminants may reach receptors at concentrations greater than 50% of applicable regulatory standards.Further monitoring, modeling, and/or remediation procedures may be indicated.This screening tool provides a conservative assessment of the level of concern posed by a contaminant source to potential receptors.As with all model-based approaches, it is important to consider limitations, that it is an approximate solution of governing transport equations; that accurate values for many required parameters are difficult to obtain; that heterogeneity and non-steady flow are not considered; and that the source is considered to be uniform and infinite.Compared to exact solutions to transport governing equations, such as that given by Sagar, differences in modeled solute concentrations along the plume centerline are negligible and decrease with increasing Peclet numbers.However, Domenico modeled concentrations away from the plume centerline should be considered with caution, especially near the source.Given the rapid, convenient and soundly-based qualities of the QDM screening method, it is a powerful tool for initial conceptual assessment of levels of concern for landfills and can be used to evaluate other surface and subsurface point sources of contaminants.
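The level-of-concern assignment that runs through this screening example can also be written down compactly. The function below is a sketch of the decision rules stated earlier in this article (unknown, low, moderate, high, keyed to the practical quantitation limit and to 50% of the most stringent applicable standard); the argument names are hypothetical and the treatment of "unknown" when aggregating to a landfill-wide rating is an assumption.

```python
from typing import Optional

def level_of_concern(simulated_conc: Optional[float],
                     pql: float,
                     regulatory_standard: float,
                     receptor_within_landfill: bool = False,
                     data_sufficient: bool = True) -> str:
    """Screening level of concern for one contaminant/receptor combination.

    simulated_conc        QDM steady-state concentration at the receptor
    pql                   practical quantitation limit for the contaminant
    regulatory_standard   most stringent standard applicable to that receptor type
    """
    if not data_sufficient or simulated_conc is None:
        return "unknown"
    if receptor_within_landfill or simulated_conc >= 0.5 * regulatory_standard:
        return "high"
    if simulated_conc > pql:
        return "moderate"
    return "low"

def landfill_level(combination_levels):
    """Landfill-wide rating: the highest level over all contaminant/receptor
    combinations ('unknown' is treated here as overriding the others)."""
    order = ["low", "moderate", "high", "unknown"]
    return max(combination_levels, key=order.index)

print(level_of_concern(0.06, pql=0.001, regulatory_standard=0.005))  # -> high
print(landfill_level(["low", "moderate", "high"]))                   # -> high
```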
A screening tool for quantifying levels of concern for contaminants detected in monitoring wells on or near landfills to down-gradient receptors (streams, wetlands and residential lots) was developed and evaluated. The tool uses Quick Domenico Multi-scenario (QDM), a spreadsheet implementation of Domenico-based solute transport, to estimate concentrations of contaminants reaching receptors under steady-state conditions from a constant-strength source. Unlike most other available Domenico-based model applications, QDM calculates the time for down-gradient contaminant concentrations to approach steady state and appropriate dispersivity values, and allows for up to fifty simulations on a single spreadsheet. Sensitivity of QDM solutions to critical model parameters was quantified. The screening tool uses QDM results to categorize landfills as having high, moderate and low levels of concern, based on contaminant concentrations reaching receptors relative to regulatory concentrations.The application of this tool was demonstrated by assessing levels of concern (as defined by the New Jersey Pinelands Commission) for thirty closed, uncapped landfills in the New Jersey Pinelands National Reserve, using historic water-quality data from monitoring wells on and near landfills and hydraulic parameters from regional flow models. Twelve of these landfills are categorized as having high levels of concern, indicating a need for further assessment. This tool is not a replacement for conventional numerically-based transport model or other available Domenico-based applications, but is suitable for quickly assessing the level of concern posed by a landfill or other contaminant point source before expensive and lengthy monitoring or remediation measures are taken. In addition to quantifying the level of concern using historic groundwater-monitoring data, the tool allows for archiving model scenarios and adding refinements as new data become available.
33
The effect of ultrasound treatment on the structural, physical and emulsifying properties of animal and vegetable proteins
Proteins perform a vast array of functions in both the food and pharmaceutical industries, such as emulsification, foaming, encapsulation, viscosity enhancement and gelation.This functionality arises from the complex chemical make-up of these molecules.Proteins are of particular interest in food systems as emulsifiers, due to their ability to adsorb to oil-water interfaces and form interfacial films.The surface activity of proteins is due to the amphiphilic nature of these molecules, which arises from the presence of both hydrophobic and hydrophilic regions in their peptide chains.Because their larger molecular weight gives them a bulkier structure than that of low molecular weight emulsifiers, proteins diffuse more slowly to the oil-water interface through the continuous phase.Once at the interface, proteins undergo surface denaturation and rearrange themselves in order to position their hydrophobic and hydrophilic amino groups in the oil and aqueous phases respectively, reducing the interfacial tension and overall free energy of the system.Proteins provide several advantages for emulsion droplet stabilisation, such as protein–protein interactions at interfaces, and electrostatic and steric stabilisation due to the charged and bulky nature of these biopolymers.Ultrasound is an acoustic wave with a frequency greater than 20 kHz, the threshold for human auditory detection.Ultrasound can be classified into two distinct categories based on the frequency range: high frequency, low power ultrasound, utilised most commonly for the analytical evaluation of the physicochemical properties of food, and low frequency, high power ultrasound, recently employed for the alteration of foods, either physically or chemically.The effects of high power ultrasound on food structures are attributed to ultrasonic cavitation, the rapid formation and collapse of gas bubbles, which is generated by localised pressure differentials occurring over short periods of time.The hydrodynamic shear forces and the rise in temperature at the site of bubble collapse caused by these cavitations contribute to the observed effects of high power ultrasound.Ultrasound treatment of food proteins has been reported to affect the physicochemical properties of a number of protein sources including soy protein isolate/concentrate and egg white protein (Arzeni, Pérez, & Pilosof, 2012; Krise, 2011).Arzeni, Martínez, et al. and Arzeni, Pérez, et al. studied the effect of ultrasound upon the structural and emulsifying properties of egg white protein and observed an increase in the hydrophobicity and emulsion stability of ultrasound treated EWP by comparison to untreated EWP.In addition, Krise reported no significant reduction in the primary protein structure molecular weight profile of EWP after sonication at 55 kHz for 12 min.Similarly, Karki et al. and Hu et al. observed no significant changes in the primary protein structure molecular weight profile of ultrasound treated soy protein.Furthermore, Arzeni, Martínez, et al.
described a significant reduction in protein aggregate size for soy protein isolate.However, the effect of ultrasound treatment upon gelatin, either mammalian or piscine derived, pea protein isolate or rice protein isolate has yet to be investigated.Gelatin is a highly versatile biopolymer widely used in a myriad of industries, from the food industry for gelation and viscosity enhancement, and the pharmaceutical industry for the manufacture of soft and hard capsules.Gelatin is prepared from the irreversible hydrolysis of collagen under either acidic or alkaline conditions in the presence of heat, yielding a variety of peptide-chain species.Gelatin is a composite mixture of three main protein fractions: free α-chains, β-chains, the covalent linkage between two α-chains, and γ-chains, the covalent linkage between three α-chains.Gelatin is unique among proteins owing to the lack of appreciable internal structuring, so that in aqueous solutions at sufficiently high temperatures the peptide chains take up random configurations, analogous to the behaviour of synthetic linear-chain polymers.Egg white protein is a functional ingredient widely used in the food industry, due to its emulsifying, foaming and gelation capabilities, and utilised within a wide range of food applications, including noodles, mayonnaise, cakes and confectionary.EWP is globular in nature with highly defined tertiary and quaternary structures.The main protein fractions of egg white protein include ovalbumin, ovotransferrin and ovomucin, as well as over 30 other protein fractions.Pea protein isolate is a nutritional ingredient used in the food industry owing to its emulsifying and gelation properties, and additionally its hypoallergenic attributes.PPI, a pulse legume, is extracted from Pisum sativum, and is the main cultivated protein crop in Europe.The major protein fractions found in PPI are albumins and globulins, the major fractions in pulse legumes are legumin, vicilin and convicilin.Other minor proteins found in pulses include prolamins and glutelins.Soy protein isolate is of particular interest to the food industry, as it is the largest commercially available vegetable protein source owing to its high nutritional value and current low cost, and a highly functional ingredient due to its emulsifying and gelling capabilities, however, this functionality is dependent upon the extraction method utilised for the preparation of the isolate.SPI, extracted from Glycine max, is an oilseed legume grown primarily in the United Sates, Brazil, Paraguay and Uruguay.Similar to pulse legumes, like PPI, the major protein factions in oilseed legumes are albumins and globulins, the dominant fractions in SPI are glycinin and β-conglycinin a trimeric glycoprotein.Rice protein isolate is a food ingredient of great importance, reflected by the large annual consumption of rice, 440 million metric tonnes in 2009.Up until recently the protein component of rice was usually discarded, as the starch component yielded greater commercial value.Despite rice proteins being common ingredients in gels, ice creams and infant formulae, few studies have been conducted on these proteins to ascertain emulsifying, foaming and gelling capabilities.RPI is extracted from Oryza sativa, a cereal grain, and is cultivated primarily in Asia.Similar to PPI and SPI, RPI has four main protein fractions albumin, globulin, glutelin and prolamin, which are water-, salt-, alkali- and alcohol-soluble, respectively.In this work, three animal proteins, bovine gelatin, fish 
gelatin and egg white protein, and three vegetable proteins, pea protein isolate, soy protein isolate and rice protein isolate, all of which are composite mixtures of a number of protein fractions, were investigated in order to assess the significance of high power ultrasound treatment on industrially relevant food proteins. The objectives of this research were to discern the effects of ultrasound treatment upon animal and vegetable proteins, in particular changes in physicochemical properties, measured in terms of size, molecular structure and intrinsic viscosity. Furthermore, differences in the performance of proteins as emulsifiers after ultrasound treatment were assessed in terms of emulsion droplet size, emulsion stability and interfacial tension. Oil-in-water emulsions were prepared with either untreated or ultrasound treated BG, FG, EWP, PPI, SPI and RPI at different concentrations and compared with each other and with a low molecular weight emulsifier, Brij 97. Bovine gelatin, cold water fish gelatin, egg white protein from chickens, Brij® 97 and sodium azide were purchased from Sigma Aldrich. Pea protein isolate, soy protein isolate and rice protein isolate were all kindly provided by Kerry Ingredients. The composition of the animal and vegetable proteins used in this study is presented in Table 1, acquired from the material specification forms of the suppliers. The oil used was commercially available rapeseed oil. The water used in all experiments was passed through a double distillation unit. Bovine gelatin, fish gelatin and rice protein isolate solutions were prepared by dispersion in water and adjusting the pH of the solution to 7.08 ± 0.04 with 1 M NaOH, as the initial pH of the solution is close to the isoelectric point (5.32, 5.02 and 4.85 for BG, FG and RPI, respectively). BG, FG, EWP, PPI, SPI and RPI were dispersed in water to obtain solutions within a protein concentration range of 0.1–10 wt.%, where all the animal proteins were soluble across this range of concentrations, whilst the vegetable proteins possessed an insoluble component regardless of hydration time. Sodium azide was added to the solutions to mitigate against microbial activity. The temperature of the protein solutions was measured before and after sonication by means of a digital thermometer with an accuracy of ±0.1 °C. Prior to ultrasound treatment, the temperature of the protein solutions was within the range of 5–10 °C, whilst the temperature of the BG and FG solutions was within the range of 45–50 °C, above the helix–coil transition temperature. After ultrasonic irradiation, the temperature of all protein solutions rose to approximately 45 °C. The pH of the animal and vegetable protein solutions was measured before and after sonication at a temperature of 20 °C. pH measurements were made using a SevenEasy pH meter. This instrument was calibrated with buffer standard solutions of known pH.
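To illustrate how triplicate readings of this kind (for example, pH before and after sonication) can be summarised and tested for significance, a minimal Python sketch is given below. The numerical values are hypothetical and not taken from this study; only the reporting convention (mean ± standard deviation of three repeats, Student's t-test with P < 0.05 taken as significant) mirrors the procedure described here.

```python
import numpy as np
from scipy import stats

# Hypothetical triplicate pH readings for one protein solution before and after
# sonication (values are illustrative only, not taken from the study).
ph_untreated = np.array([7.10, 7.06, 7.08])
ph_sonicated = np.array([6.85, 6.90, 6.88])

# Report each condition as mean +/- standard deviation of the three repeats.
for label, values in [("untreated", ph_untreated), ("sonicated", ph_sonicated)]:
    print(f"{label}: {values.mean():.2f} +/- {values.std(ddof=1):.2f}")

# Student's t-test; P < 0.05 is taken as statistically significant.
t_stat, p_value = stats.ttest_ind(ph_untreated, ph_sonicated)
print(f"t = {t_stat:.2f}, P = {p_value:.4f}, significant: {p_value < 0.05}")
```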
The pH values are reported as the average and the standard deviation of three repeat measurements. The size of untreated and ultrasound treated animal proteins was measured by dynamic light scattering using a Zetasizer Nano Series, and the size of untreated and ultrasound treated vegetable proteins was measured by static light scattering using the Mastersizer 2000. Protein size values are reported as the Z-average. The width of the protein size distribution was expressed in terms of the span, span = (Dv0.9 − Dv0.1)/Dv0.5, where Dv0.9, Dv0.1 and Dv0.5 are the equivalent volume diameters at 90, 10 and 50% cumulative volume, respectively. Low span values indicate a narrow size distribution. The protein size and span values are reported as the average and the standard deviation of three repeat measurements. Cryogenic scanning electron microscopy (cryo-SEM) was used to visualise the microstructure of untreated and ultrasound treated proteins. One drop of protein solution was frozen to approximately −180 °C in liquid nitrogen slush. Samples were then fractured and etched for 3 min at a temperature of −90 °C inside a preparation chamber. Afterwards, samples were sputter coated with gold and scanned, during which the temperature was kept below −160 °C by addition of liquid nitrogen to the system. The molecular structure of untreated and ultrasound treated animal and vegetable proteins was determined by sodium dodecyl sulphate–polyacrylamide gel electrophoresis (SDS-PAGE), using a Mini-Protean 3 Electrophoresis System, where proteins were tested using the reducing method. 100 μL of protein solution at a concentration of 1 wt.% was added to 900 μL of Laemmli buffer (containing glycerol and 0.01% bromophenol blue) and 100 μL of β-mercaptoethanol in 2 mL micro tubes, which were then sealed. These 2 mL micro tubes were placed in a float in a water bath at a temperature of 90 °C for 30 min, to allow the reduction reaction to take place. A 10 μL aliquot was taken from each sample and loaded onto a Tris-acrylamide gel. A molecular weight standard was used to determine the primary protein structure (molecular weight profile) of the samples. Gel electrophoresis was carried out initially at 55 V for 10 min, then at 155 V for 45 min in a running buffer. The gels were removed from the gel cassette, stained with Coomassie Bio-safe stain for 1 h and de-stained with distilled water overnight. The concentration ranges used for the determination of the intrinsic viscosity of BG, FG, EWP, PPI, SPI and RPI were 0.1–0.5 wt.%, 0.25–1.5 wt.%, 1.5–3 wt.%, 0.5–0.8 wt.%, 1.5–3 wt.% and 0.5–2 wt.%, respectively. The validity of the regression procedure is confined within a discrete range of ηrel, 1.2 < ηrel < 2. The upper limit is due to the hydrodynamic interaction between associates of protein molecules, and the lower limit is due to inaccuracy in the determination of very low viscosity fluids; a value of ηrel approaching 1 indicates the lower limit. The viscosity of the protein solutions was measured at 20 °C using a Kinexus rheometer equipped with a double gap geometry. For the determination of intrinsic viscosity by extrapolation to infinite dilution, there must be linearity between shear stress and shear rate, which indicates a Newtonian behaviour region over the range of shear rates used in the measurements. The Newtonian plateau region of the BG, FG, EWP, PPI, SPI and RPI solutions, at the range of concentrations used, was found within a shear rate range of 25–1000 s−1. Thus, the values of viscosity of the protein solutions and that of the solvent were selected from the flow curve data at a constant shear rate of 250 s−1,
which were subsequently used to determine the specific viscosity, ηsp, the relative viscosity, ηrel, and the intrinsic viscosity, [η]. At least three replicates of each measurement were made. 10 wt.% dispersed phase was added to the continuous aqueous phase containing either untreated or sonicated animal or vegetable proteins or Brij 97 at different concentrations, ranging from 0.1 to 10 wt.%. An oil-in-water pre-emulsion was prepared by emulsifying this mixture at 8000 rpm for 2 min using a high shear mixer. Submicron oil-in-water emulsions were then prepared by further emulsifying the pre-emulsion using a high-pressure valve homogeniser at 125 MPa for 2 passes. The initial temperature of the EWP, PPI, SPI and RPI emulsions was 5 °C to prevent thermal denaturation of proteins during high pressure homogenisation, whilst denaturation may still occur due to the high shear during high pressure processing. The initial temperature of the BG and FG emulsions was 50 °C to prevent gelation of gelatin during the homogenisation process. High pressure processing increases the temperature of the processed material, and consequently the final temperatures of the emulsions prepared with EWP, PPI, SPI and RPI, and with gelatin, after homogenisation were ∼45 °C and ∼90 °C, respectively. The droplet size of the emulsions was measured by SLS using a Mastersizer 2000 immediately after emulsification. Emulsion droplet size values are reported as the volume-surface mean diameter (d3,2). The stability of the emulsions was assessed by droplet size measurements over 28 days, where emulsions were stored under refrigeration conditions throughout the duration of the stability study. The droplet sizes and error bars are reported as the mean and standard deviation, respectively, of measured emulsions prepared in triplicate. The interfacial tension between the aqueous phase and the oil phase was measured using a tensiometer K100 with the Wilhelmy plate method. The Wilhelmy plate has a length, width and thickness of 19.9 mm, 10 mm and 0.2 mm, respectively, and is made of platinum. The Wilhelmy plate was immersed in 20 g of aqueous phase to a depth of 3 mm. Subsequently, an interface between the aqueous phase and the oil phase was created by carefully pipetting 50 g of the oil phase over the aqueous phase. The test was conducted over 3600 s and the temperature was maintained at 20 °C throughout the duration of the test. The interfacial tension values and the error bars are reported as the mean and standard deviation, respectively, of three repeat measurements. Cryogenic scanning electron microscopy was used to visualise the microstructure of pre-emulsions prepared using untreated and sonicated proteins. One drop of pre-emulsion was frozen to approximately −180 °C in liquid nitrogen slush. Samples were then fractured and etched for 3 min at a temperature of −90 °C inside a preparation chamber. Afterwards, samples were sputter coated with gold and scanned, during which the temperature was kept below −160 °C by addition of liquid nitrogen to the system. Student's t-test with a 95% confidence interval was used to assess the significance of the results obtained. t-test data with P < 0.05 were considered statistically significant. The effect of the duration of ultrasonic irradiation on the size and pH of BG, FG, EWP, PPI, SPI and RPI was initially investigated. 0.1 wt.% solutions of BG, FG, EWP, PPI, SPI and RPI were sonicated for 15, 30, 60 and 120 s, with an ultrasonic frequency of 20 kHz and an amplitude of 95%. Protein size and pH measurements for untreated, and
ultrasound treated BG, FG, EWP, PPI, SPI and RPI as a function of time are shown in Fig. 1 and Table 2.The size of the vegetable proteins isolates presented in Fig. 1 prior to sonication are in a highly aggregated state due to protein denaturation from the processing to obtain these isolates.Fig. 1 shows that there is a significant reduction in protein size with an increase in the sonication time, and the results also highlight that after a sonication of 1 min there is minimal further reduction in protein size of BG, FG, EWP, PPI and SPI.This decrease in protein size is attributed to disruption of the hydrophobic and electrostatic interactions which maintain untreated protein aggregates from the high hydrodynamic shear forces associated with ultrasonic cavitations.However, there is no significant reduction in the size of RPI agglomerates, irrespective of treatment time, due to the highly aggregated structure of the insoluble component of RPI, ascribed to both the presence of carbohydrate within the aggregate structure and the denaturation of protein during the preparation of the protein isolate, restricting size reduction by way of ultrasound treatment.The pH of all animal and vegetable protein solutions, with the exception of RPI, decreased significantly with increasing sonication time.Equivalent to the protein size measurements, after a treatment time of 1 min the pH of protein solutions decreased no further.The decrease in pH of animal and vegetable protein solutions is thought to be associated with the transitional changes resulting in deprotonation of acidic amino acid residues which were contained within the interior of associated structures of untreated proteins prior to ultrasound treatment."Our results are in agreement with those of O'Sullivan, Arellano, et al. 
and O'Sullivan, Pichot, et al., who showed that an increased sonication led to a significant reduction of protein size and pH for dairy proteins up to a sonication time of 1 min, as with animal and vegetable proteins, with an ultrasound treatment of 20 kHz and an amplitude of 95%.The stability of sonicated animal and vegetable proteins solutions as a function of time was investigated by protein size and protein size distribution of sonicated BG, FG, EWP, PPI, SPI and RPI.Animal and vegetable protein solutions with a concentration of 0.1 wt.% were ultrasound treated at 20 kHz and ∼34 W cm−2 for a sonication time of 2 min, as no further decrease in protein size after a sonication time of 1 min was observed.The protein size and span values of sonicated animal and vegetable proteins were measured immediately after treatment and after 1 and 7 days, in order to assess the stability of protein size and protein size distribution.Protein size measurements and span values obtained from DLS and SLS for untreated and ultrasound treated BG, FG, EWP, PPI, SPI and RPI are shown in Table 3.As can be seen from Table 3, ultrasound treatment produced a significant reduction in the size and span of BG, FG and EWP.However, 7 days after sonication an increase in the size and the broadening of the distribution was observed for BG, FG and EWP.The effective size reduction of the ultrasound treatment to BG, FG and EWP on day 7 was 85.6%, 80% and 74.25% respectively.In the case of PPI and SPI, the results in Table 3 show that ultrasound treatment significantly reduced the aggregate size and a broadening of the protein size distribution.The size distribution of PPI and SPI after ultrasound treatment is bimodal, one population having a similar size as the parent untreated protein, and the other population is nano-sized.The span of the distribution and protein size on day 7 for PPI and SPI was quite similar to that after immediate sonication, representing an effective protein size reduction of 95.7% and 82.3% for PPI and SPI respectively.This significant reduction in aggregate size of both PPI and SPI from ultrasound treatment allows for improved solubilisation and prolonged stability of these vegetable protein isolates to sedimentation.Our results are in agreement with those of Jambrak et al., who observed a significant reduction in the size of SPI aggregates.Arzeni, Martínez, et al. 
also observed a decrease in the protein size for sonicated SPI but an increase in size for EWP treated by ultrasound, whereby this increase in size of EWP aggregates is associated with thermal aggregation during the ultrasound treatment. The observed decrease in the protein size of BG, FG, EWP, PPI and SPI is due to the disruption of non-covalent associative forces, such as hydrophobic and electrostatic interactions and hydrogen bonding, which maintain protein aggregates in solution, induced by the high levels of hydrodynamic shear and turbulence due to ultrasonic cavitations. The observed increase in size for BG, FG and EWP after 7 days is thought to be due to reorganisation of proteins into sub-aggregates through non-covalent interactions. In the case of PPI and SPI, the static size observed is due to the more defined structure of the PPI and SPI aggregates in comparison to the fully hydrated animal proteins, which allows for greater molecular interactions and mobility. In order to validate these hypotheses, cryo-SEM micrographs were captured of untreated solutions and of solutions 7 days after sonication, for BG, EWP, SPI and PPI, at 1 wt.% for all proteins tested. Untreated BG in solution appears to be distributed into discrete fibres, which is consistent with the literature describing gelatin as a fibrous protein, whilst BG treated by ultrasound appears to be in the form of fibrils of the parent untreated BG fibre, where the width of the fibres and the fibrils is equivalent, yet the length of the fibrils is shorter than that of the untreated BG fibres. In the case of untreated SPI, large aggregates of protein can be seen, composed of discrete entities, whereas sonicated SPI has a notably reduced protein size, with a monodisperse size distribution. Similar results were observed for FG, EWP and PPI. These results are in agreement with the previously discussed observations, and add evidence to the hypothesis that ultrasound treatment causes disruption of protein aggregates, which subsequently reorganise themselves into smaller sub-associates. The molecular structure of untreated and ultrasound treated animal and vegetable proteins was investigated next. Protein solutions at a concentration of 1 wt.% were ultrasound treated for 2 min at 20 kHz, with a power intensity of ∼34 W cm−2. Electrophoretic profiles obtained by SDS-PAGE for untreated and ultrasound treated BG, FG, EWP, SPI, PPI and RPI, and the molecular weight standard, are shown in Fig. 3. No difference in the protein fractions was observed between untreated and sonicated BG, FG, EWP, SPI, PPI and RPI. These results are in concurrence with those reported by Krise, who showed no difference in the primary structure (molecular weight profile) between untreated and ultrasound treated egg white, with a treatment conducted at 55 kHz and 45.33 W cm−2 for 12 min. Moreover, the obtained protein fractions are in agreement with the literature for gelatin, EWP, SPI, PPI and RPI. The intrinsic viscosity, [η], was obtained by fitting the experimental viscosity data to the Huggins and Kraemer equations, for untreated and ultrasound irradiated animal and vegetable protein solutions, as shown in Fig.
4 for EWP and PPI. The other proteins investigated as part of this study display similar behaviour to EWP. The values of [η] and the Huggins, kH, and Kraemer, kK, constants for each of the proteins investigated in this study are listed in Table 4. The intrinsic viscosity, [η], demonstrates the degree of hydration of proteins and provides information about the hydrodynamic volume of the associates, which is related to the molecular conformation of proteins in solution. A comparison of the [η] values between untreated and ultrasound treated animal and vegetable proteins demonstrates that ultrasound treatment induced a significant reduction in the intrinsic viscosity of BG, FG, EWP, PPI and SPI in solution, and consequently a significant reduction in the hydrodynamic volume occupied by the proteins and the solvents entrained within them. These results are in agreement with the reduction in associate size and the cryo-SEM micrographs; however, for the case of RPI, there is no reduction in the intrinsic viscosity, which is consistent with the previous size measurements. Gouinlock et al., Lefebvre, and Prakash reported intrinsic viscosity values of 6.9 dL/g for gelatin, 0.326 dL/g for ovalbumin and 0.46 dL/g for glycinin, respectively. These values differ from those obtained in this work for untreated BG, EWP and SPI. These differences may be a consequence of the complexity of the EWP and SPI solutions, which are composed of a mixture of protein fractions rather than single component ovalbumin and glycinin; in the case of gelatin, differences may arise due to variability in the preparation of the gelatin from collagen, which determines the molecular weight profile of the resulting gelatin. Extrinsic variations in solvent quality greatly affect the determination of intrinsic viscosity and further account for the differences between the single fraction proteins and the multi-component proteins investigated in this study. Extrinsic factors affecting intrinsic viscosity include temperature, pH, initial mineral content and composition, co-solvents, and additional salts and their concentration. Furthermore, the large [η] of both BG and FG by comparison to the other proteins investigated as part of this study is due to the random coil conformation of these molecules in solution, which consequently entrain more water, giving a larger overall hydrodynamic volume. The intrinsic viscosity of a protein solution can be used to indicate the degree of hydrophobicity of the protein. The intrinsic viscosity of protein associates in solution is dependent on their conformation and degree of hydration, which dictate the amount of hydrophobic residues that are within the interior of the protein associates. A decrease in the intrinsic viscosity also leads to dehydration of amphiphilic biopolymers, increasing the hydrophobicity of the biopolymer and thus reducing the energy required for adsorption of amphiphilic biopolymers to the oil-water interface. Thus, the significant reduction of intrinsic viscosity induced by ultrasound treatment expresses an increase in the degree of hydrophobicity of BG, FG, EWP, PPI and SPI. The Huggins and Kraemer coefficients are adequate for the assessment of solvent quality.
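As an illustration of the dilute-solution extrapolation described above, the sketch below fits hypothetical relative viscosity data (not from this study) to the Huggins equation, ηsp/c = [η] + kH[η]²c, and the Kraemer equation, ln(ηrel)/c = [η] + kK[η]²c, taking the intercept at infinite dilution as an estimate of [η].

```python
import numpy as np

# Hypothetical dilute-solution data (illustrative only): concentration c in g/dL
# and the measured relative viscosity, chosen so that 1.2 < eta_rel < 2 as
# required by the extrapolation procedure.
c = np.array([0.10, 0.20, 0.30, 0.40])            # g/dL
eta_rel = np.array([1.24, 1.47, 1.69, 1.90])      # eta_solution / eta_solvent

eta_sp = eta_rel - 1.0                            # specific viscosity

# Huggins:  eta_sp / c      = [eta] + kH * [eta]^2 * c
# Kraemer:  ln(eta_rel) / c = [eta] + kK * [eta]^2 * c
slope_h, intercept_h = np.polyfit(c, eta_sp / c, 1)
slope_k, intercept_k = np.polyfit(c, np.log(eta_rel) / c, 1)

# Estimate [eta] from the common intercept at infinite dilution (c -> 0).
intrinsic_viscosity = 0.5 * (intercept_h + intercept_k)
k_huggins = slope_h / intrinsic_viscosity**2
k_kraemer = slope_k / intrinsic_viscosity**2

print(f"[eta] ~ {intrinsic_viscosity:.2f} dL/g, kH ~ {k_huggins:.2f}, kK ~ {k_kraemer:.2f}")
```

Averaging the two intercepts is only one simple way of combining the two extrapolations; the two fits may equally well be reported separately.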
"Positive values of the Huggins' coefficient, kH, within a range of 0.25–0.5 indicate good solvation, whilst kH values within a range of 0.5–1.0 are related to poor solvents.Conversely negative values for the Kraemer coefficient, kK, indicate good solvent, yet positive values express poor solvation.The values for the kH and kK are both negative, with the exception of untreated PPI exhibiting a positive kH value, indicating good solvation when considering kK, yet unusual behaviour in the case of kH."Nonetheless, negative values of kH have been reported in the literature for biopolymers with amphiphilic properties, such as bovine serum albumin, sodium caseinate, whey protein isolate and milk protein isolate, all dispersed within serum.Positive kH values are associated with uniform surface charges of polymers, indicating that untreated PPI aggregates have a uniform surface charge, and after ultrasound treatment conformational changes occur yielding an amphiphatic character on the surface of the ultrasound treated PPI, observed by the negative kH value.It is also important to observe that the relation kH + kK = 0.5, generally accepted to indicate adequacy of experimental results for hydrocolloids, was not found for any of the proteins investigated in this study.This effect is thought to be associated with the amphiphatic nature of the proteins used in this study yielding negative values of kH and kK."Similar results have been reported in the literature for other amphiphilic polymers.In addition, the values of kH and kK tend to decrease after ultrasound treatment indicating improved solvation of proteins.Oil-in-water emulsions were prepared with 10 wt.% rapeseed oil and an aqueous continuous phase containing either untreated or ultrasound irradiated BG, FG, EWP, PPI, SPI and RPI, or a low molecular weight surfactant, Brij 97, at a range of emulsifier concentrations.Emulsions were prepared using high-pressure valve homogenisation and droplet sizes as a function of emulsifier type and concentration are shown in Fig. 
5.The emulsion droplet sizes were measured immediately after emulsification, and all exhibited unimodal droplet size distributions.Emulsions prepared with sonicated BG, EWP and PPI at concentrations <1 wt.% yielded a significant reduction in emulsion droplet size by comparison to their untreated counterparts.At concentrations ≥1 wt.% the emulsions prepared with untreated and ultrasound treated BG, EWP and PPI exhibited similar droplet sizes.The decrease in emulsion droplet size after ultrasound treatment at concentrations <1 wt.% is consistent with the significant reduction in protein size upon ultrasound treatment of BG, EWP and PPI solutions which allows for more rapid adsorption of protein to the oil-water interface, as reported by Damodaran and Razumovsky.In addition, the significant increase of hydrophobicity of ultrasound treated BG, EWP and PPI and the decrease in intrinsic viscosity would lead to an increased rate of protein adsorption to the oil-water interface, reducing interfacial tension allowing for improved facilitation of droplet break-up.The submicron droplets obtained for untreated PPI are in agreement with droplet sizes obtained by those measured by Donsì, Senatore, Huang, and Ferrari, in the order of ∼200 nm for emulsions containing pea protein.Emulsions prepared with the tested concentrations of untreated and ultrasound treated FG, SPI and RPI yielded similar droplet sizes, where emulsions prepared with 0.1 wt.% FG yielded emulsion droplets ∼5 μm, and both SPI and RPI yielded ∼2 μm droplets at the same concentration.Furthermore, at similar concentrations PPI yielded smaller emulsion droplets than those prepared with SPI, making SPI a poorer emulsifier, in agreement with the results of Vose.This behaviour was anticipated for RPI, where no significant reduction in protein size was observed, yet unexpected when considering the significant reduction of protein size observed for both sonicated FG and SPI.Moreover, the significant increase in hydrophobicity of ultrasound treated FG and SPI expressed by the decrease in intrinsic viscosity would also be expected to result in faster adsorption of protein to the oil-water interface, however it appears that the rate of protein adsorption of ultrasound treated FG and SPI to the oil-water interface remains unchanged regardless of the smaller protein associate sizes and increase in hydrophobicity, when compared with untreated FG and SPI.Even though ultrasound treatment reduces the aggregate size of SPI, proteins possessing an overall low molecular weight, such as EWP, are capable of forming smaller emulsion droplets than larger molecular weight proteins as lower molecular weight species have greater molecular mobility through the bulk for adsorbing to oil-water interfaces.The submicron droplets achieved for untreated FG are consistent with droplet sizes obtained by Surh, Decker, and McClements, in the order of ∼300 nm for emulsions containing either low molecular weight or high molecular weight fish gelatin.At protein concentrations >1 wt.% for emulsions prepared with either untreated or ultrasound treated EWP, SPI and RPI micron sized entities were formed.Unexpectedly, emulsions prepared with PPI did not exhibit the formation of these entities, even though the structure of PPI is similar to that of SPI.The degree and structure of the denatured component of PPI likely varies to that of SPI and accounts for the non-aggregating behaviour of PPI.Emulsions being processed using high pressure homogenisation experience both increases in 
temperature and regions of high hydrodynamic shear; both of these mechanisms result in denaturation of proteins. These micron sized entities are attributed to denaturation and aggregation of protein due to the high levels of hydrodynamic shear present during the homogenisation process, as thermal effects were minimised by ensuring that the emulsions were processed at a temperature of 5 °C, and the outlet temperature was less than 45 °C in all cases, lower than the thermal denaturation temperatures of EWP, SPI and RPI. Hydrostatic pressure induced gelation of EWP, SPI and RPI has been reported in the literature, and the formation of these entities is attributed to the high shear forces exerted upon the proteins while under high shear conditions, whereby the excess of bulk protein allows for greater interpenetration of protein chains under high shear, yielding the formation of discrete entities composed of oil droplets within denatured aggregated protein. Unexpectedly, emulsions prepared with a higher concentration of protein yielded a significant reduction in entity size in comparison to those prepared with the lower concentration. This behaviour is ascribed to an increased rate of formation and number of aggregates formed at higher concentrations during the short time within the shear field. Emulsion droplet sizes for all animal and vegetable proteins investigated are smaller than the size of the untreated proteins. Be that as it may, the reported protein sizes represent aggregates of protein molecules and not discrete protein fractions. Native ovalbumin and glycinin have hydrodynamic radii of approximately 3 nm and 12.5 nm, respectively, in comparison to the size data presented in Table 3, whereby EWP and SPI have Dz values of approximately 1.6 and 1.7 μm, respectively. This disparity in size is due to the preparation of these protein isolates, whereby shear and temperature result in the formation of insoluble aggregated material, in comparison to the soluble native protein fractions. Proteins in aqueous solutions associate together to form aggregates due to hydrophobic and electrostatic interactions; however, in the presence of a hydrophobic dispersed phase the protein fractions which comprise the aggregate disassociate and adsorb to the oil-water interface, which accounts for the fabrication of the submicron droplets presented in this study. The emulsion droplet sizes presented in Fig. 5, which were shown to be dependent on the emulsifier type, can be interpreted by comparing the interfacial tension of the studied systems. Fig. 6 presents the interfacial tension between water and rapeseed oil, for untreated and ultrasound treated BG, FG, PPI and SPI, and Brij 97, all at an emulsifier concentration of 0.1 wt.%. In order to assess the presence of surface active impurities within the dispersed phase, the interfacial tension between distilled water and rapeseed oil was measured. Fig.
6 shows that the interfacial tension of all systems decreases continually as a function of time. In light of these results, the decrease of interfacial tension with time is attributed primarily to the nature of the dispersed phase used, and to a lesser degree to the type of emulsifier. Gaonkar explained that the time dependent nature of the interfacial tension of commercially available vegetable oils against water was due to the adsorption of surface active impurities present within the oils at the oil–water interface. Gaonkar also reported that after purification of the vegetable oils, the time dependency of the interfacial tension was no longer observed. No significant differences were observed in the obtained values of interfacial tension between untreated and ultrasound treated FG and RPI. These results are consistent with the droplet size data, where no significant difference in the droplet size was observed. Significant differences were shown for the initial rate of decrease of interfacial tension when comparing untreated and ultrasound treated PPI. Ultrasound treated PPI aggregates are smaller than untreated PPI and have greater hydrophobicity, accounting for the significant reduction of the initial interfacial tension and enhancing droplet break-up during emulsification. Significant differences in the equilibrium interfacial tension values were observed when comparing untreated and sonicated BG, EWP and SPI. These results are consistent with the observed significant reduction in emulsion droplet size for BG and EWP and add evidence to the hypotheses that aggregates of sonicated BG and EWP adsorb faster to the interface, due to their higher surface area-to-volume ratio and increased hydrophobicity, significantly reducing the equilibrium interfacial tension and yielding smaller emulsion droplets. No significant reduction in emulsion droplet size was noted for SPI, despite the observed reduction in the equilibrium interfacial tension of SPI, which may be a consequence of alternative protein conformations at the oil-water interface. These hypotheses were explored by cryo-SEM of pre-emulsions, to allow for visualisation of the emulsion droplet interface, prepared with untreated and ultrasound treated BG and SPI at an emulsifier concentration of 1 wt.% for all pre-emulsions tested. Emulsion droplets of pre-emulsions prepared with untreated BG show fibres of gelatin tracking around the surface of the droplets, whereas emulsion droplets of pre-emulsions prepared with ultrasound treated BG show the smaller fibrils of gelatin at the interface of the droplets, yielding improved interfacial packing of protein and accounting for the lower equilibrium interfacial tension and the decrease in droplet size. The droplet surfaces of pre-emulsions prepared with ultrasound treated SPI appear to be smoother by comparison to the seemingly more textured droplet interfaces observed for pre-emulsions prepared with untreated SPI. These findings are consistent with the interfacial tension data, where a significant reduction of the equilibrium interfacial tension upon sonication of BG and SPI was observed, and are accounted for by the visualisation of the improved interfacial packing of protein. The stability of oil-in-water emulsions prepared with untreated and sonicated BG, FG, EWP, PPI, SPI and RPI, and Brij 97 for comparative purposes, was assessed over a 28 day period. Fig.
8 shows the development of droplet size as a function of time for emulsions prepared with untreated and ultrasound irradiated BG, FG, PPI and SPI, as well as Brij 97, at an emulsifier concentration of 0.1 wt.%.Emulsions prepared with untreated BG exhibited a growth in droplet size, and this coalescence was also observed for emulsions prepared with 0.5 wt.% untreated BG, while emulsions prepared with higher concentrations of untreated BG were stable for the 28 days of the study.However, it can also be seen that emulsions prepared with ultrasound treated BG were resistant to coalescence over the 28 days of the study, and had the same stability of Brij 97.The behaviour exhibited by 0.1 wt.% ultrasound treated BG was observed at all concentrations investigated in this study.This improved stability of ultrasound treated BG by comparison to untreated BG is thought to be associated with an increase in the hydrophobicity and improved interfacial packing of ultrasound treated BG by comparison to untreated BG as observed by a decrease in the equilibrium interfacial tension and cryo-SEM visualisation.In contrast, results in Fig. 8b show that emulsions prepared with both untreated and ultrasound treated FG display coalescence, yet ultrasound treated FG displayed a notable decrease in emulsion stability by comparison to untreated FG.The emulsion stability of untreated and ultrasound treated FG is analogous to untreated BG, where coalescence was observed at concentration of 0.5 wt.%, and stable emulsions were achieved with higher emulsifier concentrations.This decrease in emulsion stability after ultrasound treatment of FG is thought to be associated with a weaker interfacial layer of ultrasound treated FG by comparison to untreated FG allowing for a greater degree of coalescence, accounting for the decrease in emulsion stability.Emulsions prepared with either untreated or sonicated EWP, PPI, SPI and RPI, and Brij 97 were all stable against coalescence and bridging flocculation over the 28 days of this study.This stability was observed for all concentrations probed in this study of untreated and ultrasound treated EWP, PPI, SPI and RPI investigated, as well as for Brij 97.In all cases no phase separation was observed in the emulsions, whilst emulsions with droplet sizes >1 μm exhibited gravitational separation with a cream layer present one day after preparation.Furthermore, the d3,2 is lower in all cases at an emulsifier concentration of 0.1 wt.% for ultrasound treated proteins by comparison to that of their untreated counterparts, as previously discussed.This study showed that ultrasound treatment of animal and vegetable proteins significantly reduced aggregate size and hydrodynamic volume, with the exception of RPI.The reduction in protein size was attributed to the hydrodynamic shear forces associated with ultrasonic cavitations.In spite of the aggregate size reduction, no differences in primary structure molecular weight profile were observed between untreated and ultrasound irradiated BG, FG, EWP, PPI, SPI and RPI.Unanticipatedly, emulsions prepared with the ultrasound treated FG, SPI and RPI proteins had the same droplet sizes as those obtained with their untreated counterparts, and were stable at the same concentrations, with the exception of emulsions prepared with ultrasound treated FG where reduced emulsion stability at lower concentrations was exhibited.These results suggest that sonication did not significantly affect the rate of FG or RPI surface denaturation at the interface, as no 
significant reduction in the equilibrium interfacial tension between untreated and ultrasound irradiated FG or RPI was observed. By comparison, emulsions fabricated with ultrasound treated BG, EWP and PPI at concentrations <1 wt.% had smaller emulsion droplet sizes than their untreated counterparts at the same concentrations. This behaviour was attributed to a reduction in protein size and an increase in the hydrophobicity of sonicated BG, EWP and PPI. Furthermore, emulsions prepared with ultrasound treated BG had improved stability against coalescence for 28 days at all concentrations investigated. This enhancement in emulsion stability is attributed to improved interfacial packing, as observed by a lower equilibrium interfacial tension and cryo-SEM micrographs. Ultrasound treatment can thus improve the solubility of previously poorly soluble vegetable proteins and, moreover, is capable of improving the emulsifying performance of other proteins.
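For readers reproducing the droplet-size metrics used in this work, the short sketch below shows how the volume-surface (Sauter) mean diameter, d3,2 = Σnidi³/Σnidi², and the span of the volume-weighted distribution could be computed from a binned size distribution. The bin diameters and counts are hypothetical and purely illustrative; laser diffraction instrument software normally reports these quantities directly.

```python
import numpy as np

# Hypothetical binned droplet-size distribution (illustrative only): bin mid-point
# diameters in micrometres and the number of droplets counted in each bin.
d = np.array([0.1, 0.2, 0.4, 0.8, 1.6])    # droplet diameter, um
n = np.array([500, 800, 400, 150, 30])     # droplet count per bin

# Sauter (volume-surface) mean diameter: d3,2 = sum(n*d^3) / sum(n*d^2)
d32 = np.sum(n * d**3) / np.sum(n * d**2)

# Span of the volume-weighted distribution: (Dv0.9 - Dv0.1) / Dv0.5,
# estimated here by interpolating the cumulative volume fraction.
volume = n * d**3
cumulative = np.cumsum(volume) / np.sum(volume)
dv10, dv50, dv90 = np.interp([0.1, 0.5, 0.9], cumulative, d)
span = (dv90 - dv10) / dv50

print(f"d3,2 = {d32:.2f} um, span = {span:.2f}")
```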
The ultrasonic effect on the physicochemical and emulsifying properties of three animal proteins, bovine gelatin (BG), fish gelatin (FG) and egg white protein (EWP), and three vegetable proteins, pea protein isolate (PPI), soy protein isolate (SPI) and rice protein isolate (RPI), was investigated. Protein solutions (0.1–10 wt.%) were sonicated at an acoustic intensity of ∼34 W cm−2 for 2 min. The structural and physical properties of the proteins were probed in terms of changes in size, hydrodynamic volume and molecular structure using DLS and SLS, intrinsic viscosity and SDS-PAGE, respectively. The emulsifying performance of ultrasound treated animal and vegetable proteins was compared to their untreated counterparts and Brij 97. Ultrasound treatment reduced the size of all proteins, with the exception of RPI, and no reduction in the primary structure (molecular weight profile) of the proteins was observed in any case. Emulsions prepared with all untreated proteins yielded submicron droplets at concentrations ≤1 wt.%, whilst at concentrations >5 wt.% emulsions prepared with EWP, SPI and RPI yielded micron sized droplets (>10 μm) due to pressure denaturation of protein from homogenisation. Emulsions produced with sonicated FG, SPI and RPI had similar droplet sizes to untreated proteins at the same concentrations, whilst sonicated BG, EWP and PPI emulsions at concentrations ≤1 wt.% had a smaller droplet size compared to emulsions prepared with their untreated counterparts. This effect was consistent with the observed reduction in the interfacial tension between these untreated and ultrasound treated proteins.
34
To target or not to target? Definitions and nomenclature for targeted versus non-targeted analytical food authentication
Historically, most analytical approaches for food authentication have been targeted towards single analytes, such as Sudan dyes in spices or melamine in milk powder. These targeted methods detect only one compound at a time and, therefore, often provide only limited information and insufficient consumer protection against food fraud and adulteration. Considering the thousands of adulterants that can potentially be added to a food product, this represents a highly inefficient authentication strategy unless one specific adulterant is suspected. More cost-effective non-targeted analyses, such as non-targeted metabolomics and spectroscopy, are therefore gaining ground in the food sector. This is also driven by an increasing focus on highly complex authentication issues such as geographic origin or agricultural production methods, which increases the demand for novel analytical methods. Esslinger, Riedl, and Fauhl-Hassek categorized food fraud into the following: origin, substitution with cheaper similar ingredients, and extension of food. This highlights the large variation in food authenticity issues and indicates that evaluation of a single analytical parameter is often insufficient to reliably verify the authenticity of a product across these categories and different food commodities. More recently, advanced analytical approaches such as profiling and fingerprinting, accompanied by multivariate statistics, have shown potential in this respect. This development is evident from the increasing number of scientific papers based on non-targeted methods for food authentication during the past decade. The percentage of food authentication studies using non-targeted methods increased from 34% in 2007 to 42% in 2016. The total number of food authentication studies using targeted and/or non-targeted analytical methods thereby increased by 300% in the given period. The complexity of analytical food authentication has resulted in a unique interdisciplinary research field attracting scientists from the areas of food and plant science, molecular biology, analytical chemistry, etc. Consequently, food authentication has brought together a mixture of different traditions and terminologies. Scientific articles on food authentication often include different approaches and nomenclatures that are inconsistently used. Validated, harmonized and standardized analytical methods, and common procedures for data evaluation, interpretation and reporting, are particularly important in legal disputes for non-targeted approaches. This importance was further emphasized in a recent comprehensive review article recommending several actions in relation to spectroscopy-based food authentication, including: "Develop guidelines and ultimately legislation to standardise language, development and validation procedures" and "Adopt a common nomenclature for ease of comparison and interpretation of results". The aim of this scientific opinion is to contribute to the common understanding of targeted and non-targeted analyses through novel definitions and nomenclature. In order to improve the general understanding of definitions, concepts, and the applicability across different research areas, we have included examples of methods from different scientific areas. This opinion paper, therefore, presents biological, chemical, and microscopy-based examples of targeted and non-targeted approaches while discussing the associated possibilities and limitations for analytical food authentication. Several reviews have focused on targeted versus non-targeted analysis, and
there seems to be a consensus about the overall principles.However, the terms profiling, signature, and fingerprinting are not clearly defined, and they are inconsistently used by different authors.Some researchers do e.g. not distinguish between profiling and fingerprinting.Others use the term non-targeted profiling instead of fingerprinting.Yet others use elemental fingerprints and elemental profiling equivalently.In the following, we will define and discuss these terms and provide information about the consequences of using different kinds of analytical markers.Fig. 2 graphically presents the principles of targeted versus non-targeted authentication together with a conceptual presentation of primary versus secondary analytical markers.Analytical markers: An analytical marker, or simply a marker, is a predefined analytical target linked directly or indirectly to the authentication issue.The choice of marker is the starting point of analytical food authentication and we therefore introduce the concept of primary and secondary markers.A primary marker provides a result that directly addresses a specific authenticity issue.A primary marker is often a chemical compound such as a Sudan dye in spices or melamine in milk powder and it often relies on specific legal limits.Primary markers are therefore often used when extension of food is suspected.In contrast, a secondary marker does only indirectly provide information about the authenticity of a product.Secondary markers can be chemical elements, isotope ratios, metabolites, chemical breakdown products and derivatives, or macromolecules such as DNA, lipids, proteins, and sugars.Secondary markers can indirectly authenticate e.g. the geographic origin, agricultural production methods, or species.To illustrate this indirect nature of secondary markers, imagine a DNA based method applied to a sample of beef suspected to contain horsemeat.The presence of horse DNA could originate from meat but also from other body parts.The reported result should, therefore, always match the analytical marker; a comment can then elaborate on the result and include the appropriate assumptions.A reporting example could be: Result = horse DNA present in the sample.Comment = the horse DNA corresponds to more than 1% horsemeat when compared to certified reference materials containing 1% horsemeat in beef.However, the DNA could originate from other sources than meat.Another example based on a secondary marker is nitrogen isotope ratio analysis for authenticity testing of organically grown vegetables.The nitrogen isotope value of different fertilizer types is reflected in crops grown with these.The aim of organic authenticity testing is therefore often to reveal illegal use of synthetic fertilizers.However, isotope values from crops grown with e.g. 
legume-based green manures are very similar to values obtained from plants grown with synthetic fertilizers.A legal fertilization strategy may therefore be interpreted as illegal use of synthetic fertilizers.In addition, nitrogen isotope analysis can never confirm that all regulations of organic plant production have been correctly followed.Secondary markers are often evaluated using established and internationally acknowledged threshold values or conversion factors, such as when using the total nitrogen content to estimate the protein content.However, it is important to stress that the analysis of nitrogen for protein quantification only offers an estimate.This was unfortunately overlooked for some time allowing the Chinese milk scandal in 2008 to develop.In this case, protein was substituted with melamine.The Kjeldahl and Dumas combustion-based protein analyses, which were widely used for official control, did wrongly attribute the nitrogen content to protein.Nonetheless, qualitative or quantitative results from targeted methods provide information about the product authenticity based on legal limits or established thresholds.Targeted analysis: A targeted analysis covers detection or quantification of one or more pre-defined analytical target.The analytical targets are chemically or biologically characterized and annotated with established importance prior to data acquisition.They can be either primary or secondary markers.Profiling: A profiling analysis targets multiple secondary markers.The exact number of targets to constitute a profile is to our knowledge not scientifically described or internationally harmonized.As a rule of thumb, we suggest that >2 targets are required to use the term “profiling”.In general, profiling is often richer in information and provides increased classification power compared to a single or dual-marker based targeted method.The profile can be used to calculate a value to be tested against a threshold limit, or it can be used for comparison to a database as for non-targeted analysis.Signature: The term “signature” is most frequently used in the mass spectrometry community for profiles comprising data from stable isotope or multi-element analysis.A signature is here suggested to be the same as a profile.Non-targeted analysis: Non-targeted analysis, also referred to as “fingerprinting”, simultaneously detects numerous unspecified targets or data points.These analyses are often qualitative and they are particularly important when no primary or secondary markers are defined or available.In the literature the terms “untargeted”, “un-targeted”, “nontargeted” and “non-targeted” are used equivalently.For clarity, we suggest “non-targeted”, which is used in the majority of the food authentication literature.Fingerprinting: A fingerprint refers to the display of multiple non-targeted parameters comprising information from an analytical method.Fingerprints are often denoted according to the analytical method used and include nuclear magnetic resonance and mass spectrometry fingerprints; however, fingerprints could also consist of data compiled from other analytical methods or combined from complementary analytical methods.During the past decade, the analytical toolbox for food authentication has expanded rapidly.Analytical methods that were previously only available for research purposes are now entering commercial and governmental laboratories for routine control.To demonstrate the concepts of the authentication approaches and the nature of different markers, 
examples for different authentication categories are provided in Table 1. In this section, the focus is on selected analytical techniques within analytical chemistry, microscopy, and molecular biology, and their use for targeted and/or non-targeted food authentication is described. Several analytical techniques and combinations of techniques exist within chromatography, spectrometry, and spectroscopy. Chromatography hyphenated with spectroscopic or spectrometric techniques can focus on metabolites and peptides, and is often used in a targeted approach to reveal fraudulent extension of food and to identify species, respectively. Other more advanced spectrometric techniques include isotopic ratio mass spectrometry, thermal ionization mass spectrometry, and multi-collector-inductively coupled plasma mass spectrometry, which can generate stable isotope ratio profiles, whereas atomic absorption spectrophotometry, inductively coupled plasma mass spectrometry, and inductively coupled plasma atomic emission spectroscopy generate multi-element profiles. These profiles can be used for food authentication in all the categories: origin, substitution, and extension. Spectroscopy methods, such as near infra-red, NMR, and ultraviolet spectroscopy, are commonly used to generate profiles and fingerprints for authentication in all categories. To succeed in the application of these techniques, substantial effort must be invested to establish adequate databases and threshold limits. Microscopy includes several techniques, such as electron, optical, and scanning microscopy. Microscopy has a great advantage in its capability to detect macromolecules and structural features, including fat, protein, starch, and different morphological characteristics and cell types. In addition, fluorescent compounds or fluorescently labelled products can be detected. Microscopy can therefore be used as a targeted approach to look for specific compounds, tissues, etc., or it can be used in a non-targeted manner where constituents are examined in general. It can be used for authentication in all the categories of origin, substitution, and extension. The disadvantages of some of the microscopy methods are the intense human training needed to obtain adequate competences and the subjectivity in the reporting of results. Fortunately, software programs are increasingly addressing these issues. Techniques within molecular biology focus on DNA and proteins. In DNA based methods, PCR and, to a lesser extent, isothermal amplification are used for DNA amplification prior to detection. To follow the previously described definitions of targeted and non-targeted methods, the authentication approach must be dictated by the DNA primers used in the amplification and not by the specific techniques used for detection (e.g. whole genome sequencing). This means that amplification of ≤2 DNA sequences is targeted, whereas amplification of >2 DNA sequences should be termed profiling. A non-targeted method should therefore include several random DNA primers that are not annotated to a specific sequence. This is different to how the term DNA fingerprinting is commonly used in molecular biology, where methods targeting amplified fragment length polymorphism markers or simple sequence repeat markers are often termed fingerprinting. However, to align the nomenclature across molecular biology and chemistry, it is essential that a fingerprint constitutes unspecified amplicons/products. Most DNA based methods are targeted and used for species-, breed-, variety-, sex-, and GMO
identification. Nonetheless, denaturing gradient gel electrophoresis following PCR is increasingly used for profiling the microbial community of fruits, to either authenticate the geographical origin or discriminate between organic and conventional produce. Detection of proteins is also frequently used in food authentication and includes 2-dimensional electrophoresis, enzyme-linked immunosorbent assay (ELISA), and polyacrylamide gel electrophoresis. In species identification, especially ELISA is used as a targeted approach towards proteins and, to a lesser extent, fat. DNA and proteins are vulnerable to heat and acid treatment, and alternative non-molecular biology methods might be necessary for the analyses of highly processed food products. Targeted methods are often quantitative and, in general, have a greater selectivity and sensitivity than non-targeted methods. The reliability of targeted methods is often supported by matrix matched CRMs, which is an important contribution to the method validity and a possible accreditation. Targeted methods are preferable in authenticity issues when the suspected target is a primary marker, as it offers direct information about the product authenticity. An exception is in the category of extension, when foods are extended with already present ingredients, such as water and sugar in wine. In these cases, secondary markers such as isotopes might be superior to the primary markers to distinguish innate water and sugar from added ones. Profiling is the best choice when no unique marker exists. The analysis of multiple markers also multiplies the resulting information and is often suitable for addressing complex authentication issues. In this way, profiling can e.g. authenticate organic eggs through the different carotenoid profiles affected by feeding regimes. Nonetheless, organic production involves other aspects too, such as the absence of pesticides and synthetic fertilizers for plant production, which can be subjected to a targeted method; however, the vast number of possible illegal compounds in organic products renders it challenging to identify the ones to analyze. This example of complexity epitomizes the inadequacy of several targeted methods. Non-targeted methods embrace the complexity of modern food authentication and provide valuable information, which is displayed as a fingerprint. The strength of fingerprinting is its ability to detect multiple small changes in the food product and to extract these changes as valuable information through advanced multivariate statistics. A fingerprinting method is often qualitative and relies on the construction and use of an appropriate database. The database is used to compare the obtained sample fingerprint with that of authentic reference samples; i.e.
the same method must be applied in each case.When a database is not available, a non-targeted analysis can be used for sample-to-reference comparisons.This can be a valid approach as the high number of unspecified targets or data points limit the possibility of random similarities among different samples.A fingerprinting method can, therefore, authenticate complex issues such as geographic origin and production methods.Non-targeted methods are also efficient when screening for unexpected chemicals, and MS and NMR fingerprints provide additional information than simply compliance or non-compliance.If set-up properly, especially MS fingerprints can in retrospect reveal information about the chemical structure of the marker.Structural knowledge of the marker is important in both food safety and authentication issues when regulatory action must be taken based on the results.In a court of law, identification of the chemical structure of a primary or secondary marker is an important asset in the judicial process as the “smoking gun” has been identified.Finally, combination of several targeted and non-targeted methods for one authentication issue can improve the reliability and robustness significantly and provide added value.The challenge is to select the most appropriate combination of methods, which can be based on recent reviewed literature.Food safety was previously the main focus point for authorities, policy makers, the industry, and consumers.The past decade, analytical food authentication has become a top priority when discussing and evaluating the integrity and safety of foods.Analytical methods that can offer fast, cost-effective and reliable food authenticity testing at several points in the food production chain are therefore urgently requested.Targeted methods still have much to offer but it is increasingly acknowledged that food is a complex matrix and should thus be treated and analyzed by techniques that can embrace this complexity.Non-targeted fingerprinting methods are still taking the initial steps into the food authenticity community and much more work is required to validate and harmonize these methods and the associated data interpretations.An essential prerequisite is a common understanding of the analytical principles of targeted versus non-targeted food authentications.Here we have proposed novel definitions and nomenclature of targeted and non-targeted authentication methods as a first step towards harmonization.
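The database comparison outlined above can be illustrated with a simple multivariate sketch. The example below is not taken from this paper: it assumes a hypothetical set of authentic reference fingerprints and a suspect sample acquired with the same non-targeted method, models the authentic class with principal component analysis, and flags the suspect if its reconstruction residual exceeds a simple three-sigma limit derived from the references (a SIMCA-style one-class criterion).

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Hypothetical non-targeted fingerprints (illustrative only): each row is one
# authentic reference sample, each column one unspecified variable (e.g. a
# spectral data point) acquired with the same analytical method.
baseline = rng.uniform(0.5, 1.5, size=200)               # shared fingerprint shape
authentic = baseline + rng.normal(0.0, 0.02, size=(40, 200))

# A suspect sample measured with the same method, carrying a localised
# deviation in a block of variables (e.g. an unexpected constituent).
suspect = baseline + rng.normal(0.0, 0.02, size=200)
suspect[50:60] += 1.0

# Model the authentic class with PCA and use the reconstruction residual
# (the part of a fingerprint the model cannot explain) as the test statistic.
pca = PCA(n_components=5).fit(authentic)

def residual(samples):
    scores = pca.transform(samples)
    reconstructed = pca.inverse_transform(scores)
    return np.linalg.norm(samples - reconstructed, axis=1)

ref_res = residual(authentic)
threshold = ref_res.mean() + 3.0 * ref_res.std(ddof=1)    # simple 3-sigma limit

print("suspect residual:", residual(suspect.reshape(1, -1))[0])
print("acceptance threshold:", threshold)
print("flagged as atypical:", residual(suspect.reshape(1, -1))[0] > threshold)
```

In practice the acceptance limit, the number of components and the preprocessing would all be established during method validation against authentic reference materials.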
The use of non-targeted analytical methods in food authentication has rapidly increased during the past decade. Non-targeted analyses are now used for a plethora of different food commodities but also across several scientific disciplines. This has brought together a mixture of analytical traditions and terminologies. Consequently, the scientific literature on food authentication often includes different approaches and inconsistently used definitions and nomenclature for both targeted and non-targeted analysis. This commentary paper aims to propose definitions and nomenclature for targeted and non-targeted analytical approaches as a first step towards harmonization.
35
Assessment of industrial nitriding processes for fusion steel applications
9Cr reduced activation ferritic martensitic steels are foreseen as the main structural steels for future fusion reactors beyond ITER. Development of these classes of materials started in the early to mid-1990s across several research associations in Europe, Japan and the United States of America. Inspired by the class of 9Cr steels such as grade P91 and P92 used in conventional and nuclear fission applications, the materials were developed towards low activation and high irradiation tolerance under neutron irradiation . EUROFER97 and its variants have been heavily investigated in recent years. This has led to the availability of a large set of properties which includes code qualified data for a EU RAFM database with multiple records for tensile, impact, fracture toughness and fatigue lifetime . Advanced breeding concepts demand an extension of the temperature range from 350 °C – 550 °C to 350 °C – 650 °C. These requirements led to the fabrication of the oxide dispersion strengthened variant of EUROFER97 . Finely dispersed Y2O3 particles inside the matrix increase strength and creep resistance but lead to a loss in fracture toughness . The fatigue properties also benefit from the strengthening ODS particles. The cyclic stresses nearly double compared to conventional EUROFER97, while the cyclic softening is suppressed. The number of cycles to failure is also shifted to higher values . Generally, the literature data on fatigue of 9Cr ODS steels are scarce and would benefit from more experiments for a more thorough investigation of the underlying mechanisms . The initial fatigue results on ODS steels already indicate that at lower strain amplitudes the dislocations are no longer able to overcome the nanoscaled ODS particles. Consequently, the mean free path of dislocations is strongly reduced and the formation of sufficiently long in- and extrusions, which later become nanocracks at the surface, is strongly retarded. Much longer fatigue lifetimes are the natural consequence. From these findings it may also be derived more generally that the goal of increasing the fatigue lifetime could in principle be achieved by substantially increasing the density of stable small obstacles on the glide planes of dislocations. Therefore, the main aim of the present investigation was to validate whether nitride particles introduced into the surface of cast steels could also provide a remarkable increase in fatigue lifetime. A possible substitution of ODS steels by an alternative technology would be very beneficial, as the fabrication process of high quality ODS steels is complex and the overall availability of large quantities of materials is limited. The nitriding of steel is a state-of-the-art technique to modify and harden surfaces for improved wear resistance, corrosion resistance and fatigue lifetime . It is especially common for high-alloyed austenitic steels where martensitic hardening of the outer layers is not possible. Studies which applied hardening surface treatments observed an increase in fatigue lifetime due to the compressive stresses in the crack initiation zone caused by the hardened outer layer in constant amplitude loading . In addition, if applied accordingly, the nitriding treatments are capable of producing nanoscaled intermetallic phases. Such chromium- and nitrogen-rich precipitates may work as crack arrestors for microcracks in the early stages of fatigue . The existence and geometry of these small cracks are the dominant factors for fatigue damage, and they are more
important than other microstructural properties such as dislocation structure or overall fracture toughness .The need for extended operation windows for EUROFER97 and the downsides of the ODS materials motivated the present work.Nitriding of the surfaces of finished and semi-finished parts of RAFM steels may give rise to a compromise of the two classes of materials.The authors intend to demonstrate the potential of the surface treatments to significantly improve fatigue lifetime with only minor sacrifices to other properties.All measured effects of the nitriding layer on mechanical and microstructural properties will be compared against EUROFER97 and EUROFER-ODS.In this study four different surface and heat treatment conditions were examined.The base material of each of these conditions was the heat 993402 of the European reference steel for fusion applications EUROFER97-2.The chemical composition of this batch is 8.89% Cr, 0.18% V, 1.06% W, 0.53% Mn, 0.15% Ta, 0.037% N and 0.096% C.The investigation of the reference material is listed in .The requirement of a rapid radioactive decay behavior does not allow the use of some elements such as nickel, cobalt and molybdenum which are common in conventional high-temperature materials.These have to be substituted by elements which satisfy the requirements regarding the mechanical properties and do not degrade the decay noticeably.After a period of 100 years after the end of irradiation, nitrogen significantly influences the activity of RAFM steels by the decay of 14C, which is formed by neutron irradiation.However, the influence on the equivalent dose is negligibly small in the case of nitrogen contents present in the materials treated within this work, since the proportion of energy to be released per decay, in contrast to other elements, is very small.A chromium content of about 9% allows martensitic hardening without leading to segregations at longer exposure times at high temperatures.Vanadium and tantalum form high temperature resistant precipitates that inhibit prior austenite grain growth and increase the high temperature strength.The proportion of 1% tungsten is a compromise between increased strength on the one hand and the breeding ratio of the blanket and acceptable toughness values on the other hand .Table 1 lists the finally applied heat treatments and nitriding processes which were investigated in this work.The two selected heat treatments were chosen according to different characteristics.HT1 should have increased strength while the focus of HT2 is on high ductility and toughness.The high austenitizing temperature was chosen to increase creep strength.The aim of the two nitriding processes is to improve mechanical properties such as strength and fatigue behavior without losing ductility.For nitriding, the samples were sent to Gerster in a quenched and tempered condition.Here the first method was gas nitriding.At a temperature of 550 °C the samples were exposed to a nitrogen environment for 36 hours.Subsequent cooling also took place in nitrogen atmosphere.The following annealing at 750 °C in vacuum ensures comparability with other states.A further discussion of the materials in this state is omitted, since all characteristic properties remain significantly behind those of the reference material.The second method was the Hard-Inox-P method, which is a proprietary nitriding process from the portfolio of the company Gerster .In this case, high-temperature nitriding is performed at 1050 °C for a period of one hour in a vacuum furnace 
under a nitrogen partial pressure of 500 mbar. Quenching under nitrogen atmosphere to −80 °C and holding for one hour at that temperature are followed by a final heat treatment at 750 °C in vacuum. For an approximate determination of the nitrogen content after the Hard-Inox-P method, the phase diagram shown in Fig. 1 was calculated using the Thermocalc database TCFe7. The expected components in the equilibrium state are plotted as a function of the nitrogen content. This may give insight into the phases that could form during the process. The mass fraction of nitrogen is approximately 0.3% for the above-mentioned process parameters of the Hard-Inox-P process. Given the fact that the phase diagram is only valid for equilibrium conditions, these concentrations may only be applicable to the boundary layers. The chemical composition of EUROFER ODS steel is slightly different from that of the other samples. The following elements were alloyed to iron: 8.9% Cr, 1.1% W, 0.2% V, 0.14% Ta, 0.42% Mn, 0.06% Si, 0.11% C. The mechanical alloying with Y2O3 was realized by industrial ball milling at PLANSEE SE, Reute, Austria. The two examined ODS steels differ in their oxide particle contents of 0.3 wt.% and 0.5 wt.% . Subsequently, the powder was consolidated by hot isostatic pressing into bars with a diameter of 60 mm and a length of 300 mm. The mechanical testing program comprised fatigue tests, Charpy impact tests and Vickers hardness measurements for the characterization of the surface layer. The fatigue tests were performed on two universal testing machines under vacuum. The strain rate in all experiments was ε̇ = 0.1%/s. Dwell time at the reversal points of the loading cycles was half a second. Monitoring of the test temperature of 550 °C was carried out at three points to ensure a homogeneous temperature distribution. The test was stopped when the stress level had dropped by more than 30% compared to the first cycle. Hardness measurements with HV0.1 were required to characterize the surface layer of the nitrided samples. The measurements were carried out and evaluated by an instrumented hardness testing machine. The impact tests were conducted on an instrumented Charpy machine with an automated sample tempering and loading mechanism. For the fatigue tests SSCS were used . The cylindrical sample geometry with a diameter of 2 mm and a gauge length of 7.6 mm was originally designed for use in IFMIF. The comparability of the results with standard samples was validated by various material models in finite element calculations . To have a greater resistance to crack initiation, the surfaces of the LCF samples were ground in the axial direction to an average roughness Ra of 0.262 ± 0.033 µm. The geometry of the KLST Charpy impact test specimens is specified in the standard DIN 50115 . The length of the sample is 27 mm, and the cross-section measures 3 mm x 4 mm. The notch angle is 60° with a depth of 1 mm and a radius of 0.1 mm. The samples were machined in the L-S direction. The microscopic studies were performed on the cross sections of the tested KLST samples and on the LCF longitudinal sections. For scanning electron microscopy, both the FEI SEM "XL30 ESEM" with a LaB6 cathode and the Zeiss "Merlin" with a field emission gun were used. For the chemical analysis of the boundary layer regions, an EDAX Trident system consisting of an Octane Super SDD EDS detector and a LEXS WDS detector was utilized. The preparation steps included mechanical grinding and polishing up to a mirror finish and a brief etching with a mixture of
400 ml ethanol, 50 ml nitric acid, 50 ml hydrochloric acid and 5 g picric acid. A FEI Tecnai F20 transmission electron microscope with a field emission gun operating at 200 kV and an EDS detector was used for materials characterization at the nanoscale. A specimen for TEM characterization was prepared as a lift-out lamella produced on a Cross Beam workstation (ZEISS Auriga). The area investigated by TEM had a distance of 10 µm from the sample surface. Fig. 2 shows the number of load cycles to failure depending on the applied load. In addition to the heat treated and nitrided EUROFER samples, the results of Tavassoli and the ODS variant of EUROFER are presented. Obviously, there is an increase in the lifetime compared to the values of Tavassoli, who summarized many fatigue tests on EUROFER in a temperature range from 500 °C to 550 °C from various research institutions for code qualification. When applying a lower tempering temperature, service lifetime is prolonged for all tested strain amplitudes. Thus, for heat treatment 1, the average lifetime is extended by a factor of 1.7 at comparable strain levels compared to heat treatment 2. The largest life extension could be achieved with the Hard-Inox-P treatment at low total strain amplitudes. This behavior is caused by fine precipitates in the near-surface area, which pin small microcracks. However, for larger loads these precipitates can act as micro notches which promote the damage caused by cyclic loading. The same behavior is shown by the two ODS steels, which again show an increase in lifetime compared to the Hard-Inox-P treated samples for low loads. The higher the proportion of yttrium, the more pronounced is this behavior. The increase in lifetime of the two alloys with HT1 and HT2 compared to the Tavassoli data can possibly be explained by a hardness increase in the matrix which inhibits crack initiation. Also, the summarized literature data on EUROFER fatigue are averaged and represent a conservative image of the lifetime. Fig. 3 shows the softening behaviour of EUROFER in the initial state, after Hard-Inox-P treatment and with ODS particles at a total elongation of 0.7%. The cyclic softening can be described by a power law of the type σA = A·N^(−s). Here σA is the stress amplitude, A is a constant, N is the considered cycle and s the cyclic softening coefficient. Marmy and Kruml and Armas et al. performed LCF tests on EUROFER at a test temperature of 550 °C, resulting in cyclic softening coefficients of 0.0485 and 0.077. The softening coefficients of HT2, with an average of 0.0550, and of the Hard-Inox-P treated samples, with an average of 0.0582, are also in this interval. The softening behavior of the first heat treatment depends significantly on the load, as can be seen in Table 2. The two EUROFER ODS steels also do not follow this law, as can be seen in Fig. 3.
While the softening is barely recognizable at an oxide content of 0.3%, no softening takes place at the higher oxide content of 0.5%. A similarly good softening behavior could not be achieved by the Hard-Inox-P process. Although the stress drop is not as intense as in the comparative sample, it is more pronounced than in the case of the ODS steels. The hardness of the reference material was examined previously and is 220 HV30. In contrast to expectations and the manufacturer's specifications, no hardness increase in the boundary-layer regions could be measured for the Hard-Inox-P nitriding. Irrespective of the distance to the specimen surface, the hardness was 221 HV0.1. The average hardness after the first heat treatment was considerably higher at 316 HV0.1, while the second heat treatment showed a hardness value of 252 HV0.1. The divergence between the hardness values of the two heat treatments can be explained by the different tempering temperature and has frequently been investigated . The even lower hardness of the Hard-Inox-P treated samples may result from the lower austenitizing temperature. The impact energy versus temperature data were fitted with a hyperbolic tangent function of the type K(T) = a + b·tanh((T − T0)/c). Here K describes the impact energy, a is the mean and b is half the distance between USE and LSE; T0 indicates the transition temperature and c the width of the transition region. The determination of the parameters of this approach was carried out with a least-squares fit (a fitting sketch is given at the end of this section). Table 3 shows the determined DBTT of all experiments. The DBTT of conventional EUROFER is adjustable, just by a different heat treatment, in a range from −120 °C to 51 °C. Similar results were found in earlier work, with a slightly lower DBTT shift of 100 K. Similarly, the upper shelf energy is reduced by 10% after tempering at a lower temperature. In addition, the transition region between USE and lower shelf energy has been widened by tempering at a lower temperature. By applying the Hard-Inox-P process, the transition temperature shifts by 40 K to −81 °C, influenced by the nanoscale precipitates in the surface layer, which will be described later. This method narrows the transition region. This leads to a behaviour similar to that of the reference batch 993402. The DBTT of the two ODS EUROFER steels resembles that of the first heat treatment, but with a lower maximum impact energy. Consequently, at a low tempering temperature EUROFER loses significantly in toughness. The nanoscale particles in EUROFER-ODS deteriorate the resistance to dynamic stress. Treating the samples with the Hard-Inox-P method leads to only a slight displacement of the DBTT. The difference in surface finish after Hard-Inox-P treatment is documented in the electron micrographs in Fig. 5. The surface of the initial EUROFER material after longitudinal polishing shows distinct grinding marks in the longitudinal direction, while the Hard-Inox-P method reduces the severity of the scratches. Instead, small bright tiles in arrays of varying density decorate the surface. The roughness was slightly improved by nitriding, as roughness measurements showed. The prior austenite grain boundaries cannot be seen in the micrograph of the sample with Hard-Inox-P treatment. Instead, martensitic laths are dispersed and unstructured over the entire cross-sectional area, as the optical micrograph in Fig. 6 documents. To answer the question of the effect of the high-temperature nitriding method on the structure of the surface layer, the further images at higher magnifications give an insight. The recordings with a magnification of 1500×, taken with a secondary electron detector, are shown in Fig. 6.
The lath structure of the martensite can still be recognized in the form of many distributed precipitates near the surface. Two further images, taken with the InLens detector, show a variety of finely dispersed, nanoscale precipitates with platelet and rod structure on the surface of the metallographic cut. Besides some larger precipitates, numerous precipitates smaller than 50 nm are visible. The penetration depth of the smallest precipitates was determined from further micrographs taken at increasing distance from the surface to be about 100 µm. To better determine the distribution of elements in the edge region, energy-dispersive X-ray spectroscopy was used. For the determination of the nitrogen distribution over the edge of the sample, wavelength-dispersive X-ray spectroscopy was applied, which has a higher detection sensitivity than energy-dispersive X-ray spectroscopy. Fig. 7 indicates that after the Hard-Inox-P treatment the boundary layers show a lower concentration of iron, associated with a higher chromium content. A correlation of the nitrogen distribution and the chromium precipitates cannot be made on the basis of these measurements, as the comparison of the corresponding images illustrates. A noticeable local increase of the nitrogen content is not apparent from the WDS measurement. However, very finely distributed signals, in the form of small white and gray dots, are visible, especially in the boundary area. This implies that the above-described nanoscale precipitates are mainly nitrides and that the measurement of carbon in the entire edge area did not exceed the detection sensitivity. The influence of nitrogen on the formation of precipitates was studied using 2-dimensional EDX elemental mapping in the TEM. The distribution of all relevant compositional elements is shown in the maps. Two different kinds of precipitates were detected in total: a Cr-rich M23C6 carbide phase and an MN-type nitride phase. Remarkably, N is present in all precipitates. The size of the M23C6 particles is larger than in EUROFER97 without N addition . Their distribution is well visible in the Cr map. The MN phase is well visible in the Ta map, because Ta is not contained in the M23C6 particles. This phase was detected as a pure nitride. The TaC phase, which is typical for EUROFER97, was not detected. The mechanical characteristics of EUROFER may be varied by heat treatment within a broad range, as shown by the comparison above. The material of heat treatment 1, with a tempering temperature of 700 °C, is stronger and harder, as the mechanical tests show. The toughness decreases substantially, as the increase in DBTT demonstrates. The fatigue lifetime, in contrast, increases. The second heat treatment, with a tempering temperature of 750 °C, has a lower time to failure and a lower hardness. However, the DBTT is below −100 °C, as the Charpy impact tests show for a similar heat treatment. The two ODS steels show an additional lifetime increase for smaller loads and nearly no softening at all. As the ODS particle fraction increases, the lifetime at smaller loads and the resistance to softening increase further. In contrast, a higher proportion of particles has a negative effect on the dynamic cracking toughness, as shown by the Charpy impact tests. A solution that combines both advantages has not yet been found, even if an approximation could be achieved . The high temperature nitriding process Hard-Inox-P demonstrates a longer lifetime, in particular at low loads, in comparison to the two heat treatments, and an acceptable loss in fracture toughness compared with the reference material.
The strength is increased slightly, with a DBTT shift of 40 K. Furthermore, SEM and TEM images show the formation of M23C6 and fine MN precipitates, which may be responsible for the positive influence on the mechanical characteristic values. The advantage of Hard-Inox-P compared to ODS steels lies in the manufacturing. The Hard-Inox-P method is already state of the art, as well as significantly cheaper and simpler. Thus, this procedure is promising for future fusion applications. For EUROFER, with the chosen process parameters and sample geometry, gas nitriding is unsuitable for the requirements of durability and toughness. Whether an adjustment of these variables would have the desired effect is doubtful, given the large discrepancy with respect to the reference samples. Parameter variations of partial pressure, time and temperature profile have to be examined in a next step in order to evaluate the potential of the high temperature nitriding. An extension of the mechanical tests by creep and fatigue tests would increase the statistical significance and provide information on further requirements for high-temperature materials, in particular in the area of fusion power plants.
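As an illustration of the least-squares determination of the Charpy transition parameters described above, the following Python sketch fits a hyperbolic tangent curve of the type K(T) = a + b·tanh((T − T0)/c) to impact-energy data; the data points and starting values are assumed for demonstration only and are not measurements from this work.

```python
# Minimal sketch of the least-squares fit of the Charpy transition curve.
# All impact-energy values below are illustrative, not data from the study.
import numpy as np
from scipy.optimize import curve_fit

def tanh_curve(T, a, b, T0, c):
    """a: mean of upper and lower shelf energy; b: half their distance;
    T0: transition temperature (DBTT); c: width of the transition region."""
    return a + b * np.tanh((T - T0) / c)

# Illustrative KLST-type data: test temperature in deg C, impact energy in J.
T = np.array([-180.0, -150.0, -120.0, -100.0, -80.0, -60.0, -40.0, 0.0, 25.0, 100.0])
K = np.array([0.2, 0.3, 0.6, 1.5, 3.6, 6.8, 8.4, 9.1, 9.3, 9.4])

# Starting values estimated from the data themselves: shelves from the extremes,
# T0 from the temperature closest to the mean energy, c as a rough guess.
p0 = [(K.max() + K.min()) / 2, (K.max() - K.min()) / 2,
      T[np.argmin(np.abs(K - K.mean()))], 30.0]
popt, _ = curve_fit(tanh_curve, T, K, p0=p0)
a, b, T0, c = popt

print(f"USE ~ {a + b:.1f} J, LSE ~ {a - b:.1f} J")
print(f"DBTT T0 = {T0:.1f} degC, transition width c = {c:.1f} K")
```

The fitted T0 directly gives the DBTT, while a + b and a − b recover the upper and lower shelf energies.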
The 9Cr steels EUROFER and F82H-mod are the candidate materials for future fusion reactors. The extension of the operation limits including temperature, strength and toughness are still the scope of ongoing research. In a pulsed reactor operation, fatigue lifetime is one of the major properties for the steels. While the oxide dispersion strengthened EUROFER-ODS variant showed significant improvements in this area, the production costs and availability of large quantities of materials drastically limits its applications. In the present study, different surface nitriding treatments of EUROFER972 have been performed and the impact on microstructure, dynamic fracture toughness and high temperature fatigue has been analysed. Four different states of EUROFER including different heat treatments, nitriding of the surface and the ODS variant are tested and compared in this work. Low cycle fatigue tests show the improvements after certain treatments. Charpy impact tests and microstructural investigation by scanning electron microscopy and analytical transmission electron microscopy are also performed to compare the materials against the reference (EUROFER97). While conventional gas nitriding showed no beneficial effect on the material, the Hard-Inox-P treatment showed a significant improvement in the cycles to failure while retaining an acceptable toughness. Microstructural investigations showed the presence of very small chromium- and nitrogen-rich precipitates in the area close to the surface.
36
Combined Antiviral Therapy Using Designed Molecular Scaffolds Targeting Two Distinct Viral Functions, HIV-1 Genome Integration and Capsid Assembly
Human immunodeficiency virus type 1 infection continues to expand worldwide, despite all the efforts put into multiple therapeutic strategies.The current standard of care for HIV-infected patients is the highly active antiretroviral therapy, which successfully reduces the HIV plasma viral load to undetectable levels and slows down AIDS progression.However, the occurrence of drug-related side effects, multidrug-resistant isolates, latent viral reservoirs, and cryptic viral replication in lymphoid tissues associated with chronic inflammation and immune dysfunction are often observed during highly active antiretroviral therapy.1,To date, the development of a safe and effective HIV-1 vaccine has not yet proven successful.2,However, recent advances in genetic manipulation of hematopoietic/progenitor stem cells, along with the development of lentiviral vector-mediated delivery of potential therapeutic genes to nondividing cells, has provided optimism for the achievement of more realistic and promising strategies for life-long protection of HIV-1-infected individuals.3,A number of approaches involving viral enzymatic or nonenzymatic functions have been tested as potential targets for anti-HIV-1 gene therapy.These include the intracellular expression of anti-HIV-1 agents such as shRNAs targeting the conserved long terminal repeat region of the viral genome, or the CCR5 coreceptor, which was shown to be highly effective against R5-tropic HIV-1 replication in hu-BLT mice.4,U16TAR, a nucleolar-localizing TAR decoy, has been found to block the initiation of HIV-1 transcription by competing with native ligands necessary for the viral replication in human T cells.5,More recently, a protein-based approach has been utilized, using zinc-finger nuclease fusion proteins that specifically recognized the CXCR4 gene in CD4+ T-cells and CCR5 gene in hematopoietic stem cells, respectively, resulting in nonfunctional chemokine receptor cell expression, and inhibition of HIV-1 cell entry.6,7,Single chain variable fragment-based strategies have also been used to negatively interfere with several strains of HIV-1, through their binding to different viral proteins, such as Tat and Gag, and to various host cell proteins including CCR5, CXCR4, and cyclin T1.8,9,10,However, there are severe limitations and drawbacks to these different strategies.Those involving RNA-based antivirals present an inherent lack of stability and/or specificity.In addition, due to the high mutation rates of the viral genomic RNA, the regions targeted by these agents might rapidly become insensitive to their action.11,Chromosome condensation can also be an obstacle, as it might obstruct the access of the ZFN to site-specific double-strand breaks in the host cell genome.12,In the case of scFv and intrabodies, the cytoplasm is a reducing milieu which may not favorable to the proper folding and biological function of these molecules.13,We have recently developed two novel intracellular protein inhibitors of HIV-1 replication, abbreviated AnkGAG1D4 and 2LTRZFP, as alternative antiviral molecular scaffolds for anti-HIV gene therapy.2LTRZFP and AnkGAG1D4 have been designed to block HIV-1 replication at the early and late infection stages of the virus life cycle, respectively.2LTRZFP is a zinc-finger protein designed to block the integration and prevent the establishment of latent viral reservoirs.2LTRZFP is devoid of endonuclease activity, has the capacity to fold properly within the cytoplasm, and shows nuclear homing.Due to its high-affinity for the 
integrase recognition sequence at the 2-LTR circle junctions, 2LTRZFP blocks HIV-1 genome integration.14,15,The molecule of AnkGAG1D4 is an artificial ankyrin-repeat protein, which has, as do all members of the ankyrin family, a net superiority over scFv and intrabodies in terms of solubility and stability in the cytosolic reductive conditions.16,17,AnkGAG1D4 has been designed to target the HIV-1 capsid protein, and its binding site is localized in the N-terminal domain of the CA.A myristoylation motif has been added to the N-terminus of AnkGAG1D4, resulting in the localization of MyrAnkGAG1D4 to the plasma membrane, the site of HIV-1 assembly.AnkGAG1D4 has been shown to negatively interfere with viral assembly in HIV-1-infected SupT1 cells.18,19,The high mutation rate of the viral genome is responsible for the HIV-1 resistance to antiviral drugs.For this reason, the alteration of one single viral function is likely to be insufficient to confer full protection against chronic HIV-1 infection.20,Thus, the combination of multiple antiviral agents targeting two or more of the steps of the virus life cycle has a higher probability of providing a long-term protection against HIV-1 infection.The same rule applies to our anti-HIV-1 molecular scaffolds.Stable expression of MyrAnkGAG1D4 protein in noninfected cells will not prevent the infection by new incoming viruses and the integration of their cDNA into the host-cell genome, a process which can be efficiently blocked by the 2LTRZFP scaffold, thus limiting new infection.14,15,On the other hand, transduction of the 2LTRZFP gene into cells already infected by HIV-1 and carrying integrated proviral genomes, is reasoned to have minimal, if any antiviral effect.In such cells however, the production and release of the viral progeny would be significantly reduced by MyrAnkGAG1D4.19,Thus, the coexpression of at least two molecular scaffolds, acting at both early and late steps of the HIV-1 life cycle, would have the advantage of conferring HIV-1 resistance to heterogeneous cell populations composed of HIV-1-infected and noninfected cells.In this case, it is reasoned that whereas the application of the antiviral scaffold 2LTRZFP would serve as a preventive treatment, the AnkGAG1D4 would serve to control viral expression.The aim of the present study was thus to test the validity of this hypothesis, using coexpression of 2LTRZFP and MyrAnkGAG1D4 in human T-cells.Simultaneous expression of 2LTRZFP and MyrAnkGAG1D4 was achieved using a constitutively expressing human T-cell line transduced with a third-generation lentiviral vector carrying the two transgenes, and the antiviral effect tested by challenging with HIV-1 used at high doses.The antiviral activity of MyrAnkGAG1D4 was also evaluated in SupT1 cells preinfected with HIV-1.Simian immunodeficiency virus and simian-human immunodeficiency virus infections in macaques are accepted as highly relevant experimental models of HIV-1 infection in humans.They have been used to study HIV-1 pathogenesis, and for preclinical testing of drugs and vaccines.21,We found a significant antiviral activity of MyrAnkGAG1D4 in human primary CD4+ T-cells challenged with HIV-1, SIVmac, and SHIV.We suggest that molecular scaffolds such as designed ankyrins could be used in nonhuman primate models, and could contribute to the development of scaffold-based anti-HIV-1 gene therapy, as an alternative treatment for highly active antiretroviral therapy-resistant patients in the future.In previous studies, we have generated stable 
SupT1 cell lines expressing AnkGAG1D4 or 2LTRZFP, using a nonintegrative, episomal vector.14,15,18,19, This system had the inconvenience of requiring a powerful gene delivery method and permanent drug selection to obtain a high transduction efficiency and a stable, reasonable level of transgene expression. This represented a major drawback for a possible application of these molecular scaffolds in stem-cell gene therapy against HIV-1. To circumvent this obstacle, we herein utilized a HIV-1-based self-inactivating lentiviral vector to introduce anti-HIV-1 scaffold genes into human CD4+ T-cells, to generate stable cell lines for constitutive expression of the transgene, independent of drug selection. Three plasmid vectors were constructed to produce VSV-G-pseudotyped lentiviral vectors. The CGW-2LTRZFPmCherry vector carried the gene coding for 2LTRZFP, fused to the mCherry reporter gene. The CGW-MyrAnkGAG1D4EGFP vector carried the MyrAnkGAG1D4 gene, fused to the EGFP reporter gene. The enhanced green fluorescent protein was used as a reporter for gene expression. The CGW-2LTRZFPmCherry-IRES-MyrAnkGAG1D4EGFP vector was constructed to coexpress both anti-HIV-1 scaffolds in single-transduced cells. The gene constructs were placed under the control of the MND promoter, or the MND promoter and the IRES region. The levels of expression and cellular localization of MyrAnkGAG1D4 and 2LTRZFP proteins were analyzed in HEK293T cells, using fluorescence confocal microscopy. The red fluorescence associated with 2LTRZFPmCherry was found to localize primarily to the intranuclear region, an observation that was consistent with the nuclear homing property of ZFPs in general, and our 2LTRZFP protein in particular14,15. The green fluorescent signal of the MyrAnkGAG1D4EGFP protein was found to localize within the cytoplasm and at the periphery of the cells. This result was expected, since a plasma membrane addressing signal was inserted at the N-terminus of MyrAnkGAG1D4EGFP. Interestingly, in HEK293T cells coexpressing MyrAnkGAG1D4EGFP and 2LTRZFPmCherry, the two molecules trafficked independently to their preferred cellular compartments. Of note, the percentages of fluorescent signal-positive cells were 80, 90, and 60%, respectively. This result indicated that the SIN lentiviral vectors represented efficient vectors for the delivery of the MyrAnkGAG1D4EGFP and 2LTRZFPmCherry genes, along with high-level expression of the antiviral scaffolds in target cells. The production of VSV-G-pseudotyped lentiviral vectors carrying the gene for 2LTRZFPmCherry, MyrAnkGAG1D4EGFP, or both anti-HIV scaffolds, was also performed in HEK293T cells cotransfected with one of the transfer vectors described above, along with two additional vectors required for genome packaging and particle production, and a third vector for VSV-G pseudotyping.10, Our next experiments were designed to evaluate the level of protection against HIV-1 of CD4+ T-cells expressing MyrAnkGAG1D4 alone, 2LTRZFP alone, or the two molecular scaffolds together. The human T-lymphocytic cell line SupT1, transduced by the lentiviral vectors described above, was maintained in culture for 30 days before HIV-1 infection. Highly enriched populations of cells expressing either EGFP-fused MyrAnkGAG1D4 alone, mCherry-fused 2LTRZFP alone, or both, were isolated by flow cytometry cell sorting. Cells positive for EGFP or mCherry were challenged with HIV-1 at a multiplicity of infection of 20, and the degree of antiviral protection was evaluated in culture supernatants collected at D12, D21, and D30 postinfection using a p24
enzyme-linked immunosorbent assay. The monitoring of HIV-1-infected cells for the stability of expression of the antiviral molecular scaffolds showed that >80% of the cells were positive for EGFP or mCherry at D12 pi, and between 70 and 95% of cells remained positive at D30 pi. As expected, in control SupT1 cells, the supernatant fluids collected on D12 and D21 from the nontransduced cells infected with HIV-1 showed high p24 levels. The decrease in p24 in supernatant fluids collected from these cultures after D21 was due to the high cell mortality at D21. In contrast, the level of p24 antigen was undetectable in supernatant fluids collected from HIV-1-infected cultures of cells transduced with the CGW vectors carrying the genes for MyrAnkGAG1D4EGFP, 2LTRZFPmCherry, or both, until D21 pi. However, levels of p24 antigen increased in supernatant fluids collected between D21 and D30 from the SupT1/MyrAnkGAG1D4 cells, but remained undetectable in SupT1/2LTRZFP and SupT1/2LTRZFP/MyrAnkGAG1D4 transduced cells. These findings confirmed our starting hypothesis that an efficient and long-term protection against HIV-1 infection at high MOI could not be conferred by one single type of molecular scaffold, like those targeting the late steps of the virus life cycle (MyrAnkGAG1D4), but also required the blockage of another step of the virus life cycle. This could be achieved by 2LTRZFP, which was found to efficiently inhibit the viral integration, even in SupT1 cells infected with a high dose of HIV-1, as shown by Alu-gag quantitative real-time polymerase chain reaction. Due to the complete inhibition of viral integration and replication observed with 2LTRZFP, there was no virus escape which would allow us to evaluate the possible anti-HIV-1 function of MyrAnkGAG1D4 at the postintegration steps. MyrAnkGAG1D4 would theoretically block the virus assembly process in HIV-1-infected cells. Therefore, we used SupT1 cells preinfected with HIV-1 as the cellular model to measure the contribution of MyrAnkGAG1D4 to the global antiviral effect produced by the combined vectors. We next tested the antiviral effect of MyrAnkGAG1D4 in SupT1 cells preinfected with HIV-1 at a low MOI of 1, in order to mimic the scenario of chronically produced virus. The status of HIV-1-infected cells was verified by p24 ELISA and levels of virus in the supernatant fluids using standard RT-PCR assays. Supernatant fluids collected on day 11 postinfection gave p24 values of 25 ng/ml and 4.5 × 10^7 genomic RNA copies/ml for the viral load. HIV-1-infected SupT1 cells were then harvested on D11 and, while one aliquot was transduced with the CGW lentivector carrying the gene for MyrAnkGAG1D4, the other aliquot was nontransduced and served as control. Following HIV-1 infection, low levels of p24 antigen were observed in the supernatant fluids from the SupT1/MyrAnkGAG1D4 cells until D21 postinfection, i.e., 10 days after vector transduction. This result was confirmed by the viral load assays, which showed an 80- and 100-fold reduction in samples collected on D17 and D19 postinfection, respectively, as compared to nontransduced HIV-1-infected cells. The levels of ankyrin expression in SupT1/MyrAnkGAG1D4 cells were found to be very high until D27 postinfection, at 90% EGFP-positive cells. In control, nontransduced cells, a peak of extracellular p24 antigen was observed in supernatant fluids collected on D19, with a decrease in p24 occurring after D21, due to cell apoptosis induced by HIV-1. The residual levels of extracellular p24 antigen after D21 raised the issue of the occurrence of viral mutants which
would escape the MyrAnkGAG1D4-mediated inhibition of HIV-1 replication. In our previous report, we characterized the key amino acids in the HIV-1 capsid protein that play an important role in the AnkGAG1D4-NTDCA interaction. The arginine-18 in helix 1, and R132 and R143 in helix 7, are the key players of this interaction.22, We have aligned the amino acid sequences of helix 1 and helix 7 of the HIV-1 subtype B prototype with newly isolated strains, deposited in the HIV sequence database from 2008 to 2013. We found that R18 in helix 1, and R132 and R143 in helix 7, are highly conserved residues in the CA sequence of all isolates. These amino acid residues are known to be crucial for the morphogenesis of the virions and their stability, and the chances that mutations of these amino acids would result in viable viruses are minimal. However, the possibility of a viral escape from the MyrAnkGAG1D4 antiviral activity can only be definitively eliminated by challenging MyrAnkGAG1D4-expressing cells with various HIV-1 CA mutants. The explanation for the persistence of p24 antigen in the extracellular medium of HIV-1-infected SupT1/MyrAnkGAG1D4 cells more likely resided in the intrinsic properties of MyrAnkGAG1D4. MyrAnkGAG1D4 has been found to bind to the HIV-1 capsid with a moderate affinity.19, Thus, MyrAnkGAG1D4-expressing cells would still allow some HIV-1 to leak out into the extracellular milieu. This difficulty can be overcome in future applications of therapeutic ankyrins designed to inhibit the HIV-1 assembly. It will be possible to significantly enhance the affinity of MyrAnkGAG1D4 for the HIV-1 capsid by site-directed mutagenesis of important amino acid residues in the binding site of the ankyrin protein, as described by Sammond et al.23, Autologous gene-modified CD4+ T-cells in HIV-1-infected patients have been identified as one of the promising alternative HIV treatments.24, In this strategy, CD4+ T-cells from an HIV-1-infected individual are isolated, anti-HIV-1 genes introduced, and cells expanded ex vivo before reinfusing them back into the body. However, the expanded CD4+ T-cell population might contain cells of the HIV latent reservoir that would be ready to replicate again after infusion.1, As a consequence, it is essential to select therapeutic genes which are able to control the expression of latent HIV-infected cells. In this context, the application of MyrAnkGAG1D4 to autologous adoptive T-cell transfer would have the advantage of preventing viral replication in both preinfected and naive cells. It was important to model in vitro the degree of antiviral protection of a human primary CD4+ T-cell population, which, in individuals subjected to gene therapy, would be heterogeneous in terms of antiviral scaffold expression. The in vivo distribution of the therapeutic vector among the CD4+ T-cell population would not be homogeneous, with transgene expression in certain cells but not in others, and scaffold expression would occur at various levels. Therefore, an experimental setup was designed to mimic this type of in vivo heterogeneity, and to study its influence on the cell susceptibility to HIV-1. Replicate cultures of primary CD4+ T-cells were stimulated in vitro with anti-CD3/CD28 antibodies, a treatment which is reported to downregulate CCR5 receptors at the cell surface, and results in a high-level resistance to R5-tropic virus.25,26, Anti-CD3/CD28-activated CD4+ T-cells were then transduced with the CGW-MyrAnkGAG1D4EGFP vector. The expanding population showed
at least 95% cell viability and stably expressed MyrAnkGAG1D4EGFP protein over a long time period, with about 70% EGFP-positive cells, as monitored by flow cytometry.The 70% EGFP-positive cell culture was diluted with non-transduced primary anti-CD3/CD28 activated CD4+ T-cells, to obtain a final value of 30% EGFP-positive cells.The mixed population of MyrAnkGAG1D4-positive and MyrAnkGAG1D4-negative CD4+ T-cells was then challenged with X4-tropic HIV-1 at MOI of 500 ng of p24 per 106 cells.Controls consisted of nontransduced primary CD4+ T-cells infected with the same dose of HIV-1NL4-3.We verified that the percentage of MyrAnkGAG1D4EGFP-positive cells remained stable throughout the virus challenge, at approximately the same value as at time 0 of the experiment.Cells were collected at D10 and D15 postinfection, and the number of HIV-1-infected cells was determined by intracellular p24 protein assessment by flow cytometry.As shown in Figure 4c, the expression of MyrAnkGAG1D4 in only 30% human primary CD4+ T-cells provided a significant protection against X4-tropic HIV-1, with a 7.2-fold lower number of HIV-1-positive cells, as compared to controls.We then examined the capability of MyrAnkGAG1D4 to block the simian immunodeficiency virus and chimeric simian-human immunodeficiency virus infection in vitro.It has been shown that human T-lymphocyte cells SupT1 can support SIV and SHIV replication.27,Therefore, this cell line could be used as the host cells to test the ankyrin function.Moreover, since the MND promoter is able to drive the expression of transgenes in both human and macaque cells with the same efficiency,28 the levels of expression of ankyrins in human-derived cells would be similar to those which would occur in macaque-derived cells.SupT1 cells stably expressing MyrAnkGAG1D4EGFP were infected with SIVmac239 or SHIV-Bo159N4-p at the same doses.Cell culture supernatants were collected at D7, D11, and D18 postinfection.The concentration of extracellular p27 antigen was determined by ELISA, and that of viral genomes determined by qPCR.EGFP expression was verified by flow cytometry at D18 postinfection.As shown by ELISA, the p27 levels were significantly decreased at D11 and D18 in both SIV and SHIV challenges, compared to control SupT1 cells.Likewise, viral genomes were almost undetectable in the culture supernatant of MyrAnkGAG1D4EGFP-expressing cells at D18.These results indicated that MyrAnkGAG1D4 possessed a broad antiviral activity against HIV-1, SIV, and SHIV.Of note, flow cytometry analysis for the expression of the MyrAnkGAG1D4 showed that a high frequency of the SIV- and SHIV-infected cells expressed EGFP.This broad antiviral activity was not surprising, considering the CA sequence homology between HIV-1, SIVmac and SHIV, and the findings of our recent studies.We found that AnkGAG1D4 specifically interacted with helix 1 and helix 7 of NTDCA, the N-terminal domain of HIV-1 capsid protein.22,Helices 1 of HIV-1 and SIVmac239 NTDCA have 93.8% sequence homology, and 75% sequence identity, and helices 7 have 89.5% homology and 73.7% identity.AnkGAG1D4 therefore negatively interfered with the assembly process of SIV and SHIV as efficiently as with that of HIV-1 virus particles.This is in contrast with the 2LTR circle junction sequences of SIV and SHIV, which significantly differ from that of HIV, with only 59.1% homology.29,30,As a result of this heterogeneity, the 2LTRZFP protein would probably not bind to the 2LTR circle junctions of SIV and SHIV, or bind to them with a very low 
affinity.Thus, only the antiviral activity of MyrAnkGAG1D4 against with SIV and SHIV was evaluated in this experiment.Various proteins with anti-HIV-1 activity have been designed in recent years.This includes a zinc-finger antiviral protein, which promotes the specific degradation of viral mRNAs and has been shown to inhibit HIV-1 infection.31,However, the occurrence of escape mutation in the zinc-finger antiviral protein-targeted region of the mRNAs remains an important concern.Likewise, a CD4-specific designed ankyrin-repeat protein, which binds to the N-terminal immunoglobulin-like domain of human CD4, inhibits the entry of various HIV-1 strains.32,However, the high rate of blood clearance of this CD4-specific DARPin is still an obstacle to potential clinical applications.33,A gp120-specific DARPin, which recognizes the V3 loop of HIV-1 gp120 was found to function as a viral entry inhibitor,34 but the high mutation rate of gp120 represents a major challenge for its use in antiviral gene therapy, as well as for vaccine and drug development.35,In addition, zinc finger nucleases that target human CCR5 and CXCR4 receptors also inhibit the cellular internalization of both R5- and X4-tropic HIV-1.36,However, disruption of chemokine receptors could negatively affect immune cell function, and induce chronic inflammation.37,38,The antiviral activity of ZFNU3, a zinc-finger nuclease which excises the proviral DNA from the host chromosome,39 might be limited by the poor accessibility of the proviral DNA integrated into condensed regions of the chromosome.12,By comparison with these different types of anti-HIV-1 proteins and their limitations, HIV-1-targeted intracellular molecular scaffolds, such as the designed ZFPs and ankyrin-repeat proteins defined herein, could offer an alternative and a more reasonable approach for antiviral gene therapy.Their multiple advantages include their expression stability, absence of cytotoxicity, and the potential to manipulate their active site and binding affinity with their ligands, as well as the possibility to modify their intracellular trafficking to the most appropriate host cell compartment, for example via the addition of specific compartment-addressing signals, such as N-myristoylation or nuclear localization signal.The two molecular scaffolds which was the focus of our present study, 2LTRZFP and MyrAnkGAG1D4, both exerted their intracellular antiviral activity towards conserved structural elements of the HIV-1 virion, or critical viral functions which take place in specific host-cell compartments.The 2LTRZFP construct possesses nuclear homing property and has been shown to inhibit the integration of HIV-1 provirus.14,15,AnkGAG1D4, in its N-myristoylated version MyrAnkGAG1D4, was directed to the plasma membrane where it negatively interfered with the oligomerization of the Gag proteins and the assembly of HIV-1 capsid.18,19,The genes for both the molecular scaffolds 2LTRZFP and MyrAnkGAG1D4 were transduced into SupT1 cells or human primary CD4+ T-cells, using a single HIV-1-based, SIN CGW lentiviral vector.Plasmids for genome vector packaging and VSV-G-pseudotyping were provided in trans.The vector genome carried a scaffold attachment region element, to increase the transgene product expression and provide protection against gene silencing in human lymphoid and myeloid cells.40,The transgenes were placed under the control of the MND promoter, to enhance the expression of therapeutic genes in hematopoietic cells, including primary CD4+ T-cells and 
hematopoietic stem cells.41,We found that both molecular scaffolds were stably expressed in human T-cells.Interestingly, the simultaneously expressed 2LTRZFP and MyrAnkGAG1D4 proteins showed independent trafficking to their respective homing compartments, the nucleus for 2LTRZFP, and the plasma membrane for MyrAnkGAG1D4.We submit that SIN CGW lentiviral vectors are efficacious tools for multiple transgenes delivery and for the stable expression of the transgene products in human T-cells.Considering the problems of anti-HIV-1 drug adherence, viral-resistant strains, and latent viral reservoirs, new therapeutic approaches using anti-HIV-1 gene therapies combined with HSC-based therapies have emerged as a promising direction for curing HIV-1-infected individuals with a single-time treatment.HSCs are capable for self-renewal and differentiation into multilineage hematopoietic cell types, including T-lymphocytes and macrophages that have been shown to serve as the primary targets of HIV-1.Hence, transplantation of modified HSCs containing anti-HIV-1 genes in AIDS patient could produce new immune cells that would resist HIV-1 infection.Autologous hematopoietic stem cell transplantation is an alternative source of HSCs without the potential complications associated with graft-versus-host disease.42,However, there are many reports that CD34+ progenitor cells derived from granulocyte-colony stimulating factor-mobilized peripheral blood or bone marrow cells are susceptible to HIV-1 infection.43,44,Thus, autologous CD34+ cell transplantation of HIV-1-infected patients might introduce, as the Trojan horse, cellular reservoirs of HIV-1 with the risk of virus spread to various anatomic compartments.44,It therefore seems logical to consider the modification of autologous CD34+ progenitor cells by appropriate antiviral genes, to render them safe for transplantation, while preserving the essential cellular functions.Our molecular scaffolds, 2LTRZFP and MyrAnkGAG1D4 proteins, fulfilled the requirements for stem cell-based gene therapy against HIV-1 infection.When transduced in tandem by a single lentiviral vector of the SIN-CGW type, the two scaffolds will provide both preventive effects and control of viral production. 
2LTRZFP, which blocks the early step of viral integration step, will protect noninfected cells from infection by new incoming viruses, although it would have no effect on HIV-1-infected cells already carrying integrated provirus, and MyrAnkGAG1D4 protein, which cannot protect noninfected cells against the viral integration of newly incoming HIV-1, can exert its antiviral effect on HIV-1-infected cells, by inhibiting the late step of assembly and release of progeny virions.Hence, transplantation of HIV-1-infected individuals with autologous HSCs modified to express both 2LTRZFP and MyrAnkGAG1D4 proteins, would confer robust antiviral immunity to the recipients, not only via the production of HIV-resistant cells, but any cell that was infected would be actively controlled and no virus would be produced.Experiments in nonhuman primate models will be necessary to further characterize the biosafety and antiviral activity of our molecular scaffolds.In the present study, we found that the MyrAnkGAG1D4 protein expressed in human SupT1 cells was a strong inhibitor of SIVmac and SHIV replication.Collectively, our data suggests that our two molecular scaffolds 2LTRZFP and AnkGAG1D4, coexpressed in target cells using a unique lentiviral vector carrying both genes in tandem, represented an novel class of intracellular antiretrovirals with interesting therapeutic potential, most notably in future hematopoietic stem cell-based anti-HIV-1 therapy.Human cell lines.HEK293T and SupT1 cells were obtained from the ATCC.6,10,14,19, "HEK293T cells were maintained in Dulbecco's modified Eagle's medium and SupT1 cells were grown in RPMI-1640 medium, both supplemented with penicillin, streptomycin, L-glutamine, and 10% fetal bovine serum.All cell cultures were maintained in a 37 °C humidified incubator containing 5% CO2.Human primary CD4+ T-cells.Whole-blood samples were obtained from healthy donors at the Normal Blood Donation Center after approval by The Scripps Research Institute Institutional Review Board, La Jolla, CA.peripheral blood mononuclear cells were isolated from peripheral blood by density-gradient centrifugation.6,10,Isolation of highly pure CD4+ T-cells was achieved by using EasySep Human CD4+ T-Cell Enrichment Kit.Determination of purity of over 95% was determined using flow cytometry with appropriate reagents.Primary CD4+ T-cells were maintained in RPMI-1640 medium, supplemented with 10% fetal bovine serum, interleukin-2, L-glutamine, nonessential amino acids, sodium pyruvate, penicillin, and streptomycin.They were stimulated with anti-CD3/CD28-conjugated Dynabeads, and reactivated with the anti-CD3/CD28 beads and IL-2 every 5 days during cell culture.Construction of lentiviral vectors CGW,10 a third-generation lentiviral vector, was used as the backbone vector to transfer the genes for our molecular scaffolds 2LTRZFP and MyrAnkGAG1D4 into target cells.This transfer vector carried three unique elements: the MND promoter, to ensure a high expression of the gene of interest;10 β-interferon scaffold attachment region element, which has been shown to improve transgene expression by inhibiting gene methylation and promote protection from gene silencing in both resting and activated hematopoietic cells, including human T and myeloid cells;40 and cellular internal ribosomal entry site, to control the bicistronic transgene production10 and allow for a cap-independent translation of both 2LTRZFP and MyrAnkGAG1D4 proteins in the same target cell.CGW-2LTRZFPmCherry.The fragment containing the 2LTRZFP gene 
was excised from the p156mPGK-ZFP-GFP vector,14 using Xba I and Xma I, and inserted into pUC18.The 2LTRZFP-containing fragment was rescued by excising with EcoR I and BstX I, and reinserted into the EcoR I and BstX I sites of the CGW lentiviral vector.The resulting fusion protein, 2LTRZFPmCherry, possessed the mCherry fluorescent protein at its C-terminus.CGW-MyrAnkGAG1D4EGFP.The AnkGAG1D4 gene-containing DNA fragment was rescued by PCR amplification from the pCEP4-MyrAnkGAG1D4GFP plasmid.19,The PCR fragment was cloned into the CGW vector linearized by digestion with EcoR I and BstX I.The fusion protein MyrAnkGAG1D4GFP carried a N-myristoylation signal at its N-terminus for targeting to the plasma membrane, and the in-phase EGFP sequence at its C-terminus.CGW-2LTRZFPmCherry_IRES_MyrAnkGAG1D4EGFP.This bicistronic gene-carrying vector was generated to coexpress both 2LTRZFPmCherry and MyrAnkGAG1D4EGFP proteins in a single cell.The DNA fragment containing the 2LTRZFPmCherry gene was excised from the CGW-2LTRZFPmCherry vector treated with EcoR I and BsrG I, and reinserted into the CGW-MyrAnkGAG1D4EGFP vector, upstream to the IRES element.In the resulting transfer vector, the 2LTRZFPmCherry gene was positioned downstream to the MND promoter, and the MyrAnkGAG1D4EGFP gene was placed downstream to the IRES element.Production of VSV-G-pseudotyped lentiviral vectors.VSV-G-pseudotyped lentiviral vector particles were produced in HEK293T cells, using four separate plasmids and the calcium phosphate cotransfection method, as previously described.10,45,HEK293T cells were seeded on 10-cm dishes, and cotransfected with the CGW transfer vector, the packaging construct pMDLg/pRRE, pRSV-Rev, and pMD.2G.Vector particles were harvested from the culture supernatant collected at 24 and 48 hours, and high-titer viral vector stocks prepared by ultracentrifugation.10,45,The viral vector titers, determined by infection of HEK293T cells with serial dilution of the samples, were expressed as the percentage of EGFP- or mCherry-positive cells.45,Generation of cells stably expressing 2LTRZFPmCherry and MyrAnkGAG1D4EGFP.SupT1 and HEK293T cells were transduced with the VSV-G-pseudotyped CGW vectors at the MOI of 1.Primary CD4+ T-cells were activated with anti-CD3/CD28 beads in media containing 100 U/ml IL-2 for 24 hours, and transduced with CGW-MyrAnkGAG1D4EGFP at MOI 4.For the transduction of HIV-infected cells, CGW-MyrAnkGAG1D4EGFP was added to HIV-infected SupT1 cells at MOI 0.7, subjected to spinoculation by centrifugation at 2,000×g at 32 °C for 1.5 hours, in a growth medium containing 8 µg/ml of Polybrene.The cells were then transferred to a humidified incubator, and maintained at 37 °C and 5% CO2 for 24 hours.The cells were then washed three times with fresh growth medium and further cultured in fresh growth medium and divided every 5 days for the primary T-cells or every 3 days for the cell lines.The efficiency and stability of transduction was determined by fluorescence microscopy and flow cytometry.SupT1 cells that expressed antiviral molecular scaffolds were isolated using a FACS sorter.Confocal microscopy.HEK293T cells transduced with each VSV-G-pseudotype lentiviral vector at MOI 1 were seeded on cover glass slides, and left overnight for culturing.Cells were fixed with 4% formaldehyde in PBS, and permeabilized with 0.2% Triton X-100.Then, the nucleus was counterstained with DAPI.Images were acquired using Nikon C2 Plus confocal microscope with 600× magnification, and analyzed using the NIS-Elements AR 
4.20.00 software.Viral stocks.Replication-competent HIV-1NL4-3 virus was produced by transient transfection of pNL4-3 plasmid into HEK293T cells.Monolayers of HEK293T cells were transfected with 5 µg of the pNL4-3 plasmid, using Lipofectamine and Plus reagents, as previously described.14,After 5 hours, the transfection mixture was withdrawn, replaced by 10 ml of growth medium, and the cells were allowed to grow for 48 hours.HIV-1 virus was harvested from the culture supernatants, by filtration through sterile syringe filters with a 0.45-µm pore size.HIV-1 samples were aliquoted and kept frozen at −80 °C.The virus titer was determined using conventional p24 antigen ELISA, using the Genscreen ULTRA HIV Ag-Ab assay.The viral load was determined using the COBAS AMPLICOR HIV-1 Monitor test.Viral stocks of SIVmac239 and SHIV-Bo159N4-p were produced by infection of Concanavalin A-stimulated naïve rhesus macaque PBMCs, and infected cell cultures were maintained in the presence of IL-2, as previously described.48,HIV-1 infection.The level of antiviral protection conferred by our molecular scaffolds was evaluated by challenge with the X4-tropic HIV-1NL4-3.Control SupT1 cells and SupT1 cells stably expressing 2LTRZFP, MyrAnkGAG1D4 or both, were maintained in growth medium for at least 4 weeks before HIV-1 infection.The cells were incubated with HIV-1, added at MOI 20, for 16 hours.The cells were then washed three times with serum-free medium, and resuspended in fresh growth medium.They were split at 3-day intervals, to maintain a cell density of approximately 106 cells/ml.HIV-1 replication was monitored in culture supernatants, using p24 antigen ELISA and viral load assay, as described above.The cell pellets were kept to determine the level of inhibition of proviral integration.The cell viability was assessed by the trypan blue dye exclusion staining method.For viral challenges, the SupT1 cells were infected with HIV-1NL4-3 at MOI 1, using the method described above.Then, on day 11 postinfection, cells were transduced by the CGW-MyrAnkGAG1D4EGFP lentiviral vector at MOI 0.7, as described above.For infection of primary CD4+ T-cells, the percentage of MyrAnkGAG1D4EGFP-positive cells was adjusted to approximately 30% by the addition of nontransduced cells, and the mixed cell cultures were infected with 500 ng of HIV-1NL4-3 p24 per 106 cells.Aliquots of HIV-1-infected cells were collected at 5-day intervals, and the progression of HIV-1 replication was detected by immunofluorescent staining of intracellular p24 protein and flow cytometry.The cells were fixed and permeabilized using BD Cytofix/Cytoperm Fixation/Permeabilization Solution Kit, and incubated for 30 minutes with PE-conjugated anti-p24 antibody.Cells were incubated with control IgG isotype antibodies to evaluate the level of nonspecific binding.Stained cells were then analyzed using the BD FACS Calibur LSR II flow cytometer, using the CELLQUEST software.HIV-1 integration assay.The number of viral genome copies integrated into the host DNA of control SupT1 and SupT1 cells stably expressing molecular scaffolds was determined by using a conventional Alu-gag qPCR assay, as described previously.49,50,Briefly, the DNA of HIV-1-infected control SupT1 and CGW-vector-transduced SupT1 cells was extracted, and a first round of PCR performed using a pair of primers specific for the Alu and gag sequences.The primers for the first-round amplification were the following: Alu forward, 5′-GCC TCC CAA AGT GCT GGG ATT ACA G-3′, and HIV-1 gag reverse, 5′-GTT CCT 
GCT ATG TCA CTT CC-3′.The reactions were performed in a total reaction volume of 25 μl, using a standard protocol.14,The second round of RU5 kinetic PCR was performed on 10 μl of diluted first-round PCR product.The primer sequences were R_FWD, 5′-TTA AGC CTC AAT AAA GCT TGC C-3′ and U5_REV, 5′-GTT CGG GCG CCA CTG CTA GA-3′.The RU5 molecular beacon probe, which was labeled at its 5′ terminus with 6-carboxyfluorescein as the reporter dye and at its 3′ terminus with the BlackBerry quencher, had the following sequence: 5′-FAM-CCA GAG TCA CAC AAC AGA CGG GCA CA-BBQ-3′.The reactions were performed in a final volume of 25 µl containing DyNAmo probe qPCR master mix, 400 nmol/l RU5 forward primer, 400 nmol/l RU5 reverse primer, and 140 nmol/l RU5 molecular beacon probe.The reactions were performed using the CFX96 real-time PCR system with the following program: 20-second hot start at 95 °C, followed by 50 cycles of denaturation at 95 °C for 3 seconds and annealing and extension at 63 °C for 30 seconds.Glyceraldehyde-3-phosphate dehydrogenase was used for quantifying the amount of DNA in each qPCR assay, using the following GAPDH primer sequences: GAPDH_FWD, 5′-GAA GGT GAA GGT CGG AGT C-3′ and GAPDH_REV, 5′-GAA GAT GGT GAT GGG ATT TC-3′.The GAPDH molecular beacon probe was designed to contain the following sequence: 5′-FAM-CAA GCT TCC CGT TCT CAG CCT-BBQ-3′.The reactions were carried out as previously described.SIVmac and SHIV infection.SupT1 cells stably expressing MyrAnkGAG1D4 were infected with SIVmac239 or SHIV-Bo159N4-p at MOI 1.The culture supernatants were collected at D7, D11, and D18 postinfection (pi).Thereafter, the levels of p27 were determined by ELISA.The copy number of viral genomic RNA present in the cell culture supernatants was evaluated by qPCR at D18 pi.
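The functional vector titers described above are derived from the percentage of EGFP- or mCherry-positive cells obtained with serial dilutions of the stock, and the subsequent transductions are set up at defined MOIs. As a purely illustrative sketch of that arithmetic (not taken from the cited protocols), the snippet below converts a flow-cytometry readout into transducing units per ml and then into the stock volume needed for a target MOI; the single-hit assumption, cell numbers and dilution factor are our own placeholders.

```python
# Minimal sketch (not from the original protocol): converting a flow-cytometry
# readout into a functional lentiviral titer and the volume needed for a target MOI.
# Assumes the low-MOI regime in which each positive cell comes from a single
# transduction event; all numbers below are illustrative placeholders.

def titer_tu_per_ml(fraction_positive, cells_at_transduction, vector_volume_ml, dilution_factor=1.0):
    """Transducing units (TU) per ml from the fraction of EGFP- or mCherry-positive cells."""
    if not 0.0 < fraction_positive < 0.3:
        # Above ~30% positivity the single-hit assumption (and hence linearity) breaks down.
        raise ValueError("use a dilution giving <30% positive cells")
    transducing_units = fraction_positive * cells_at_transduction
    return transducing_units * dilution_factor / vector_volume_ml


def volume_for_moi(target_moi, cell_number, titer_tu_ml):
    """Volume of vector stock (ml) needed to transduce `cell_number` cells at `target_moi`."""
    return target_moi * cell_number / titer_tu_ml


if __name__ == "__main__":
    # Hypothetical readout: 12% GFP+ cells from 10 µl of a 1:100 dilution on 1e5 HEK293T cells.
    titer = titer_tu_per_ml(0.12, 1e5, 0.01, dilution_factor=100)
    print(f"titer ~ {titer:.2e} TU/ml")
    # Volume of that stock needed to transduce 1e6 SupT1 cells at MOI 1.
    print(f"volume for MOI 1 ~ {volume_for_moi(1, 1e6, titer) * 1000:.1f} µl")
```

The linear relationship between percent-positive cells and titer only holds at low positivity, which is why titrations are read from dilutions giving well under about 30% positive cells.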
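The Alu-gag/RU5 assay quantifies integrated provirus with GAPDH as the DNA-input control, but the text cites standard protocols rather than spelling out the data reduction. One common way to reduce such data is relative quantification against the non-protected infected control by the delta-delta-Ct method; the sketch below shows that calculation with invented Ct values and an assumed amplification efficiency of 2, and is not necessarily the normalisation used by the authors (a standard-curve approach is equally common).

```python
# Illustrative reduction of Alu-gag/RU5 qPCR data (not the authors' exact pipeline):
# integrated HIV-1 DNA normalised to GAPDH and expressed relative to the
# non-transduced infected control, using the delta-delta-Ct method with an
# assumed amplification efficiency of 2 (i.e. 100%).

def relative_integration(ct_ru5_sample, ct_gapdh_sample, ct_ru5_control, ct_gapdh_control, efficiency=2.0):
    """Fold integrated provirus in a scaffold-expressing sample versus the infected control."""
    delta_sample = ct_ru5_sample - ct_gapdh_sample      # normalise to DNA input
    delta_control = ct_ru5_control - ct_gapdh_control
    delta_delta = delta_sample - delta_control
    return efficiency ** (-delta_delta)


if __name__ == "__main__":
    # Hypothetical Ct values: 2LTRZFP-expressing cells versus control SupT1 after HIV-1 challenge.
    fold = relative_integration(ct_ru5_sample=33.5, ct_gapdh_sample=21.0,
                                ct_ru5_control=26.0, ct_gapdh_control=21.2)
    print(f"integrated provirus relative to control: {fold:.4f}")  # ~0.005, i.e. ~0.5% of the control
```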
Designed molecular scaffolds have been proposed as alternative therapeutic agents against HIV-1. The ankyrin repeat protein (AnkGAG1D4) and the zinc finger protein (2LTRZFP) have recently been characterized as intracellular antivirals, but these molecules, used individually, do not completely block HIV-1 replication and propagation. The capsid-binder AnkGAG1D4, which inhibits HIV-1 assembly, does not prevent the genome integration of newly incoming viruses. 2LTRZFP, designed to target the 2-LTR-circle junction of HIV-1 cDNA and block HIV-1 integration, would have no antiviral effect on HIV-1-infected cells. However, simultaneous expression of these two molecules should combine the advantages of preventive and curative treatments. To test this hypothesis, the genes encoding the N-myristoylated Myr(+)AnkGAG1D4 protein and the 2LTRZFP were introduced into human T-cells, using a third-generation lentiviral vector. SupT1 cells stably expressing 2LTRZFP alone or with Myr(+)AnkGAG1D4 showed complete resistance to HIV-1 upon viral challenge. Administration of the Myr(+)AnkGAG1D4 vector to HIV-1-preinfected SupT1 cells resulted in a significant antiviral effect. Resistance to viral infection was also observed in primary human CD4+ T-cells stably expressing Myr(+)AnkGAG1D4 and challenged with HIV-1, SIVmac, or SHIV. Our data suggest that our two anti-HIV-1 molecular scaffold prototypes are promising antiviral agents for anti-HIV-1 gene therapy.
Grassland futures in Great Britain – Productivity assessment and scenarios for land use change opportunities
Globally, grasslands are the dominant form of agriculture by land area, primarily utilised for the provision of feed for ruminants.In the United Kingdom, grasslands represent over two thirds of agricultural land area, broadly grouped into temporary, permanent and rough-grazing types.In 2015, UK grasslands supported 9.9 and 33.3 million heads of cattle and sheep, respectively.This provided 15.2 million tonnes of cow's milk, and 0.9 and 0.3 million tonnes of beef and sheep meat, respectively, which represents a significant land resource used for food.Grasslands also play an important role in supporting biodiversity and in delivering other benefits to society like carbon sequestration, biomass for bioenergy, and recreational opportunities.Environmental agencies are increasingly incorporating natural capital and ecosystem services into policy and management of agricultural landscapes.With the UK Brexit vote in the 2016 referendum, new options for the future of its farming need to be identified.The debate seeks to balance the arguments for intensifying production with those for incorporating wider sustainability criteria into land use planning.For grassland systems, the upland regions of the UK and beyond are particularly vulnerable in this regard.These areas play a central role in the provision of regulating ecosystem services; however, the beneficiaries of these services are often far removed in distant urban areas.Continuously declining N-fertiliser inputs and stocking rates reflect the traditionally low-input, low-output business model of upland farmers that supports the provision of these societal and environmental benefits but presents economic challenges for them.Against this background, a key requirement to inform development of farming policy is spatially explicit knowledge of current and future grassland productivity.Benchmarking and understanding the levels of dry matter yield and quality are important to optimise productivity for sustainable intensification within grassland systems.Making use of the productivity gap between, or closing the yield gaps within, grassland types could increase biomass production for various services and value chains, such as food, feed or bioenergy.This would provide opportunities to those areas of the country that might be considered preferential for change, given social, environmental and economic factors.Such knowledge also allows policy makers to explore, in regions with lower productivity gains, options for changing grassland management practice that benefit biodiversity and other ecosystem services like carbon sequestration and water quality.The net primary productivity of grasslands can be measured from their annual dry matter production.A previously presented process-based grass model improved our understanding of past experimental DMYs on temporary, permanent and rough-grazing grasslands.Key biophysical driving variables were up-scaled, building meta-models to estimate productivities for each grassland type.To assess future production, these estimates need to account for climate change, increased atmospheric carbon dioxide concentration and technological progress, e.g.
better genetics and management.The impact of past climate change on grassland DMYs is uncertain: While it was found to be rather small or undetectable, forecasts are that future climate change is likely to improve productivity and quality of grasslands.Earlier scenarios found little change in DMYs for Scotland, in spite of increased being likely to stimulate pasture growth.Technological progress in terms of plant breeding and improved agronomy are most likely to continue increasing grassland productivity.An annual increase of the potential DMY of 0.25 to 0.76% seems possible.Actual on-farm grassland yield gains varied between countries and grassland types and they were low on permanent grassland due to less frequent reseeding.For semi-natural grasslands used for rough-grazing, productivity cannot be improved genetically but influenced by changing growing conditions, e.g. improving the hydrology or adjusting stocking density.Against this background, the objectives of this study were to estimate DMYs for all grassland types across the UK for current and future climates considering CO2 enrichment and technological progress; to assess and map the availability of total dry matter production constrained by grassland areas surveyed in 2010 across Great Britain; to identify productivity gaps in reference to current benchmark DMYs, particularly with respect to declining DMYs because of low N application rates; and to perform spatial analyses of the impacts of conversion between grassland types and to investigate changes in total grassland biomass production in Great Britain under varying land use options in comparison to ‘Business as Usual’.Throughout the paper, BAU refers to the distribution of grassland in 2010, with DMYs adjusted for climate change and technological progress in subsequent decades.The meta-models used here were derived from outputs of a process-based model calibrated using a comprehensive set of experimental DMY data measured in the 1970s and 1980s.These meta-models accounted for effects of weather, soil available water capacity and N input on DMY.Meta-models belong to the class of empirical models, that once calibrated can be as robust as the process-based models but are much less demanding in terms of input data.DMYs calculated using baseline weather, are referred to as baseline dry matter yields, calibrated and validated against observations in the 1970s and 1980s.However, since then climate has changed, atmospheric increased, and pasture species with higher growth potential and improved agronomy have been adopted for improved, i.e. temporary and permanent grasslands.The approach of Ewert et al. 
was followed to calculate the grassland DMYs from the 2010s to 2050s, which accounts for the effects of these three yield determining factors: change in climate, the carbon dioxide fertilisation effect due to rising atmospheric [CO2], and technological progress.The meta-models encapsulate the effects of weather variables on DMYs using inputs of changed bioclimatic variables that reflect the weather-governed DMYs for any queried future decade.These variable changes fed directly into the meta-models, developed from scenario outputs generated by validated process-based growth models, to calculate future grassland productivities.Inputs were SAWC and bioclimatic variables of monthly temperature, precipitation, and global radiation under baseline and future climate change scenarios.The impact of climate change on grassland productivities is expressed as the percentage difference between DMY under the baseline climate and each climate scenario in decadal steps from 2010 to 2050.CO2 fertilisation effect.Most experimental evidence indicates that the growth of perennial ryegrass was stimulated by CO2 enrichment and consequently the DMY was increased by an average of 0.06%/ppm.The percent increase was multiplied by the incremental increase of [CO2] from the baseline to the respective later decades.Atmospheric [CO2] has increased from 334 ppm in the 1970/80s to the present 400 ppm in 2015, at a rate of approximately 2 ppm per year due to anthropogenic forcing.The predicted atmospheric [CO2] levels for the 2020s to 2050s were taken from the projections of the BERN model under low and medium CO2 emission scenarios, in line with earlier studies.The atmospheric [CO2] of past years was taken from the annual mean records of [CO2] at Mauna Loa, Hawaii by the Earth Systems Research Laboratory.The cumulative CFE for the various decades was calculated and applied accordingly.Innovations in technology to improve grassland productivity include breeding varieties with higher potential yield and improved farm scale management to fully reap the genetic potentials.Based on the results of multiple variety trials for perennial ryegrass, the annual mean genetic potential DMY gain was set to the overall mean of 0.5%.This agrees with the average annual on-farm yield increase suggested for temporary grassland, while for permanent grassland an annual yield gain of 0.35% was assumed.These TP factors assume an optimum supply of all nutrients and a standard cutting/grazing regime of four and two cuts, respectively.For rough-grazing grassland, which is semi-natural with little agronomic input, no technological improvements in dry matter productivity were applied.Thus, the accumulated percentage increases above the Ybase were calculated and applied to each of the three types of grassland from the 1980s to 2050s.The necessary inputs of monthly climatic variables for the baseline and for decades from the 2020s to 2050s were obtained from the most recent UK climate projections.The monthly maximum and minimum temperature, precipitation and global radiation were initially available at a 25 km × 25 km grid, which was harmonised onto a 1 km × 1 km grid for the whole UK.Relative to the baseline climate, seasonal precipitation and global radiation differed little between the low and medium emission scenarios during the 2020s to 2050s across the UK.The global radiation increased most in spring, less so during summer and autumn.Overall, summer was likely to be drier while winters would be wetter in the future.Under both CO2 emission scenarios, the UK will be warmer in all seasons.Although absolute
temperatures increase most in summer, the relative increase was greatest in winter and spring.These climatic data were used in combination with the spatially distributed soil available water content in the root zone obtained from the European Soil Database at 1 km × 1 km grid, as inputs for the meta-models to calculate the DMYs on temporary, permanent and rough-grazing grassland.The annual survey of the nitrogen applied per hectare to temporary and permanent grassland started in the 1960s.The average N applied increased steadily until the mid-1990s but declined then from the late 1990s onwards on both, temporary and permanent grassland until 2008 and remained unchanged since.The overall average N use during the recent decade came to 99 and 52 kg/ha on temporary and permanent grassland, respectively.The estimated coefficients were: a = 22.1696, b = 0.2373, c = −0.0001944, d = 0.000002117.This equation was used to calculate the yield gap caused by reduced N fertiliser usage compared to the respective best practice.Grassland areas were surveyed by Defra in 2010, and data are available at a 2 km × 2 km grid resolution.For Great Britain, UK without Northern Ireland, grassland covered 9.9896 million ha in total, of which 1.0246, 4.5333 and 4.4317 million ha were temporary, permanent and rough-grazing, respectively.This leaves about 2.5 million ha of grassland unaccounted for in this scenario analysis, as NI was not included in the Agricultural Census.Analysis explored five land use transition scenarios for GB covering the period 2010 to 2050 as deviation from BAU, conducted at a 2 km × 2 km grid resolution for compatibility with the survey data.The focus of the scenario analysis was on changes in management practices that would result in shifts between different grassland types.Transitions between grassland types did not increase or decrease the overall area of GB grassland.The likelihood that farmers will change management practices is determined by complex social and economic drivers arising from past and current experiences that serve to limit farm development pathways.Our analytic approach does not assume optimised transitions between grassland types determined by factors such as monetary returns, yields, or carbon stocks.Instead the target area for conversion in hectares for GB was calculated by implementing a stochastic algorithm that randomly assigned grassland conversion areas for each 2 km × 2 km grid cell until the target area for conversion was met.For each scenario 1000 permutations were conducted and changes in average yield per 2 km × 2 km grid cell and for GB total dry biomass production were calculated.Although the analysis considered conversion of different grassland types, plausible limits to this conversion were identified based on a subset of constraints defined in part by Lovett et al. 
for energy crops.The constraints are altitude, slope, and distribution of nitrate vulnerable zones across GB.In determining the location of land use transitions the stochastic algorithm preferentially chose to convert grassland in areas that were consistent with the logic of the scenario based on these constraints.For example, conversion of rough-grazing to permanent grassland, which implies greater agricultural inputs, initially focused on areas outside NVZ and where slopes were ≤15%.Where the target area for conversion specified within the scenario exceeded area available due to the constraints, the stochastic algorithm initially converted grassland outside the constrained areas before converting grassland within excluded 2 km × 2 km grid cells.The first four scenarios explored possible permutations of the transition between differing grassland types that could be achieved through changes in management practice.In the first two instances, a reduction in management intensity was examined and in the second instances an increase in intensity of production.For each scenario, the stochastic algorithm considered transitions of between 0 and 100% of 2010 area in 10% increments.This defines a combination of each scenario with 11 transitional steps from 0 to 100% at 10% increments, over which possible changes to the grassland management regime could occur, allowing examination of their implications for total GB grassland DM production.The final scenario examined a more complex set of management options informed by recent discussions focused on upland regions.In contrast to the other four, Scenario E did not explore change in grassland yield associated with changing management practices, rather the aim was to maintain GB grassland DM production at BAU levels.In areas defined as upland permanent grassland was converted to rough-grazing and the loss of total grassland DM production calculated.Conversion of permanent to temporary grassland in lowland areas was then carried out to compensate for the lost total dry biomass production.As with scenarios A–D, scenario E examined transition of between 0 and 100% of the specific grassland area in 10% increments using the same stochastic approach.‘Blanket’ DMYs were calculated with the meta-models at 1 km2 resolution across the UK, assuming a single grassland type for all land with SAWC information.The average blanket DMYs for the baseline and future weather indicate little difference between the emission scenarios.Future climatic changes have little effect on average DMYs within each grassland type.The weather-governed productivity is unlikely to be affected in the future and remained about 10.5 t/ha on temporary grassland.Productivity of permanent grassland and rough-grazing will be slightly reduced by future weather, likely due to increased variability of DMYs.By 2050, the productivity on rough-grazing grassland is likely to be reduced by about 10% with an increased coefficient of variation.Maps of the DMY show the technology- and -adjusted blanket productivity for all agricultural land in the 2010s applying a , equivalent to the medium emission scenario.The national average blanket DMYs in the UK are likely to increase between the 2010s and 2050s for improved grasslands but differences between the DMYs under low and medium emission scenarios are very small.For rough-grazing grassland the stimulus of rising cannot compensate the negative impacts of future weather.The productivity of rough-grazing grassland is unlikely to change by 2050, while yields on 
improved grassland types are likely to increase.Assuming no changes in land use intensity, the average DMYs were based on the actual areas of each grassland type and calculated from 2010s to 2050s.These benchmark productivities were very similar when calculated for the whole country and the census areas.The DMYs in 2010 represent the current benchmark productivities of 12.5, 8.7 and 2.8 t/ha on temporary, permanent and rough-grazing grassland in GB, respectively.By the 2050s these are likely to increase by up to 24% and 14% on temporary and permanent grassland, respectively.For all grassland types, the productivity is predicted to be more variable as CV% increases slightly.After overlaying the NUTS 1 regions with the grassland areas and the dry matter production per 1 km2 grid, the total grassland area and total dry matter production per region in Great Britain were calculated.Within Great Britain, the total grassland area in 2010 was partitioned to 45.7, 13.4 and 40.9% between Scotland, Wales and England, respectively.In terms of grassland type, Scotland contained 41.2, 21.0 and 72.0% while England shared 48.7, 56.6 and 23.0% of temporary, permanent and rough-grazing grassland, respectively.In terms of total DM production, the share was partitioned into 40.3, 45.3 and 14.4% for Scotland, England and Wales, respectively.Within England, the largest grassland area and availability of total DM production were in the South West, followed by the North West and the West Midlands.Defra reported areas of respective grassland types in 2010 for the whole UK totalling 12.54 million ha and the Agricultural Census in 2010 specified these areas for GB only with 9.99 million ha.The total DM availability for each grassland type was calculated by multiplying the respective grassland areas and their corresponding mean DMYs.The UK total potential availability of grassland biomass can reach 82 million tonnes.With 63% permanent grassland provided the largest proportion of this national total while temporary and rough-grazing grassland contributed equally to the remaining 37%.Without NI the annual biomass resource shrinks to 64.5 million tonnes, of which 40 million tonnes come from permanent grassland.The above projected grassland productivities consider all factors from 2010s to 2050s reflecting the attainable DMYs.The actual on-farm DMYs are usually smaller than the attainable yields due to other limitations, like fertiliser management.The modelled DM productivity for permanent and temporary grassland was based on best practice application rates of 150 and 300 kg N/ha, respectively.The annual N usage on grassland had dropped to ca. 99 and 52 kg N/ha on the temporary and permanent grassland, respectively, much below these recommended economic optimums of 150 and 300 kg N/ha for permanent and temporary grasslands, respectively.To estimate the productivity gaps on temporary and permanent grassland, the relative DMYs were calculated using these lower values and estimating the difference from the relative DMYs at the recommended N; see Fig. 
1).The current N shortage resulted in a yield gap calculated from on-farm DMYs of about 45 and 39% below the attainable DMYs on temporary and permanent grassland, respectively.This corresponds to a total actual unused production of about 21 million tonnes DM, which could rise to 30 million tonnes by 2050.Out of the four scenarios describing conversion between grassland management options, only Scenario A, characterising changes in yield resulting from conversion of Permanent grassland to Rough-grazing, resulted in a decrease in total DM production in GB by 2050 compared to the 2010 BAU value.Even in this scenario, conversion of up to 20% of total area could be implemented while maintaining a comparable level to total DM production to the 2010 BAU.Total GB grassland DM production in Scenario B, which represents the other reversion scenario exploring reduced management intensity, showed increases out to 2050 compared to the 2010 BAU value even under the transition representing 100% area conversion.Scenarios C and D represent lowland and upland intensification of existing grassland management and describe a substantial increase from the 2010 BAU in total GB grassland DM production out to 2050.For example, unconstrained conversion of 100% of Rough-grazing to Permanent grassland would increase total GB grassland DM production from 63 million tonnes in 2010 to 107 million in 2050.In both cases constraints maps served to restrict the area over which increases in management intensity might practically be achieved to provide a more realistic assessment of increase in total GB grassland DM production.Based on constraints it would be possible to achieve 50% conversion of management intensity for Scenario C and 30% conversion for Scenario D, in both cases yielding an additional 18 million tonnes above the 2010 BAU benchmark, bringing the total production to ca. 
90 million tonnes.Scenario E explored an alternative future where total potential DM production was maintained at the calculated DMY in GB during the 2010s to 2050s, assuming optimal N.In this scenario, there was a reduction in the management intensity of permanent grassland in upland areas to the west and north of GB, accompanied by conversion of permanent to temporary grassland in lowland regions to maintain total GB grassland yield.Given the restriction imposed by the presence of NVZs in England, our stochastic algorithm selected for production of grassland that was still focused in the north and west of GB but represented a shift in management intensity from upland to lowland areas.In terms of land conversion, the abandonment of permanent grassland in upland regions would require an increase of up to 1.9 million ha of temporary grassland to compensate for lost yields.At more realistic conversion levels of 20–40%, there are options for substantial reductions in management inputs in upland regions of GB that would require intensification of only 200 to 300 thousand ha, reseeding and fertilising permanent grassland in the lowlands more frequently to make up for lost yield in upland areas.Considering the global importance of grasslands, not only as a source of feed and food but also as a carbon sink, ecological buffer and source or haven of biodiversity, our spatially explicit grassland yield model can provide a valuable evidence base for policy making.The analysis considered the most recent evidence about climatic and physiological control factors and assumed technological developments to be a continuation of past progress, a rather conservative assumption.Irrigation is unlikely to be introduced in UK grasslands and was ignored in our analysis, but it could overcome more frequent summer drought in the future, reducing variation of DMYs.The most striking features of this analysis are the opportunities that arise from closing the yield gap and the evaluation of possible futures for changes in intensity of grassland management practices across GB.The impact of climate change on weather-governed DMYs is very small, though slightly positive on temporary grasslands and marginally negative on rough-grazing grasslands.The largest impact of climate change is likely to be seen on permanent grasslands, with DMY declining by about 2.5 to 5% from the 2020s to 2050s.This largely agrees with past findings that the impacts of past climate change on grassland DMYs were found to be small or undetectable.However, Cooper and McGechan emphasised that site differences in weather patterns will have greater effects on grass conservation and productivity than other predicted effects of climate change, which is reflected in increased yield variation.The effect of rising atmospheric [CO2] on stimulating growth for C3-plant species such as perennial ryegrass was assumed to be more conservative than in these larger sets of experiments.For temporary and permanent grasslands, the effect of rising [CO2] is intricately linked to technological progress, and the net effects are likely to be smaller than the additive gross effects.Only for rough-grazing grasslands can it be seen that rising [CO2] compensates only marginally for the negative effects of weather.The relative DMY increase is likely to be slightly higher under the medium compared to the low emission scenario due to the difference in atmospheric [CO2].These CFE effects are lower than the difference of relative DMY increase between grassland types and certainly much smaller than the additive effects of [CO2]
and technology progress.Actual DMY increases due to increased depend on other interacting factors such as soil N fertility, water productivity and soil water stress.As seen in the results the actual additive increase of DMYs between 2010s and 2050s is up to 24% and 14%, for temporary and permanent grassland, respectively.This is smaller than the applied, potentially possible, joint CFE and TP factors between 2010s and 2050s, which were 28.0 and 22.0% for temporary and permanent grassland, respectively under the medium emission scenario.The efficiency to exploit the CFE and TP effects is lower under permanent than temporary grassland.Usually, permanent grasslands in the UK were kept under the same grass species longer than temporary grasslands because the introduction of new, better adapted cultivars and practices is slower.Compared with arable crops the rate of potential yield improvement was slower in pasture grass.This is particularly relevant to perennial ryegrass, which was considered in this study.Longer breeding cycles, inability to exploit heterosis in commercial pasture crop cultivars and selection in the absence of competing neighbour plants are reasons for a poor correlation with pasture sward performance.Agronomists will continue to improve practices that provide overall gains in grassland productivity.The applications of genomics, marker-assisted selection and use of genetically modified grass types are likely to accelerate genetic gains of future grassland productivity.Overall, the UK is well-positioned geographically and the rate of genetic gain achieved was among the top range of 4–5% per decade.These can include higher potential DMY, better quality and more resilience to biotic and abiotic stresses.The current scenarios ignored the exploitation of other high-yielding grassland species, like Italian Ryegrass, for temporary grassland which will allow a step change in grassland productivity.As in this paper, crop growth models can be used to benchmark on-farm crop production, quantifying the attainable yields for a given variety grown under defined climatic conditions and agronomic management.Thus, benchmark yields vary, are site- and soil type specific and evolve with time due to difference in weather and technological progress.We calculated the national benchmark DMYs in Great Britain by constraining the blanket grassland productivity to the surveyed areas of each grassland type in 2010.These BAU benchmark DMYs will increase until 2050 under both, low and medium CO2 emission scenarios by about 8 million tonnes due to rising atmospheric and technological progress.The UK total potential availability of biomass for feed in the meat and dairy sectors was the sum of the product of respective benchmark DMYs and grassland areas for the main grassland types.Theoretically, based on respective consumption rates for cattle and sheep, the DM under 2010 BAU could support 18.3 million heads of cattle on improved lowland grassland, and 16.3 million sheep from rough-grazing grassland.Allocating 30% of the permanent grassland to sheep, the potential herd sizes of cattle and sheep are larger than reported in the 2010 statistics.Theoretically, the gap in herd size corresponds to about 17 million tonnes of unused but potentially available feed from grassland.Considering lower consumption rates for calves and lambs and significant amounts of compound feed used, it is apparent that grassland is an underutilised feedstock for the livestock sector revealing a considerable yield gap between 
benchmark and actual on-farm DMYs."The actual on-farm DMYs in either of the two improved grassland systems follow Liebig's law of the minimum with yield-limiting factors.Here, we examined the likelihood that productivity gaps are caused by insufficient amounts of N fertiliser applied to temporary and permanent grassland in recent years.The current estimated yield gaps of 39–45% are much smaller than suggested by Erb et al. but we agree with their conclusion that “… future research will need to scrutinize the role of land management in non-forest ecosystems”.Indeed, this study showed that grassland types must be differentiated in terms of productivity and the yield gap may even be overstated, especially for permanent grasslands of which a substantial proportion will be grazed.Wastes from the livestock would return between 60 and 80 kg N/ha to the grassland, depending on its management intensity.Furthermore, the average yield gaps due to suboptimal N-fertiliser applications will vary across grasslands of different natural productivities due to differences in soil N mineralisation.Nevertheless, the return to recommended higher rates of N-fertilisation are likely to increase N2O emission by 1 to 2 kgN2O-N with a considerable Global Warming potential.However, recognising grassland distribution and spatially explicit management intensity per grassland type would most certainly improve recent estimates of N2O emission that assumed a blanket management covering all grassland systems.The annual statistics indicated that the total grassland area gradually declined for about 15 years from 1984, but remained steady until 2015.The herd size, however, continued to decline from 12.0 to 9.9 M heads for cattle and 42.1 to 33.3 M heads for all sheep, respectively.The decline in animal numbers confirms that temporary and permanent grasslands were an under-exploited resource, either not used for livestock or not performing to their full potential.Attributed to infrequent re-seeding, fertiliser application and inadequate soil pH, we believe that grassland management offers considerable opportunities for improvement.Closing the yield gaps between the attainable and actual on-farm DMYs is impeded by little empirical information about on-farm DMYs of different grassland types.They reported that the on-farm DMYs in intensively managed dairy systems ranged from 50 to 80% of the attainable DMY in Chile and from 60 to 80% in the Netherlands.The analyses presented in Scenarios A–D demonstrate the potential influence of changing management patterns on GB grassland DM production.Such information can inform development of land use policy.In the scenarios, use of biophysical and policy constraints to conversion provide a realistic view of changes that could be realised given specific policy drivers.For example, reduction in the intensity of management of 20% of permanent grasslands would have limited impact on total GB grassland DM production in 2050 compared to 2010 BAU.This is achieved through adoption of best practice fertiliser application and technological progress, coupled with changes in climate.Alternatively, increased intensity of management of grasslands could make an additional ca. 
18 million tonnes per annum of biomass resource available.These additional resources – plus the 20 million tonnes from closing the yield gap – could be put to multiple uses depending on national priorities.For example, increasing the national herd to support food independence and exports, or as a resource for energy production through routes such as anaerobic digestion.A gap of 20 ∗ 106 tonnes could provide up to 12.5% of the total gas output or 25% of the gas imports to the UK in 2017 assuming standard conversion rates from grass biomass to biogas.Outside biophysical and policy constraints, Scenarios A–D assumed management practices may change across all GB.However, future policy will need to be designed to reflect differing regional priorities.This was explored in Scenario E, which considered reversion of improved grassland in upland regions of GB.Such a focused policy mechanism would support farmers for the delivery and protection of ecosystem services within grassland systems that are challenging from a production perspective.These ecosystem services include protection of water quality and carbon stocks, and in certain regions maintenance of landscape characteristics.In scenario E, grassland production would shift to more intensively managed lowland regions to maintain total GB grassland production.Overall, the scenario analysis demonstrates that closing the existing gaps of resource use efficiency present large challenges and opportunities for policy-forced changes.The analysis presented here finds substantial increases of future DMYs that can be achieved in UK grasslands, mainly through a combination of technological innovation and improved agronomy.Based on climate projections to 2050, yield increments are likely to be larger on temporary than on permanent grassland, with little change on rough-grazing, where rising atmospheric compensates adverse weather effects.Across a range of scenarios, we demonstrate considerable scope for maintaining or increasing total GB grassland DMY depending on different assumption about the percentage of grassland that would undergo change, management or land use.Such information is critical for policy makers in the UK who are currently engaged in debate around future pathways for farming and the countryside.The scenarios produced in this study were designed to illustrate the implications of large scale change.To inform policy, the analysis should be further refined to consider local, regional and national priorities for biodiversity and ecosystem services.Understanding priorities across different scales will allow for a more nuanced consideration of where and how management and land use practices could be altered to deliver benefits to society, and ultimately to consider the best mechanisms to support their delivery.
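For readers who want to reproduce the scale of the projected gains, the adjustment described in the methods can be reduced to simple bookkeeping: a CO2 fertilisation effect of 0.06% of DMY per ppm rise in atmospheric [CO2] and annual technological-progress gains of 0.5%, 0.35% and 0% for temporary, permanent and rough-grazing grassland, applied on top of the weather-governed benchmark. The sketch below implements that bookkeeping under two assumptions of ours — the gains accumulate linearly and additively from the 2010s benchmark, and [CO2] rises by roughly 140 ppm between the 2010s and 2050s — and deliberately omits the weather-governed change supplied by the meta-models; it is an illustration of the arithmetic, not the authors' code.

```python
# Minimal sketch of the yield-adjustment bookkeeping described in the methods
# (our reading of the text, not the authors' implementation): future attainable DMY
# equals the 2010s benchmark plus accumulated percentage gains from technological
# progress (TP) and the CO2 fertilisation effect (CFE, +0.06% DMY per ppm of [CO2]).

CFE_PER_PPM = 0.06                     # % DMY gain per ppm [CO2] above the 2010s level
TP_PERCENT_PER_YEAR = {"temporary": 0.5, "permanent": 0.35, "rough-grazing": 0.0}

def adjustment_percent(grassland_type, years_ahead, co2_rise_ppm):
    """Accumulated % DMY gain relative to the 2010s benchmark (TP and CFE treated as additive)."""
    tp = TP_PERCENT_PER_YEAR[grassland_type] * years_ahead
    cfe = CFE_PER_PPM * co2_rise_ppm
    return tp + cfe

def projected_dmy(benchmark_2010s_t_ha, grassland_type, years_ahead, co2_rise_ppm):
    """Attainable DMY (t/ha) for a future decade, ignoring the weather-governed change."""
    return benchmark_2010s_t_ha * (1.0 + adjustment_percent(grassland_type, years_ahead, co2_rise_ppm) / 100.0)

if __name__ == "__main__":
    # 2010s benchmark DMYs quoted in the text; a ~140 ppm [CO2] rise by the 2050s is our
    # placeholder for the medium-emission projection.
    for gtype, ybase in [("temporary", 12.5), ("permanent", 8.7), ("rough-grazing", 2.8)]:
        gain = adjustment_percent(gtype, years_ahead=40, co2_rise_ppm=140)
        print(f"{gtype}: +{gain:.1f}% -> {projected_dmy(ybase, gtype, 40, 140):.1f} t/ha by the 2050s")
```

With these placeholder inputs the combined factors come out at roughly 28% and 22% for temporary and permanent grassland, in line with the joint CFE and TP factors quoted in the discussion; the rough-grazing figure ignores the offsetting negative weather effect described in the text.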
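The land use scenarios allocate conversion stochastically to 2 km × 2 km grid cells, preferring cells that satisfy the biophysical and policy constraints (for example outside NVZs and on slopes of 15% or less) and only spilling into constrained cells once the unconstrained area is exhausted, with 1000 permutations per scenario. A minimal sketch of a single permutation of that allocation logic is given below; the grid, cell areas and national target are invented for illustration, and the full analysis would repeat the draw many times and average the outcomes.

```python
# Illustrative single permutation of the stochastic allocation described in the scenario
# analysis (not the authors' implementation): convert grassland cell by cell, preferring
# cells that meet the scenario constraints, until the national target area is reached.
import random

def allocate_conversion(cells, target_area_ha):
    """cells: list of dicts with 'id', 'area_ha' (convertible grassland) and 'constrained' (bool).
    Returns the converted area (ha) per cell id for one random permutation."""
    preferred = [c for c in cells if not c["constrained"]]
    fallback = [c for c in cells if c["constrained"]]
    random.shuffle(preferred)
    random.shuffle(fallback)

    converted, remaining = {}, target_area_ha
    for cell in preferred + fallback:          # constrained cells used only once preferred ones run out
        if remaining <= 0:
            break
        take = min(cell["area_ha"], remaining)
        converted[cell["id"]] = take
        remaining -= take
    return converted

if __name__ == "__main__":
    random.seed(1)
    # Hypothetical 2 km x 2 km cells: 400 ha of convertible grassland each, some inside an NVZ
    # or on slopes steeper than 15% (flagged as constrained).
    grid = [{"id": i, "area_ha": 400.0, "constrained": (i % 3 == 0)} for i in range(10)]
    plan = allocate_conversion(grid, target_area_ha=1500.0)
    print(plan)  # a few unconstrained cells fully or partly converted
```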
To optimise trade-offs provided by future changes in grassland use intensity, spatially and temporally explicit estimates of respective grassland productivities are required at the systems level. Here, we benchmark the potential national availability of grassland biomass, identify optimal strategies for its management, and investigate the relative importance of intensification over reversion (prioritising productivity versus environmental ecosystem services). Process-conservative meta-models for different grasslands were used to calculate the baseline dry matter yields (DMY; 1961–1990) at 1 km2 resolution for the whole UK. The effects of climate change, rising atmospheric [CO2] and technological progress on baseline DMYs were used to estimate future grassland productivities (up to 2050) for low and medium CO2 emission scenarios of UKCP09. UK benchmark productivities of 12.5, 8.7 and 2.8 t/ha on temporary, permanent and rough-grazing grassland, respectively, accounted for productivity gains by 2010. By 2050, productivities under medium emission scenario are predicted to increase to 15.5 and 9.8 t/ha on temporary and permanent grassland, respectively, but not on rough grassland. Based on surveyed grassland distributions for Great Britain in 2010 the annual availability of grassland biomass is likely to rise from 64 to 72 million tonnes by 2050. Assuming optimal N application could close existing productivity gaps of ca. 40% a range of management options could deliver additional 21 ∗ 106 tonnes of biomass available for bioenergy. Scenarios of changes in grassland use intensity demonstrated considerable scope for maintaining or further increasing grassland production and sparing some grassland for the provision of environmental ecosystem services.
Process optimisation of rotating membrane emulsification through the study of surfactant dispersions
Formulating dispersions of one liquid phase within another immiscible liquid remains an important area of research since these are readily incorporated within many foods, pharmaceutical, agrochemical and cosmetic products.Commonly cited examples from within the food industry include ice cream, mayonnaise and salad dressings, all of which are supplied to a global marketplace in large quantities.As such, there is increasing focus on the development of emulsification processes either to deliver improved product characteristics or to match expectation of current product quality but in more sustainable manner.Emulsions require the use of a surfactant to stabilise the droplet interface and as such, selection of an appropriate one is a key consideration for producing a microstructure with the desired droplet size distribution.There are two philosophies that can be adopted to create an emulsion.The majority of emulsification processes focus on the breaking down droplets into smaller entities through subjection to mechanical energy e.g. homogenisers, rotor–stator mixers, colloid mills.A number of disadvantages are associated with forming droplets in this way, primarily associated with a wide droplet size range due to non-uniform energy dissipation and low energy efficiency due to repeated droplet break up and re-coalescence.In the latter instance, surfactant concentration is often overcompensated in order to achieve favourable processing kinetics.It is widely accepted that surfactants are some of the most costly components within many formulations.Processes that require both excessive use of energy and costly ingredients are neither environmentally nor economically sustainable and thus attention is shifting towards alternative processes that can minimise their use.More recent approaches look to build up droplets individually and then add them to the continuous phase in a controlled manner until the desired volume fraction of the phase to be dispersed is obtained.This is the basis of membrane emulsification in which droplets are produced at individual membrane pore outlets, only detaching when the force holding the droplet at the membrane surface is overcome by a combination of forces determined by operating parameters such as transmembrane pressure and shear as well as by the physical properties of the phases e.g. density difference Peng and Williams, 1998; De Luca and Drioli, 2006.With careful operation of the membrane emulsification process, droplets can be eloquently crafted and as such narrow droplet size distributions are achievable which may improve functionality of an emulsion based product e.g. 
stability against Ostwald ripening or ensure uniform release rate of an active ingredient throughout the system.In combination with this benefit, the energy consumption is at least an order of magnitude lower than when adopting a droplet break down approach.With the current rising costs of energy and negative environmental consequences associated with excessive energy consumption, this therefore increases the appeal of low energy, sustainable processes such as membrane emulsification.Up until now, a number of drawbacks associated with membrane emulsification have perhaps held back the process from being implemented industrially.It is widely documented that the primary limitation is the low dispersed phase flux achievable.Attempts to maximise the flux through application of high pressure driving force lead either to coalescence or jetting of the dispersed phase, both of which reduce the level of control on the droplet size produced.Alternatively, a pre-mix membrane emulsification approach is used in which a coarse emulsion is passed through a membrane to break down droplets within pore channels.Whilst higher fluxes are achievable due to the generally lower viscosity, the requirement of multiple passes to ensure droplet uniformity negatively impacts the time and energy savings in comparison to the conventional approach.Furthermore, it is likely that fouling will occur as the mixture of oil, water and surfactant is broken down within the internal structure of the membrane.If one aimed to maximise the level of control over droplet formation, the advantages of energy saving are lost due to the long operating time.It is therefore very difficult to produce small, mono-dispersed droplets at a rate that is competitive with current emulsion production technologies.The key to solving this challenge is by ensuring rapid adsorption of surfactant to ensure early droplet detachment and stabilisation of the interface against coalescence.However, conventional approaches lead to membrane coalescence in the majority of cases irrespective of the surfactant type and concentrations used.The aim of this study is to investigate the coupled behaviour between the droplet size of oil-in-water emulsions and either the applied transmembrane pressure or the shear rate for a range of surfactant systems.Furthermore, a novel approach to ensure the rapid adsorption of surfactant is presented namely through positioning high hydrophilic–lypophilic balance, non-ionic surfactants within the dispersed phase rather than their common positioning within the continuous phase.This is subsequently compared with a pre-mix membrane emulsification approach as well as a rotor–stator high shear mixer both in terms of the emulsion droplet size produced but also the rate of production.The study will further understanding of membrane emulsification, enabling process optimisation to reduce droplet size, energy and surfactant consumption whilst maximising production rate simultaneously.Oil-in-water emulsions containing 10 vol.% of commercially available sunflower oil were produced.The aqueous phase was passed through a reverse osmosis unit and then a milli-Q water system.The emulsions were stabilised by a single surfactant in each case.The surfactants investigated were Tween 20, Brij 97, SDS and hydrolysed lecithin.These were either dissolved within the aqueous continuous phase or organic dispersed phase.The concentrations are expressed as weight percentages of the whole emulsion system.The experiments were performed using a tubular, 
hydrophilic SPG membrane of 6.1 μm mean pore size.The membrane dimensions were 10 mm outer diameter and 45 mm length, corresponding to an effective membrane surface area of 14.1 cm2.The wall thickness of the membrane was approximately 1 mm.The membrane was mounted on an IKA Eurostar digital overhead stirrer and positioned in the processing vessel.This vessel was interchangeable allowing for two different sizes to be used in order to vary the shear applied at the membrane surface.This altered the amount of continuous phase within the vessel since the membrane had to be submerged during process operation.Emulsion batch sizes between 25 and 110 g were produced.The membrane rotational speed in each experiment was varied between 100 and 2000 RPM.The transmembrane pressure was also investigated in the range of 0.2–1.5 bar.The schematic of the RME equipment setup is shown within an earlier publication.For typical emulsification operation, the oil phase was introduced to the inside of the membrane tube at the beginning of the experiment with the opening of the dispersed phase valve.Pressurisation of the dispersed phase storage tank with compressed air enabled the oil to permeate through the membrane to the outer continuous phase.Once the required mass of oil was added, the experiment was stopped by closing the dispersed phase valve.In the case of pre-mix rotating membrane emulsification, a TMP of 0.5 bar was used along with a membrane surface shear rate of 6.0 s−1.An initial 20 vol.% sunflower oil in water emulsion stabilised by 1 wt.% Tween 20 was formed and then subsequently passed through the membrane three times into an equal volume of distilled water.Observation of the droplet size decrease with each pass could therefore be observed but not without inadvertently diluting the dispersed phase volume fraction each time.Emulsions were also produced using a rotor–stator high shear mixer.The two phases were introduced within the 60 mm diameter vessel prior to emulsification.The emulsion batch size was 110 g in all experimental runs.The amount of energy input during processing was varied by altering the rotational speed of the impeller between 2000 and 10,000 RPM for 1.5 min, which roughly corresponds to the time required to add the dispersed phase during the membrane emulsification process at 0.5 bar.Droplet size distribution of all emulsion samples were measured using a Malvern Mastersizer with a hydro 2000 small volume sample dispersion unit.Droplet sizes were expressed as volume weighted mean diameter average of a triplicate of measurements.The error bars represent one standard deviation and where not visible are smaller than the symbols used.Interfacial tension values were measured using a goniometer Easydrop from Kruss.The pendant drop method was used to determine the interfacial tension at 20 °C between a droplet of dispersed phase formed from a 1.8 mm diameter needle within a cuvette containing the continuous phase.These measurements were taken over a period of 1800 s at 30 s intervals to acquire both initial and equilibrium interfacial tension values.The goniometer was also used to observe dynamic droplet formation with 1 wt.% of Tween 20.This was performed under a low and high injection rate of dispersed phase to emulate the effect of changing the applied transmembrane pressure during emulsification.Images were extracted at timescales representing initial size upon previous droplet detaching, a short arbitrary time afterwards and then finally the emergence of the droplet neck as it begins 
to detach from the needle.The energy consumed during process operation was calculated firstly by measuring the power draw using a commercially available plug-in energy meter at a given equipment rotational speed.Ten measurements were recorded whilst the membrane or impeller was fully submerged firstly within distilled water and then a 10 vol.% sunflower oil-in-water emulsion.Theoretically, the power draw will be higher in order to maintain the rotational speed within more viscous media, in this instance this was not observed since the viscosity differences were too subtle.As such, the values obtained were averaged to find the rate of energy consumption in Joules per second, which when multiplied by the processing time gives the energy consumed to operate the process.Fig. 1 shows the effect of transmembrane pressure on the resultant droplet diameter for systems in which the surfactant is dissolved within the continuous phase.What is clear is that there is a variance in the behaviour of the trend between 0.2 and 1.5 bar depending on both the type of surfactant used and whether a low or high concentration is used.For the low concentration systems, only Tween 20 exhibits a decrease across the pressure range investigated.The systems containing Brij 97, SDS and lecithin follow a steady increase in droplet size with increasing pressure.This is expected in the absence of coalescence as more mass is transferred to the droplet during the detachment stage.What separates the behaviour of Tween 20 from the other surfactants can be explained by considering the chemical properties associated with each of the surfactants used.It would perhaps be expected that Brij 97 and Tween 20 would exhibit similar behaviour across the pressure range since they are both non-ionic surfactants with similar HLB values.However, the molecular weights of the two surfactants are considerably different with Tween 20 being much larger/heavier at 1228 g mol−1 compared to Brij 97 at 357 g mol−1.Thus Brij 97 can move more freely throughout the bulk continuous phase towards the forming droplet interface due to less hydrodynamic resistance.On the other hand, Tween 20 is hindered by hydrodynamic resistance forces i.e. drag since it is larger and therefore is unable to adsorb as quickly to lower IFT and prevent coalescence.These suggestions are supported by the IFT values presented in Fig. 
2a and b.As mentioned, since Brij 97 is a less effective surfactant at stabilising O/W emulsions as indicated by the HLB value, droplet diameters between 77.5 μm and 138.5 μm are formed which are larger than those observed with Tween 20.In addition, the use of ionic surfactants was also explored to consider the electrostatic effects on droplet formation.SDS is anionic with a high HLB value of approximately 40.This indicates it is an effective surfactant for stabilising forming oil droplets at the membrane surface and enabling detachment.A combination of electrostatic repulsive forces between adjacent forming droplets and low IFT values enabling droplets to detach earlier during formation virtually eliminate coalescence events and produce the smallest droplets.Lecithin is different from the other systems as it is a zwitterionic phospholipid and has a low HLB value of around 5.This indicates a preference to stabilise W/O emulsions rather than O/W produced in this case.A unique characteristic of lecithin is its ability to develop an elastic-like interface which may in turn prevent coalescence.The largest droplet diameters are formed since lecithin does not reduce the interfacial tension to as great an extent as the other surfactants.Furthermore, the rate of decrease is slow since the lecithin must first dissociate from vesicles formed in the bulk solution prior to adsorption at the forming droplet interface.This essentially lowers the effective concentration of free lecithin to stabilise the droplet since vesicle dissociation is the rate limiting step.Hence this combination of factors imply that droplets have to grow to much larger sizes in order to experience sufficient detachment force to overcome the higher retention forces.As a general observation, the droplet size average across the data set corresponds with the HLB value of the surfactant with higher values leading to smaller droplets as seen commonly within other literature.Focussing on the high surfactant concentration systems, different behaviour is exhibited by the Tween 20 and SDS systems than was observed at 0.1 wt.%.It is expected that increasing surfactant concentration generally enables formation of smaller droplet sizes since there are more surfactant molecules available for adsorption and hence the IFT is lower.For example, at 0.5 bar the droplet diameters of 0.1 wt.% and 1 wt.% of Tween 20 are 67.9 μm and 56.8 μm respectively.Similarly for SDS, these values are 53.6 μm and 28.6 μm respectively.However, there is a stark contrast in the behaviour of these systems across the pressure range.Tween 20 demonstrates a decrease followed by a plateau and then a slight increase which was not observed at the low concentration.The plateau region is attributed to droplet formation due to a spontaneous transformation in its shape in order to lower its Gibbs free energy.Such a phenomenon is more prevalent for high IFT systems since they are more thermodynamically unstable.Therefore, with an increase in surfactant concentration to 1 wt.% and hence a lower IFT, the region in which this phenomenon potentially occurs becomes much narrower and so an eventual increase in droplet size upon further increase of TMP is observed as predicted previously.In the case of SDS, beyond 0.5 bar the droplet size increases extremely rapidly from 28.6 μm to 103.3 μm.Beyond 0.8 bar, the droplet sizes produced are larger than those formed at low concentration.It is therefore expected that there is a change in the droplet formation mechanism from dripping to 
jetting, which is inherent to high pressures and low IFT systems. This suggests that whilst lowering the IFT is beneficial if one wants to produce smaller droplets, it limits the ability to operate at higher throughputs of dispersed phase whilst still forming droplets in a controlled way, i.e. through a dripping mechanism. The effect of altering the shear rate at the membrane’s surface for different surfactant systems is shown in Fig. 3. Generally, increasing the shear rate through higher rotational speeds or narrower gap sizes leads to the formation of smaller droplet sizes because the drag and centrifugal detachment forces are greater, so droplets detach earlier from the membrane surface. This will also occur if the IFT can be reduced to low values quickly, so that the magnitude of the interfacial tension force is smaller. It is therefore unsurprising that the 1 wt.% SDS system produces the smallest droplet sizes, between 27.5 μm and 58.2 μm, followed by 0.1 wt.% SDS and 1 wt.% Tween 20. Furthermore, with higher rotational speeds, which subsequently increase the continuous phase Reynolds and Taylor numbers, the transport of surfactant towards the interface is aided by a combination of diffusion and convection. As observed with the effect of TMP in Fig. 1, lecithin, since it has a low HLB value, produces the largest droplet sizes. The large error bars when using lecithin indicate that droplet formation is quite erratic. This may be due to varying effects of the shear on deforming the elastic interface, which may subsequently promote forms of droplet–droplet interactions or alter the velocity profile local to the membrane surface. Within the previous section, in which the surfactant was dissolved within the continuous phase, a wide range of droplet sizes was produced. Excluding SDS from this analysis, droplet sizes ranged from 51.4 μm to 138.5 μm. Given that the pore diameter of the SPG membrane was 6.1 μm, this means the droplet size to pore size ratio varied between 8.4 and 22.8, which is at the upper end of ratio values suggested by other authors. Since the hydrodynamics of the rotating membrane process are generally quite mild by comparison to a cross-flow membrane emulsification setup, the transport of surfactant to the forming droplet interface relies primarily on diffusion. It can therefore be concluded that, with the surfactant in the continuous phase, the transport and subsequent adsorption of surfactant is too slow and thus coalescence occurs in most cases. This is supported by observations within the work of Wagdare and Marcelis, in which 4 wt.% Tween 20 and 1 wt.% SDS were unable to single-handedly prevent coalescence of sunflower oil droplets produced from a silicon nitride membrane. For SDS, the surfactant is able to stabilise droplet interfaces more effectively but is prone to jetting except under low TMP conditions where the pore fluid velocity is minimised. This raises two fundamentally important questions: firstly, ‘how can small droplets be produced quickly and in a controlled manner?’ and, similarly, ‘how can rapid adsorption of surfactant be ensured to minimise droplet coalescence in this process?’. Interestingly, a recent article by Gassin et al.
considered the effects of the transfer of amphiphilic molecules across an O/W interface on the IFT between the two phases. They supported earlier findings suggesting that the IFT of a system could decrease below the equilibrium value, at least in the initial stages, depending on the partition coefficient of the surfactant and the kinetic rate to achieve adsorption equilibrium. This approach relies on surfactants that are soluble in both aqueous and organic phases. Therefore, the use of non-ionic surfactants such as Tween 20 and Brij 97 and of the zwitterionic surfactant lecithin is facilitated, whilst SDS is excluded since it is insoluble in oil. It was hypothesised that allowing surfactant to diffuse through a forming droplet interface during membrane emulsification would cause earlier detachment of droplets due to lower than expected IFT values whilst simultaneously limiting coalescence by enhancing the rate of adsorption. Thus, emulsion formation through membrane emulsification could be operated much more efficiently. Fig. 4 shows the effect of where the surfactant is positioned on the resultant emulsion droplet size. Significant differences in the droplet size produced can be seen, with much smaller emulsion droplets produced when the surfactant is blended with the dispersed phase. For example, emulsions formed using 0.1 wt.% Tween 20 and Brij 97 positioned within the oil phase are at least 3 times smaller than those formed with these surfactants conventionally placed within the aqueous phase for the same formulation/processing conditions. In this case, the droplet size to pore size ratio is much lower than previously observed, between 2.2 and 3.7. With 1 wt.% Tween 20 and 0.2 bar TMP, a ratio as low as 1.1 is achieved. Furthermore, 0.1 wt.% Tween 20 in the oil phase produces smaller droplets than the higher concentration of surfactant within the continuous phase. These two surfactants partition preferentially into the water phase, and so by diffusing out of the oil droplet into the aqueous environment, the IFT is seen to drop below the equilibrium value, as shown by Fig.
5a. As an example, the IFT with 0.1 wt.% Tween 20 in the oil phase reaches 1.7 mN m−1 after 30 min, whereas when the surfactant is placed within the water phase the value is 5.1 mN m−1 after the same time. It is anticipated that, if left for a long enough period, the IFT values of the systems will converge to the same point. However, the RME process relies on droplet formation and detachment within a timescale <<2 s and thus a rapid decrease in IFT is beneficial. In the case of lecithin, this surfactant partitions in favour of being within the oil phase and is therefore ‘reluctant’ to diffuse out of the droplet and stabilise the forming interface. As a consequence, emulsions formed with lecithin in oil destabilised almost immediately, most likely due to significant coalescence at the membrane surface. In terms of the effects of TMP, little variation is seen between 0.2 and 1.5 bar when Tween 20 and Brij 97 are positioned within the oil phase. Since the timescale for droplet formation and detachment is likely to be much shorter, any variations within dispersed phase flow will not drastically alter the volume contributed to each droplet during its detachment. For these systems, jetting does not occur because, although the IFT is low, the slight increase in viscosity from blending 0.1 or 1 wt.% of surfactant into the 10 vol.% dispersed phase rather than the 90 vol.% of continuous phase leads to a lower dispersed phase pore fluid velocity, such that the jetting point is not reached. It is likely that further increases in TMP beyond 1.5 bar would eventually result in droplet formation through jetting. The effect of shear rate at the membrane surface on droplet diameter when considering the surfactant position is presented in Fig. 6. Since the IFT of the non-ionic surfactant systems within oil is much lower than when in water, droplets are less resistant to shear and therefore detach earlier at smaller sizes. Only a small decrease is seen with increasing shear rate from 0.6 s−1 to 104.7 s−1. For example, when using 0.1 wt.% Brij 97, the droplet size varies only between 15.2 μm and 21.5 μm, compared to when the surfactant is placed within the aqueous phase. This emphasises that, if the aim is to produce small droplet diameters, this can be achieved using less surfactant and less energy input by operating under minimal shear rates with Tween 20 or Brij 97 within the dispersed phase. This conclusion is also supported by Fig. 7, in which images were captured of droplet formation for low and high injection rates under quiescent continuous phase conditions. Small droplets can be produced from the needle with 1 wt.% Tween 20 and a low injection rate applied. In this case, the droplet detaches almost as soon as it forms since buoyancy overcomes the low IFT holding the droplet at the needle outlet. With a rotating membrane setup, the drag and centrifugal forces will inevitably lead to an even earlier detachment but perhaps reduce the extent of the size difference between the systems. In other words, it is hypothesised that if the membrane surface shear rate were increased to values much greater than 104.7 s−1, the droplet size difference between the observed systems may be minimal. However, care is required when selecting operating parameters such as the applied TMP and shear rate in conjunction with inherent system properties such as IFT and viscosity, as can be seen in Fig.
7d, in which the disperse phase is injected as a jet of liquid with less controlled droplet formation occurring downstream and out of visual range. A number of publications have altered the approach of membrane emulsification by passing coarse emulsions through the membrane rather than a pure dispersed phase. This has led to additional benefits being cited, such as high dispersed phase flux and lower energy consumption for producing high volume fraction emulsions. The logic underlying this approach is that droplets, upon leaving pore outlets, are already stabilised by the surfactant provided for the formation of the initial coarse emulsion, which therefore nullifies coalescence effects. If this is the case, this logic would also be valid with the surfactant being supplied within the dispersed phase as discussed in the previous section. To test this hypothesis, an initial emulsion of 20 vol.% dispersed phase was formed with Tween 20 either within the continuous phase or within the dispersed phase using the conventional membrane emulsification approach. Each of these emulsions was then passed through the same, cleaned membrane into distilled water a further three times to observe the extent of droplets being broken down within the pore channels, and the obtained results are presented in Fig. 8. As previously shown, the initial emulsion droplet size is lower with the Tween 20 in the dispersed phase due to the partitioning behaviour of the surfactant. What is interesting is the extent and rate of droplet size minimisation upon passing the emulsions through the membrane repeatedly. With the surfactant placed within the oil phase, the droplets experience only a negligible reduction in size beyond applying a single pass. Using 1 wt.% Tween 20 in the oil phase as an example, the initial droplet size of 15.4 μm is broken down to 6.1 μm, 4.5 μm and 4.3 μm upon applying further passes. If compared with 1 wt.% Tween 20 in the water phase, the breakdown is much more prominent, from 58.8 μm to 15.1 μm, 6.7 μm and 5.9 μm. With further passes, it is likely that the systems will achieve the same droplet size value. Furthermore, a much more efficient adsorption of surfactant is achieved, as demonstrated by 0.1 wt.% Tween 20 in the oil phase reaching smaller diameters than 1 wt.% in the water phase. The point is that, by applying the surfactant within the oil phase, the need for multiple passes to achieve sufficient breakdown to the minimum droplet size is eliminated. In fact, the very nature of adopting a pre-mix setup can be questioned since fouling is a severe problem, as shown in Fig. 9. In order to compare flow behaviour between the dispersed phase systems used, Fig.
9 is expressed as the volume fraction of oil added to the final emulsion, since the objective is to reach a pre-defined quantity of this material. There is no doubt that the flux of a pre-emulsion is much higher than that of pure oil, but a significant volume of that emulsion must pass through the membrane to arrive at the end point of the process. What is apparent is that the rate of mass transfer/addition for the pre-emulsion is not linear, as would be expected from Darcy’s law. This suggests an increase in resistance to flow over time, which is likely to be caused by fouling. In the case of droplets slightly larger in diameter than the membrane pore channel, the shear exerted within the internal structure may not be great enough to overcome the droplet Laplace pressure. As a consequence, the droplet cannot deform sufficiently to pass through and thus it becomes trapped within the membrane, causing a blockage. However, much larger droplets will be broken up by the shear within the pore channel whilst smaller droplets will pass through unopposed. The flow behaviour of pure SFO, in contrast to a pre-emulsion, obeys a linear addition of material over time, whilst a mixture of SFO and Tween 20 exhibits a slight reduction in the rate followed by a linear region. The surfactant may perhaps coat the membrane walls within pore channels during the initial stages of operation before the mixture starts acting as a bulk material. As expected, the gradient of this linear region is lower than that of pure SFO since the viscosity is higher. Given the requirement to pass the pre-emulsion through the membrane further times to achieve sufficient breakdown of droplets, it may therefore be more efficient to operate using a dispersed phase with lower flux but which ensures rapid adsorption of surfactant from a single pass, i.e. using a high HLB non-ionic surfactant within the oil. Finally, the energy density to form emulsions containing 10 vol.% of oil dispersed phase is considered at production rates varying between 3.7 kg h−1 and 6.2 kg h−1. This is considered with respect to where the surfactant is positioned, for both the RME process and a rotor–stator HSM. As can be seen in Fig.
10, there are fundamental differences between the processes in terms of both the energy consumed and the behaviour of the systems investigated. Applying more energy through the rotation of the membrane or the impeller leads to the formation of smaller droplets by enabling detachment/droplet breakdown. Generally, RME produces emulsions with at least one order of magnitude less energy, although in most other literature this comes at the expense of either droplet size or rate of production. What is significant is that ensuring rapid adsorption of surfactant by positioning the Tween 20 within the oil phase results in droplet size ranges similar to those produced with high shear processing but with much less energy. Focussing on the HSM process, the effect of where the surfactant is positioned on droplet size is almost negligible. In this process, droplets are continuously broken down during operation and, as such, mechanically induced convection rather than diffusion forces the IFT to its equilibrium value. The effect of an initial decrease in IFT below equilibrium, and the subsequent advantages in terms of facilitating droplet breakdown, are therefore lost at the early stages of processing. Due to the variation in the approach by which droplets are formed, SDS appears a more appropriate surfactant during HSM processing since the electrostatic repulsion between droplet interfaces prevents re-coalescence within the continuous phase. Droplets can reach a minimum size of 7.4 μm at 10,000 RPM, compared to 10.7 μm and 9.3 μm for the two Tween 20 systems (with the surfactant positioned in the water or the oil phase). Additionally, the high HLB value and low equilibrium IFT allow for further reduction in droplet Laplace pressure and facilitate droplet breakdown. However, SDS is not as effective during the RME process since Tween 20 can achieve lower IFT values when it diffuses out of the droplet. Moreover, supplying surfactant in this way ensures it is provided at a rate proportional to the dispersed phase flow rather than being depleted from the continuous phase over time, i.e.
as it is needed. Whilst the production rate is reduced due to the increase in dispersed phase viscosity, the advantages in energy consumption and thus processing efficiency are still maintained. The effects of transmembrane pressure and shear rate have been investigated for four different surfactants and variable concentrations using a rotating membrane emulsification setup. In this work, a novel approach was introduced in which surfactant is provided via the dispersed phase rather than in its conventional position within the continuous phase. Allowing material to diffuse through the interface leads to a reduction in interfacial tension below the equilibrium value, which is highly beneficial to the membrane emulsification process as it prevents coalescence and allows early droplet detachment. However, this approach has only been successfully demonstrated for stabilising O/W droplets using high HLB non-ionic surfactants such as Tween 20 and Brij 97. When using a low HLB surfactant such as lecithin, droplets were not stabilised. Due to the partition coefficient of the lecithin used, this surfactant remains primarily within the oil phase and hence does not diffuse out of the droplet to the extent that the high HLB surfactants do. Membrane emulsification with surfactant within the dispersed phase compares favourably to a pre-mix emulsification setup since the droplet size minimisation that is achieved through multiple passes is in this case obtained much earlier by ensuring rapid adsorption of the surfactant. Furthermore, the effects of membrane fouling are avoided, at least during short-term process operation, although long-term effects on the dispersed phase flux are currently unknown. By considering the positioning and type of surfactant, membrane emulsification can be competitive with a rotor–stator high shear mixer in terms of droplet size and production rate whilst still being favourable in terms of energy consumption by at least an order of magnitude. An expansion of this study would be to investigate a wider variety of surfactants beyond Tween 20, Brij 97 and lecithin, as well as to observe whether the advantages of this approach are upheld at higher dispersed phase volume fractions or at a larger scale.
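The energy comparison above reduces to simple arithmetic: the averaged power draw multiplied by the processing time gives the energy consumed, which can then be normalised by the mass of emulsion produced to give an energy density. A minimal sketch of that calculation is given below; the function name and all numerical values are illustrative assumptions, not measurements reported in this study.

```python
# Minimal sketch (not the authors' code): energy density of an emulsification run
# estimated from an averaged power-draw reading and the processing time, as
# described for the plug-in energy meter measurements above. All numbers are
# illustrative placeholders.

def energy_density_kj_per_kg(power_draw_w, processing_time_s, emulsion_mass_kg):
    """Energy consumed (kJ) per kg of emulsion produced."""
    energy_j = power_draw_w * processing_time_s      # E = P * t
    return energy_j / 1000.0 / emulsion_mass_kg

# Hypothetical example: a 25 W average draw over a 10-minute run producing 1 kg
# of 10 vol.% O/W emulsion (a production rate of ~6 kg/h).
print(energy_density_kj_per_kg(power_draw_w=25.0,
                               processing_time_s=600.0,
                               emulsion_mass_kg=1.0))   # ~15 kJ/kg
```

Comparing values of this kind for the RME and HSM runs is what underlies the order-of-magnitude energy difference discussed above.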
Abstract In this study, a rotating membrane emulsification setup incorporating a 6.1 μm pore diameter Shirasu porous glass membrane was used to produce oil-in-water emulsions. The processing conditions varied between 0.2 and 1.5 bar for the transmembrane pressure, and shear rates at the membrane surface between 0.6 s−1 and 104.6 s−1 were generated. All emulsions consisted of 10 vol.% of sunflower oil stabilised by one of four different surfactants (Tween 20, Brij 97, lecithin and sodium dodecyl sulphate) at either 0.1 wt.% or 1 wt.% concentration. A novel approach for emulsification processing was introduced which incorporates high hydrophilic–lipophilic balance, non-ionic surfactants within the dispersed phase rather than the continuous phase. A reduction in droplet size by at least a factor of 3 for the same formulation can be achieved without significant hindrance to dispersed phase flux. This therefore suggests a possible strategy for further process optimisation.
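Two quantities recur throughout the discussion above: the droplet-to-pore-size ratio and the nominal shear rate at the membrane surface. The short sketch below shows how both might be computed; the narrow-gap shear-rate expression and the geometry values are assumptions for illustration only, not the authors' reported setup.

```python
# Illustrative sketch only: droplet-to-pore-size ratio and a nominal shear-rate
# estimate for a rotating membrane in a narrow annular gap (assumed geometry).
import math

PORE_DIAMETER_UM = 6.1   # SPG membrane pore diameter quoted in the text

def droplet_to_pore_ratio(droplet_diameter_um, pore_diameter_um=PORE_DIAMETER_UM):
    return droplet_diameter_um / pore_diameter_um

def nominal_shear_rate(rpm, membrane_radius_m, gap_width_m):
    """Shear rate ~ omega * R / h for a narrow annular gap (assumed expression)."""
    omega = 2.0 * math.pi * rpm / 60.0               # angular speed, rad/s
    return omega * membrane_radius_m / gap_width_m   # 1/s

# Reported droplet diameters of 51.4 um and 138.5 um give ratios of roughly
# 8.4 and 22.7, consistent with the 8.4-22.8 range quoted in the text.
print(droplet_to_pore_ratio(51.4), droplet_to_pore_ratio(138.5))

# Hypothetical geometry: a 10 mm radius membrane at 100 rpm in a 10 mm gap gives
# a shear rate on the order of 10 s^-1, within the 0.6-104.6 s^-1 range studied.
print(nominal_shear_rate(rpm=100, membrane_radius_m=0.010, gap_width_m=0.010))
```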
39
Performance of invasive alien fountain grass (Pennisetum setaceum) along a climatic gradient through three South African biomes
The probability of success of an invasive species in a new habitat may result from the environmental and biotic factors that prevail in that habitat. These factors govern the rates of survival, establishment and spread of the invader in a new habitat. The success of invasive species control depends on detailed knowledge of the key processes associated with their dispersal and regeneration. The availability of propagules and habitat are regarded as factors important for plant recruitment, and thus plant persistence and spread. Moreover, the ability of a species to persist under a wide range of climatic and edaphic conditions plays a major role in its invasive potential. Phenotypic plasticity is believed to facilitate biological invasions. Pennisetum setaceum is an apomictic, wind-dispersed, C4 perennial bunch grass, native to Mediterranean parts of North Africa and the Middle East. Although its ecology is better known in Hawaii, where it is also invasive, little has been written about it in its native range or in South Africa, where it has the potential to promote fire in arid regions. Although P. setaceum reproduces mainly by seed, it forms pseudo-viviparous plantlets when inflorescences are inundated by water. Its successful spread is probably due to its popularity in horticulture, drought tolerance, unpalatability to animals, rapid growth and profuse seed production, and the ability to thrive in a wide range of environmental conditions worldwide through phenotypic and reproductive plasticity. It has successfully escaped cultivation and has invaded and naturalized in a wide range of habitats worldwide including Hawaii, parts of southern Africa, the Democratic Republic of Congo, Fiji and North America. It has been found to perform better where roads interchange with rivers in the western part of South Africa, probably as a result of extra moisture, nutrients and seed exchange between the two conduits. Although the grass performs well under high nutrient and water greenhouse conditions, the relative contribution of seed dispersal and recruitment, habitat and microsite limitation to invasion success in the field is unknown for P. setaceum in South Africa. The aim of this study was to identify factors affecting the seedling establishment and recruitment of transplanted P. setaceum in three biomes differing in rainfall seasonality, soil type and plant community, but where the species was already present. In order to explore inter-site variation in regeneration, a factorial transplant and disturbance experiment was established and monitored for 15 months. Key questions for this study were: does seedling establishment benefit from reduced competition from indigenous vegetation? Do seedling establishment and performance rates differ among habitat types? And is there an interaction between site and other factors influencing plant performance? Our study demonstrated that the invasive alien P. setaceum is able to thrive and establish in three biomes with distinct climatic characteristics. We found evidence for microsite limitation at different stages of its regeneration process. At all three sites P. setaceum performed well under reduced competition from resident indigenous species, although the establishment and performance rates differed between sites. However, other habitat conditions such as soil and moisture availability could override competition effects and lead to successful establishment. This study has demonstrated that P.
setaceum has a high growth and invasion potential in historically disturbed habitats as well as in sites with current disturbances.Conservation authorities concerned with management of P. setaceum invasion need to give more attention to these historical disturbances that act as hotspots for seed production.P. setaceum is already present at these sites and will easily invade near natural areas if it is not managed effectively.Both biotic and abiotic factors and their interactions promote the establishment and growth of P. setaceum.We recommend reduction in human induced disturbances, especially land cover change, which reduce competition with indigenous species and hence promote P. setaceum establishment.Management efforts should also aim to reduce seed production and establishment of P. setaceum along roadsides that act as conduits into near-natural sites.This can best be done by maintaining as much indigenous cover along road verges as possible as competition reduction favours seedling survival.Finally, our results contribute significantly to our understanding of basic processes that affect emerging invaders, especially grasses in new environments in South Africa.Results confirm the status of this grass as an important emerging weed and invader that must be prohibited and controlled in South Africa.Three sites were selected in arid and semi-arid parts of South Africa covering the current distribution range of P. setaceum.The altitudinal gradient ranged from 190 to 1242 m a.s.l. Rainfall seasonality and precipitation differed at all these sites.The Karoo site was in the Karoo National Park near Beaufort West, in the mixed-rainfall season, semi-arid Nama-Karoo biome.This site was selected because P. setaceum occurs along the Gamka River, running through the park, with seeds invading from road shoulders outside the park as well as from neighbouring farms upstream.Park managers expressed the need to eradicate the grass in the park.The semi-arid summer rainfall Savanna site was situated at De Beers Mine dumps in Kimberley.This site was selected due to the abundance of P. setaceum on the mine dumps where it was probably previously used for mine stabilization.The grass has escaped from the dumps into the surrounding disturbed and semi-natural areas in and near Kimberley.The semi-arid winter rainfall Fynbos biome site was situated in the Renosterveld vegetation type at PPC De Hoek Cement Mine dumps near Piketberg.The area was selected because P. setaceum is present on mine dumps and on the roadsides around the town of Piketberg.The grass has escaped into the adjacent Piketberg Mountain and could increase the fire frequency in the area.The mine authority is keen to eradicate the grass from their property given a suitable alternative; indigenous species to stabilize the mine dumps.In February 2008, soil samples were collected at a depth of 5–20 cm from between the transect plots at all the study sites.The soil samples were pooled and oven dried at 80 °C for 2 days before analyses.The soils were of sandy texture except for Piketberg, which was loamy sand.The soils were alkaline except for three transects in Kimberley that were slightly acidic with relatively high CEC cmol/kg.Total nitrogen concentrations were relatively low whereas phosphorus concentrations were highly variable.The only exception was a transect at Kimberley which had 66% sodium base saturation and < 0 ppm phosphorus concentration.P. 
setaceum seeds were collected from all the study sites and mixed together.The seeds were sown in a trial experiment in a greenhouse at Stellenbosch University Agronomy department where the average temperature was 38 °C.There was no germination for four weeks and the experiment was terminated.Seeds were later grown in a Forestry department greenhouse with an average temperature of 25 °C.After germination and growth for two weeks, 846 uniformly sized seedlings with at least 3–4 leaves were transplanted individually into propagation bags and left for two more weeks before translocation to the sites.At each study site permanent pairs of 2 m2 plots 5 m apart were established.A total of 846 young P. setaceum seedlings were translocated to the study sites in the winter of 2007.In Kimberley and Piketberg, four transects with four pairs of plots resulted in 72 seedlings per transect and 288 seedlings per site.At these sites, two transects were in the historically disturbed sites and two in the semi natural areas away from the mine dump.The seedling sample size at Karoo National Park was 270.On each plot, nine seedlings were placed systematically, at 0.5 m apart and 0.5 m from the plot boundaries.All seedlings were given 500 ml of water immediately after being translocated.The seedlings were grown in 94 plots, half of which were cleared of vegetation and were studied over 15 months from May 2007 to August 2008.The number of leaves, basal diameter, length of the longest living leaf and the number of inflorescences were recorded every month for each seedling.The three sites were of different elevation and had different land use and soil characteristics.Percent rock cover was determined as the average rock cover for each plot.The effect of historical disturbance on seedling performance was determined by placing transects on the mine dump and at different distances away from the dump.The effect of water on seedling performance was determined by placing the plots along three transects at 0, 5, 10, 15 and 20 m from the river.All data were tested for normality with a Shapiro–Wilk test.When data were normal, repeated measure analysis of variance was used in STATISTICA 8 to analyse the performance of seedlings over the study period.The seedling performance was determined by measuring their height, basal diameter and the number of leaves every month during the study period.When the data were not normal, a non-parametric bootstrapping test was performed.Differences between means were considered significant for p < 0.05.Within-subject effects were the sampling date and the interactions of sampling date with the between-subject effects.The survival of transplanted seedlings was expressed as the mean number of surviving seedlings per treatment for all sites applicable to that treatment.One-way ANOVA was used to compare transplant survival and performance in weeded and unweeded plots as well as in disturbed and undisturbed plots.A Bonferroni post-hoc test was performed to test the differences between and within treatments over time.Spearman correlations were calculated to detect relationships between soil characteristics, microclimate properties and plant performance.Survival analysis was performed to compare the proportion of seedlings surviving over the study period at different study sites using Kaplan–Meier survival curves.A variety of habitat and environmental factors affected P. 
setaceum seedling survival and performance over the study period. Competition from resident species affected the performance of surviving seedlings. Seedling performance was also influenced by site disturbance history. Seedlings performed better on mine dumps than away from them. At KNP, seedlings performed equally well regardless of the distance from the river but survived longer away from the river than near it. Climatic variables at the different study sites also affected both the survival rates and performance. Seedlings growing on plots from which competitors were removed were larger in basal diameter and height and had more leaves throughout the study period than those growing on unweeded plots. Basal diameter, number of leaves and height were positively correlated, and only ANOVA results for basal diameter are presented in this paper. The effect of competition on seedling survival was evident across all sites throughout the study period. There were no measurable differences in performance of seedlings for five months until September 2007, when resident vegetation began to influence the growth of the transplanted P. setaceum seedlings. Seedlings growing in the Karoo National Park performed better than those at the other two sites. This was also the case when performance was measured as basal diameter and height. Most transplanted seedlings that survived after six months remained alive at all sites for the rest of the study period. At all sites, more seedlings growing on plots cleared of resident vegetation survived than on unweeded plots, and this effect was significant. At Kimberley and Piketberg, more seedlings survived on mine dumps than off them, and surviving seedling performance on these mine dumps was significantly better. More seedlings survived away from the river than near the river at the Karoo National Park site. However, those that survived near the river performed better in basal diameter over the study period. The effects of site, plot type and mine dump remained significant for the duration of the study period. The effects of resident vegetation removal and mine dump did not differ between Kimberley and Piketberg or over the study period. The effect of plot type did not differ with the distance from the river, or over the study period. Environmental stress in a new habitat has been suggested to affect the establishment of invasive species. Low-stress habitats are easily invaded because many aliens are better able than natives to take advantage of high resource availability. The transplanted P. setaceum seedlings were exposed to different types of environmental stresses imposed by the three study sites. The high performance and survival on the historically disturbed mine dumps could be a result of resource facilitation and fluctuating resource levels that promote plant invasion and/or of microhabitat limitation away from mine dumps. The low performance of seedlings on unweeded plots suggests competitive suppression by the established resident vegetation. Disturbance and competition had no effect on performance and survival of P.
setaceum in the first five months after seedling transplantation. This suggests that early survival of transplanted seedlings was not related to competitive interactions in relation to historical disturbances. However, both survival and performance of seedlings after five months were positively affected by both historical and current disturbances. This indicates that competitive suppression by resident vegetation and disturbance effects are more important for mature P. setaceum seedlings. Soil disturbance has been suggested to promote invasion by increasing water and nutrient availability and other resources at disturbed and near-natural areas. Our results suggest that indigenous vegetation on undisturbed sites could suppress establishment of P. setaceum, whereas its invasion is facilitated at both historically and currently disturbed sites. In the Karoo National Park, the seedlings performed equally well regardless of the distance from the river. The interaction between the distance from the river and the removal of resident vegetation over the study period did not influence the performance of the species. This could be due to the amount of rainfall received in this area shortly after seedlings were transplanted. Although the amount of rockiness was not positively correlated with plant performance in general, most plants near rocks produced flowers and seeds before the rest of the seedlings in the Karoo National Park. Soil type plays a major role in the distribution and community structure of plants. Resource-poor soils appear to be more resistant to invasion, particularly in semi-arid systems. In our study, P. setaceum seedling survival and performance was minimal in saline soils along a transect near the Kimberley mine dump. Soil at this site had the highest levels of sodium and potassium. This effect could not be detected until the sixth month, when seedlings began to die off. Stohlgren et al. found a positive relationship between exotic species and percent soil silt and percent soil nitrogen. Most seedlings in our study died off in the soil with the highest clay content. The overall good survival rates of P. setaceum across three climatically distinct environments demonstrate the species' ability to adjust to different conditions prevailing at new locations. Despite differences in survival rates at early stages, P. setaceum seedlings that survived the first six months persisted for the rest of the study period. This suggests that once the seedlings have overcome the critical seedling stage they are able to establish despite harsh environmental conditions. Flowering occurred after six months in the Karoo National Park; this could be a result of extra moisture from the river, where P. setaceum is prevalent. Seedlings at other sites took more than 12 months before flowering could occur. The interaction between abiotic and biotic processes at these sites played a major role in the survival rates of P. setaceum seedlings.
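The survival comparison described in the statistical analysis above (Kaplan–Meier curves compared across the three sites) can be illustrated with a short script. This is only a sketch: the study used STATISTICA 8, so the Python 'lifelines' package and the placeholder data below are assumptions for illustration, not a reproduction of the authors' analysis or measurements.

```python
# Minimal sketch (not the authors' workflow): comparing seedling survival between
# two sites with Kaplan-Meier curves and a log-rank test. The arrays below are
# fabricated placeholders (months until death; 1 = died, 0 = still alive at the
# end of the 15-month monitoring period).
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

months_karoo  = [15, 15, 6, 15, 12, 15, 15, 9, 15, 15]
died_karoo    = [0,  0,  1, 0,  1,  0,  0,  1, 0,  0]
months_fynbos = [3, 5, 15, 4, 15, 6, 2, 15, 5, 7]
died_fynbos   = [1, 1, 0,  1, 0,  1, 1, 0,  1, 1]

kmf = KaplanMeierFitter()
kmf.fit(months_karoo, event_observed=died_karoo, label="Karoo")
print(kmf.survival_function_)          # proportion of seedlings surviving over time

result = logrank_test(months_karoo, months_fynbos,
                      event_observed_A=died_karoo, event_observed_B=died_fynbos)
print(result.p_value)                  # test for a difference between the curves
```

A log-rank p-value below 0.05 would indicate that the survival curves of the two sites differ, mirroring the site effect reported above.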
The knowledge of relative performance of plants across environmental gradients is critical for their effective management and for understanding future range expansion. Pennisetum setaceum is an invasive perennial grass found along roadsides and other disturbed sites in South Africa. The performance of this grass in response to competition, habitat characteristics and resources was experimentally tested in three biomes (Karoo, Fynbos and Savanna) of South Africa. A total of 846 young P. setaceum seedlings were translocated to study sites in May 2007. The seedlings were grown in 94 plots along random transects, of which alternate halves were cleared of vegetation. Despite a variety of environmental hazards at these sites, over 30% of the transplanted seedlings survived over 15 months. Competition from resident vegetation was a major factor limiting the establishment of seedlings. However, under adequate rainfall and historical disturbance (mine dump), competition effects were overridden. Survival of seedlings was greatest in the Karoo National Park, possibly because of summer rainfall that occurred shortly after translocation. Despite differences in the survival and growth rates, seedlings remained alive at all sites, especially if they survived the first six months after translocation. P. setaceum is capable of persisting across a broad range of environmental conditions. Management efforts should aim to reduce seed production and establishment along roadsides that act as conduits into protected sites. This could be best achieved by maintaining as much indigenous cover along road verges as possible, as seeds survive best where competition is low. © 2013 South African Association of Botanists.
40
Abrogation of EMILIN1-β1 integrin interaction promotes experimental colitis and colon carcinogenesis
The ECM glycoprotein EMILIN-1 interacts with α4β1 , and the closely related α9β1 integrins, via the globular homotrimeric C-terminus gC1q domain, promoting cell adhesion and migration .In addition, EMILIN-1 provides an ECM cue for a correct homeostatic proliferation ; accordingly, targeted inactivation of the Emilin1 gene induces dermal and epidermal hyperproliferation .The oncosuppressive activity of EMILIN-1 has been preclinically demonstrated in the context of the skin cancer, where we demonstrated that the absence of EMILIN-1 accelerates tumor development and increases the number and size of skin tumors .Colon cancers are among the first where the critical contribution of chronic inflammation and the microenvironment towards tumor progression had been acknowledged.The colitis-associated cancer is characterized by poor prognosis with a mortality rate of up to 15% .This can at least in part be ascribed to the continuous cycles of tissue destruction and repair and with the persistent oxidative damage that can trigger mutagenesis and cancer initiation .The development of aberrations that promote tumor initiation is strongly influenced by the contextual microenvironment in which they arise as demonstrated by preclinical models of dextran sodium sulfate-induced colitis .Chronic inflammation contributes to tumorigenesis through direct and indirect mechanisms.It acts directly on cells increasing their proliferation and invasion and affecting the processes elicited during tissue repair; on the other side, it also induces changes in the microenvironment triggering the deposition of extracellular matrix molecules and enhancing angiogenesis .In addition, inflammation is associated with striking changes in the lymphatic vasculature despite the molecular mechanisms involved in the regulation of this process are not completely understood.Proinflammatory cytokines, e.g., IL-1 and TNFα, are known to induce VEGF-C/D expression in infiltrating cells, thus supporting inflammatory lymphangiogenesis .The formation of lymphatic vessels in turn aids the resolution of inflammation allowing the drainage of tissue edema and the clearance of inflammatory cells .It is currently accepted that alterations of LVs are a well-established feature of human and experimental inflammatory bowel disease and can lead to a vast array of consequences, including persistence of the inflammatory process .However, the tissue responses to the interplay between local chronic inflammation, ECM remodeling, and lymphangiogenesis in determining the fate of CAC are still poorly understood.Another important direct role is played by EMILIN-1 in the growth and maintenance of LVs.Specifically, we have demonstrated that EMILIN-1 is part of the anchoring filaments, structural elements that connect lymphatic endothelial cells with the ECM where it regulates the proper growth of lymphatic capillaries .We also showed that its interaction with integrin α9 is required for the valve formation and maintenance of lymphatic collectors .Finally, we found that EMILIN1 integrity is essential to guarantee the stability of LEC junctions .Very recently, exploiting a transgenic mouse model carrying the E933A mutation in the gC1q domain of EMILIN-1 which abrogates the interaction with α4 and α9 integrins, we provided evidence for a novel “regulatory structural” role of EMILIN-1 in lymphangiogenesis .Collectively, all these findings suggested that EMILIN-1 could play a relevant role in CAC.In the present study, we demonstrated that EMILIN-1, by virtue of its 
interaction with β1 integrins, is centrally located in the contest of CAC by controlling the process of proliferation, tumor development and by counteracting lymphatic dysfunction associated with chronic colonic inflammation.To verify if the expression of EMILIN-1 and the consequent interaction with its integrin receptor could halt tumor growth also in the context of colon cancer, we treated wild type, knock out and transgenic mice with AOM-DSS, an established protocol of colon carcinogenesis that induces the onset of mutations through the mutagenic agent AOM and of inflammation following DSS treatment, thus recapitulating the traits of human colon cancer pathogenesis .The development and growth of adenomas were next monitored and evaluated over time by endoscopy.At the end of the AOM/DSS treatment, colon samples were isolated, opportunely opened and washed to precisely evaluate all tumors.Tumor masses were assigned a score ranging from 1 to 5 with the highest score given to tumors occupying most of the colonic mucosa.Interestingly, these analyses indicated that the absence of functional EMILIN-1 was associated with a significant increase of the tumor lesions compared to the E1+/+ animals; not only the number of tumors was higher but also the size, and the differences were even more striking in E1-E933A mice.In particular, only 50% of E1+/+ treated mice developed proliferative lesions upon treatment; on the other hand, both E1−/− and E1-E933A mice were more prone to develop typical colonic tumors, particularly low and high-grade adenomas; in few cases, gastrointestinal intraepithelial neoplasias were also detected.These results corroborated the anti-proliferative function of EMILIN-1 also in the context of the colonic mucosa, as we had previously demonstrated following induction of skin tumors .Indeed, at the basal levels, the colon mucosae from E1−/− and E1-E933A mice displayed increased pro-proliferative signals such as increased phosphorylation of ERK1/2, and AKT as well as a higher number of Ki67-positive cells.Moreover, results obtained in the transgenic E1-E933A mouse model suggest that this EMILIN-1 property was strictly related to the interaction between the gC1q domain with its integrin receptor.According to the Disease Activity Index, which takes into account the extent of weight loss, the stool consistency and the presence of blood in stools, we could conclude that, under treatment, the E1−/− and E1-E933A mice were characterized by the presence of more diffuse inflammation in the colonic mucosa respect to E1+/+ counterpart.Similar results were obtained applying the murine endoscopic score of colitis severity which takes into account the thickening of the colon wall, changes in the vascular pattern, presence of fibrin, mucosal granularity and stool consistency.At the end of the experiment mice were sacrificed and the colons were isolated.Consistently, colons from E1−/− and E1-E933A treated mice were significantly shorter compared to those from the wild type counterpart, a clear indication of bowel inflammation .In addition to the typical signs of diffuse inflammation in the colonic mucosa already evident in H&E stained colon sections, epithelial crypts of AOM/DSS-treated E1−/− and E1-E933A mice were distorted and irregularly distributed within the lamina propria, which was abundantly infiltrated by inflammatory cells.The higher accumulation of inflammatory cells in the LP positively correlated to the general increase of white blood cells detected in the peripheral blood samples, 
although the difference was not statistically significant. However, no predominant WBC population was observed. To avoid strain-specific effects and differences in sensitivity to DSS, we subjected both C57BL/6J and FVB mice to a DSS-induced protocol of colitis and assessed whether EMILIN-1 could influence the inflammatory response. Under normal conditions, colon length, colonic mucosa morphology and blood cell composition were very similar in all genotypes. On the contrary, in both mouse strains, endoscopic examination showed a more severe inflammatory status in DSS-treated E1−/− and E1-E933A mice. E1−/− and E1-E933A mice suffered from a higher weight loss and displayed more serious rectal bleeding, as indicated by the DAI. MEICS analyses indicated that E1−/− and E1-E933A mice displayed a thicker mucosa characterized by a typical inflammatory granular pattern, copious fibrin accumulation, bleeding and vascular changes. On the contrary, E1+/+ littermates exhibited moderate granularity, modest fibrin accumulation, and regular vascular architecture. Also the evaluation of colon length confirmed a greater inflammatory response in E1−/− and E1-E933A animals, since the colons were thicker and shorter compared to the E1+/+ counterpart. Despite the presence of slight differences between strains, no significant alterations of peripheral blood cell composition were observed, as was also the case upon AOM-DSS treatment. Epithelial damage and the extent of the LP infiltrate were chosen as parameters for the histopathological evaluation of inflamed colonic samples. E1−/− and E1-E933A treated mice were more responsive to the colitis induction, with a worse histopathological pattern with respect to E1+/+ mice. In both C57BL/6J and FVB strains, E1−/− and especially E1-E933A mice displayed extensive epithelial damage, leading in some cases, especially in the distal colon mucosa, to squamous metaplasia, typical of ulcerative colitis, although its characteristic endoscopic appearance has been rarely described. E1−/− and E1-E933A crypts were not well defined and appeared irregular. On the other hand, E1+/+ mice generally displayed a normal mucosa architecture and the majority of the crypts were not affected by the treatment. In addition, E1−/− and particularly E1-E933A treated mice were characterized by an increased recruitment of inflammatory cells compared to the E1+/+ counterparts. We next analyzed in more detail the type of inflammatory cells recruited in the colon of the different mouse models upon DSS treatment. We found that only FVB E1−/− and E1-E933A, but not C57Bl/6J E1−/− and E1-E933A, treated mice showed an evident increase in the number of CD3+ and CD45/B220+ cells compared to the E1+/+ mice. More leukocytes were detected in E1−/− and E1-E933A treated mice; indeed, Ly6G staining revealed a slight increase in the neutrophil population in E1−/− and E1-E933A treated mice, as well as a larger macrophage infiltrate as shown by Iba1 staining. In any case, the differences were not statistically significant if compared to E1+/+ animals. Overall, E1−/− and E1-E933A treated mice were characterized by extensive inflammation in comparison with the E1+/+ counterparts; however, no predominant cell population apparently emerged as a characteristic driver of the inflammatory process. Recently, we demonstrated that in normal colon specimens LVs from both E1−/− and E1-E933A mice were irregular, with narrowed and ring-shaped valves compared to E1+/+ mice. Moreover, in all E1−/− and E1-E933A colonic samples analyzed, we also detected dysmorphic structures and wide
lacunae that were not detectable in colonic LVs of E1+/+ mice .As reported by many authors, the expansion of lymphatic network induced by the treatment with both DSS and AOM/DSS is required for a proper resolution of the inflammatory process .Interestingly, the analyses of the LV network in inflamed areas of the colon revealed that, besides the already documented alterations in EMILIN-1 mutant mice, the size of LVs in both E1−/− and E1-E933A mice was increased compared to E1+/+ animals in which some of the collectors displayed normal valves.Immunostaining of FFPE colon samples showed that the surface covered by Lyve-1 positive structures was higher in E1−/− and E1-E933A than in E1+/+ mice.The increase of the Lyve-1 positive colonic areas was statistically significant in untreated and DSS-treated E1-E933A respect to E1+/+ mice and confirmed that the induced inflammation caused lymphatic alterations that in mutant mice could also be documented by a larger LV diameter.Indeed, further analyses suggested an impairment of LVs function, since lymphatic drainage was prejudiced in E1−/− and E1-E933A mice, as assessed through the oral gavage administration of fluorescently labeled Bodipy-FL-C16, a 16-carbon chain fatty acid.The DSS and AOM-DSS treatments exacerbated the altered lymphatic functionality in E1−/− and E1-E933A mice where mesenteric lymph nodes and LVs did not contain Bodipy-FL-C16 tracer after 2 h from the administration, whereas the lymphatic structures of treated E1+/+ littermates were stained despite weakly by the Bodipy-FL-C16 tracer.The role of EMILIN-1 in promoting the formation of a proper and functional lymphatic vasculature was tested through the assessment of the activation of AKT and ERK, the major regulators of lymphangiogenesis .Through its regulatory domain gC1q, EMILIN-1 was able to efficiently trigger AKT and ERK phosphorylation in LECs, similarly to what observed upon treatment with VEGF-C, a well-known lymphatic growth factor.The kinetics suggested that gC1q and VEGF-C signaled through different receptors that may cross-talk.As reported in literature, VEGF-C induced the highest VEGFR3 activation already at 10–15 min following the treatment .Interestingly, only the wild type gC1q domain but not the mutant E933A-gC1q was able to induce AKT and ERK activation to the same extent observed challenging LECs with VEGF-C; however, the activation was persistent and could be detected also after 30 min following the treatment.To verify if the engagement of EMILIN-1 could induce a cross-activation of VEGFR3 via the engagement of integrin α9β1 expressed on LECs , the cells were challenged with gC1q and VEGFR3 was immunoprecipitated for western blotting analyses.These experiments revealed that gC1q was unable to phosphorylate VEGFR3, suggesting that the integrin engagement could signal independently from the activation of VEGFR3.Several lines of evidence suggest that the constituents of the local ECM microenvironment and their multiple interactions may contribute to the pathogenesis of the inflammatory diseases.The accumulation of hyaluronic acid at sites of chronic inflammation creates a permissive tissue microenvironment for the development of autoimmune diseases .In addition, fragments of collagen and elastin derived from the degradation activity of MMPs or other enzymes released by inflammatory cells play pathogenic roles in several common chronic inflammatory lung diseases and inflammatory bowel disease .Post-translation modifications of ECM components, such as glycation, 
citrullination and carbamylation, can also play a major role in the regulation of the inflammatory disease .A further layer of complexity in the modulation of the ECM molecular network in this context was recently ascribed to the action of the extracellular vesicles in the formation of niches suitable for tissue regeneration and inflammation .Here, for the first time we demonstrated that the EMILIN-1/β1 integrin interaction exerts a oncosuppressive role in the colon microenvironment, as it was previously well established in the context of the skin cancer .Also in the colonic context, the mechanism involves the activation of AKT and ERK, and, following the induction of colon carcinogenesis, associates with a higher number of tumors of substantial larger size in both E1−/− and E1-E933A mice.These results reinforced the notion that EMILIN-1 is one among the very few ECM proteins, such as fibulin-2 , EMILIN-2 and decorin , exerting a direct tumor suppressor function.Other components can play an anti-tumor activity through an indirect mechanism ; however, a large amount of ECM proteins promotes tumor growth and progression .A similar outcome was obtained when we employed the E1-E933A transgenic mice characterized by the ectopic expression of an EMILIN-1 mutant in the gC1q domain, incapable of engaging the α4/α9β1 integrin.These results suggested that the promotion of tumor growth depends on the lack of interaction with the integrin receptor, rather than an altered TGF-β signaling, since its maturation is regulated only by the EMI domain of EMILIN-1 .The altered tumor growth, inflammatory response and lymphatic alterations were more evident when using E1-E933A mice as opposed to E1−/− mice.This could likely be due to the absence of the EMILIN-1 EMI domain in E1−/− mice, and the consequent increase of the TGF-β levels that can exert contrasting effects on cell proliferation as well as in the lymphangiogenic process .Thus, the use of the E1-E933A mouse was useful and informative to discern the effects linked to the regulation of TGF-β by the EMI domain from those dependent on the EMILIN-1/integrin interaction.A second finding of the AOM/DSS colon carcinogenesis approach was the demonstration that E1−/− and E1-E933A treated mice of two different strains displayed more severe inflammation in the colonic mucosa respect to E1+/+ mice.During the pathogenesis of inflammatory bowel disease, which is linked to CAC development, both the LP and the epithelial layer are infiltrated by different types of immune cells, which create an inflammatory microenvironment .Our investigation indicates that EMILIN-1 deeply affects the inflammatory response since both E1−/− and E1-E933A mice displayed an increased infiltration of T and B cells, granulocytes and a slight but not statistically significant increase of macrophages in the inflamed colons.We also found that the levels of IL-1α, IL-1β and IL-6, which play a pro-inflammatory role during chronic inflammation, were slightly increased.Nevertheless, we were not able to discriminate a specific inflammatory cell population that could predominantly drive a consistent development of tumors in the two-step colon carcinogenesis model.The presence of a non-resolving chronic inflammatory niche is typical of the gastrointestinal tumor microenvironment, as well as in other tumors ; what we observed in this study is that the absence of EMILIN-1 exacerbated the inflammatory status which was persistent and more aggressive.Induction of intestinal inflammation is well known to 
require the presence of functional LVs and effective lymphatic drainage for its resolution. Indeed, the expansion of the lymphatic vasculature is necessary to obtain fluid clearance and immune cell trafficking. Our findings provide evidence that the lack of EMILIN-1 induces aberrant lymphangiogenesis with the generation of non-functional LVs, suggesting that this could cause the exacerbation of the disease. We have in fact previously shown that EMILIN-1 is crucial for the maintenance of a correct lymphatic vasculature; in our recent study we demonstrated that EMILIN-1 promotes the formation of functional LVs through the gC1q domain, and in the present work we demonstrated that gC1q, interacting with α9β1 integrin, was able to induce the activation of the lymphangiogenic pathway in LECs. Differently from what was already demonstrated for the α5β1 integrin, a cross-talk between VEGFR3 and α9β1 integrin expressed on LECs very likely does not occur, as shown by the absence of VEGFR3 tyrosine phosphorylation following stimulation by gC1q. The co-presence of both VEGF-C and gC1q induced a sustained and stronger activation, suggesting that a well-orchestrated lymphangiogenic response requires coordination between the integrin receptor and VEGFR3. This possibility is supported by the fact that treatment with exogenous VEGF-C is not sufficient for the formation of a correct new lymphatic vasculature. The data provided in this study further highlight the regulatory role of the ECM in this context and show that the expression of EMILIN-1 is required in order to allow a proper formation of the lymphatic vasculature and, hence, a proper resolution of the inflammatory response. In fact, in the absence of treatment E1−/− and E1-E933A mice displayed an altered lymphatic phenotype in the colon; however, no significant differences in resident immune cell number and distribution were detected. Following DSS or AOM-DSS treatment, all the mouse models taken into account were characterized by the presence of an expanded lymphatic vasculature, but the LVs were functional only in wild type animals, as assessed by mesenteric lymphangiography. These results suggest that the strong inflammatory stimulus induced by DSS led to the formation of new, competent and functional vasculature only in E1+/+ mice, thanks to the expression of a functional gC1q able to engage the integrin; in turn this allowed a proper drainage of the inflammatory cells and the restoration of the pre-inflammatory state. On the contrary, the impaired clearance of inflammatory cells in DSS-treated E1−/− and E1-E933A mice led to the onset of a severe inflammatory microenvironment. Thus, it is reasonable to assume that the harsh inflammatory condition observed in E1−/− and E1-E933A mice was the consequence of structural and functional lymphatic deregulation associated with a lack of EMILIN-1 and, in particular, of its functional gC1q domain. These findings are conceptually innovative since, to our knowledge, it is the first time that the lack of a specific domain in the tumor microenvironment has been shown to lead to an exacerbated inflammatory response and tumor progression by impinging on the efficiency of the LVs. Thus, since the E1−/− and E1-E933A mice are the only models showing that the lack of an ECM molecule associates with the formation of an altered lymphatic phenotype, they represent an ideal and precious tool to shed light on the possible mechanisms regulating the resolution of the inflammatory response in the tumor microenvironment. We have previously demonstrated that neutrophil-secreted elastase
is responsible for the degradation of EMILIN-1, and this likely represents a possible mechanism of EMILIN-1 loss.Indeed, colorectal cancer is characterized by the recruitment of massive inflammatory infiltrates, including macrophages and neutrophils that colonize the LP and submucosa .The increased presence of these cells, and the consequent increase of NE levels during colon cancer progression, may thus cause a significant drop of the gC1q level in the microenvironment with a consequent loss of an important ligand for integrin α4/α9β1.We have in fact demonstrated that the activity of NE is able to fully abrogate the gC1q/integrin interaction, inducing a cleavage adjacent to the E933 binding site .In conclusion, this study suggests that the degradation and consequent loss of EMILIN-1/gC1q in the tumor microenvironment may favor colon cancer initiation and progression in different ways: abolishment of the EMILIN-1/gC1q oncosuppressive properties; formation of dysfunctional LVs; exacerbation of the inflammatory response due to the LV-dependent impairment of inflammatory cell drainage.We envision that the blockage of gC1q degradation may represent a promising ECM-based therapeutic approach to rescue the control of proliferation, induce LV normalization and facilitate the resolution of the inflammatory response in colon cancer.The recombinant wt and E933A-gC1q mutant were produced and purified as previously described .The pQE30 vector, Wizard SV columns for DNA purification from agarose gels or for plasmid purification, as well as the restriction enzymes were purchased from Promega.T4 DNA Ligase was from New England Biolabs, and oligonucleotides were from Sigma Genosys.Recombinant human VEGF-C protein was purchased from R&D Systems.Anti–phospho-p44/42 MAPK, anti–phospho-AKT, anti-AKT and anti-ERK antibodies were obtained from Cell Signaling Technology.Goat anti-vinculin antibody was obtained from Santa Cruz Biotechnology, Inc.Anti-VEGFR3 antibody was from R&D Systems; anti-phospho-Tyr and anti-GAPDH antibodies were purchased from Millipore.Mouse LECs were isolated and immortalized according to the described procedure .Briefly, mice were intraperitoneally injected with 200 μl of emulsified incomplete Freund's adjuvant twice, with a 15-day interval.Hyperplastic vessels were isolated from the liver and diaphragm at day 30 and treated with 0.5 mg/ml collagenase A, and the resulting single-cell suspension was cultured.After 7 to 10 days of culture, subconfluent cells were recovered with trypsin/EDTA and immortalized by means of SV40 infection.Immortalized LECs were characterized for lymphatic endothelial markers as we have previously reported .Human LECs and the media optimized for their growth were purchased from PromoCell GmbH.C57Bl/6J and FVB mice were purchased from Charles River Laboratories.Emilin1−/− and E1-E933A transgenic mice were generated and maintained at the CRO-IRCCS mouse facility .All animal procedures and their care were performed according to the institutional guidelines, in compliance with national laws and with the authorization by the Italian Ministry of Health to Dr. Spessotto and to Dr.
Mongiat.Both for AOM/DSS colon carcinogenesis treatments and DSS-induced chronic/acute experimental colitis, we used 6/8 weeks aged female mice of each genotype.C57BL/6 mice were treated with a single intraperitoneal injection of Azoxymethane followed by 1-week exposures to 2% dextran sulfate sodium salt in the drinking water.Inflammation and tumor growth were observed over time by endoscopy, and at the end of the observation period mice were sacrificed; clinical and endoscopic scoring of colitis were performed applying the Disease Activity Index and the Murine Endoscopy Index of Colitis Severity, respectively .Colon tissue samples were collected, measured and cut longitudinally.Differences in tumor number and volume between groups were first evaluated with gross anatomy examination of opened colons during necropsy; then colonic tissues were fixed, paraffin embedded and analyzed by immunohistochemistry or immunofluorescence.Colonic mucosa proliferative lesions were diagnosed according to the criteria described by Boivin et al. .Chronic colitis was induced with three 1-week exposures to 2% or 3% DSS in the drinking water; the impact of intestinal inflammation was evaluated over time with DAI and MEICS scores.Mice were, then, sacrificed at day 63 or day 84 and colon tissue samples were collected and treated as described in the above paragraph.At the end of each treatment a blood sample was collected by intracardiac sampling.Blood was recovered with 27 G, transferred in special tubes BD Microtrainer® MAP K2EDTA 1.0 mg to avoid blood coagulation and then analyzed to obtain the leucocyte formula using the Complete Blood Count program.Colonic samples were collected from AOM-DSS treated and untreated E1+/+, E1−/− and E1−/−/E1-E933A mice.Tissue extracts were prepared using the tissue protein extraction reagent lysis buffer supplemented with protease inhibitor cocktail following incubation on ice for 30 min.Cell extracts were obtained using a lysis buffer containing 50 mM Hepes pH 7.0, 250 mM NaCl, 5 mM EDTA, 0.5% NP40, 1 mM Na3VO4, 50 mM NaF and supplemented with protease inhibitor cocktail, following incubation on ice for 30 min.Samples were subjected to 4–20% SDS Page electrophoresis and blotted onto nitrocellulose membranes.Membranes were blocked and incubated with specific and anti-vinculin primary antibodies.HRP-tagged secondary antibodies were used.Signals were detected using Western Lightning ECL.Membranes were analyzed on Biorad Chemidoc Touch Imaging System and quantified by Biorad ImageLab software.Immunoprecipitates were performed as follows.LECs cells were starved in DMEM without serum for 24 h and then stimulated for 15 min at 37 °C with 10 μg/ml gC1q, 250 ng/ml VEGF-C or both.After washing with ice-cold PBS containing 0.1 mM Na3VO4, cells were solubilized with lysis buffer, collected, and incubated on ice for 30 min.Cell lysates were centrifuged at 10,000g for 20 min, and quantified by Bradford assay.600–1000 μg of lysates were immunoprecipitated with anti VEGFR3 specific antibody ON at 4 °C and then incubated with 30 μl of Protein G Sepharose 4 Fast Flow resin for 2 h at 4 °C.After incubation the solution was centrifuged at 6000g for 1 min and washed five times with HNTG buffer and at the end resuspended in 3× loading buffer.Immunoprecipitates were separated by 4–20% SDS Page electrophoresis, transferred to nitrocellulose, and immunoblotted with the specific anti phospho-tyrosine antibody.After necropsy, 1 cm of the distal colon was recovered, fixed in 10% neutral buffered formalin 
for 48 h, transferred in 70% ethanol and then processed for paraffin embedding.For histopathological examination 4 or 5 μm-thick sections were obtained at the microtome, stained with hematoxylin and eosin and examined with a light microscope for the detection and quantification of histological lesions; epithelial damage, inflammatory infiltrate and proliferative lesions were evaluated.For IHC analysis, serial sections were immunostained with the following primary antibodies: CD45/B220, CD3 epsilon, Ly6G, MPO and Iba1; sections were, then, incubated with biotinylated secondary antibodies and labeled by avidin-biotin-peroxidase system, using a commercial immunoperoxidase kit."Immunoreaction was visualized with 3,3′-diaminobenzidine substrate and sections counterstained with Mayer's hematoxylin.All evaluation procedures were made in blind fashion.Positive signals were scored as described in figure legends.Counts were made at 400× and areas were chosen on the basis of more positive areas.For IF analysis of LV density in colon tissues an anti Lyve-1 antibody was employed with the appropriate secondary antibody.For all samples, negative controls included corresponding isotype or IgG.To-Pro-3 was used to visualize nuclei.Images were acquired with a true confocal scanner system, equipped with a Leica DMi8 inverted microscope.From 6 to 8 fields were acquired and quantification of the fluorescence positive structures was evaluated by means of the Volocity software.Colons were also examined after whole mount staining approach.Untreated and treated colons were isolated, dissected and fixed in 4% PFA for 2 h at room temperature.After hydration for at least 48 h and permeabilization with PBS, 0.5% Triton X-100 buffer for 2 h, samples were incubated for 2 h with the blocking solution.Later, an overnight incubation at 4 °C with an anti podoplanin monoclonal antibody was performed.After 5 washes with PBS 0.3% Triton X-100, the secondary anti hamster Alexa Fluor® 488 conjugated antibody was added for a 3 h incubation at room temperature.Samples were washed and mounted with Mowiol-2,5% DABCO; images were acquired with a confocal microscope, using Leica confocal LAS AF SP8 software.To visualize mesenteric LVs and to evaluate lymph node draining capacity, 1 ml of long-chain fatty acid, Bodipy-FL-C16 was orally administered to E1+/+, E1−/− and E1−/−/E1-E933A C57Bl/6J DSS-treated and untreated mice.After 2 h from oral administration, mice were euthanized, and fluorescence imaging was performed in order to visualize labeled LVs and LNs in the mesentery, using a Leica M205 FA stereomicroscope equipped with a Leica DFC310 digital camera.Statistical significance of the results was determined by using the Mann Whitney U test to determine whether two datasets were significantly different.A value of P < 0.05 was considered significant.This work was supported by the Ministry of Health, Italy and Associazione Italiana per la Ricerca sul Cancro .
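As a minimal illustration of the statistical comparison described above, the following Python sketch applies the Mann-Whitney U test to two hypothetical groups; the arrays and variable names are placeholders for illustration and do not reproduce any measurement from this study.

```python
# Minimal sketch: two-group comparison with the Mann-Whitney U test,
# as described for the readouts above. The arrays are hypothetical
# placeholders, not study data.
from scipy.stats import mannwhitneyu

tumor_volume_wt = [12.1, 9.4, 15.3, 8.7, 11.0, 10.2]    # hypothetical E1+/+ values
tumor_volume_ko = [21.5, 18.9, 25.4, 19.7, 23.1, 20.8]  # hypothetical E1-/- values

stat, p_value = mannwhitneyu(tumor_volume_wt, tumor_volume_ko,
                             alternative="two-sided")
print(f"U = {stat:.1f}, P = {p_value:.4f}")
# A value of P < 0.05 would be considered significant, as stated in the text.
print("significant" if p_value < 0.05 else "not significant")
```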
Colon cancer is one of the first tumor types where a functional link between inflammation and tumor onset has been described; however, the microenvironmental cues affecting colon cancer progression are poorly understood. Here we demonstrate that the expression of the ECM molecule EMILIN-1 halts the development of AOM-DSS induced tumors. In fact, upon AOM-DSS treatment the Emilin1−/− (E1−/−) mice were characterized by a higher tumor incidence, larger adenomas and reduced survival. Similar results were obtained with the E933A EMILIN-1 (E1-E933A) transgenic mouse model, expressing a mutant EMILIN-1 unable to interact with α4/α9β1 integrins. Interestingly, upon chronic treatment with DSS, E1−/− and E1-E933A mice were characterized by the presence of increased inflammatory infiltrates, higher colitis scores and more severe mucosal injury with respect to the wild type (E1+/+) mice. Since alterations of the intestinal lymphatic network are a well-established feature of human inflammatory bowel disease and EMILIN-1 is a key structural element in the maintenance of the integrity of lymphatic vessels, we assessed the lymphatic vasculature in this context. The analyses revealed that both E1−/− and E1-E933A mice displayed a higher density of LYVE-1 positive vessels; however, their functionality was severely compromised after colitis induction. Taken together, these results suggest that the loss of EMILIN-1 expression may impair the resolution of inflammation during colon cancer progression due to decreased lymph flow and impaired inflammatory cell drainage.
On input-output economic models in disaster impact assessment
In , disaster has been defined as the set of "consequences of a natural or man made hazard".Considering the multi-faceted nature of primary triggers and outcomes encompassed by this definition, today one of the core research areas in disaster analysis is related to the assessment of resulting spillover effects in networked systems and societies.A critical review of the nature of cascading disasters and their fallouts was proposed in , pointing out elements of interest such as "interdependencies, vulnerability, amplification, secondary disasters and critical infrastructure". ,introduces a magnitude scale for classifying incidents, disasters and catastrophic events.Domino effects are sometimes difficult to forecast and unexpected in magnitude and extent when compared with their causing factors.This is also the case as far as the economic losses associated with disasters are concerned.In this domain, a quantitatively precise evaluation of indirect losses remains a challenge, considering the complexity of many economic environments, the diverse nature of disaster contingencies, data constraints and restrictions intrinsic to the analytic tools in use, which are often tailored to specific hazard types, geographical areas and/or historical moments.In spite of that, economic theory has a long-standing interest in the discipline of disaster impact assessment and a number of principles have been established towards sound economic loss estimation , whereas many aspects remain open for debate and further inquiry.For instance, observes that even the way of counting economic losses over time has been addressed quite variably in the literature.Clearly, different analysis frameworks can be more or less adequate to assess disaster consequences at the micro-, meso-, and macroeconomic scale.At the same time, objectives in their application can include both an ex post loss quantification and an ex ante risk evaluation, which in turn modulates the validity and effectiveness of specific techniques.Three dominant classes of economic models are mentioned in and a handful of other references towards disaster loss analysis: simultaneous equation econometric models; input-output models; computable general equilibrium models.These families of methods are gaining interest over time since, within modern, strongly intertwined economies, indirect losses may exceed direct ones.Analyses and evidence of this aspect have recently been provided for a series of highly-disruptive events in terms of disaster impact multipliers, see for instance for discussion.An extended body of literature deals with the comparative advantages and disadvantages of the three above-enumerated approaches in the context of disaster analysis .Also, adds Social Accounting Matrix (SAM) methods to the comparison, while considers cost-benefit analysis.Many recent contributions primarily focus on I/O and CGE techniques, which qualify as "the most commonly used and well-documented approaches in disaster impact analysis".On the one hand, I/O models offer linearity as well as a neat way of outlining inter-industry linkages and demand structure, achieved by imposing specific structural constraints.On the other hand, the CGE framework introduces higher flexibility and the possibility to represent a large spectrum of demand- and supply-side elasticities and behavioral responses, typically at the cost of more elaborate assumptions needed to describe the mutual adjustment of prices and quantities.In disaster analysis, I/O models are often regarded as overestimators of economic losses, while
CGE models as underestimators .As far as the analysis time horizon is concerned, I/O models are often preferred for short-horizon estimation, whilst CGE models for long-horizon estimation .Besides, complexity considerations tend to promote the use of CGE on a narrower set of sectors than in the I/O case.See for a review of CGE applications to the analysis of economic impacts of disasters and for a broader comparison between the two approaches.In this paper, we specifically focus on I/O techniques, which have been gaining attention as instruments for rapid assessment of the cascading economic effects of disasters.The representation of backward and forward linkages allowed by I/O models can serve towards the identification of key sectors, the appreciation of different sources of change and of the system's sensitivity, and the comparison of economies.Moreover, the I/O approach has inherent affinities with methodologies for the analysis of cascading events from other domains, which facilitates their integration.As for the operational use of I/O techniques, a set of useful observations can be found in Ref. , while addresses the estimation of some catastrophe-relevant quantities such as production capacities.In itself, the empirical construction of the I/O datasets underlying I/O models involves a number of different practices and procedures .However, the exploitation of I/O approaches in disaster analysis is facilitated by the increasing availability of I/O databases , the support provided by statistical accounting frameworks such as the System of National Accounts 1, and the development of industry classification standards such as ISIC, JSIC, NACE, SIC and NAICS.2,Some significant instances of publicly available I/O data sources are listed in Table 1.Observe that the data coverage available today goes beyond pure monetary accounting.An example is provided by physical input-output tables, which constitute key instruments in environmental-economic analysis .Moreover, a number of initiatives are in place to collect information on disasters and associated losses, e.g.
the OFDA/CRED International Disaster Database EM-DAT3 .The International Input-Output Association4 manifests raising interest in the correlation of disaster impact analysis and I/O techniques, as well as in opportunities and solutions to address some of the shortcomings pointed out in the above-mentioned references.In the last years, two special issues of journal Economic Systems Research have been specifically devoted to this subject.Opening the first of them, dated 2007, witnesses the emergence of integrative approaches, “in which IO models are combined with engineering models and/or data, in order to estimate higher-order effects that are more sensitive to the changes in physical destruction”.The author also acknowledges time horizons, geographical space and in-built counteractions among the current issues in I/O-based disaster analysis.Introducing the 2014 special issue, expands over the latter reference by reflecting on the role of disaster impact analysis in terms of both post-hazard impact estimation and pre-hazard assessment.I/O methods are compared with CGE and SAM approaches.The difficulty of evaluating impacts in the long run is recognized.Finally, empirical data availability and reliability, modeling for decision making and impacts on economic structures are pointed out as key categories in current research on disaster impact analysis.In continuity with these contributions, in this paper we aim at providing an overview and some discussion on the recent developments of I/O analysis techniques for the study of disaster repercussions on an economy, putting an emphasis on cascading effect modeling and resilience assessment.At the outset of our discussion, we will overview two core classes of I/O models developed in the literature, namely demand- and supply-driven formulations.While the first one starts from an exogenous assignment of demand and assumes fixed production functions characterized by constant proportions of inputs, in the second case supply is assigned together with fixed allocation functions.The two approaches, jointly with their extensions, provide a reference frame for mapping a number of practically relevant disaster descriptors.A key aspect is the expression of exogenous perturbations in terms of either demand or supply shock.In addition to that, recent works highlighted further non marginal features of disaster analysis, including mixed impact localization, short- and long-term impact accounting issues, import/export re-balancing, substitution effects, capacity constraints, adaptation.Such a variety of aspects challenges traditional I/O formulations, and lately the scientific community has tackled the integration of some of these factors in a framework of one or the other type.Also, hybrid techniques have been proposed to combine demand- and supply-oriented approaches in addressing the peculiarities of disaster impact analysis.Finally, some studies also delve into the notion of economic resilience and its relationships with the I/O formalism.The property, as outlined in recent literature, depends on structural factors as well as behavioral responses to critical events; it is affected by conjunctural circumstances; it can be sensitive to different drivers such as the action of a single regulator, a polycentric governance and the market.As such, it opens interesting research perspectives and challenges for the I/O modeling community.Based on general equilibrium theory, the standard I/O model proposed by Leontief gained spread recognition as one of the cardinal 
contributions to the study of multi-sector economies, the latter being described in terms of the static balance between demand and supply at the level of both intermediate and final exchanges.One of the key uses of the model is multiplier based analysis, whose objective is “to assess the effect on an economy of changes in the elements that are exogenous to the model of that economy” ."Anyhow, multiplier analysis has limitations in that a sector's importance to the economy is not necessarily pointed out comprehensively by the score of a given multiplier, and import/export may affect it considerably.To conclude, observe that the I/O representation allows to incorporate empirical data specific to an economic area and its exchanges with other regions through import and export information.These features considerably contributed to the success of the I/O approach in the last decades.After its introduction, the Ghosh model underwent huge questioning as of its plausibility.Early criticism, in particular, disputed over the assumption of perfect demand elasticity to changes in supply and some improper handling of value-added terms, while alternative formulations were proposed as well .Ref. further criticizes the perfect input substitutability implied by the model, while it suggests the supply-driven formulation as reasonable in case of small perturbations.These aspects are taken into account in Ref. to further endorse the implausibility argument.Work was done to validate the role of the Ghosh model in I/O analysis in its interpretation as a price model, with a corresponding quantity model .See the latter reference and for discussion on the relationships between quantity and price models of the two classes.In Ref. , a re-interpretation and extension of supply-based approaches suggests the joint usefulness of the two methodologies in order to tackle non-stationary conditions occurring in-between two successive market equilibria."Still, questions various aspects of Ghosh's model and its overall interest.See for a more extended account of recent debate on the topic of supply-driven I/O formulations.The elaborations proposed in the literature over the two standard frameworks presented above offer quite a vast landscape of options.Next we provide a short overview of some of the most important developments and pointers to related literature.See also for a list of extensions and applications.In the standard Leontief model presented in Section 2.1, final demand represents an ‘exogenous sector’ with respect to the economy.However, the possibility to formulate a closed model has been explored, too.A typical way to perform this operation is based on taking into account the feedback effect introduced by households, considering their purchase power as a function of the amount of labor requested by the productive sector .Closing the model can be considered, anyway, a subtle operation, since it requires cautious estimation of the characteristics of the mentioned feedback mechanism.Similar considerations hold also for the case of the Ghosh model.The possibility of higher plausibility has been identified for the Ghosh model in the closed version .The two above-mentioned aspects are just some of the most relevant ones found in the literature.Among other notable variants, there are nonlinear I/O formulations taking into account the complexities of many real production processes , stochastic models , and multi-commodity-per-industry representations .Generalized theories have also been laid down to bring together salient 
aspects of both demand- and supply-driven frameworks and overcome some of their limitations.For instance, this is the case of the mixed I/O model proposed in Ref. , which aims at taking into account supply constraints more extensively with respect to traditional Leontief techniques by exploiting the concept of purchasing coefficients. ,observes the demand-sided nature of this type of model, which exploits a partitioning of supply into constrained and unconstrained sectors.See also for some discussion.As observed in Ref. , "of the various applications of IO models, impact analysis is undoubtedly the most widely used".In this section, we will focus on some of the major aspects related to the representation of disasters and resulting economic impacts in the I/O framework.Our presentation builds on a developing corpus of literature devoted to both general principles and particular applications of disaster economics.This includes the already mentioned and survey , which points out initial costs, and both short- and long-run growth effects among the main features of the problem.Some recent references in the field specifically address advantages and limitations of I/O-based disaster impact representations.An account is provided in about indirect effects of critical events, not always adequately captured by some of the I/O techniques in use.These include: the forward propagation of supply perturbations; the different role and effects of replaceable and irreplaceable components, with potential substitution coefficients; the possible presence of both negative and positive economic impacts resulting from a critical event.See also for further discussion on general equilibrium effects of perturbations and on I/O modeling limitations.In continuity with the articulation of the previous section into demand-driven and supply-driven I/O approaches, here we will especially discuss the interpretation of disasters in terms of perturbations to demand, supply or their ensemble.Furthermore, we will concisely address other aspects of I/O-based disaster representation that have attracted interest in the literature, notably the unfolding of cascading effects in time and space and the treatment of non-market and behavioral effects.This will set the ground for our review of recent I/O disaster analysis methods, which often try to overcome some of the key limitations seen in traditional I/O modeling assumptions and practices.A major point of debate is related to the adequacy of the I/O framework and other equilibrium representations to assess the inherently disequilibrium economic setting that can be induced by a large disaster .An aspect of interest concerns, in particular, the relationships between post-disaster economic actions and economic growth.Criticism has been raised in the literature against the traditional von Neumann growth theory assertions that separate the two moments consisting in post-disaster return to equilibrium and subsequent normal growth .In particular, the latter reference, aside from observing the incorrectness of this separation in practical cases, claims potential interaction also in the pre-event phases, as the risk of future shocks may influence pre-crisis growth patterns as well.The plausibility that adverse events may have negligible macroeconomic impacts or trigger positive economic counter-effects is also under theoretical and empirical investigation. ,refers to the possibility that disasters of even considerable magnitude may lead to minor macroeconomic consequences.In Ref.
, technological development speed-up following crisis is considered, together with the potential that it may induce long-run growth of the economy. ,sets on scale the building-back-better opportunities and poverty traps that may emerge, for instance, from the handling of the post-disaster decisions and the availability of spare production capacity.Additionally, with respect to long-run losses stemming from natural disasters, refers to the concept of creative destruction.Finally, some studies have statistically investigated the plausibility issue by mining disaster databases such as EM-DAT and relating the resulting losses to different economic indicators .In the literature, we find different definitions of direct/indirect losses and stock/flow losses.For instance, in Ref. direct losses are associated to stock input losses and indirect losses to flow output losses, wherein “stock input losses refer to material damages and include the existing level of capital, facilities, and inventories”, while “flow output losses refer to outputs and services of stocks over time”.In Ref. , “stocks refer to a quantity at a single point in time” and “flows refer to the services or outputs of stocks over time”.On the other side, “direct flow losses pertain to production in businesses damaged by the hazard itself”, whereas a terminological improvement is proposed to overcome the ambiguity in qualifying “indirect effects” as either “all economic impacts beyond direct” or, according to I/O parlance, “interactions between businesses” alone.See also for further discussion.Related to the distinctions made above is the issue of double counting of losses .For instance, emphasizes the importance of evaluating indirect costs in terms of value added. ,warns against the issue of double-counting in exogeneization procedures.Furthermore, argues that damages should not be included in the estimation of losses.As observed in latter reference, a correct interpretation of stock and flow losses should lead to avoid double counting issues deriving from simply computing the total impact as the sum of those belonging to the two types.Moreover, flow losses should be further articulated into those components directly deriving from stock losses and those induced by linkages.Loss assessment in the I/O disaster analysis literature tends to polarize around the two aspects of shock impact localisation and propagation paths.The first, typically associated to the assignment of exogenous variables, is discussed next.The second, mediated by the structure of I/O tables used for analysis and by other factors such as time and space, will be scrutinized in the next subsection.Demand-side perturbation.In continuity with the demand-driven nature of the standard Leontief model, a large portion of the literature on I/O-based disaster impact analysis concentrates on demand perturbations, which finds justification in a number of applications.In Ref. 
, for instance, consumption alteration is qualified as a consequence of “security concerns of the general public”, such as in the case of a terrorist event.A topic of debate in the scientific community is related to the issue of persistence of demand-side perturbations depending on the type of triggering circumstance.The relevance of this aspect is corroborated when considering that demand shock may introduce demand redirection, as well .The estimation of impacts of demand perturbation through demand-driven models is not free from risks.As observed in , in fact, even estimating the economic consequences of a negative demand shock through standard I/O models may induce some issues related to double counting of impacts on total output and labor income.Supply-side perturbation.Representing and analyzing impacts in terms of demand is not always exhaustive.In Ref. , for instance, it is observed that “a disruption caused by a disaster is most often a disruption in the supply side of the production chain”.In the context of I/O models, a supply shock should also be considered as a perturbation to internal demand to properly assess its propagating consequences. ,observes that supply alteration can be further articulated in terms of internally constrained and externally constrained supply.Notably, changes in the I/O structure of specific regions throughout a post-catastrophe time horizon have been assessed in specific empirical analyses.For instance, in Ref. homogeneity of sectoral recovery in the aftermath of the 2011 Tōhoku earthquake is studied in conjunction with trade and demand changes.Many complexities underlying the modeling of supply-side shocks have been well outlined in Ref. .In particular, the reference discusses some inconsistencies deriving from the use of demand-driven formulations to estimate supply-side impacts on top of demand impact estimation.Supply-driven perturbations are particularly in contrast with the fixed allocation logic pertaining to the standard Leontief model.Furthermore, supply-driven perturbations can allow overcoming the assumption that the economy proportionally shrinks in all sectors .Mixed-sided perturbations.Since the early days of the I/O application to disaster analysis, methods to blend demand- and supply-side perturbations were searched for.Notable examples are provided by and related literature, which introduce a partitioning of the standard I/O model and an exhogeneization procedure for supply-constrained sectors.Also , applies a mixed I/O model to better account for supply constraints.Recently, a connection has been found between the simultaneous effect of demand- and supply-driven perturbations and the articulation of impact assessment into short- and long-term components.In Ref. 
, the possibility of widespread negative impacts along supply chains is addressed, considering them as "caused by the backward effect of the direct drop in demand in the region at hand, and by the forward effect of the direct drop in the supply of its output".Moreover, two major aspects of interest related to the analysis of the causal paths associated with shocks concern the description of cascading effects in time and space.The application of I/O techniques for loss assessment over different time horizons is controversial.A primary criticism is related to the inability of standard I/O models to incorporate substitution effects, which may be relevant particularly in the long term and may induce loss overestimation issues .This is also material to an improved estimation of the recovery phase duration.Related aspects concern production simultaneity and synchronization.In addition to that, time horizon concerns emerging in recent discussion include the following topics: time resolution: an important consideration related to the use of I/O models is that "unexpected events often generate the bulk of their impacts within time periods that are shorter than the time interval of the model's observation or solution" .This aspect often differentiates between economic and, for instance, engineering impact assessment datasets and methods ; time disaggregation and economic cycle: work has been done, notably, in order to disaggregate I/O tables in time and provide a more detailed and dynamic view of factors such as trends and seasonal effects, see for instance and related references.The literature also focuses on the topic of time disaggregation of recovery processes ; interaction of disaster impacts and long-run economic trends: this aspect has been taken into account in a number of case studies; in , for instance, it is observed that the superimposition of structural economic trends and disaster impacts can result in complex responses of economies under the action of critical stressors; time compression: points out that, following the disruption of an economic equilibrium, recovery actions may induce a non-gradual, accelerated progression towards the next steady state.A comprehensive discussion of methodologies for regional and interregional analysis can be found for instance in Refs. .Notable case studies include the nonsurvey approach proposed in Ref.
, and the hybrid method found in .The aggregation error resulting from performing regionalizations was studied, for instance, in .The literature also includes models tuned to cover the absence of regional data, such as the Leontief-Strout gravity model .The coupling of disaster space localization and geographical resolution of available I/O tables is among the core research issues today.An emerging concern is related to the role of globalization in determining disaster impacts and the challenge of analyzing global supply chains .Also, the topic should be put in the broader context of the development of joint regional tools supporting a global assessment of damage impacts .Additional geographical considerations pertain to the qualification and quantification of import/export perturbations, which may assume the double role of,a consequence of the impacting threat affecting the import/export activity itself;,a result of regional demand perturbation, inducing the need to compensate demand imbalances or to fulfil reconstruction needs .The necessity for both a temporal and a geographical articulation of imports/exports disruptions was emphasized, for instance, in Ref. ."In addressing interregional and international spillover effects through I/O models, one of today's research branches aims at including the analysis of feedback effects observed in multi-regional studies .An issue related to the use of I/O models in the context of disaster analysis is about the presence and significance of non-market effects. ,in particular, refers in this sense to losses due to the lack of provision of public services, e.g. public infrastructures.Further discussion on the topic is found for instance in , referring to damages that do not respond to market purchases in recovery.Furthermore, reflects on the fact that focusing on flow losses in the analysis of non-market effects can be primary, as stock losses may be difficult to quantify. ,offers further observations on the non-negligible role of individual behaviors in determining the overall economic response to stress.Finally, among the most relevant behavioral aspects to be taken into account, we have to consider the in-built disaster counteraction mechanisms of societies, as illustrated for instance in Ref. 
.In the last decades, huge progresses have been registered both in I/O methodologies for disaster impact analysis and in their empirical application.A number of major recent events have been studied by means of this class of techniques, see Table 2 for a literature mapping.The relevance of these developments is also gauged by the integration of I/O modules in some reference disaster analysis and decision support tools in use.Our review of pivotal I/O methods for disaster impact analysis will be structured around the static and dynamic categories.Despite the fixed-structure limitations found in traditional I/O analysis, successive developments such as exogenous superimposition of structural changes and enhanced structural analysis methods helped to deal with a number of aspects related to disaster representation.Interestingly, structural analysis is also one of the key tools used today for industrial performance monitoring of productive sectors and of countries, see for instance the OECD STructural ANalysis Database.5,As discussed in Section 2, multiplier analysis stands very much at the core of I/O techniques .Research on multipliers is documented, for instance, in , and one of the aspects of interest is the breakdown of economic consequences into direct and indirect effects.Moreover, the reality of demand- and supply-sided factors led the literature to the investigation of both backward and forward linkages of economic sectors in I/O models, which can be helpful in assessing the results of demand-driven supply-side alterations and vice versa.The application of structural analysis techniques led to significant achievements in recent disaster impact analysis literature.For instance, were able to detect tangible structural changes in time due to a disaster and subsequent reconstruction activities.In the domain of key sector analysis, another fundamental reference technique is the hypothetical extraction method, see for attribution information. 
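To make the multiplier, linkage and extraction concepts recalled above concrete, the following is a minimal numerical sketch in Python based on a hypothetical three-sector transactions table; the figures, and the simple complete-extraction variant used here, are illustrative assumptions and are not drawn from any cited database or reference.

```python
# Minimal sketch of multiplier and hypothetical-extraction analysis on a
# hypothetical 3-sector economy (toy numbers, not from any cited table).
import numpy as np

Z = np.array([[30., 20., 10.],     # inter-industry transactions
              [15., 40., 25.],
              [20., 10., 30.]])
f = np.array([40., 20., 40.])      # final demand
x = Z.sum(axis=1) + f              # total output by sector

A = Z / x                          # technical coefficients (column-wise shares)
I = np.eye(3)
L = np.linalg.inv(I - A)           # Leontief inverse

# Backward linkages: output multipliers = column sums of the Leontief inverse.
backward = L.sum(axis=0)

# Forward linkages from the supply-driven (Ghosh) side: row sums of (I - B)^-1,
# with B the allocation-coefficient matrix.
B = (Z.T / x).T
G = np.linalg.inv(I - B)
forward = G.sum(axis=1)

# Hypothetical extraction of sector k: zero its row and column in A (and its
# final demand) and measure the drop in economy-wide output this would cause.
def extraction_loss(k):
    A_k = A.copy()
    A_k[k, :] = 0.0
    A_k[:, k] = 0.0
    f_k = f.copy()
    f_k[k] = 0.0
    x_k = np.linalg.solve(I - A_k, f_k)
    return x.sum() - x_k.sum()

print("backward linkages:", np.round(backward, 3))
print("forward linkages :", np.round(forward, 3))
print("extraction losses:", [round(extraction_loss(k), 2) for k in range(3)])
```

An actual study would of course start from a published I/O table, such as one of the sources listed in Table 1, and would typically adopt normalized linkage indicators rather than the raw sums used in this sketch.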
A related contribution discusses the role of both the classical multiplier method and the hypothetical extraction method in evaluating internal and external impacts, while it proposes a hybrid approach allowing one to disaggregate external and internal effects.Among the recent results, a generalized hypothetical extraction method has been introduced together with a mixed exogenous/endogenous I/O model as an alternative to the standard IIM to analyze losses by imposing the structural change determined by disasters.However, it has been observed that the algorithm indeed estimates backward effects resulting from the removal of demand on a particular sector, while forward effects are not properly captured.In recent years, research in I/O modeling has displayed a fertile interaction with econophysics and complex network theory in particular.This connection finds a pivotal point in interpreting I/O tables as weighted, directed networks with boundary conditions .Studies in this domain are shedding light on the links between the underlying topologies of I/O tables and emerging performance, including response to shocks.In the first place, this entails a characterization of I/O structures in terms of each sector's systemic importance, centrality or other metrics, often with a multi-regional outlook .Moreover, as far as perturbation analysis is concerned, areas of investigation include the fundamental relationships between network metrics and shock propagation, the development of shock-diffusion models, and the assessment of economic stability properties .Along the same lines, we can observe surging attention towards the vertical trade perspective and the analysis of spillovers in global value chains.This activity is supported by the emergence of inter-country I/O databases endowed with increasing levels of detail on bilateral transactions.In this sense, data have been exploited in various ways, e.g. to identify trends based on value chain decomposition , to assess the role of industries and countries in the global perspective and to investigate competition and collaboration .Some of the metrics proposed in the domain are apt to account for both demand and supply shock transmission, such as in the case of output upstreamness and input downstreamness .Network analysis has also been applied in order to relate the probability of spillover effects to the existence of global hub sectors playing a primary role in shock transmission .Finally, one may observe that some of the recent literature on multi-sector real business cycle modeling investigates cascading effects through the lens of I/O network layouts and structural properties.Notably, a conceptual framework for the study of cascading effects is proposed in Ref. , which examines the relationships between low-/high-order interconnections and aggregate volatility.Moreover, questioning the assumption that idiosyncratic shocks to individual firms cancel out in the aggregate , Ref. connects the asymmetries in the I/O structure of economies to the possibility that microeconomic shocks may induce macroeconomic fluctuations.See also and related references for further discussion.The connection between I/O analysis and optimization techniques is deeply rooted.For instance, as discussed in Ref.
, the pioneering work on linear programming by George Dantzig led to the formulation of a Leontief substitution model, both in a static and a dynamic version .This achievement allows the evaluation of different ways of aggregating final products starting from inputs based on the simplex method.Linear and nonlinear programming techniques matched with I/O models have also found use in disaster impact analysis, especially in order to allow specific degrees of flexibility useful in this type of applications."Among the recent contributions, proposes a nonlinear programming formulation based on Leontief's model and including the representation of production bottlenecks.The impact assessment model presented in Ref. combines demand-driven I/O modeling with linear programming in a multi-regional perspective, implementing a constrained production costs minimization principle that takes into account demand, technological restrictions and maximum production capacities.A key advantage of the method is its ability to describe inefficiencies associated to the presence of those constraints, which is particularly relevant towards disaster impact analysis.In Ref. , nonlinear programming is employed to describe the short-run response of an inter-regional, inter-industry economy to shock, formalizing the effort to re-establish pre-event transaction levels.To this end, the authors assume “fixed technical coefficients, flexible trade coefficients, partial import and export substitution, and minimum information gain with endogenous totals”.Both regional production shocks and inter-regional trade shocks can be accommodated in this representation.The I/O Inoperability Model was introduced in its theoretical foundations in Ref. , wherein it is expressed in physical terms, and , through a demand-reduction formulation that allows to exploit standard I/O tables towards parameterization.Inheriting the demand-driven nature of the standard Leontief model, the IIM translates critical events into demand perturbations and assumes infinite elasticity of supply.In time, the IIM has gained attention as a tool useful to jointly assess inoperability and economic losses that result from critical events.For instance, in a post-disaster analysis perspective, associate IIM representations to different successive post-event regimes.Notable applications of IIM in disaster analysis include the cases of terrorism , electrical blackouts and shocks in transportation systems .Moreover, extensions and integrations of the basic model are numerous in the literature.One such case is the IIM formulation in Ref. , which exploits a fields of influence approach for mapping the most relevant technological sectors, see also . ,presents an IIM taking into account international trade inoperability.In Ref. , a methodology was introduced for IIM parameters assessment in the context of critical infrastructures, exploiting technical and operational data.A supply-driven IIM was proposed in Ref. on the basis of the Ghosh I/O representation; the technique was applied in Ref. for risk analysis in manufacturing systems.Optimization techniques have also been combined with IIM models.For instance, within a risk management perspective, applies linear programming to determine the optimal distribution of initial inoperability leading to minimum total losses.In Ref. 
, the IIM was criticized as “a straightforward application of the standard input-output method”, while the relevance of the modeling work enriching the basic framework was acknowledged.Limited usability was attributed to the IIM in Ref. , observing that the model “tries to estimate only a subset of mainly the negative impacts”.See both references for further discussion.The research community developed an interest in I/O analysis frameworks able to cope with complex perturbation types and constraints.For instance, on the basis of , introduces a technique to impose supply-side output constraints to selected sectors. ,exploits a mixed multi-regional input-output model to evaluate the global effects of a supply chain perturbation induced by a disaster.A generalized I/O model is proposed in Ref. starting from a comparison of Leontief and Ghosh approaches and applying ideas elaborated on top of the total flow concept.Impacts on production are attributed to an ensemble of agents, e.g. “consumers, producers, workers, and investors”, to better reflect the role of different stakeholders in the supply chain.In this way, both supply-driven and demand-driven perturbation propagation factors can be simultaneously captured.A number of recent initiatives are related to the exploitation of I/O modeling in the construction of analysis and decision support tools, which in time have been expanded and integrated with other disaster assessment techniques, even beyond a purely static analysis framework.A notable example is HAZUS, a multi-hazard analysis tool proposed by the US Federal Emergency Management Agency,6 see for an overview of its development.As part of the portfolio of risk estimation methodologies proposed therein, the evaluation of both direct and indirect economic losses is included.While the direct component involves capital stock losses and income losses, indirect losses are evaluated by means of an I/O model . 
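As a schematic sketch of the demand-reduction inoperability formulation discussed above, the following Python fragment rescales a technical-coefficient matrix by sectoral outputs and propagates a normalized final-demand perturbation; the coefficient matrix, the output vector and the 10% shock are hypothetical values chosen for illustration, not parameters taken from the cited works.

```python
# Schematic sketch of a demand-reduction inoperability computation in the
# style commonly presented in the IIM literature (toy numbers: coefficients
# and shocks are rescaled by sectoral output).
import numpy as np

A = np.array([[0.30, 0.20, 0.10],   # technical coefficients (hypothetical)
              [0.15, 0.40, 0.25],
              [0.20, 0.10, 0.30]])
x = np.array([100., 150., 80.])     # pre-event total output by sector

# Normalized interdependency matrix A* = P^-1 A P, with P = diag(x).
P = np.diag(x)
A_star = np.linalg.inv(P) @ A @ P

# Demand perturbation: e.g. a 10% drop in final demand for sector 0,
# expressed as a share of each sector's output.
c = np.array([10., 0., 0.])
c_star = c / x

# Equilibrium inoperability q = (I - A*)^-1 c* and the associated output loss.
q = np.linalg.solve(np.eye(3) - A_star, c_star)
loss = q * x
print("inoperability:", np.round(q, 4))
print("output loss  :", np.round(loss, 2))
```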
,explains that the method is based on the concept of rebalancing of demand and supply in a standard Leontief I/O model based on adjustments in imports and exports, see also .The technique also takes into account supply constraints and factors such as the presence of inventories.In the recent literature, we can find an extensive set of case studies involving HAZUS-aided economic loss estimation, see for instance .Another case of interest is represented by the National Interstate Economic Model, an operational MRIO focusing on the United States and which has allowed, in time, the introduction of extensions and refinements such as TransNIEMO and FlexNIEMO .A number of studies proposed in recent years are investigating the time evolution of I/O economic networks through the comparison of I/O tables referring to different years.Examples include temporal inverse analysis techniques and methods based on network theory .This research direction has strong ties with the discussion on structure analysis methods proposed above in this section."According to , while the theoretical characterization of Leontief's dynamic I/O model was extensively addressed in the literature, its empirical application brought up a number of issues.One of the ways found in research to address this problem was by the introduction of techniques enriching static I/O methods with dynamical features, obtained for instance by exploiting series expansions of the Leontief inverse.This is the case of lagged I/O models, see for instance .A somehow affine approach to the formulation of dynamic I/O representations is based on the use of the Sequential Interindustry Model, see for instance .Introduced in Ref. , the method aims at describing the inter-industry production web dynamically, by integrating the Leontief I/O framework with technological aspects related to the way production is performed.“In the SIM, production is not simultaneous as in the static input-output model, but rather occurs sequentially over a period of time” .This leads to the definition of an industry time interval internally articulated to include production and shipment time frames.The SIM allows to compare differences in production modes, notably anticipatory and responsive production as well as their combination.It is also employed in Ref. as a tool for temporal disaggregation towards impact analysis in regional economies.Interesting applications of the SIM to disaster impact analysis can also be found, for instance, in .In the latter reference, in particular, a discussion is proposed about the applicability of the SIM to economic impact quantification of unscheduled events.It has to be mentioned that the I/O equilibrium framework has opened the doors for the formulation of dynamic disequilibrium characterizations, for instance through supply-demand disequilibrium models and equilibrium-disequilibrium switching .Further relevant observations on imbalance in terms of partial versus general equilibrium effects, as well as micro-/meso-/macroeconomic effects, can be found in Ref. 
in the context of computable general disequilibrium analysis.See also for further considerations about recovery processes and disequilibrium.Recent contributions in this direction include , wherein post-disaster recovery is represented as a two-step process: the first stage aims at re-establishing the pre-disaster relationships between outputs, while the second one aims at returning to the pre-event output levels.The modeling objective is met by combining an I/O table with an event accounting matrix.At the core of the method is the construction of the “basic equation”, an I/O equation depicting the imbalances of the system resulting from a disaster.Imbalanced economic recovery has instead been considered explicitly in Ref. by means of dynamic inequalities.The idea is to express the contributions of post-crisis drivers such as labor, capital and final demand in terms of constraints affecting the dynamics of recovery between consecutive equilibrium conditions, formulated in terms of Leontief models.The method can be used to trace the temporal progress of an economy and to perform sensitivity analysis.In time, the IIM underwent dynamic extensions, notably through the Dynamic Input-Output Inoperability Model , wherein a resilience matrix is used to specify the inoperability dynamics.A comparison of the DIIM and the dynamic Leontief I/O model can be found in .See also for an analysis of IIM and DIIM and the proposal of an alternative approach based on systems dynamics principles.Similarly to the case of IIM, a number of applications of DIIM appeared in the literature, including the cases of terrorism , natural events and epidemics .Elaborations of the basic DIIM were meant to include different features of the recovery processes and mitigating factors such as inventories , as well as to incorporate recovery strategies based for instance on sector prioritization logics .A hybrid DIIM and event tree analysis was introduced in Ref. to specify time-varying recovery models.Also, proposed a fuzzy DIIM for the analysis of global supply chains.While, in principle, the DIIM approach is based on the concept of demand-side shocks, other possibilities have been considered, too.For instance, in Ref. a time-varying perturbation on workforce availability has been taken into account.Based on the above-mentioned SIIM, presented a dynamic extension of the SIIM, related to the DIIM.The Adaptive Regional Input-Output model has been introduced in Ref. to the purpose of disaster modeling and is based on a Leontief model plus additional features to cover limitations of the basic technique in the context.In particular, the methodology considers demand perturbation, while it adds constraints on supply capabilities, expressed in terms of production bottlenecks, and proposes a rationing scheme determining priorities associated to the demands to be served.Furthermore, price dynamics are evaluated as a function of underproduction, whereas adaptation capabilities are modeled at the levels of final demand, intermediate consumption and production.The formulation described in the reference above, in particular, focuses on the following behavioral parameters: overproduction, by the exploitation of spare production capacity; adaptation, which obeys the principle of reconfiguring demand if possible and bringing it back when possible; demand and price response. 
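The following Python sketch illustrates a dynamic inoperability recursion in the spirit of the DIIM-type models just discussed, with a diagonal matrix of resilience coefficients governing the speed of recovery; the interdependency matrix, the resilience coefficients, the initial inoperability and the 1% recovery threshold are all hypothetical assumptions used only to show the mechanics.

```python
# Illustrative sketch of a dynamic inoperability recursion,
# q(t+1) = q(t) + K [A* q(t) + c*(t) - q(t)],
# with K a diagonal matrix of sector resilience coefficients.
# All numbers are hypothetical; the recovery metrics are simple examples.
import numpy as np

A_star = np.array([[0.20, 0.10, 0.10],   # normalized interdependency matrix
                   [0.10, 0.20, 0.10],
                   [0.10, 0.10, 0.20]])
K = np.diag([0.40, 0.25, 0.50])          # resilience (recovery speed) coefficients

T = 60
n = 3
q = np.zeros((T + 1, n))
q[0] = np.array([0.40, 0.10, 0.00])      # initial inoperability after the shock
c_star = np.zeros(n)                     # no further demand perturbation after t = 0

for t in range(T):
    q[t + 1] = q[t] + K @ (A_star @ q[t] + c_star - q[t])

# Example metrics: time until all sectors fall below 1% inoperability,
# and the time-summed inoperability per sector.
recovered = np.where((q < 0.01).all(axis=1))[0]
time_to_recovery = int(recovered[0]) if recovered.size else None
print("time to recovery:", time_to_recovery)
print("cumulative inoperability:", np.round(q.sum(axis=0), 3))
```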
Building on the ARIO model, a subsequent contribution introduces a disaggregated view of the production system as a web of producers-consumers specified on the basis of assigned degree distribution configurations.Disaster response characteristics, particularly robustness, are then studied as a function of network features such as concentration, clustering and subregion connectedness.The role of inventories is also taken into account.Another work extends ARIO by including expanded categories of supply.Inventories are consequently introduced in the model, together with their filling and depletion dynamics, while production bottlenecks and input scarcity are also allowed by the representation.In the proposed case study, two phases are identified for the recovery period: an initial stage, dominated by production bottlenecks, and the rest of the reconstruction period.Also in the domain of dynamic I/O-based analysis, some of the recent contributions focus on the integration of I/O techniques with other economic models such as the ones mentioned in the introduction of this paper.Notable is the concept of merging I/O and CGE principles, which can be found for instance in some of the already mentioned models such as ARIO.Moreover, the combination of I/O models and econometric models has been considered in the literature .Instances include INFORUM models and FIDELIO 2, a fully interregional dynamic econometric long-term I/O model for the EU and beyond .In some cases, analysis frameworks have also been constructed by involving multiple techniques among those reviewed above in this section.Such is the case of the multistep procedure proposed in Ref. , which addresses direct loss assessment, economic shock, the pre-recovery period, the recovery period and total consequences.The method contemplates the exploitation of the basic equation and of the ARIO model at specific analysis stages.A generalized dynamic I/O framework is proposed in Ref. by combining intertemporal dynamic modeling principles with the intratemporal representation of production and market clearing.The approach makes it possible to consider both demand and supply constraints and has a strong nexus with static and dynamic Leontief models as well as with SIMs.Another interesting trend in dynamic analysis is that of merging the advantages of the I/O-based economic representation with those of heterogeneous modeling components able to track more specifically the dynamical features of the systems involved in the disaster scenarios under consideration.Examples of couplings with technological models can be found for instance in Ref.
and . Other recent references include , integrating the I/O framework with a biophysical model for flood risk assessment, and , proposing a combined I/O technique and system dynamics ecological model. Resilience in economic systems is a central topic in today's research in disaster impact assessment and mitigation. Aspects of great interest include the definition and measurement of this property as well as the associated policy implications . Recently, a number of characterizations of economic resilience relevant to multi-industry representations have been provided in . The latter reference, in particular, qualifies this notion as focusing more on flow losses than on stock losses. Additionally, it defines static economic resilience as "the ability of a system to maintain function when shocked", while it defines dynamic economic resilience in terms of "hastening the speed of recovery from a shock". Moreover, it is possible to distinguish between inherent and adaptive aspects of a resilient economic behavior, as well as between final customer-side and business-side resilience measures. In turn, the latter can be broken up as follows, taking into account the double nature of businesses as customers of intermediate goods and as suppliers: customer-side measures, which represent ways for the different industries to effectively exploit available input resources in order to minimize impacts on their own activity; and supplier-side measures, where the focus is on the ability of businesses to keep delivering service. For both cases, the reference discusses a categorized series of resilience options. Micro-, meso- and macroeconomic levels are taken into account in this study. Finally, resilience indicators and indexes from the literature are assessed. Traditional approaches to impact analysis based on I/O models are challenged in providing resilience-oriented interpretations of economic systems and applications, as resilience "places greater emphasis on flexibility and responding effectively to the realities of disequilibria, as opposed to unrealistically smooth equilibrium time-paths" . In recent years, remarkable efforts have been made by the scientific community in the use of I/O techniques to address various aspects of resilience analysis. In the first place, I/O structures are being studied in the literature as possible determinants of shock response and resilience attributes of economies. This topic is inherent, for instance, to a number of works on structural analysis and on network-theoretical methods. Moreover, empirical validation has been performed in recent works, especially in a regional perspective. For instance, I/O methods are employed in Ref. to assess regional labor market resilience. Adopting an evolutionary approach, the two phases of shock and recovery are considered. Key factors of regional resilience are identified in embeddedness, relatedness and connectivity, where the first reflects the dependency of shock propagation on the I/O structure of the region, while the other two are associated with intersectoral and interregional labor mobility. Another case can be found in Ref. , combining I/O modeling and shift-share analysis to assess regional resilience to economic crisis. Furthermore, resilience concepts, factors and metrics have been integrated into some of the models illustrated above in this paper, especially with reference to some of the dynamic frameworks. For instance, as mentioned, the DIIM was complemented in Ref.
with the representation of the buffering capabilities provided by inventories, while the inventory DIIM was further enriched by considering different types of recovery paths. The DIIM is also studied in Ref. through the concepts of static and dynamic economic resilience and the related resilience triangle representation, see . Attributes such as robustness, rapidity, redundancy and resourcefulness allow the formulation of resilience metrics, including the time-averaged level of inoperability, the maximum loss of sector functionality and the time to recovery. A combined demand- and supply-driven I/O analysis framework for resilience assessment was introduced in Ref. ; in the considered port disruption application, resilience measures were identified in terms of: ship re-routing; export diversion; use of inventories; conservation; unused capacity; input substitution; import substitution; production recapture. A risk management perspective has also been adopted in proposing the exploitation of I/O models for resource allocation and prioritization. For instance, the IIM has been exploited in Ref. to address preparedness considerations in a multi-regional perspective. Also, in Ref. inventory resource allocation has been considered in the DIIM by means of an optimization technique taking into account inoperability, inventory costs and technical constraints. Resilience metrics and aspects of the failure and recovery processes reverberate in a number of recent formulations of optimization problems for I/O systems. One such example is , wherein an extended Leontief I/O model is embedded into an energy-economic resilience optimization problem. This relates to the determination of "the minimum level of extrinsic resource recovery investments required to restore the production levels sufficiently, such that the total economic impacts do not exceed a stipulated level over a stipulated post-disruption duration". Finally, decision theory has also benefited from the assimilation of I/O techniques and datasets towards the formulation of resilience assessment methods, see for instance . In Ref.
, some reflections are proposed on the emerging challenges and opportunities for I/O analysis: exploiting the increasing volumes of available data and fostering estimation capabilities; integrating the I/O analysis framework with other techniques; tackling the study of global supply chains, emerging economies and global cities; expanding regional accounting systems; exploiting multipliers; favoring supply chain literacy in conjunction with the evolution of the Internet of Things; and increasing the frequency of I/O table computation, taking into consideration both the national and inter-country dimensions. In this paper, in particular, we addressed the relationships between I/O modeling and the assessment of economic losses associated with disasters resulting from both natural and man-made hazards. A major strength of I/O models in this context is their moderate data requirements and their ability to combine with other analysis techniques, such as technological models or market behavioral descriptors. In this sense, they could maintain a relevant role in policy support, especially for large-scale impact analysis, and in determining a cost-effective use of resources . We documented the recent evolution of the discipline to support a better understanding, measuring and counteracting of complex disaster scenarios affecting societies and economies. Theoretical problems and practical case studies explored in research often involve complementary views of rippling phenomena, including both backward and forward aspects of propagation. The literature has considerably expanded and extended classical demand- and supply-driven I/O formulations to take into account the dynamics of critical events and crisis response. The interaction with other disciplines, such as complex network theory, also aims at addressing some of the emerging problems, such as the large-scale behavior of interacting economies and supply chains. Resilience analysis of economic systems also represents an opportunity for an evolved approach to I/O modeling, involving a continuous dialog with complementary analysis frameworks.
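For reference, the classical demand- and supply-driven formulations recalled throughout this review can be written, in their textbook form (notation may differ from the individual cited works), as:

x = (I - A)^{-1} f,        A = Z \hat{x}^{-1}        (demand-driven Leontief quantity model)
x^{\top} = v^{\top} (I - B)^{-1},    B = \hat{x}^{-1} Z        (supply-driven Ghosh model)

where Z is the matrix of intersectoral transactions, x the vector of gross outputs, f final demand, v primary inputs (value added), and \hat{x} the diagonal matrix formed from x; A collects the technical coefficients and B the allocation coefficients.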
During the last decades, input-output (I/O) economic models have assumed a prominent role in disaster impact analysis and resilience assessment. Rooted in general equilibrium theory and economic production theory, they catalyse attention on the distinction between direct economic losses and ripple effects that may be generated inside a multi-industry system as a consequence of perturbations. Empowering the I/O analysis framework and overcoming some of its inherent limitations is crucial in order to successfully approach emerging disaster assessment challenges, such as multi-regional loss quantification and the investigation of shock responses in global supply chains. In this paper, we review and discuss how different disaster modeling aspects have been incorporated in recent contributions exploiting I/O techniques, taking into account both demand- and supply-sided perturbation triggers, static and dynamic representations, as well as the assessment of economic resilience.
42
Superconductivity in U-T alloys (T = Mo, Pt, Pd, Nb, Zr) stabilized in the cubic γ-U structure by splat-cooling technique
The large interest in stabilization of U-based alloys with a cubic γ-U structure has come first from the viewpoint of metallurgy. In the late 1970s massive research programs were launched in the USA to develop low enriched uranium (LEU) fuels . The research showed that U-Mo alloys with the γ-U phase were the most promising candidates for LEU fuels, e.g. they have a higher stability under irradiation and are more resistant to swelling . Indeed, U-10Mo has been selected for the U.S. reactors, while many European reactors have used U-7Mo . This concentration is sufficient to reach γ-U phase stability. In Vietnam, the high enriched uranium rods of the nuclear reactor in the Central Highlands of Da Lat City have been replaced by LEU ones since 2011. From the fundamental research viewpoint, the 5f electronic states in many uranium-based compounds are generally close to the verge of localization, which brings up fascinating many-body physics. However, the fundamental physical properties of elemental uranium have been investigated thoroughly only for the orthorhombic α-U phase , since only this phase is stable at and below room temperature. The superconductivity of natural uranium was first discovered at Tc = 1.3 K in 1942 . Most recent reports gave Tc = 0.78 K . However, no signature of superconductivity was found down to 0.02 K at ambient pressure in good-quality single crystals of uranium, although the charge-density-wave states were found to be fully developed at low temperatures in those crystalline uranium specimens . The basic thermodynamic properties of γ-U phase alloys have been much less investigated and have remained practically unknown. Except for old reports from the 1960s on the superconductivity of the γ-U phase around 2 K in water-quenched U-Mo and U-Nb alloys , there are no more detailed data on the fundamental low-temperature properties of the γ-U alloys. We have been interested in the stabilization of γ-U alloys and the characterization of their fundamental electronic properties, especially their superconductivity. It was shown earlier that the rapid quenching of certain alloys from the melting point could lead to the formation of new meta-stable phases and/or amorphous solid phases . Indeed, the splat-cooling technique has been used in searching for novel microstructures or amorphous uranium . Recently, using ultrafast cooling from the melt to room temperature, we were able to retain the cubic γ-U phase in U-T alloys. In our equipment, the molten metal drops between two colliding massive copper anvils, yielding a cooling rate better than 10^6 K/s. We can then proceed with the characterization of low-temperature properties. Starting with Mo alloying, we succeeded in suppressing the α-U phase with about 11 at.% Mo . We have extended our investigations to other U-T alloys, focusing in particular on their superconductivity. This work is a review of our results obtained to date. U-T alloys with low T concentrations were prepared from natural U and the respective T element by arc-melting on a copper plate in an argon atmosphere. The sample ingots were turned over 3 times to ensure homogeneity. Up to 4 samples could be obtained in one arc-melting cycle without breaking the vacuum, thanks to a special construction of the copper crucible and the chamber. The splat-cooled sample was prepared from the alloy ingot by the splat-cooling technique and had the shape of an irregular disc with a diameter of approx. 20 mm and a thickness of 100–200 μm, as shown in Fig.
1. More details of the preparation of the splats have been reported earlier . Throughout our work, the T content is given in atomic percent. The crystal structure of the splat-cooled alloys was investigated by X-ray diffraction using the Bruker D8 Advance diffractometer with Cu-Kα radiation. The resistivity and specific heat measurements were carried out in the temperature range 0.4–300 K by means of standard techniques using e.g. a Closed Cycle Refrigerator system and a Quantum Design Physical Properties Measurement System described earlier . For investigations around the superconducting transitions, we performed those measurements in applied magnetic fields up to 7 T. Additional phase purity analysis was performed by a scanning electron microscope equipped with an energy dispersive X-ray analyzer. The splats show in most cases a homogeneous distribution of the alloying elements with concentrations corresponding to the nominal ones. Electron backscattering diffraction analysis has been employed to study the microstructure and texture of several splats. The crystal structure of U-Mo splats (1, 2, 4, 6, 10, 11, 12, 13, 15 and 17 at.% Mo) has been thoroughly investigated in order to determine precisely the minimal Mo concentration necessary for obtaining the pure cubic γ-U phase. Details of our investigations of crystal structure and phase stability in the U-Mo system have been reported earlier . For comparison with other U-T splats, we summarize briefly the main outcome obtained on U-Mo splats: 1) the α-U phase has disappeared and the γ-U phase or its tetragonally distorted variant has developed fully in the alloys with Mo contents larger than 11 at.%; a pure cubic γ-U phase without any distortion is revealed only for the U-15 at.% Mo and U-17 at.% Mo splats; and 2) the stable γ-U alloys were obtained in the as-formed state without any additional sample treatment. Thus, the effect of the splat cooling can be seen in a better capability of retaining the bcc type of structure for lower Mo concentrations. No aging or phase transformation/decomposition was observed for any of the splat-cooled alloys when exposed to air. They even show very good resistance against hydrogen absorption in a hydrogen atmosphere at pressures below 2.5 bar . As a small amount of the orthorhombic α-U phase is difficult to recognize by XRD if it coexists with the cubic γ-U phase, EBSD analysis has been performed on several U-Mo splats. Earlier published EBSD results for pure-U and U-15 at.% Mo splats corroborated the XRD data. For instance, the EBSD maps for the U-15 at.% Mo splat revealed a pure cubic γ-U phase with an equigranular grain structure without twinning or preferred crystallographic texture. For as low as 12 at.% Mo, the EBSD maps exhibited full crystallinity with grain sizes of several micrometers and no evidence for α-U related phases . Recently, we have extended our studies to splat-cooled U-based alloys with other T metals. Some of the results were included in our recent publications . We present here a comparison of selected results. The XRD patterns of U-Pt splats in the as-formed state are shown in Fig. 2b.
For an easier comparison, we display normalized intensities. Increasing the Pt concentration leads to the merging of several reflections around 36°, suppression of the low-index α-reflections, vanishing of the high-index α-reflections and the development of γ-reflections. The situation is very similar to the U-Mo alloys, showing a coexistence of both the α- and γ-U phases for splats with less than a 10 at.% alloying level. The XRD pattern of U-15 at.% Pt revealed the four characteristic reflections of the γ-type structure (at 36.8°, 53.0°, 65.3° and 78.2°, respectively), indicating a stabilization of the cubic γ-U phase. However, unlike U-15 at.% Mo with very narrow γ-reflections indicating the fundamental cubic A2 structure, there is a certain broadening of all the γ-reflections in U-15 at.% Pt, similar to that observed in the U-13 at.% Mo splat. It is interesting to compare our findings with the respective binary phase diagrams. The maximum reported solubility of Pt or Pd in γ-U does not exceed 5 at.% . Our results reveal that using splat cooling we not only retain the bcc phase to low temperatures, but also extend its occurrence to much higher concentrations of the alloying Pt/Pd metals. However, SEM analysis indicated a small amount of the binary phase UPt occurring at the grain boundaries, accompanied by a U-Pt alloy depleted in Pt, so the splat cannot be taken as single phase. The normalized XRD patterns of the splat-cooled U-Nb alloys in the as-formed state are shown in Fig. 3a. In general, the increase of the Nb concentration leads to the suppression of α-U reflections and the development of γ-U reflections. It causes the overlap of the low-index reflections around 36°, and the combined reflection then becomes narrower for 10 at.% Nb. For the U-15 at.% Nb alloy, the splitting of the γ-reflections into doublets was observed for all four prominent γ-reflections. For instance, the γ reflection of U-15 at.% Nb splits into a doublet located around 36.3° and 37.0°. The situation is similar to that of alloying with 11–12 at.% Mo, which stabilizes the γ0-U phase. In general, our results show a similarity between the U-Nb and U-Mo systems. Moreover, we expected that using ultrafast cooling could reduce the necessary Nb concentration. Indeed, it turned out that the γ0-U phase is stabilized by 15 at.% Nb alloying, i.e. a lower content than the minimum needed for stabilization of such a phase in water-quenched or argon-quenched alloys . Using combined arc-melting, hot-rolling, annealing and water-quenching, the γ-U phase was stabilized in a U-7 wt% Nb alloy . In the case of the U-Zr system, the situation is similar to that of U-Nb, i.e. complete miscibility in the high-temperature bcc phase. The normalized XRD patterns of the splat-cooled U-Zr alloys in the as-formed state are shown in Fig.
3b. The results illustrate the phase transformation from the α-phase to the γ-phase with increasing Zr concentration. Unlike for the other T alloying, α-U reflections still persist for U-11 at.% Zr and U-15 at.% Zr. They become very broad for U-20 at.% Zr and then vanish for U-30 at.% Zr. Existing reports indicate that single-phase γ-alloys were obtained for Zr concentrations between 25 at.% and 80 at.% . In our case the single γ-U phase can be considered only for the U-30 at.% Zr splat. Moreover, most of the γ-reflections (e.g. the one at 35.9°) are broadened. We attribute such broadening to additional disorder from randomly distributed Zr atoms, especially for alloying with high Zr concentrations. In all splats, UC and UO2 impurity reflections were observed in the low-angle part of the XRD patterns, attributed to surface segregation. Additionally, for the U-Zr system, the presence of ZrC is revealed by its most intense reflections at 33.4° and 38.7°. Traces of carbon are ubiquitous in uranium metal. However, it seems that it couples preferentially only with Zr and has a high surface segregation tendency. The lattice parameters estimated for the γ-U phase alloys are given in Table 1. The atomic radii of Nb, Pd and Pt are equal or close to that of Mo, all of which are lower than the nominal atomic radius of U, while the Zr atomic radius is larger . The lattice parameters of the alloys can be compared with that of γ-U at 1050 K and with the value extrapolated to room temperature considering the thermal expansion. It is evident that the largest lattice parameters, found for the Zr alloying, are related to the Zr atomic diameter. A remarkable fact is the large tetragonal distortion for the Nb alloying, which apparently exhibits c > a, i.e. opposite to that of the γ0-U phase in U-Mo alloys. For a brief summary of the change of the temperature coefficient in splat-cooled U-T alloys with increasing T content in the normal state in the temperature range 3–300 K, we show in Fig. 4a the temperature dependence of the electrical resistivity of U-Mo splats. We concentrate on the two limiting cases, which reveal a striking difference, i.e. the pure-U splat and the U-15 at.% Mo splat. The pure-U splat exhibits a quadratic temperature dependence below 50 K and then an almost linear dependence up to 300 K, i.e. with a positive temperature coefficient. Unlike such common metallic behavior, for U-15 at.% Mo the resistivity weakly decreases with increasing temperature in the normal state over the whole temperature range, i.e. with a negative temperature coefficient. The temperature dependence of the resistivity of the other U-Mo splats lies between these two limits. The U-Mo alloys consisting of both α- and γ-U phases still have a positive dρ/dT, while all U-Mo alloys with the γ-U phase have a negative dρ/dT. As such a change of the temperature dependence appears in conjunction with an increase of the absolute resistivity value, we can deduce that a large disorder effect plays an important role in the splat-cooled alloys, similar to the strong disorder observed e.g. in some amorphous systems or disordered alloys and compounds . The reason for the negative slope can be seen in the weak localization, i.e.
a quantum interference effect occurring in strongly disordered systems . In our case, there is certainly still some extra contribution to the disorder produced by ultrafast cooling, affecting the grain size. It is interesting to review the resistivity behavior of all splat-cooled U-T alloys formed in the γ-U structure. The temperature dependence of the resistivity of these alloys in zero field and in the temperature range of 3–300 K is shown in Fig. 4b. The resistivity values at 300 K and 4 K are given in Table 1. The ρ curves of the U-15 at.% Mo and U-15 at.% Nb splats are quite similar. Besides, the residual resistivity ρ0 and the resistivity at room temperature are also similar. For the U-15 at.% Pt splat, although the resistivity values are twice as high, the relative change of the resistivity in the U-15 at.% Pt curve is very similar to that of U-15 at.% Mo. Namely, from room temperature down to temperatures just above the superconducting transition, the electrical resistivity exhibits a negative temperature coefficient. For U-30 at.% Zr containing the γ-U phase, the negative slope does not develop yet. Instead, we found a very small but still positive slope of the temperature dependence in this splat. It should be mentioned that a negative temperature coefficient was indeed reported for the U-Zr system, but for a sample with 70 at.% Zr . We assume that the negative slope can also be observed for Zr concentrations higher than 30 at.%. All investigated U-Mo splats become superconducting at low temperatures below 2.2 K. The superconducting transitions revealed by abrupt resistivity drops in zero magnetic field are shown in Fig. 5. We focus first on the two cases: the pure-U splat and the U-15 at.% Mo splat. The transition is manifested by a single drop at Tc = 1.24 K and 2.11 K, respectively . We note here the very small transition width ΔTρ = 0.02 K observed for U-15 at.% Mo, while a wider transition ΔTρ = 0.2 K was found for the pure-U splat. However, unlike the λ-type anomaly for U-15 at.% Mo, the superconducting transition in the pure-U splat was revealed only as a small feature around 0.65 K in the specific heat, which is clear evidence against the bulk nature of the superconductivity. We assume that only a small fraction of the sample becomes superconducting. As the impurity phase has to form a 3D network to reach a zero-resistance state, it must be related to the grain boundaries. For other γ-U alloys, such as U-11 at.% Mo and U-12 at.% Mo, the superconducting transition also appears as a single resistivity drop, although broader than that in U-15 at.% Mo. We pay particular attention to the superconducting transition in the U-6 at.% Mo splat , i.e. the intermediate range of Mo alloying consisting of both α- and γ-U phases. The phase coexistence is reflected by a flat but still metallic-type overall temperature dependence. In the low-T range, the resistivity starts to decrease rapidly below 1.6 K. This decrease ends in an abrupt drop into the zero-resistance state at Tc = 0.78 K. The obtained results suggest that there are two different superconducting phases in the U-6 at.% Mo splat, each of them exhibiting its own superconductivity. The lower Tc may be associated with the γ-U phase, as revealed by a sizeable anomaly in the specific heat . The low-temperature ρ dependence of the U-15 at.% T splats measured in zero field is shown in Fig.
5b. We add in the same figure the data for the U-30 at.% Zr splat consisting of the γ-U phase. In all cases, a very sharp resistivity drop was observed at Tc. The estimated values for Tc and ΔTρ are given in Table 1. U-15 at.% Nb becomes superconducting at a similar critical temperature to the other splat alloys consisting of the γ0-U structure. U-30 at.% Zr exhibits a superconducting transition revealed by a single drop at Tc = 0.81 K . The superconductivity in U-15 at.% Pt is characterized by a sharp drop at Tc = 0.61 K. Despite the similarity in crystal structure and lattice parameter between U-15 at.% Mo and U-15 at.% Pt, U-15 at.% Pt becomes superconducting at a much lower temperature. In addition, a second small drop was observed at 0.95 K. As a complicated phase situation was detected for the U-15 at.% Pt splat at the grain boundaries, we cannot be conclusive about the intrinsic behavior of U-Pt alloys. More detailed investigations of the superconducting phase transition in U-15 at.% Pt are in progress in order to understand the two transitions at 0.61 K and 0.95 K. We note that even for the U-5 at.% Pt splat, consisting of mixed α-U and γ-U phases, the superconducting phase transition is revealed by only a single drop in the resistivity, at 0.7 K . One can also see a certain parallel to the two transitions recently observed in the skutterudite-related La3Rh4Sn13 and La3Ru4Sn13 . Upon applying external magnetic fields, the superconducting transitions shift towards lower temperatures, as expected. The estimated values of the critical magnetic field at zero temperature and of the critical slope of the Hc2 vs. T curves at Tc for selected U-Mo splats were reported earlier . In Table 1 we list only the values for the pure-U and U-15 at.% Mo splats, for comparison with the other T-alloyed splats. The estimated values of Hc2(0) and of the critical slope at Tc are respectively in the range of 2–7 T and 2–4 T/K. These values are close to those found for the strongly interacting Fermi-liquid superconductor U6Fe (critical slope of 3.42 T/K) and for Chevrel-phase superconductors (critical slopes ≤ 8 T/K) . One difference is that for those splat-cooled γ-U alloys the Tc values are lower than 2.2 K, while Chevrel-phase superconductors have a much higher Tc. The temperature dependence of the specific heat, Cp, has been studied for selected splats over the whole temperature range, including both the low-T and high-T parts, for characterizing the superconducting behavior as well as the electronic and phonon contributions. The estimated values of the Sommerfeld coefficient of the electronic specific heat and of the Debye temperature are given in Table 1. Clear evidence of an increase of the density of states at the Fermi level for γ-U is observed only for U-15 at.% Mo, as shown by an enhancement of the γe value upon Mo alloying. It is ascribed to the increasing atomic volume and larger U–U spacing. The enhancement of the γe value is found to be larger for Pt alloying, while it was smaller for Nb and Zr alloying. We estimated the height of the experimentally observed specific-heat jump and then compared it to the BCS estimates obtained by using the γe and Tc values determined from our experiments. In Fig.
6 we show the C–T curves in zero field for selected investigated U-T splats. Only a very small feature related to the superconducting transition was revealed at 0.65 K in the specific heat for the pure-U splat. The results suggest that only a small fraction of the sample is really superconducting. For the U-15 at.% Mo splat, a pronounced λ-type specific-heat anomaly was observed. The height of the experimentally observed specific-heat jump is in good agreement with that estimated from BCS theory. For other U-Mo splats with lower Mo contents, a broader peak with a smaller specific-heat jump was observed close to the superconducting transition temperature Tc defined from the resistivity measurements. The experimentally estimated jump, for instance for the U-6 at.% Mo splat, amounts to only about 55% of the BCS value . The specific heat of the other U-T splats containing the γ-U phase measured down to 0.4 K in zero magnetic field is shown in Fig. 6b. Only a weak and broad bump with a small height was observed in the C curve of U-15 at.% Nb . The crystal structure, the resistivity jump and the Tc value of this splat are similar to those of the U-12 at.% Mo splat, but a much larger peak was observed for U-12 at.% Mo in the C curve. The specific heat peak related to the superconducting transition in the U-30 at.% Zr splat is visible at the Tc determined from the resistivity jump, proving that the superconductivity in this splat is a real bulk effect. We have stabilized the γ-U phase in U-T alloys by a combination of ultrafast cooling and alloying with 15 at.% T content and 30 at.% Zr content. An ideal bcc A2 structure was found only in the U-15 at.% Mo splat. It is crucial that using ultrafast cooling we are able to reduce the necessary concentration of the T elements, i.e. the γ-U phase can be stabilized by a lower concentration of alloying elements. Moreover, ultrafast cooling could also extend the solubility of Pt metal, and thus we are able to stabilize the γ-U phase also in the U-15 at.% Pt splat. We emphasize again that all splat-cooled alloys were obtained without any additional treatment and that they are very stable when exposed to ambient conditions. All the U-T splats become superconducting, with the lowest and highest Tc of 0.61 K and 2.11 K, respectively, for U-15 at.% Pt and U-15 at.% Mo. Among all investigated splats, the BCS prediction for the specific-heat jump at Tc was found to be entirely fulfilled only in U-15 at.% Mo. Our investigations have provided new data for the database of low-temperature properties of U-T systems with low T content.
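As a small worked check of the BCS comparison invoked above, the weak-coupling prediction for the specific-heat jump at Tc is ΔC ≈ 1.43 γe Tc. The sketch below uses the Tc of the U-15 at.% Mo splat quoted in the text together with an assumed, purely illustrative γe value (the actual coefficients are those listed in Table 1, which is not reproduced here).

# Weak-coupling BCS estimate of the specific-heat jump at Tc: delta_C = 1.43 * gamma_e * Tc
gamma_e = 0.020          # Sommerfeld coefficient in J mol^-1 K^-2 (assumed illustrative value)
Tc = 2.11                # critical temperature in K of the U-15 at.% Mo splat (from the text)
delta_C_bcs = 1.43 * gamma_e * Tc
print(f"Expected BCS jump: {delta_C_bcs * 1e3:.1f} mJ mol^-1 K^-1")
# The measured anomaly height is then compared with this number; for U-6 at.% Mo, for example,
# the text reports an experimental jump of only about 55% of the corresponding BCS value.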
We succeeded in retaining the high-temperature (cubic) γ-U phase down to low temperatures in U-T alloys with a lower required T alloying concentration (T = Mo, Pt, Pd, Nb, Zr) by means of the splat-cooling technique with a cooling rate better than 10^6 K/s. All splat-cooled U-T alloys become superconducting with the critical temperature Tc in the range of 0.61 K–2.11 K. The U-15 at.% Mo splat, consisting of the γ-U phase with an ideal bcc A2 structure, is a BCS superconductor having the highest critical temperature (2.11 K).
43
Moving (back) to greener pastures? Social benefits and costs of climate forest planting in Norway
Endre Kildal Iversen: Conceptualization, Data curation, Formal analysis, Methodology, Resources, Software, Validation, Visualization, Writing - original draft, Writing - review & editing. Henrik Lindhjem: Conceptualization, Funding acquisition, Investigation, Methodology, Project administration, Validation, Writing - original draft, Writing - review & editing. Jette Bredahl Jacobsen: Conceptualization, Investigation, Methodology, Validation, Writing - original draft, Writing - review & editing. Kristine Grimsrud: Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Software, Validation, Writing - original draft, Writing - review & editing. Norway has ratified the Paris Agreement to pursue efforts to limit the temperature increase to 1.5 °C above the pre-industrial level. Norway committed to cut emissions of greenhouse gases by 40 per cent by 2030, while the Norwegian Climate Act targets an 80–95 per cent reduction by 2050 compared to the 1990 level. Afforestation and forest management measures to increase carbon storage are becoming an important means of reaching the targets. However, these measures may come at the expense of other ecosystem services provided, and the question is how to make the right trade-offs from a societal perspective. The Norwegian government is considering implementing a national Climate Forest Programme consisting of planting forest for the sequestration of greenhouse gases on former semi-natural pastures that otherwise would be revegetated by natural forest. Semi-natural pastures have been maintained by grazing, and the ecosystem depends on grazing to maintain its characteristic biodiversity. In addition, the pastures provide provisioning and cultural ES such as landscape aesthetics, but probably also a sense of identity and place, as pastures have been an important component of traditional farming and rural lifestyles. Pastures previously covered large areas but have been considerably reduced across Europe due to land use changes. An official report identified 9800 km2 of abandoned pastures, of which 1350 km2 have quite recently been abandoned and have not yet become forested. When abandoned, the pastures slowly grow into natural forests consisting of tree species like birch, Scots pine and, in some regions of Norway, spruce. Compared to natural reforestation, spruce climate forests are relatively densely planted, grow faster and can thus contribute to climate mitigation by two processes: faster sequestering of carbon while growing, and timber and biomass substituting other materials that are carbon intensive in use or production. There is public debate on the planting of climate forests, since such land use reduces biodiversity, and many people see the presence of climate forests as an impairment of landscape aesthetics. The CFP requires avoiding the planting of climate forests on land areas that are important for recreation and of high value for biodiversity preservation. The CFP may not cause immediate extinction of any species, but planting monocultures of spruce will infringe on the land areas inhabited by species dependent on a landscape kept open by grazing. Over time, the loss of habitat requiring human maintenance may increase the risk of extinction, in the same way as the risk of extinction is increased by the loss of available natural habitat. While several species, including some that are red-listed, may expand their current habitats because of reforestation, several red-listed species are
endemic to pastures, due to the long-term management of grazing and/or mowing. The loss of pasture to any type of forest represents a loss of associated ES. Hence, an alternative to natural reforesting of abandoned pastures and to the CFP would be to reverse reforestation and restore the recently abandoned pastures. The CFP commenced with a three-year pilot starting in 2015 in the three counties of Nordland, Nord-Trøndelag and Rogaland. The decision of whether to scale up the programme should depend on an assessment of the costs and benefits of the different land uses. We consider the costs and benefits of combinations of land use options compared to the status quo situation. An official evaluation of the pilot program was recently released without a full economic assessment of costs and benefits. Our focus on land not yet reforested differs from the studies of the Norwegian Environment Agency and Søgaard et al., which consider the effect of climate forest planting in already reforested abandoned pastures. In addition, we expand their analyses by also estimating the non-market benefits elicited from people's preferences for different land use options. We conducted a nationally representative choice experiment internet survey to assess the benefits of different land use options, including landscape aesthetics, greenhouse gas sequestration and biodiversity, and derive welfare estimates based on future scenarios. We use secondary sources to estimate the costs and market benefits of the land use options of the CFP and of recovering pastures by grazing animals, and compare them with the benefits within a cost-benefit analysis framework. The main objective of the paper is, therefore, to estimate the welfare effects of land use options in a situation where there are trade-offs between the different ES provided. There is a relatively large related stated preference literature on assessment of different land uses, including national assessments of landscape aesthetics, forest ES such as biodiversity and recreation, forest management alternatives targeted to enhance recreational benefits, and carbon sequestration. This study contributes to, and expands on, this literature by integrating the values from the choice experiments into a full CBA of the Norwegian carbon forest program, pasture recovery and natural reforestation of abandoned pasture. We find that all our considered land use scenarios are preferable to the status quo of no management and natural reforesting. The paper is structured as follows: The next section briefly presents the analytical framework of the CBA in terms of social cost and benefit components, and how they are defined and measured. Section three explains the underlying data for estimating costs and benefits and discusses the assumptions for the policy scenarios. Section four estimates and compares costs and benefits over time in terms of net present value and conducts sensitivity analyses of restricting the extent of the market. We conclude and discuss the implications of the results in the final section. The pastures in Norway have been the home of numerous vascular plants, including herbs, and of pollinators and other insects that depend on meadows and pastures for their survival as species. As of 2015, 635 species distinctive for pastures were threatened. Afforestation of abandoned farms, as well as modern farming practices on pastures involving the use of more fertiliser, are identified as causes. Natural reforestation of abandoned pastures will allow species thriving in landscapes with
more woody vegetation to increase their populations. Planted spruce for climate forests is a vegetation monoculture and has the lowest biodiversity of the analysed land uses. Landscapes sequester carbon at different rates. According to the Norwegian Environment Agency, planted spruce forests sequester carbon in the above-ground biomass faster than any other vegetation in Norway. If the chosen policy is to recover pastures, we will miss out on the sequestration associated with natural reforesting or spruce forests. The soil also stores carbon, and soil carbon storage is substantial for boreal forests. There are knowledge gaps regarding the carbon sequestration potential of pasture soils. At the time of this study we did not have adequate knowledge on soil organic carbon levels for Norwegian climatic conditions for the two other land uses. We therefore choose to focus only on carbon storage in vegetation above ground. Benefits of planted spruce include the timber value. The CFP requires that the spruce trees may first be felled after 60 years. Although the discounted value of net profits from forestry is relatively small, we account for these future incomes from forestry. According to several studies, Norway would, in a free-trade equilibrium with no subsidies, in theory produce no agricultural food. Since the recovery of pastures is dependent on government subsidies covering costs and toll barriers protecting the home market, we do not include farmer incomes of recovered pastures in this analysis. Thereby we implicitly assume the subsidies to cover the income. CBA is a method for ranking policy options and determining whether policies are socially beneficial, taking account of both the benefits and costs of the options as compared with a situation without policy intervention. The social welfare function summarises social preferences over allocations of resources and represents a preference ordering of individual utilities in CBA. The status quo scenario is to let abandoned pastures naturally reforest as mixed forest, causing a reduction in the number of species threatened by extinction to only 550 species. We investigate eight land-use scenarios in addition to the status quo in our CBA: two scenarios where either half or a quarter of the abandoned pasture is recovered through agricultural production in the form of grazing, two scenarios where either half or a quarter of the abandoned pastures are afforested through the climate forest program and, finally, four scenarios combining afforestation and pastures. Land use will affect landscape aesthetics, CO2 sequestration and other values, and the associated number of species under threat ranges from 400 to 700 species in the different scenarios. Our simple set-up implies linear relations between land use and the associated values. We thereby disregard the fact that the spatial distribution of land use may affect aesthetics and other values. We also assume that an increase in pasture land use and a corresponding decrease in the CFP land use are equivalent in terms of impacts on biodiversity. We apply a seventy-year horizon in our cost-benefit comparisons. We return to our assumptions for key parameters below. The total economic value of an environmental good produced by a policy measure equals the sum of all benefits/values of the change in the ES flow related to changes in land use. In our case this is the sum of the value attached to landscape aesthetics, carbon sequestration and biodiversity. The total economic value includes the benefits individuals derive from using the good and the value they
place on the good even if they do not use it. Landscape aesthetics affect both non-use and use values. Landscapes provide existence and bequest values through people's feelings towards how and for what purpose different types of land are managed and their sense of place, and use values through visual perceptions, such as observing landscapes while travelling or walking from home/cabin. The ability of landscapes to sequester carbon is a global public good, and the marginal benefit of carbon sequestration for individuals themselves approaches zero. Biodiversity is also a global public good, as a basis for ES and future food security. Although the value of biodiversity is often considered to consist largely of existence value, people also appreciate the experience of nature, enjoying flowers, birds and butterflies. The value of carbon sequestration is more related to future generations' use values, i.e. bequest values. Thus, while it is currently a non-use value, it may, in time, turn into a use value for future generations enjoying a beneficial climate. The CFP aims to incentivise landowners to plant spruce on abandoned pastures to increase the uptake of CO2 in standing biomass. The Norwegian Environment Agency examined possible organizational models, environmental aspects, costs and future benefits associated with the programme in 2013 and started several pilot projects in three counties to test the forest planting policy. The agency proposed that the CFP should produce 10 million spruce plants and plant 50 million square meters of abandoned pastures a year. The government will cover expenses, including production of plants, administration of the program, the planting and the first years of maintenance by the landowner. We include all these costs, annualised, in our calculations. Pastureland can be categorised into different types, such as cultivated and uncultivated pastures, and the different types are grazed by different animals, first and foremost sheep, which graze both cultivated and uncultivated pastures during spring, summer and autumn. There are also cattle, which graze mostly on cultivated pastures, and on mountain pastures during summer farming, and goats, which graze mostly on uncultivated pastures. The areas of focus for this study are abandoned semi-natural pastures, meaning these pastures are not cultivated or fertilised, and they need not be fenced. The long-term trend has been a reduction in pastures, investments, relative wages and number of farmers, which complicates the calculation of the costs associated with an increase in pastures. We assume a linear cost of recovering pastures, meaning that additional recovery costs the same per unit recovered. The distortionary effects of the taxation and tariffs necessary to raise revenue for pastures and climate forests are an additional cost in all scenarios. Given that taxes are distortional to the economy, i.e.
it is costly in efficiency terms to collect them, a substantial increase in governmental funding will, ceteris paribus, increase the marginal cost of public funds required to compensate farmers. To account for this, we apply a standardised net distortionary factor. In this section we describe the methods used to estimate benefits and costs of the various land use options. There is no market information that could approximate the value of the ES benefits of land use and biodiversity. We decided to elicit people's preferences for these two ES benefits using the CE method. Thus, benefit estimates are based on data collected specifically for this purpose. We held one focus group to receive feedback on our prototype questionnaire design. After adjusting the questionnaire based on the feedback from the first focus group, we held a second focus group where we conducted one-to-one interviews to perform a final test of the questionnaire before sending out the survey to the Internet panel. The questionnaire contained an introductory section with questions about people's preferences for environmental policy objectives, while the CE part contained text explaining the main topic of the survey, starting by describing the baseline situation of areas in Norway that were previously used for farming and grazing. The policy problem was defined as whether to restore these areas to pastures, set aside and utilise some areas for climate forest planting for a sixty-year period, or let them naturally reforest as mixed forest. The policy alternatives were defined as various combinations of these three land uses, compared to an alternative representing the status quo situation of natural reforestation. Any active management choice would entail a cost, while leaving the areas for natural reforestation would be free. Based on focus group testing and a qualitative study conducted by means of Q-methodology, two main attributes for the CE, in addition to the cost, were identified: combinations of land use and biodiversity. These attributes were in turn explained in the survey using photos and icons as illustrations. For land use, examples of open, grazed pasture, of mixed, natural reforestation and of climate forest were shown using photos from three representative areas in the three counties of Nordland, Nord-Trøndelag and Rogaland, in Northern, Central and Western Norway, respectively. In the CE, land use was statistically designed as three different attributes, but graphically it appeared as a single attribute consisting of combinations of them. The survey then explained how biodiversity, in terms of vascular plants such as flowers, herbs and grasses, as well as the occurrence of insect species, is highest in pastures and lowest in climate forest. By our design, planted spruce could never occupy more than 50 per cent of the total land area considered, and consequently biodiversity levels were permitted to vary independently of the spruce attribute in the CE. The argument for permitting this variation in biodiversity levels was that the impact of planted forest on biodiversity is reduced if one is more careful when determining where to plant. This information was presented to the respondents before they were given the choice sets. Finally, the survey explained above-ground carbon sequestration in the three land use types, from low to high. The amount of carbon sequestered was derived directly from the proportion of each type of land use in the alternatives in order for the different choices to be realistic – i.e.
the highest level of carbon sequestration in the vegetation combined with land use that is all pastures would not appear credible to the respondent, violating content validity. Thus, while we represent carbon sequestration and storage graphically to the respondents as an attribute, statistically it is not, but is rather a specification of the characteristics of the land use attribute. Hence, the combinations of land uses give trade-offs between land use and biodiversity. As we ask for people's preferences, we are looking at changes in a given level, and we assume that these changes can result in the ES provision mentioned in the CE. The areas relevant for the CFP are generally not very accessible and most likely not much used for recreational purposes. Thus, to make sure that all the attributes were relevant, we omitted recreation from the CE. Instead, we chose to ask about recreation in separate questions. The attribute levels were based on parameters from the initial report on the CFP. This report identifies the total amount of land that could potentially be planted with spruce. We set the maximum amount of planted spruce or pasture as 50 per cent of the total potential area. In addition, these land uses had levels of 25 per cent and 0 per cent. The amount of the landscape left to naturally reforest was derived as the residual area when the other land uses varied freely. As a result, natural reforestation has five levels, as shown in Table 2. Although the land use options vary by percentage in the choice cards, the respondents are given the exact land area size in the introductory information in the CE. An early estimate of the number of species under threat of extinction in Norway due to abandonment of pastureland was 550. Two other biodiversity levels were added based on advice from biologists: an increase and a decrease of 150, or about 30 per cent of 550, in the number of species under threat of extinction. The levels of carbon sequestration were estimated on the basis of the CFP report for planted spruce and reforestation. For pasture we made the assumption that this vegetation can store one third of the carbon stored by planted spruce. Cost levels were based on feedback from the focus group and one-to-one interviews with respondents. After receiving information about the impacts of the various land uses, respondents were introduced to the choice sets. They were informed that anything other than the status quo would require active management that has a cost that would have to be paid for by an annual earmarked income tax levied on all Norwegian households. The CFP, and agricultural policy, is paid for by everyone, so this was not expected to generate much protest. The CE design was generated using SAS and uses the methods and procedures described in Kuhfeld. A full factorial design would have 3 × 3 × 3 × 6 = 162 profiles and 81 choice sets. We chose to use a fractional factorial design with 18 choice sets based on the output from the MktRuns-procedure. The profiles used in the choice sets were then chosen using the MktEx-procedure with constraints. The design was constrained to prevent the lowest level of red-listed species from occurring together with the highest levels of area allocated to spruce planting. The status quo alternative was added to the final output of the MktEx-procedure. The ChoiceEff-procedure optimised the combination of profiles into choice sets. The 18 choice sets were blocked using the Mktblock-procedure. Each respondent received either 6 or 12 choice sets and was asked to choose between two
policy options in addition to the status quo. The order of the choice sets was randomised between individuals. The choice sets were followed by standard follow-up questions regarding which attribute they thought was the most important and whether it was difficult to answer. The survey then had a series of questions about recreational use and whether there are areas where people prefer no climate forest planting, before concluding with socio-economic background questions. The data were collected from an Internet survey panel maintained by the survey company NORSTAT, as part of a large nation-wide, representative survey. Internet stated preference surveys have been shown to give reasonable response quality compared to more traditional survey modes such as personal interviews, mail or telephone. The survey was conducted on a representative sample of the Norwegian adult population in April-May 2018, obtained through their panel. We obtained 977 completed surveys, with a median completion time of 12 minutes. In 2013, the program was estimated to cost slightly less than NOK 100 million a year throughout a twenty-five year period, a total of NOK 2.4 billion in 2018 prices. When the government hands out afforestation grants to individual farmers, the farmers agree not to extract timber for the next sixty years. After sixty years the farmers are permitted to utilise the forestry resources. The survey explained to respondents that the farmers were assumed to harvest the trees after 60–80 years. We assume the CFP is implemented within 10 years, and that the costs are about NOK 190 million a year in 2018 prices, totalling NOK 1.9 billion in the 50 per cent afforestation scenarios. The government will cover all expenses, including production of plants, administration of the program, and the planting and management of the climate forests by the forest owners. In addition to sequestering carbon, planting of climate forests represents future forestry incomes. We assume a single rotation situation, meaning that once trees are harvested, the area may be used for something else, which is consistent across the three alternatives. It also reflects that how land use is going to change in the future, with climate change and expected changes in demand for food and fibre products, is highly uncertain; thus, assuming a repetition of rotations into perpetuity would not be appropriate for the current analysis. We account for the future harvest incomes of the first rotation and assume that the trees are felled and sold when they are 60 years old, meaning that the first trees to be planted in 2022 are cut down in 2082, while the last trees, planted in 2028, are cut down in 2088. The estimated volume of timber at that future point in time is 55 cubic meters per thousand square meters, and we assume that future prices correspond to current prices. We only include the net profits in our net benefits calculations, excluding the alternative use of labour and capital, and we assume a 25 per cent profit margin on the value of timber. The calculations are in accordance with valuation assumptions made by The Land Consolidation Courts of Norway, and our resulting estimates are in line with an alternative estimation made by Søgaard et al. There are several studies investigating the costs of recovering pastures in Norway. Ebbesvik et al.
investigate the cost of incorporating abandoned pastures when farms have excess capacity among labourers, in barns and in outbuildings. They find that incorporating abandoned pastures costs about NOK 250 a year per thousand square meters. Small increases in the use of pasture, incorporating abandoned pastures into a farm with excess capacity, will be a lot less costly than a large-scale increase in the use of pastures at the national level. In our analysis, we investigate situations where the government decides to increase pastures by 337 or 675 square kilometres, more than 2.5 and 5 per cent of the total agricultural land in Norway. Such policies will necessitate both investment and stronger economic incentives for farmers to utilise the pastures. A cost analysis by Fjellhammer and Hillestad finds that investing in outbuildings and farm equipment reduces sheep farmers' profitability by NOK 1500–2300 per thousand square meters as an annual average. We therefore expect the cost of recovering pastures to be NOK 500 per thousand square meters on average, both when the use of pastures is increased by 337 square kilometres and when it is increased by 675 square kilometres. At present, about 65 per cent of the farmers' income stems from governmental subsidies, and since the protection of the consumer markets from outside competition is an additional de facto subsidy, we expect this policy to be covered by governmental taxes and tariffs. In estimating the marginal cost of raising public funds, we follow the guideline of the Norwegian Ministry of Finance, which recommends assuming a cost of NOK 0.2 to raise NOK 1 for a public project or policy. This means in practice that we add 20 per cent to the opportunity and transaction costs of the programs. Further assumptions are provided in Table 3. We apply a time period of 70 years, from 2018 to 2088, including a ten-year implementation period and 60 years of climate forest conservation through the program. Regarding the other CBA assumptions, the Norwegian Ministry of Finance presented a White Paper making predictions for Norway until the 2060s in 2013, and a White Paper recommending assumptions for CBA in 2014. We adopt assumptions on the number of households, real price growth and discount rates from these government documents, and use the recommended risk-adjusted discount rates of 4 per cent per annum for the first 40 years, and 3 per cent per annum for the years thereafter (a small numerical illustration of this discounting convention is sketched at the end of this section). The response rate for the CE survey was 16 per cent, and the completion rate was 82 per cent. The sample shows fairly good representativeness of the Norwegian population along the dimensions of gender, age distribution and education. Attribute levels for pastures, climate forest and biodiversity are dummy coded with the status quo of natural reforesting as the reference level. We include an alternative specific constant term, coded as a dummy equal to one for the alternative scenarios, capturing respondents' unobserved preference for moving away from the status quo. Table 4 presents the RPL model estimated on the CE data. The coefficients of pastures, climate forest, biodiversity and income tax all have the expected signs. The coefficients for biodiversity show, as expected, a higher marginal value of a loss than of a gain of the same size. The parameter coefficients indicate that respondents value recovered pastures significantly higher than planted spruce. Respondents value pasture higher than natural reforestation. The two pasture coefficients are significantly different from each other but
close in value; respondents value 25 per cent pasture recovery almost as much as 50 per cent pasture recovery.The coefficients for planted spruce are not significantly different from each other and only the 25 per cent level is different from the status quo at the 90 per cent significance level.We calculate the WTP for changes in non-monetary attributes relative to the base case, according to Eq., following Holmes et al.We calculate standard errors and confidence intervals using the delta method.The results are presented in Table 5.The scenarios involving some recovery of pastures yield higher WTP, reflecting both higher valued land use and increased biodiversity compared to status quo, F1, and F2.The scenarios involving solely the CFP are less popular; although the land use is valued positively, this is severely dampened by the negative effects of the biodiversity reduction.Notice that the only reason this scenario has a positive WTP at all is the constant term indicating a willingness to pay to move away from the status quo regardless of the policy.The highest WTP is obtained from the P1 pasture recovery of half of the abandoned land scenario and the PF2 scenario, which are not significantly different from each other, but significantly higher than the other scenarios.We calculate the population's annual WTP for land uses by multiplying household WTP by the number of households in Norway in 2018.We assume that planting of climate forests and recovering of pastures will be implemented during a ten-year period, so that the population WTP figures will increase stepwise from zero to the levels presented in Table 5 during implementation of the policies.We consider an introduction of the scheme initiated in 2018 and completed within ten years.We assume the production of the spruce plants starts in 2020.In 2022 the planting starts, and as of this year, the total costs will be approximately NOK 230 million a year.We base our cost estimation on the Norwegian Environment Agency's program cost estimates, a recent report on the effect of planting on natural reforesting areas and a recent evaluation of the CFP.We assume costs scale linearly between the 50 per cent and 25 per cent programs, except for administrative costs, which are relatively higher in the 25 per cent scenarios.In addition, we calculate the incomes from future forestry of the climate forest.We expect that on good site quality three quarters of the climate forest provides financially profitable forestry in the future, and thus ten years of forestry incomes towards the end of our period of analysis.Given today's timber prices minus operating costs, we calculate the present value of future incomes at about NOK 30 million a year from 2078 to 2088 in scenarios where half of the abandoned pastures are afforested with spruce, and NOK 15 million when a quarter of the abandoned pastures are afforested with spruce.From 2088 we allow land use to be changed – or continued.Thus, we look at a single rotation situation.To simplify, we assume that both the 50 per cent and the 25 per cent scenarios of recovering abandoned pasture, through the reintroduction of grazing animals, are implemented stepwise over a ten-year period.This implies that pastures gradually recover from 2019 and are fully recovered, according to the land use specified in the respective scenarios, in 2029.In the 50 per cent scenarios, we assume linearly rising costs from 2019 until 2029, where an additional NOK 34 million is funnelled to farmers in 2019, rising to NOK 337 million per year from 2029 and onwards
throughout the time period analysed.In the 25 per cent scenarios, we also assume linearly rising costs from 2019 until 2029, where an additional NOK 17 million is funnelled to farmers in 2019, rising to about NOK 169 million per year from 2029 onwards.The net present values of the population's willingness to pay and program costs, calculated using the standard CBA assumptions listed above, are provided in Table 8.Our main result is that active use of the abandoned pastures, whether through pasture recovery, planting spruce forest in the CFP or a combination of these policies, is preferable to the status quo option of natural reforestation.When comparing our scenarios, we see that the 50 per cent and 25 per cent pasture scenarios yield larger net benefits than the 50 per cent and 25 per cent climate forest scenarios.The households' WTP for policy measures other than the status quo of natural reforestation of the abandoned pastures yields net benefits between NOK 51 and 158 billion, implying that any of the policies considered would be a highly efficient use of public resources.According to our respondents' choices and the subsequent cost-benefit comparisons, our results indicate that the P1 scenario, where half of the abandoned pastures are recovered, yields the highest net present value.This scenario provides the largest household WTP together with the PF2 Pasture and climate forest scenario but is a less extensive program and thus cheaper to implement than PF2.In conclusion, the difference in aggregated welfare between the pure pasture and the combined policies with 25 per cent CFP land use is not large, indicating that the loss in aesthetic values of establishing climate forest may be compensated by carbon sequestration.Notice that the value of carbon sequestration, and of potential substitution effects in future use of the wood, is elicited through respondents' valuation of these effects together with the land-use attributes.Stated preference methods have been under scrutiny for producing exaggerated welfare estimates, especially for non-use values.Murphy et al.
found that among 28 stated preference valuation studies, 83 observations had a median ratio of hypothetical to actual value of 1.35.All our scenarios remain positive even if we cut the willingness to pay figures by half, meaning net present benefits are positive at a 100 per cent hypothetical bias level, while the scenario with the highest net present value change to the P2 Pasture scenario.Our cost estimates are uncertain.Although the costs could be underestimated, the scenarios considered yield benefit-cost ratios ranging from 16 to 35, suggesting that cost is unlikely to overturn total benefits.We test whether changing the estimated costs change the ranking of scenarios and find that the P1 Pasture scenario remains the most beneficial scenario when multiplying costs by factors of 0.5, 1.5 and 2.A central issue in CBA is defining the extent of the market.Should all households in the country count equally, or should the preferences of households closer to the abandoned pastures be given a higher weight than households further away?,One can argue that households in the larger cities are likely to be less informed and affected by the ongoing abandonment of agricultural land and that the aesthetics related to landscapes are more relevant to households living in the affected areas.We check whether our results remain stable when restricting the analysis to rural households.Unfortunately, we lack detailed geographical information on the abandoned pastures, thus we cannot easily determine which and how many households are close to abandoned pastures.As a second-best solution we use urban-rural dimension as an instrument.Although the urban-rural dimension is unrelated to landscapes and pastures, it should coincide with the approximate geographical location of abandoned pastures, which one is relatively more likely to encounter in rural areas where agricultural production is costlier due to difficult terrains and long distances.When running the model presented above and restricting the analysis to the 323 500 most rural households7, rather than the whole Norwegian population, we find that all the scenarios retain the positive net benefits result.The P1 and P2 scenarios are the most efficient due to higher WTP for pasture recovery among rural households, revealing spatial heterogeneity of pasture ES values.Economic theory motivates several explanations for spatial welfare patterns, such as distance decay of use values, substitutes and complements distributed across space, and spatial dimensions of scope and diminishing marginal utility.Shorter distance to use values of pastures and biodiversity such as visual perception of landscape, experiences of nature, flowers, birds and butterflies, might explain the higher WTP among rural households.See results in Appendix C.Our CE and corresponding CBA indicate that recovery of abandoned pastures would be efficient use of land.Climate forests may be an efficient measure to meet the 80–95 per cent carbon dioxide emission reduction target in 2050, but other societal demands require land use management measures to recover semi-natural pastures as well, both because of landscape values and biodiversity benefits.Apart from the effect on the landscape itself, the result is driven by a strong preference for biodiversity conservation.From an economic point of view, any of the policy measures considered are highly beneficial compared to the status quo of natural reforesting.Recovering half of the abandoned pastures is the most preferred scenario, and while setting 
aside land area for climate forests for sixty years is slightly preferred over natural reforestation, respondents do have strong preference for departing from the status quo scenario of no management.Our results lend some support to the favourable assessment of the pilot program made by Søgaard et al. and Norwegian Environment Agency.These studies conclude that recently abandoned pastures with high site quality should not be used for climate forests due to biodiversity concerns, while already reforested pastures, not considered in our study, are more suitable for the CFP.Respondents were not scope sensitive to the area coverage.While this could be an indication of low validity of the survey, an alternative explanation is that people find that some traditional land use is important to keep, somewhat independently of specific size.The ranking of scenarios holds when increasing the costs, while when allowing for substantial hypothetical bias the scenario where a quarter of the abandoned pastures are recovered as pastures is most efficient.There are some examples of similar, but not directly comparable studies.Hynes et al. find a compensating surplus of EURO 22 per person per year for a sustainable rural environment in Ireland, implying the same area of pastures as status quo and improved conservation of species and stone walls.This would amount to about NOK 600 per household in 2018 prices and is roughly similar to our WTP estimates for enhanced biodiversity.Huber and Finger find in a recent meta-analysis of monetary valuation studies of cultural ES aesthetics, thus including e.g. landscape aesthetics values but not carbon sequestration values, a willingness to pay by EURO 53 per person per year for an increase in grasslands in less-intensive land-use in mountain regions, about NOK 1300 per household in 2018 prices.In another study from Ireland, Campbell et al. 
find a WTP for safeguarding some pastures as EURO 190, and a WTP for safeguarding of a lot of pastures as EURO 210 per individual per year, which is higher but comparable with our results.Designing public policies targeting a large geographical area, like an entire country, faces the problem that people may care less about the extent – but more about the process and where benefits are distributed.If this is a problem, it also carries over to similar surveys.Interestingly, similar to our findings, Campbell et al., as noted above, find a similar low scope sensitivity.In the analyses we have excluded recreational values which is in line with the lack of geographical specificity as it would require people to link national policies to where they specifically recreate.We have addressed this by telling respondents that climate forests will not be established in areas of importance for recreation.If they have ignored this, they could potentially have factored it in.Further, aggregation of household level welfare estimates becomes an important issue in CBA, especially as the study is on a national scale.Many studies find unrealistically high welfare estimates when mean WTP estimates are aggregated over a national population.Recent guidance on the use of SP methods mentions that determining the extent of the market “remains a challenge for which research is warranted”.This issue is also closely related to non-use or existence values, as, for example in our case, only a small part of the population will experience or use the areas for which afforestation is considered.Hence, the extent of the market for non-use values may be difficult to assess and “distance decay” approaches may not be appropriate for high non-use value goods.When we restrict the extent of the market to most rural households, we find net benefits to remain positive across scenarios, while scenario P1 and P2 become most efficient, due to higher WTP for pasture recovery among rural households.An interesting extension would be to go further into the distribution of values across geography.We rely on general calculations of cost and income of recovering pastures and planting climate forests.A further enhancement of the CBA would be to add more detailed figures on the costs and income possibilities related to different production scenarios.The estimated WTP for pastures, climate forests and biodiversity could be applied in agro-economic modelling, as Norwegian studies using such models have long called for values based on stated preference studies.Brunstad et al., for example, adopt the Norwegian JORDMOD model, used by the government for agricultural policy planning purposes, to consider the values of public goods stemming from agricultural production.Brunstad et al. 
had to resort to a crude transfer of values from an old Swedish study, since local values were non-existent.The inclusion of our results in agro-economic models could give a better knowledge of the total economic significance of the agricultural and food sector and how policy measures and framework conditions can best be designed.Our results indicate substantial positive externalities stemming from agricultural production.In our analysis we estimate the value of carbon sequestration through people’s perception hereof through the land use.Thus, we do not explicitly put an estimate on the carbon sequestration, but we do inform people of the carbon sequestration levels of the alternatives.This information is based on the climate sequestration from the pastures and forests and do not include the emissions caused by grazing animals, thereby implicitly assuming that the meat produced would cause as much emission if produced under other circumstances.Pastures can be maintained both through different production methods associated with different emissions, such as harvesting grass for the purpose of landscape preservation, or by grazing sheep, goats and cattle.We do neither include the potential climate mitigation through future materials substitution due to increased forestry.Natural extensions of our analysis would therefore be to include the cost of emissions of methane gas associated with grazing animals in our CBA, include the effect of materials substitution due to increased forestry and explore the importance of albedo, increased by maintaining the open pastureland.Had we included such values, we would have come up with larger climate policy benefits of the scenarios.However, the difference in estimates of our scenarios is likely small, as carbon sequestration is only a part of the land use attribute evaluated.Rather than having respondents valuing carbon sequestration indirectly through land-use alternatives, a possibility would be to calculate the value of carbon sequestration explicitly, using a unit price on carbon.Norway’s national climate policy has in isolation no effect on the global climate, and therefore inclusion in welfare economic analyses is best done from a cost-effectiveness approach, given the international commitment Norway has made.It is in this light the current paper should be seen – a CBA of a policy to fulfill the overall climate policy through the use of land use changes.Expanding the analysis to let people make tradeoffs between different ways to obtain the goal would be a different approach that we leave for future research.
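As an illustration of the discounting scheme used in the CBA above, the short Python sketch below (added here for clarity, not part of the original analysis) discounts a benefit or cost stream that ramps up linearly over the ten-year implementation period, applying 4 per cent per annum for the first 40 years and 3 per cent thereafter. All numeric inputs are placeholders, not the study's actual figures.

```python
"""Illustrative sketch of the CBA discounting described in the text: a stream
that ramps up linearly over a ten-year implementation period and is discounted
at 4% per annum for the first 40 years and 3% thereafter. All numbers below
are placeholders, not the study's actual figures."""


def discount_factor(t: int) -> float:
    """Discount factor for a cash flow t years after the base year (2018),
    using 4% for years 1-40 and 3% for later years, per the Ministry of
    Finance guideline cited in the text."""
    if t <= 40:
        return 1.0 / (1.04 ** t)
    return 1.0 / (1.04 ** 40 * 1.03 ** (t - 40))


def npv(annual_full_level: float, ramp_years: int = 10, horizon: int = 70) -> float:
    """Net present value of a stream that rises linearly from zero to
    `annual_full_level` over `ramp_years`, then stays constant to `horizon`."""
    total = 0.0
    for t in range(1, horizon + 1):
        level = annual_full_level * min(t / ramp_years, 1.0)
        total += level * discount_factor(t)
    return total


if __name__ == "__main__":
    households = 2.4e6          # assumed number of Norwegian households (placeholder)
    wtp_per_household = 1000.0  # NOK/year, placeholder for a Table 5 estimate
    benefits = npv(households * wtp_per_household)
    costs = npv(337e6)          # e.g. the NOK 337 million/year pasture subsidy
    print(f"PV benefits: {benefits/1e9:.1f} bn NOK, PV costs: {costs/1e9:.1f} bn NOK")
```

Applying the same function to the aggregated household WTP and to the programme cost streams reproduces the structure, though not the exact values, of the net present values reported in Table 8.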
Norway is considering a national afforestation program for greenhouse gas sequestration on recently abandoned semi-natural pastureland. However, the program may have negative impacts on landscape aesthetics and biodiversity. We conducted a nation-wide choice experiment survey to estimate non-market values, combined with secondary data on program costs and other impacts, to derive the social net return on land use scenarios. Our results indicate that the scenarios where either half of the abandoned pastures are recovered, or half of the pastures are recovered, and a quarter are designated to the climate forest program, yields the highest net present value. The net present value of all land use scenarios remains positive when limiting the aggregation of willingness to pay to rural households, and when allowing for potential hypothetical bias in benefit estimates and cost increases. Results indicate that landscape and biodiversity values are substantial and should be considered when designing agricultural and climate policies.
44
Membrane-enclosed multienzyme (MEME) synthesis of 2,7-anhydro-sialic acid derivatives
Sialic acids constitute a structurally diverse family of nine-carbon acidic monosaccharides commonly found at the termini of the glycan chains on glycoproteins and glycolipids .N-Acetylneuraminic acid, N-glycolylneuraminic acid, and 2-keto-3-deoxy-D-glycero-D-galacto-nonulosonic acid are the three basic forms of sialic acids which are distinguished from one another by different substituents at carbon-5.Additional modifications include for example acetylation, lactylation, methylation, sulfation, resulting in more than 50 structurally distinct sialic acids .As the outermost carbohydrate residues, sialic acids are critical recognition elements in a number of biologically important processes including cell-cell interaction, bacterial and viral infection, and tumor metastasis .For example, terminal sialic acid residues attached via α2, 3/6 glycosidic linkages to mucin glycan chains are prominent targets for commensal and pathogenic bacteria .The release of sialic acid by microbial sialidases allows the bacteria present in the mucosal environment to access free sialic acid for catabolism, unmask host ligands for adherence, participate in biofilm formation, modulate immune function by metabolic incorporation, and expose the underlying glycans for further degradation .Most are hydrolytic sialidases, which release free sialic acid from sialylated substrates.However, there are also examples with transglycosylation activities.Intramolecular trans-sialidase, represents a new class of sialidases recently identified in pathogenic and commensal bacteria, releasing 2,7-anydro-N-acetylneuraminic acid instead of sialic acid .Reaction specificity varies, with hydrolytic sialidases demonstrating broad activity against α2,3-, α2,6- and α2,8-linked substrates, whereas IT-sialidases appear specific for α2,3-linked substrates .Recently, an IT-sialidase, RgNanH, from the gut symbiont Ruminoccocus gnavus was structurally and functionally characterised.This enzyme produces 2,7-anhydro-Neu5Ac from α2,3-linked sialic acid glycoproteins or oligosaccharides .2,7-Anhydro-Neu5Ac was found to be a preferential metabolic substrate for R. gnavus strains expressing the IT-sialidase, suggesting a role in the adaptation of gut symbionts to the mucosal environment .Previously, 2,7-anhydro-Neu5Ac had been detected in rat urine and human wet cerumen .It was suggested that this unusual sialic acid derivative may have bactericidal activity and/or could serve as a reservoir for sialic acids in the biological systems .At present, the biological significance of naturally occurring 2,7-anhydro-Neu5Ac in body fluid and secretions is still largely unknown.This is due to the lack of effective synthetic methods for its production.Lifely et al. showed that methanolysis of sialic acid gave the methyl ester of 2,7-anhydro-Neu5Ac in addition to the methyl ester ketoside of sialic acid.The first directed synthesis of 2,7-anhydro-Neu5Ac was completed efficiently by intramolecular glycosidation of the S- glycoside derivative of Neu5Ac using silver triflate – bis palladium chloride as a promoter .More recently, the production of 2,7-anhydro-Neu5Ac was achieved using a leech IT-sialidase and 4-methylumbelliferyl Neu5Ac as substrate using sequential purification steps including precipitation, Folch partitioning and Bio-Gel purification .Here we report a facile membrane-enclosed multienzyme approach for the preparative synthesis of 2,7-anhydro-Neu5Ac and 2,7-anhydro-Neu5Gc from glycoproteins using R. 
gnavus IT-sialidase and a commercial sialic acid aldolase-catalysed reaction in one pot.The MEME synthesis offers several advantages over the existing synthetic methods such as the improved yield, the fact that the synthesis is scalable and that the route is greatly simplified.In addition, the MEME offers a cheaper solution to obtain sialic acid derivatives considering the cost of the starting material used in the previously reported chemical or enzymatic synthesis.The synthesis of 2,7-anhydro-Neu5Ac was achieved at high purity and in 20 mg scale using fetuin as substrate.The use of Neu5Gc-rich glycoproteins as donor substrates for preparing 2,7-anhydro-Neu5Gc is also described.Obtaining these compounds at a preparative scale is crucial for studying the biological importance of 2,7-anhydro-sialic acid derivatives in the gut and their potential applications in the biomedical sector.Fetuin from bovine serum, asialofetuin, bovine submaxillary mucin, cytidine triphosphate, orcinol, CMP-sialic acid synthetase from Neisseria meningitidis group B, α2,3-sialyltransferase from Pasteurella multocida, ammonium formate and Dowex 1 × 8 anion exchange resin were purchased from Sigma Aldrich.4MU-Neu5Ac is from Toronto Research Chemicals.Sialic acid aldolase from Escherichia coli, 3′-sialyllactose and Neu5Gc were purchased from Carbosynth Limited, BioGel P2 from Bio-Rad laboratories and SnakeSkin™ Dialysis Tubing, 7 kDa molecular weight cut off, 22 mm from Thermo Fisher Scientific.Nanopure water was used for buffer preparation and for purification.RgNanH was produced and purified as described previously .3′-glycolylneuraminyllactose was synthesised as previously reported .A dialysis membrane containing fetuin, modified asialofetuin or bovine submaxillary mucin was incubated in 100 mM ammonium formate pH 6.5 for 2 h at 37 °C with gentle shaking.Following the addition of 50 nM RgNanH into the dialysis membrane, the reaction mixture was incubated in a new solution of 100 mM ammonium formate at 115 rpm, 37 °C for 24 h.Then 0.5 units/mL of sialic acid aldolase was added to the membrane enclosed reaction mixture.Following a further 20 h incubation at 115 rpm and 37 °C, the dialysate was recovered, diluted with 100 mL of ultrapure water and freeze dried.The membrane enclosed reaction mixture was dialysed with 100 mL ultrapure water for 2 h and again overnight with 115 rpm shaking and at 37 °C.The water was recovered, pooled with the dried dialysate and freeze dried.After complete dryness, the powder was dissolved in 100 mL Nanopure water and freeze dried again 3 times to remove volatile salts.The freeze-dried sample corresponding to crude 2,7-anhydro-Neu5Ac was dissolved in ultrapure water and purified by anion exchange chromatography using a Dowex 1 × 8 column.The anion exchange resin was first equilibrated with ultrapure water before applying the sample.After washing with ultrapure water and 1 mM ammonium formate, 2,7-anhydro-Neu5Ac was eluted with a gradient of ammonium formate ranging from 5 mM to 50 mM and freeze dried.After complete dryness, the powder was dissolved in ultrapure water and freeze dried again 3 times to remove volatile salts.2,7-anhydro-Neu5Ac dissolved in ultrapure water, centrifuged and filtered on a PTFE 0.45 μm membrane, was then desalted by BioGel P2 size exclusion chromatography.The obtained 2,7-anhydro-Neu5Ac was collected and freeze dried.Asialofetuin was suspended in 100 mM Tris-HCl pH 8.5 with Neu5Gc and cytidine triphosphate was added.After addition of a CMP-sialic acid synthetase 
from Neisseria meningitidis and an α2–3-sialyltransferase from Pasteurella multocida, the reaction medium was incubated for 24 h at 37 °C under gentle shaking.The Neu5Gc disappearance as well as intermediate formation were monitored by thin-layer chromatography using a 2:1:1 butanol/AcOH/water mobile phase and orcinol sugar stain.The reaction was stopped and the asialofetuin derivative was separated by Folch partitioning using a 2:1 chloroform/MeOH solution.The obtained precipitate was dried under nitrogen flux, re-suspended in 2 mL of ultrapure water and freeze dried.This product was then dialysed in 100 mM ammonium formate pH 6.5 for 4 h at 37 °C, 115 rpm, before performing the next step.For electrospray ionisation mass spectrometry analysis, 2,7-anhydro-Neu5Ac was dissolved in methanol and filtered on a PTFE 0.45 μm membrane.MS spectra were acquired on Expression CMSL with ESI ionisation using direct injection operated in a negative ion mode.Advion Data Express software package was used to evaluate the MS data.For high resolution mass spectrometry analysis, the sample was diluted into 50% methanol/0.5% NH4OH and infused into a Synapt G2-Si mass spectrometer at 5 μL/min using a Harvard Apparatus syringe pump.The mass spectrometer was controlled by Masslynx 4.1 software.It was operated in high resolution and negative ion mode and calibrated using sodium formate.The sample was analysed for 2 min with 1 s MS scan time over the range of 50–600 m/z with 3.0 kV capillary voltage, 40 V cone voltage, 100 °C cone temperature.Leu-enkephalin peptide was infused at 10 μL/min as a lock mass and measured every 10 s. Spectra were generated in Masslynx 4.1 by combining a number of scans, and peaks were centred using automatic peak detection with lock mass correction.For Nuclear magnetic resonance analysis, 2,7-anhydro-Neu5Ac or 2,7-anhydro-Neu5Gc was dissolved in D2O and transferred into a 5 mm o.d. NMR tube for spectral acquisition.NMR spectra were run on a 600 MHz Bruker Avance III HD spectrometer fitted with a TCI cryoprobe and running Topspin 3.2 software.All experiments were performed at 300 K. 
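As a cross-check on the negative ion mode ESI/HRMS identification described above, the following Python sketch (an illustration added here, not part of the original methods) computes the expected m/z of the deprotonated [M−H]− ions from the molecular formulas of the dehydrated sialic acids; the formulas used below are assumptions based on loss of one water from Neu5Ac and Neu5Gc.

```python
"""Minimal sketch for reproducing the calculated [M-H]- values used to confirm
the 2,7-anhydro products by negative-mode ESI/HRMS. The molecular formulas are
stated here as assumptions (parent sialic acid minus one water)."""

MONOISOTOPIC = {"C": 12.0, "H": 1.00782503, "N": 14.00307401, "O": 15.99491462}
PROTON = 1.00727646  # mass removed when forming the [M-H]- ion


def mz_deprotonated(formula: dict) -> float:
    """m/z of the [M-H]- ion for a neutral molecule given as {element: count}."""
    m = sum(MONOISOTOPIC[el] * n for el, n in formula.items())
    return m - PROTON


# 2,7-anhydro-Neu5Ac = Neu5Ac (C11H19NO9) minus H2O -> C11H17NO8
print(f"2,7-anhydro-Neu5Ac [M-H]-: {mz_deprotonated({'C': 11, 'H': 17, 'N': 1, 'O': 8}):.5f}")
# 2,7-anhydro-Neu5Gc = Neu5Gc (C11H19NO10) minus H2O -> C11H17NO9
print(f"2,7-anhydro-Neu5Gc [M-H]-: {mz_deprotonated({'C': 11, 'H': 17, 'N': 1, 'O': 9}):.5f}")
```

Running it gives approximately 290.088 for 2,7-anhydro-Neu5Ac and 306.083 for 2,7-anhydro-Neu5Gc, the latter matching the calculated value quoted below for the high resolution MS identification.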
1H NMR spectra were acquired using the noesygppr1d pulse sequence for suppression of the residual water signal.Spectra were acquired with 32678 complex points in the time domain, spectral width 20.5 ppm, acquisition time 2.67 s, relaxation delay 3.0 s and 64 scans.Fourier transformation was carried out with zero filling to 65536 points and exponential multiplication to give line broadening of 0.3 Hz.A proton decoupled 13C spectrum of the same solution was acquired at 151 MHz using the zgpg30 pulse sequence with 32678 complex time domain points, spectral width 220.9 ppm, acquisition time 0.98 s, relaxation delay 2 s and 7200 scans.Fourier transformation was carried out with zero filling to 65536 points and exponential multiplication to give 3.0 Hz line broadening.The spectra were referenced using an external standard of methanol in D2O.An HSQC spectrum of the same sample was recorded with the hsqcetgpprsisp2.2 pulse sequence with spectral widths of 12 ppm and 165 ppm, a 2048 x 256 point acquisition matrix which was zero filled to 2048 × 1024 on 2D Fourier transformation, and 16 scans per t1 experiment.For complete and unambiguous assignment of the proton and carbon signal, we performed standard two-dimensional NMR techniques such as COSY and HSQC.13C signals were assigned using the HSQC spectrum based on the known 1H chemical shifts.For MS analysis of fetuin glycosylation, native asialofetuin or its derivative was dissolved in 100 μL ammonium bicarbonate and heated to 100 °C for 5 min.After cooling the sample to room temperature, trypsin was added and the reaction was incubated at 37 °C for 16 h. Trypsin was heat inactivated at 100 °C for 5 min and PNGase F was added to the mixture.The reaction was incubated at 37 °C for 16 h.The released glycans were purified using a C-18 Sep-Pak® cartridges, prewet with 3 vol of methanol and equilibrated with 3 vol of 5% acetic acid.Released glycans were eluted with 3 vol of 5% acetic acid and freeze-dried.The samples were dissolved in 200 μL of anhydrous DMSO, followed by the addition of ∼25 mg NaOH and 300 μL iodomethane.The permethylation reaction was incubated at room temperature for 60 min under vigorous shaking and quenched by the addition of 1 mL of 5% acetic acid.The permethylated glycans were purified using a Hydrophilic-Lipophilic Balanced copolymer Oasis cartridge, prewet with 4 vol of methanol and equilibrated with 5% methanol in H2O.Salts and other hydrophilic contaminants were washed with 5 vol of 5% methanol and permethylated glycans eluted with 3 vol of 100% methanol.The samples were dried under a gentle stream of nitrogen, dissolved in 10 μL of TA30 and mixed with equal amount of 2,5-dihydroxybenzoic acid, before spotted onto an MTP 384 polished steel target plate.The samples were analysed by MALDI-TOF, using the Bruker Autoflex™ analyzer mass spectrometer in the positive-ion and reflectron mode.For HPLC analysis of sialic acid content, asialofetuin and its sialylated derivatives were spiked with 50 ng KDN.Sialic acid and its derivatives were released by mild acid hydrolysis in acetic acid at 80 °C for 3 h.The samples were dried using a Concentrator Plus centrifugal evaporator and dissolved in 50 μL DMB labelling reagent .The labelling reaction was incubated at 50 °C for 2.5 h.The labelled samples were analysed by HPLC using a Luna C18 column.Solvent A was acetonitrile, solvent B was methanol 50% vol/vol in H2O and solvent C was H2O.The gradient used was 4% A, 14% B and 82% C to 11% A, 14% B and 75% C, over 30 min.The excitation and 
emission wavelength was 373 and 448 nm, respectively.Elution times of labelled sialic acids were compared to known standards of Neu5Ac and Neu5Gc.Substrates 3′SL or 4MU-Neu5Ac were incubated at 4 mg/mL with 1 nM RgNanH in 100 mM ammonium formate for 24 h at 37 °C.The reaction was stopped by addition of an equal vol of ethanol prior to Folch partitioning.Neu5Gcα2-3Lac was incubated at 0.33 mg/mL with 60 nM RgNanH in 20 mM sodium phosphate buffer containing 0.02 mg/mL bovine serum albumin overnight at 37 °C.The reaction was stopped by boiling for 20 min, and the precipitated enzyme removed by centrifugation.In the case of all three substrates, a no-enzyme control was also carried out.Here we used the recombinant IT-sialidase from R. gnavus ATCC 29149 which produces 2,7-anhydro-Neu5Ac from α2,3-linked Neu5Ac substrates , and commercially available fetuin which contains about 8% in weight of α2,3-linked Neu5Ac for the enzymatic biosynthesis of 2,7-anhydro-Neu5Ac.A Membrane Enclosed Enzymatic Catalysis approach was used to achieve rapid and efficient recovery of the reaction product.This technique is based on the containment of the soluble enzyme in a dialysis membrane allowing the smaller product to diffuse through and be more readily recovered in the reaction buffer.Fetuin was chosen as a substrate as it is bulky and commercially available, allowing for both the enzyme and its substrate to be enclosed in the same dialysis membrane and therefore readily separated from the monosaccharide product .The 2,7-anhydro-sialic acid derivative recovered in the reaction buffer was further purified by size exclusion chromatography on a Bio-gel P-2 column in order to remove the salts.ESI-MS and 1H and 13C NMR spectroscopy were used to monitor the purity of the reaction product.The mass spectrum and 1H NMR spectrum were identical to that of 2,7-anhydro-Neu5Ac obtained by methanolysis of Neu5Ac or following enzymatic reaction of 4MU-Neu5Ac with the leech IT-sialidase .This process achieved 37% yield based on the 8% α2,3-linked sialic acid on fetuin and the product purity.Further analysis of the fetuin recovered after the synthesis revealed that its glycosylation pattern was similar to that of asialofetuin, attesting completion of the reaction.However, the recovered product contained 85% 2,7-anhydro-Neu5Ac and 15% free sialic acid, as determined by 1H NMR.No Neu5Ac was detected in the control reaction in absence of RgNanH, suggesting that no spontaneous degradation of fetuin occurred under the synthesis conditions or even after a prolonged period up to 50 h dialysis, as monitored by ESI-MS.The use of alternative substrates such as 3′SL or 4MU-Neu5Ac did not affect the amount of free Neu5Ac produced.Together these data indicated that Neu5Ac may be a by-product of the transglycosylation reaction.Considering the difficulty and the low efficiency of the separation of free Neu5Ac from 2,7-anhydro-Neu5Ac, a commercially available sialic acid aldolase from E. 
coli was introduced into the dialysis membrane with RgNanH and fetuin.Sialic acid aldolases are efficient biocatalysts that convert free Neu5Ac into N-acetyl-mannosamine and pyruvate .Since 2,7-anhydro-Neu5Ac is resistant to degradation by sialic acid aldolase, this enzyme was used to convert free sialic acid into smaller and uncharged enzymatic products which were easily eliminated using anion exchange chromatography on Dowex 1 × 8 resin prior to the size exclusion chromatography step.The 2,7-anhydro-Neu5Ac obtained during this one-pot membrane-enclosed multienzyme synthesis was about 96% pure, as shown by MS and NMR, with a 33% yield.The Neu5Ac amount was reduced to traces representing less than 1%, and other impurities were identified as protein residues.The multi-step enzymatic synthesis developed here is efficient, low cost and scalable to at least 20 mg, and allows the recovery of 2,7-anhydro-Neu5Ac with high purity.This method is also applicable to the synthesis of other 2,7-anhydro-sialic acid derivatives.The ability of RgNanH to produce 2,7-anhydro-Neu5Gc was first assessed using Neu5Gcα2-3Lac as substrate.Following an overnight incubation at 37 °C, the product of the reaction was analysed by NMR, showing the disappearance of the Neu5Gcα2-3Lac, the release of lactose and Neu5Gc, and the formation of a 2,7-anhydro compound similar to 2,7-anhydro-Neu5Ac.This 2,7-anhydro derivative was further identified as 2,7-anhydro-Neu5Gc by NMR and by high resolution MS (m/z [M−H]−: calcd 306.08304; found 306.0831).In order to scale up the reaction, a membrane-enclosed synthesis was first performed using bovine submaxillary mucin as Neu5Gc donor, in place of fetuin which only displays a negligible amount of Neu5Gc .Submaxillary mucin is 9–17% sialylated, and is the most abundant source of Neu5Gc among the commercially available glycoproteins, with Neu5Gc accounting for 15% of the total sialic acid content .Although 2,7-anhydro-Neu5Gc was produced, as monitored by ESI-MS, a large amount of 2,7-anhydro-Neu5Ac was also present, and the two 2,7-anhydro derivatives could not be efficiently separated.It is expected that using porcine submaxillary mucin as donor would greatly improve 2,7-anhydro-Neu5Gc yields as Neu5Gc represents 90% of its sialic acid content , although the reaction product would still contain some 2,7-anhydro-Neu5Ac.In the absence of a commercially available glycoprotein containing exclusively Neu5Gc, and to minimise 2,7-anhydro-Neu5Gc contamination with the Neu5Ac derivative, we attempted to transfer Neu5Gc onto an acceptor glycoprotein using a one-pot two-step enzymatic synthesis developed by Chen et al.
.Asialofetuin was the most suitable acceptor as it contains ≤1.0% of Neu5Ac and the Neu5Ac free sites are good acceptor sites for the Neu5Gc transfer.We used a CMP-sialic acid synthase from Neisseria meningitidis group B to activate Neu5Gc to CMP-Neu5Gc, and a α2–3-sialyltransferase from Pasteurella multocida for the transfer of the activated sialic acid onto the acceptor glycoprotein with an α2,3-linkage.The obtained asialofetuin derivative was separated from the by-products by Folch partitioning and characterised by MALDI-TOF, showing the appearance of mono and di-sialylated oligosaccharide chains terminated with Neu5Gc.The relative amount of Neu5Gc and Neu5Ac on the asialofetuin derivative was further quantified by HPLC after derivatization with the 1,2-diamino-4,5-methylenedioxybenzene, using KDN as an internal response factor.The total sialic acid content ranged from 86% Neu5Ac and 14% Neu5Gc in asialofetuin to 6% Neu5Ac and 94% Neu5Gc after Neu5Gc transfer.This Neu5Gc-rich glycoprotein was then used in the membrane-enclosed enzymatic synthesis after dialysis in ammonium formate.The 2,7-anhydro-Neu5Gc obtained was characterised by NMR, and the profile found identical to the one previously obtained with Neu5Gcα2-3Lac as substrate.The only difference between the 2,7-anhydro-Neu5Ac and the 2,7-anhydro-Neu5Gc is the absence of a signal for the acetyl group at 2.07 ppm on the 2,7-anhydro-Neu5Gc, replaced by a glycolyl one at 4.13 ppm.In summary, we have developed a convenient and efficient membrane-enclosed multienzyme approach for producing 2,7-anydro-modified sialic acids in a pure form, starting from readily available glycoproteins.The synthetic method reported herein offers general and straightforward access to a class of sialic acid derivatives recently discovered in the gut and shows promise to assess the biological significance and potential applications of 2,7-anydro-modified sialic acids in the context of drug discovery and biomedical research.
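The percentage yields quoted above can be illustrated with a simple back-calculation. The Python sketch below is only a sketch under stated assumptions: the batch masses are placeholders, and the 8% w/w α2,3-linked Neu5Ac content of fetuin is assumed to be expressed as free Neu5Ac equivalents.

```python
"""Rough sketch of how a percentage yield of the kind quoted in the text can be
back-calculated. The input masses below are placeholders; the text gives only
the 8% w/w alpha-2,3-linked Neu5Ac content of fetuin, the product purity and
the final yield, not the batch masses. Molecular masses: Neu5Ac 309.27 g/mol,
2,7-anhydro-Neu5Ac 291.25 g/mol (loss of one water on ring formation)."""

MW_NEU5AC = 309.27
MW_ANHYDRO = 291.25


def percent_yield(fetuin_mg: float, neu5ac_fraction: float,
                  product_mg: float, purity: float) -> float:
    """Yield of 2,7-anhydro-Neu5Ac relative to the theoretical maximum obtainable
    from the alpha-2,3-linked Neu5Ac carried by the fetuin substrate."""
    theoretical_mg = fetuin_mg * neu5ac_fraction * MW_ANHYDRO / MW_NEU5AC
    return 100.0 * product_mg * purity / theoretical_mg


# Example with assumed numbers: 800 mg fetuin, 8% Neu5Ac, 22 mg product at 96% purity
print(f"{percent_yield(800, 0.08, 22, 0.96):.0f}% yield")
```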
Naturally occurring 2,7-anhydro-alpha-N-acetylneuraminic acid (2,7-anhydro-Neu5Ac) is a transglycosylation product of bacterial intramolecular trans-sialidases (IT-sialidases). A facile one-pot two-enzyme approach has been established for the synthesis of 2,7-anhydro-sialic acid derivatives including those containing different sialic acid forms such as Neu5Ac and N-glycolylneuraminic acid (Neu5Gc). The approach is based on the use of Ruminoccocus gnavus IT-sialidase for the release of 2,7-anhydro-sialic acid from glycoproteins, and the conversion of free sialic acid by a sialic acid aldolase. This synthetic method, which is based on a membrane-enclosed enzymatic synthesis, can be performed on a preparative scale. Using fetuin as a substrate, high-yield and cost-effective production of 2,7-anhydro-Neu5Ac was obtained to high-purity. This method was also applied to the synthesis of 2,7-anhydro-Neu5Gc. The membrane-enclosed multienzyme (MEME) strategy reported here provides an efficient approach to produce a variety of sialic acid derivatives.
45
Incidence of bifid uvula and its relationship to submucous cleft palate and a family history of oral cleft in the Brazilian population
Bifid uvula is a frequently observed anomaly in the general population.1,Its incidence varies according to racial groups.2,The incidence is higher in Indians and Mongols, average in Caucasians and less frequent in blacks.2,3,Bifid uvula is often regarded as a marker for submucous cleft palate although this relationship has not been fully confirmed.1,4,The bifid uvula has thus served as a tool for clinicians to detect the earliest signs of oral cleft.4,Submucous cleft palate is a congenital malformation with specific clinical features that were first described by Calnan and are known as “Calnan's triad”.5,The diagnostic signs of Calnan's triad are bifid uvula, midline soft palate muscle separation with an intact mucosal surface, and a midline posterior bony palate-notching defect.6,It has an estimated prevalence of 1:1250–1:5000.7,The OMIM database of Mendelian disorders lists submucous cleft palate as a clinical finding in approximately 40 distinct syndromes.Yet, in approximately 70% of cases, submucous cleft palate is an isolated finding.8,Nonsyndromic cleft lip and/or cleft palate is the most common orofacial birth defect, occurring in 1 in 500–2500 live births worldwide.9,In Brazil, the prevalence varies from 0.36 to 1.54 per 1000 live births.10,11,NSCL/P is caused by a complex interplay between environmental exposures and genetic and epigenetic factors.Although in the past decade multiple genetic variants have been associated with oral clefts, providing valuable insights into their genetic etiology, the disease-susceptibility genes identified so far only account for a small percentage of cases.9,12,Therefore, the aim of the current study was to determine the frequency of bifid uvula and submucous cleft palate and their relationship with oral clefts in a Brazilian population.After proper approval by the Ethics Committee (Institutional Review Board), we conducted a transversal, descriptive and quantitative study of 1206 children between August 2014 and December 2015.The children were assessed in primary care or ambulatory health units.All units belong to the Brazilian Public Health Network.All of the study subjects were born in the same region of the Minas Gerais State, Brazil, and had similar social conditions.A clinical examination of the children was conducted by means of inspection of the oral cavity with the aid of a tongue depressor and directed light.The use of directed light from a flashlight gave the examiner a direct frontal view.The examination of the oral cavity aimed to verify the presence of a bifid uvula or submucosal cleft palate.After the clinical examination of the children, parents answered a questionnaire with questions about basic demographic information and their family history of oral clefts in their first-degree relatives.13,No parent declined to respond to the questionnaire.The questionnaires were applied in a single session, always after the clinical examination of the children.Children with congenital anomalies or syndromes were excluded from the study.This was initially performed as a pilot study.The oral clefts were categorized, when present, into the following three groups, with the incisive foramen as a reference: 1) cleft lip: includes complete or incomplete pre-foramen clefts, either unilateral or bilateral; 2) cleft lip and palate: includes unilateral or bilateral transforamen clefts and pre- or post-foramen clefts; and 3) cleft palate: includes all post-foramen clefts, complete or incomplete.14,After application of the questionnaires, the information collected was
archived in a database and analyzed by the statistical program SPSS® version 19.0, by applying Chi-square tests.Values with p < 0.05 were considered statistically significant.Of the 1206 children included in this study, 608 were female and 598 were male.The average age was 3.75 years.There was a prevalence of non-Caucasians versus Caucasians.Of the 1206 children studied, 6 presented with bifid uvula.Submucosal cleft palate was not found in any child.When the family histories of children were examined for the presence of NSCL/P, no first-degree relatives presented with the congenital anomaly.The ancestry of individual inhabitants of the Minas Gerais state with oral clefts had been previously investigated.15,16,The average ancestry contributions to patients with oral clefts were estimated as 87.5% European, 10.7% African, and 1.8% Amerindian.15,The term bifid uvula means the partial or full bifurcation of the uvula.The occurrence of bifid uvula has aroused interest because of the possibility of being considered a mild form of cleft palate or being associated with submucosal cleft palate.4,17,Discovering bifidity of the uvula, however, may not be as simple as it first appears.Mucous viscosity can hold a notched or grossly bifid uvula together, making bifidity quite difficult to identify by routine oropharyngeal exam.Mucous viscosity can also prevent the identification of these anomalies intraoperatively, even after careful inspection and palpation.4,Bifid uvula is apparent in 0.44%–3.3% of normal individuals.1,18,Of 1206 children examined in the present study, 6 presented with bifid uvula.As several studies19–21 showed a higher incidence of cleft palate in females, it is possible to assume a higher prevalence of bifid uvula in females as well.However, in our study of 6 cases of bifid uvula found, most occurred in males.Studies conducted by our group in the same State, showed a predominance of cleft palate in females.21,22,There are also other studies18,23,24 that have shown a higher occurrence of uvula bifida in males in agreement with our study.Similar to other cases of cleft palate, submucosal cleft palate shows malpositioning of the palate muscles and may result in velopharyngeal insufficiency and hypernasality.6,However, submucosal cleft palate is more difficult to diagnose than other cases of cleft palate, in part because the soft and hard palates show no gap and only the uvula is bifid.6,This is in accordance with previously reported results that submucosal cleft palate is often diagnosed late.25,26,One reason for late diagnosis may be a lack of alertness for obvious anatomical features of an underlying invisible cleft of the palate.25,26,During intra-oral examination, more than 90% of the patients showed a bifid uvula, which was associated with submucosal cleft palate.This visual anatomical variation, however, remained undetected during screening of newborns after birth.27,Although, the presence of bifid uvula is constant for the occurrence of submucous cleft palate, in our study, of 1206 children evaluated, no cases of submucous cleft palate were found.Although there has been marked progress in identifying the environmental and genetic risk factors associated with oral clefts, its etiology in most cases remains unclear.9,Studies have sought to correlate several changes with oral clefts.28,The occurrence of malignant neoplasms in relatives of patients with oral clefts13,28 and dental anomalies29 has been reported in patients with oral clefts.In the present study, we could not identify 
any cases of oral clefts in relatives of children with bifid uvula.Although children with bifid uvula may have changes in speech, hearing and swallowing, these changes were not observed in any of the 6 children with bifid uvula identified in our study.All of their parents were told of the presence of the bifid uvula in their children.The cooperation of doctors such as pediatricians and otorhinolaryngologists, who are in contact with many infants and young children, will be vital for the identification of bifid uvula.Children in whom bifid uvula is evident upon oral examination during regular health checkups should be examined by a specialist.6,Here, the importance of interactions between various health professionals, including doctors and dentists, is clearly visible.In summary, this study revealed that 0.5% of the children examined in this Brazilian population showed bifid uvula.No child presented with submucous cleft palate and no first-degree relatives had the congenital anomaly.Our study suggests that an intensification of new studies, with broader and more diverse populations, seeking to associate the occurrence of bifid uvula, submucous cleft palate and oral clefts, is needed.The authors declare no conflicts of interest.
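For readers who wish to reproduce the headline figure, the short Python sketch below (not part of the original SPSS analysis) computes the reported prevalence of bifid uvula, 6 of 1206 children, together with an approximate 95% Wilson confidence interval.

```python
"""Small sketch (added here, not from the paper, which used SPSS) showing how
the reported 0.5% bifid uvula frequency and an approximate 95% confidence
interval can be computed from the raw counts: 6 affected children out of 1206
examined."""

import math


def wilson_ci(x: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score confidence interval for a binomial proportion."""
    p = x / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half


x, n = 6, 1206
low, high = wilson_ci(x, n)
print(f"Prevalence: {100*x/n:.2f}%  (95% CI {100*low:.2f}%-{100*high:.2f}%)")
```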
Introduction: Bifid uvula is a frequently observed anomaly in the general population and can be regarded as a marker for submucous cleft palate. Objective: In this study aimed to determine the frequency of bifid uvula and submucous cleft palate and their relationship with oral clefts in a Brazilian population. Methods: We conducted a transversal, descriptive and quantitative study of 1206 children between August 2014 and December 2015. A clinical examination of the children was conducted by means of inspection of the oral cavity with the aid of a tongue depressor and directed light. After the clinical examination in children, parents answered a questionnaire with questions about basic demographic information and their family history of oral clefts in their first-degree relatives. After application of the questionnaires, the information collected was archived in a database and analyzed by the statistical program SPSS® version 19.0, by applying Chi-Square tests. Values with p < 0.05 were considered statistically significant. Results: Of the 1206 children included in this study, 608 (50.40%) were female and 598 (49.60%) were male (p = 0.773). The average age of children was 3.75 years (standard deviation ± 3.78 years). Of the 1206 children studied, 6 (0.5%) presented with bifid uvula. Submucosal cleft palate was not found in any child. When the family histories of children were examined for the presence of nonsyndromic cleft lip and/or cleft palate, no first degree relatives presented with the congenital anomaly. Conclusion: This study revealed that the incidence of bifid uvula and submucous cleft palate in this population was quite similar to previously reported incidence rates. Our study suggests an intensification of new reviews, with broader and diverse populations, seeking to associate the occurrence of bifid uvula, submucous cleft palate and oral clefts.
46
Data on trace element concentrations in coal and host rock and leaching product in different pH values and open/closed environments
Tables 1 and 2 show concentrations of major and trace elements in coal samples, and Tables 3 and 4 show concentrations in host rock samples after the first and second sampling and testing.ICP-AES and ICP-MS analyses were applied in the first and second testing, respectively.26 and 16 elements were analyzed in the first and second testing, respectively.Selected coal and host rock samples were used in the leaching experiments, to simulate leaching behavior of the trace elements.Tables 5 and 6 show the leaching behavior of trace elements from coal and host rock, respectively, which were tested by ICP-MS.In the coal and host rock leaching experiments, different conditions, including type of leaching water, initial pH values, open/closed environments, and temperature, were set.Solid and water samples were collected at the Xuzhou-Datun coal mine district, which is located in the north of Jiangsu province, eastern China.Average temperature, relative humidity and air pressure in the region are 14 °C, 73% and 101280 Pa, respectively.Average precipitation is 758 mm, ranging from 492 mm to 1178 mm, with an average evaporation of 1623 mm.The geology in this area can be described as a series of sedimentary strata covering the Archean system; from bottom to top these are the Sinian, Cambrian, middle-lower Ordovician, middle-upper Carboniferous, Permian, Jurassic, Cretaceous, Tertiary and Quaternary systems.The coal seams that are being mined are located in the Carboniferous and Permian systems; the former includes the Benxi and Taiyuan formations, and the latter includes the Shanxi and Lower-Shihezi formations, listed from bottom to top in both systems.The existing coal seams are located in the Shanxi formation and Lower-Shihezi Formation.In the lower formation, white feldspar, quartz granule-sandstone and silicon-mudstone cementation are the main minerals.Grey mudstone, sand-mudstone and sandstone are the main rocks in the middle Shanxi formation with some silicon-mudstone and siderite also present .In the upper formation, grey-green middle-fine quartz sandstone, siltstone and mudstone are the main minerals.In this area, ground water was reported to be contaminated by coal mining and electricity plants .To investigate trace element contamination of surface and ground water by coal mining activities, 28 solid samples and 16 ground water and surface water samples were collected.The solid samples were collected from the roof, floor, and coal of the working areas of coal seams No.
2, 7, and 9, as shown in Tables 1–4.The ground water samples, including those from roof leaching, the limestone aquifer and the caved goaf, were collected in coal mines.Surface water samples were collected from lakes located in both coal mine and non-coal mine districts.1000 mL Nalgene bottles were used to contain the water samples; the bottles were acid-cleaned in the laboratory and rinsed twice before the samples were collected.A JENCO 6010 pH/ORP meter was used to test the pH and Eh values of the samples.Coal and host rock samples were collected from the working area, put into black plastic bags and sealed immediately.Some factors may control or impact the leaching process, leading to different migration behavior of major and trace elements.pH value is a key parameter in the leaching behavior of trace elements from coal and host rock.Most leaching studies focus on acid leaching behavior because of the higher mobility of metal elements .The group of Zhang has applied a series of experiments to investigate acid leaching of vanadium from coal .However, metalloid elements may be released more readily in an alkaline environment .At the same time, high pH values are usually observed in coal mine water.Eh impacts the leaching behavior in two ways: a higher kinetic rate on the one hand, and oxidation of some minerals that are stable in an anaerobic environment on the other .Temperature may promote the water-rock interaction according to thermodynamic laws .Some researchers have also discussed the impact of the liquid/solid ratio and the type of leaching water on trace element migration, as some major components, such as sulfate, may hinder the release of trace elements .The leaching experiments were designed in batch mode to simulate water-rock interaction in a coal seam, where the water moves slowly and the reaction tends to reach equilibrium .To avoid the effect of solid sample composition on leaching behavior, most of the leaching experiments used the same coal/rock samples.All the glassware used in the experiments was soaked in 3.2 mol/L HNO3 for two days so as to reduce cross-contamination.The selected solid samples were ground to 75 μm.30 mg was weighed for each leaching experiment, to interact with a 1000 mL aliquot of water.The water used in the experiments was ultra-pure water, or surface water from the coal mine district and the non-coal mine district, to simulate different environments.To investigate the migration behavior of trace elements in both acid and alkaline environments, the initial pH of the ultra-pure water was set to 2, 5.6, 7 and 12.Temperature was controlled using a water bath at 38 °C, or left at ambient temperature, which was about 15 °C.To simulate a ‘closed environment’ with low pO2, bottles were closed with a rubber stopper.Flasks were sealed and shaken every two hours.Leachate solutions were collected using syringes at 2, 6, 24 and 48 hours, as the reaction may reach equilibrium within hours under some conditions ; some samples were collected as late as ten days to observe long-term behavior .0.5 mol/L HNO3 was added to each sample to reduce adsorption and hydration of trace elements.The pH and Eh of the solution during the experiments were determined by a JENCO 6010 pH/ORP meter.The conditions in every experimental setting are illustrated in Tables 7 and 8 for coal leaching and host rock leaching, respectively.For digestion of the solid samples, 0.05 mg of sample was carefully weighed and put into tetrafluoroethylene crucibles; 3 mL of HNO3, 1 mL of HF and 1 mL of HClO4 were added and heated until all the liquid was consumed.Then
3 mL of HCl were added and heated again until the liquid was consumed.After that, the crucibles were rinsed using ultra-pure water, and the rinsate was then filtered into volumetric flasks and made up to a constant volume of 50 mL for analysis.The data of the ground water and surface water samples are shown in the related article .The data of the solid samples and the water samples collected from the leaching experiments are shown in the DiB article, as supporting material for the related article.In the related article, selected attributes were used based on their concentration levels, proportion of under-detected data, and feature engineering.Major ions and physical parameters of water samples were determined at the Jiangsu Provincial Coal Geology Research Institute in line with Chinese standard protocols.pH was tested both on site and in the laboratory using the glass electrode method.Total dissolved solids and hardness were analyzed according to the standard GB/T 8538 method.Mg and Ca were measured using atomic absorption spectrophotometry.K and Na were analyzed by flame atomic absorption spectrophotometry.Fe, ammonia and nitrate-nitrogen were analyzed by phenanthroline spectrophotometry, phase molecular absorption spectrometry and gas-phase molecular absorption spectrometry, respectively.Sulphate and chloride were determined using flame atomic absorption spectrophotometry and silver nitrate titration, respectively.Concentrations of trace elements in solid and liquid samples were determined by ICP-AES and ICP-MS.The ICP-AES analysis was carried out at Nanjing University using a JY38S ICP-AES instrument.The limit of detection and deviation for the analysis were 0.01 μg/mL and less than 2%, respectively.The ICP-MS analysis was carried out in the Analysis and Test Centre of China University of Mining and Technology using an X-Series ICP-MS (Thermo Electron Co.); Rh was used as an internal standard to determine the limit of detection and analytical deviation.
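As an illustration of how the leachate measurements can be related to the bulk composition of the solids, the Python sketch below converts an ICP-MS leachate concentration into the fraction of an element leached in the batch set-up described above (30 mg of ground sample interacting with 1000 mL of water). The example numbers are placeholders, not values from the data tables.

```python
"""Sketch of converting a measured leachate concentration into the fraction of
an element leached from the solid, using the batch set-up described in the
text (30 mg of ground sample in 1000 mL of water). The example numbers are
placeholders, not values from the data tables."""


def leached_fraction(c_leachate_ug_per_l: float, volume_l: float,
                     solid_mass_g: float, bulk_content_mg_per_kg: float) -> float:
    """Mass of the element in the leachate divided by its mass in the solid.
    Bulk content in mg/kg equals ug/g, so both masses come out in ug."""
    element_in_leachate_ug = c_leachate_ug_per_l * volume_l
    element_in_solid_ug = solid_mass_g * bulk_content_mg_per_kg
    return element_in_leachate_ug / element_in_solid_ug


# e.g. 0.05 ug/L of an element after 48 h, 0.030 g of coal containing 20 mg/kg of it
frac = leached_fraction(0.05, 1.0, 0.030, 20.0)
print(f"Leached fraction: {100*frac:.1f}% of the element's bulk content")
```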
The data presented in this article are related to the research article entitled "Multivariate analysis of trace elements leaching from coal and host rock" [1]. During coal mining, coal and host rock undergo water-rock interaction, leading to the release of trace elements into surface and ground water. Batch experiments were designed and implemented to investigate the leaching behavior and mechanisms during this water-rock interaction. In the different experimental sets, various types of leaching water, open/closed environments, temperatures and initial pH values were used to evaluate their impact on the leaching of trace elements. These data can be used to analyze the leaching mechanisms of trace elements from coal and host rock, and to understand, predict and control trace element contamination of the surrounding waters.
47
Correlates of unsuccessful smoking cessation among adults in Bangladesh
Worldwide, tobacco use is the leading cause of avoidable death. Each year, nearly 6 million people are killed by tobacco, and if the present pattern of tobacco use remains uncontrolled, more than 8 million deaths will be caused annually by 2030. While rates of smoking in developed countries are decreasing, in developing countries they are rising. The fast increase in smoking in developing countries would result in 7 million deaths per year by 2030. The countries of Asia, especially South East Asia, are not immune to the smoking epidemic. The South East Asia region is home to about 400 million tobacco users, which brings about 1.2 million deaths annually. Bangladesh is among the largest tobacco-consuming countries in the world, with 46 million adults using tobacco. Bangladesh ranks among the top ten heaviest smoking countries in the world, with a high current smoking prevalence of 44.7% among males, 1.5% among females, and 23.0% overall among adults aged 15 years or above. This means that an estimated 21.9 million adults in Bangladesh currently smoke tobacco. Bangladesh is one of fifteen countries in the world with a greater burden of tobacco-attributable illness. An earlier study conducted by the World Health Organization in 2004 showed that tobacco causes approximately 57,000 deaths and 1.2 million tobacco-related illnesses in Bangladesh. Another study, conducted in 2010, found that smoking was responsible for 25% of all deaths among Bangladeshi men aged 25 to 69 years and reduces their life expectancy by 7 years on average. Moreover, because of tobacco-attributable deaths in Bangladesh, the health and economic burden is rising rapidly. Therefore, to tackle this epidemic, reducing the commencement of tobacco use and widespread cessation can have a substantial effect. Several studies that identified factors associated with successful quit attempts have been restricted to specific populations such as young adults/adolescents, clinic patients, and prisoners. As far as we know, only a few studies have identified the correlates of successful smoking cessation in general populations. In a recent study in Bangladesh, successful smoking cessation was associated with older age, perceiving good/excellent self-rated health, and an increased level of self-efficacy. In a study of the Korean population, successful quitters were more likely to be aged 65 years or older, women, married, having higher education, having higher income, having a lower level of stress, having smoked 20 or more cigarettes per day, and having the will to quit. In another study in the U.S. 
population, successful quit attempts were associated with smoke-free homes and a no-smoking policy at work, older age, having at least a college education, being married or living with a partner, being a non-Hispanic White, having a single lifetime quit attempt, and not switching to light cigarettes. However, almost none of these studies considered interaction effects between potential factors on the outcome variable in multivariable modeling, where the effect of one factor may differ depending on another factor. For example, the effect of smoking rules inside the smoker's home on the probability of successful smoking cessation may be greater for those who live in urban areas than for those who live in rural areas. Moreover, multi-stage sampling is used by almost all national surveys. Consequently, the collected data are clustered with a nested structure. One important consequence of clustering is that measurements on units within the same cluster are correlated. To our knowledge, almost none of the studies that identified the characteristics of successful quitters considered clustering in the data set. Ignoring clustering effects in the data set may lead to invalid conclusions, such as overestimating the variability, falsely increasing the p-values, reducing the statistical power, and increasing the chance of type II error. To our knowledge, few studies have quantified the number of smokers who have attempted to quit but failed, or described their characteristics. However, quitting smoking is a dynamic process, and several unsuccessful quit attempts may be involved before finally succeeding. Since many smokers attempt to quit smoking but are unsuccessful, it is important to take all of their quit attempts into consideration. Moreover, the fact that these unsuccessful quitters have at least attempted to stop smoking underscores that they intend to quit but, because of their tobacco addiction, are unable to sustain continued abstinence. Therefore, in order to adequately address all the impediments to smoking cessation among these unsuccessful quitters, it is essential to identify the characteristics of the smokers who have tried to quit but were unsuccessful. This study uses a large representative sample from a cross-sectional national survey in Bangladesh to determine the factors associated with smokers who unsuccessfully attempted to quit smoking during the 12 months preceding the survey. We used the latest nationally representative data from the 2009 Global Adult Tobacco Survey (GATS), Bangladesh. The GATS, a component of the Global Tobacco Surveillance System, is a global standard for systematically monitoring adult tobacco use and tracking key tobacco control measures. In Bangladesh, GATS was conducted in 2009 by the National Institute of Preventive and Social Medicine with the cooperation of the National Institute of Population Research and Training and the Bangladesh Bureau of Statistics. The Centers for Disease Control and Prevention and the WHO provided technical support. The sampling frame for the 2009 GATS was taken from the 2001 Population and Housing Census. The primary sampling units (PSUs) were mahallas and mauzas, and a three-stage stratified cluster sampling design was used to draw the sample. In the first stage, 400 PSUs were selected using probability proportional to size sampling. In the second stage, one secondary sampling unit (SSU) was selected from each PSU by simple random sampling. In the third stage, a systematic sample of 28 households on average was selected from each SSU to produce equal numbers of male and female households according to the 
design specifications. With this design, the survey selected 11,200 households. Among the selected households, 10,050 were found to contain an eligible person for the single interview. Of these 10,050 households, 9629 individuals completed the interview successfully. The sampling procedure and the study design are presented in Fig. 1. The detailed survey procedure, study methods and questionnaires are available elsewhere. We compared unsuccessful quitters with recent former smokers who had stopped smoking at least 12 months before the survey and had not relapsed. Unsuccessful quitters were defined as those who reported that they currently smoke on a daily or less than daily basis and had tried to stop smoking but had recently failed. Successful quitters were defined as those who reported that they do not currently smoke, but had smoked on a daily or less than daily basis in the past and had stopped smoking for more than 12 months before the survey. The screening process used to select unsuccessful quitters and successful quitters is illustrated in Fig. 2. Six socio-demographic characteristics, namely age, gender, place of residence, occupation, education and wealth index, were used in this study. The wealth index was created using principal component analysis. Beliefs about the health effects of smoking comprised belief that smoking causes serious illness and belief that cigarettes are addictive. The environmental characteristic was the smoking rules inside the home. We compared the proportions of successful quitters and unsuccessful quitters across the categories of the various independent variables. Binary logistic regression analysis and generalized estimating equations (GEE), the latter accounting for the clustering effect in the data, were used to identify the factors associated with unsuccessful smoking cessation. We included all potential factors in the multivariable full model. We evaluated multicollinearity using the variance inflation factor with a cut-off of 4.0. We built the logistic regression model using a backward elimination procedure. First, a full model was formed with all main effects and selected meaningful two-way interactions between factors. Then, at each step, the term with the highest p-value was eliminated from the model. The procedure was repeated until no effect met the 5% significance level for elimination from the model. The Akaike Information Criterion (AIC) was calculated at each step. We selected the final model based on the minimum AIC. To assess the overall fit of the final model, we used the Pearson Chi-square and Hosmer-Lemeshow goodness-of-fit statistics. We did not find any lack of fit of the model. In addition, Pearson residuals and deviance residuals were used to detect influential observations. We used the area under the receiver operating characteristic (ROC) curve to check the predictive accuracy of the final model. The GATS Bangladesh 2009 data set used in this study was based on multistage cluster sampling. For this reason, the hierarchical structure of the data creates dependence among observations. Hence, observations within the same cluster are correlated. To take the clustering effect in the data into consideration, we used GEE, which accounts for the correlation among observations within a cluster. To choose a working correlation structure, we used two methods: first, we chose a correlation structure that minimizes the Quasi-Information Criterion (QIC); second, a correlation structure for which the empirical estimates of the variance most closely approximate the model-based estimates of the 
variance. Similar to the logistic regression model, the final GEE model was also selected using a backward elimination procedure. At each step we computed the QICu, and the final model was selected based on the minimum QICu value. The number of covariates with interactions in the final logistic regression model did not differ much from the final GEE model: in the GEE we found only one additional main effect and one additional interaction effect compared with the logistic regression model. The statistical software SPSS and SAS version 9.4 were used for data management and analysis (a brief illustrative sketch of the two modeling approaches is given at the end of this article's text). Among the 9629 respondents aged 15 years or older who completed the survey, 2233 were current smokers, 563 were former smokers, and 6833 were never smokers. Among the current smokers, 1058 were unsuccessful quitters. Of the former smokers, 494 had quit at least 12 months before the survey. The remaining 69 former smokers were not included as successful quitters in the analysis because they had quit 1 to 12 months before the survey and were likely to relapse. Thus, 1058 unsuccessful quitters and 494 successful quitters were the final study subjects. Table 1 shows the proportions of successful quitters and unsuccessful quitters across the categories of the various potential factors. Among male smokers, 69.8% were unsuccessful quitters, while among females this proportion was 36.5%; similar proportions of unsuccessful quitters were observed in rural and urban areas. The highest proportion of unsuccessful quitters was observed among younger adults. Individuals who had less than primary education and belonged to the lowest wealth quintile had a higher rate of unsuccessful quitting. Among the occupation groups, laborers had the highest proportion of unsuccessful quitting, followed by employed and business professionals. For the beliefs about the health effects of smoking, 90.3% of respondents who did not believe that smoking causes serious illness and 73% of those who did not believe that cigarettes are addictive were unsuccessful quitters. For the environmental characteristic, 83.4% of smokers whose homes allowed smoking were unsuccessful quitters, compared with 54.2% of smokers whose homes never allowed smoking. The results of the logistic regression model for unsuccessful smoking cessation are shown in Table 2. With respect to socio-demographic characteristics, the odds of unsuccessful smoking cessation decreased with age. Males had 6.18 times the odds of unsuccessful quitting compared with females. People with secondary school or higher educational attainment had 0.57 times the odds of unsuccessful quitting compared with those with no formal education. With respect to beliefs about the health effects of smoking, people who believed that smoking causes serious illness had 0.14 times the odds of unsuccessful quitting compared with those who did not. For the interaction between place of residence and smoking rules inside the home, we found that among smokers whose homes had no rules about smoking, those who lived in urban areas were 1.61 times more likely to be unsuccessful quitters than those who lived in rural areas. The correlates of unsuccessful smoking cessation were age, gender, level of education, place of residence, belief that smoking causes serious illness, and smoking rules inside the home. We also found a significant interaction between place of residence and smoking rules inside the home. Moreover, we found approximately similar results from the analyses ignoring and 
considering clustering effects in the data. This finding indicates that the clustering effect in the data may not be notable. Consistent with findings from previous research, we found that young adult smokers have a higher unsuccessful quit rate than older adults. A probable explanation for this association is that young adults face fewer health problems, so the risks of smoking are not yet apparent to them. On the other hand, older smokers make multiple visits to health care providers and receive advice from them to quit, which encourages them to succeed in quitting smoking. Moreover, it has also been reported that older smokers are more likely to show manifestations of smoking-attributable illness, which may also strengthen their intention to quit. Thus, our findings suggest that it is necessary to promote targeted smoking cessation interventions for young adults so that they can quit smoking successfully. Findings on gender as a predictor of smoking cessation are conflicting. Some studies found that male smokers were more likely to be successful quitters, while other studies found no association between gender and successful quitting. Surprisingly, we found that female smokers were less likely to be unsuccessful quitters than male smokers, which is consistent with findings reported previously. In the present study, only 2.6% of unsuccessful quitters and 9.5% of successful quitters were women. In Bangladesh, unlike western societies but like other Asian societies, relatively few women smoke. However, female smokers are aware of the harmful effects of smoking, especially during pregnancy and childcare, which may encourage them to quit smoking successfully. On the other hand, male smokers may be more heavily addicted to smoking. In addition, they may think that they will quit permanently after experiencing several negative impacts of smoking, and for this reason they fail to succeed in quitting. Thus, this study suggests that, to discourage men from smoking and encourage them about the importance of quitting, it is also necessary to generate gender-specific research and programs on the prevention of smoking in men. Consistent with previous findings, we found that education is a potential predictor of smoking cessation. In our study, the unsuccessful smoking cessation rate decreased with increasing level of education. Nowadays, smoking is not as common among highly educated people. One study found that a higher level of education raises the odds of smoking cessation rather than reducing smoking initiation, and also showed that the duration of smoking is reduced by 9 months for each additional year of education. Another study of 18 European countries noted that smokers with lower education were less likely to have quit smoking than smokers with higher education in all countries. Factors that may influence variation in quit rates between smokers with lower and higher educational attainment may include general health knowledge, attitudes, and beliefs. To reduce unsuccessful smoking cessation among less educated smokers, targeted policies and interventions should be developed. Consistent with findings from previous research, in this study those who did not believe that smoking causes serious illness were more likely to be unsuccessful in quitting. Smokers who are unaware that smoking is harmful, or who do not believe that smoking causes serious illness, are less likely to make a quit attempt; and when 
they try to do so, they are more likely to be unsuccessful. Furthermore, smokers may think that they will stop smoking after experiencing the adverse effects of smoking. Therefore, it is essential for tobacco control messages to highlight the importance of stopping smoking earlier rather than later. In our study, we found a significant interaction between smoking rules inside the home and place of residence. From this interaction, we found that among smokers whose homes never allowed smoking, those who lived in urban areas were less likely to quit unsuccessfully than rural smokers. In addition, we found that, even where smoking was allowed in the home, urban smokers were more likely to quit successfully. A previous study found that urban residents in Bangladesh had a significantly higher likelihood of having smoke-free homes compared with rural residents. Our findings suggest that increased public education campaigns about the harmful effects of secondhand smoke and the benefits of quitting, in both urban and rural areas, may encourage smokers to stop smoking in the home voluntarily. Our study has several strengths: first, the present study, with a nationally representative sample, was distinctive in its inclusion of unsuccessful quitters as well as of predictors with interactions. Second, we assessed our final model using several model diagnostic tools. Finally, our final statistical model has good predictive power. There are some notable limitations to our study. First, an important variable, the number of cigarettes smoked per day, was not available for successful quitters in the data set. Second, our study is a cross-sectional study; for this reason, we are not able to see changes over time. Third, since no cross-sectional data have been released after the 2009 GATS for this country, we used these older data in the present study. As the data were collected about ten years ago, smokers' attitudes and beliefs may have changed in the intervening years. Fourth, the definition of unsuccessful and successful quitters was based on the single question "Do you currently smoke tobacco on a daily basis, less than daily, or not at all?". Fifth, like a large number of population-based studies, the GATS depends on self-reported smoking status and cessation behaviors; because of this, smoking could be under- or over-reported. Sixth, a number of important factors, such as self-efficacy, number of previous quit attempts, marital status, monthly income, and number of smokers in the household, that may also be associated with smoking cessation were not included in this study because they were not available in the data set. Seventh, the quitting methods used by former smokers were not available in the data set, and the age of smoking initiation and the time to first smoking after waking were also not available for successful quitters. Finally, as we only considered smokers aged 15 years and older, the findings may not be generalizable to younger age groups. The present study confirmed that age, gender, level of education, belief that smoking causes serious illness, place of residence, and smoking rules inside the home contributed to unsuccessful smoking cessation among adults in Bangladesh. Moreover, we found that the effect of smoking rules inside the home on unsuccessful cessation depends on the smoker's place of residence. We recommend a targeted intervention plan for smokers, particularly those who live in rural areas, are in younger age groups and have no formal education, and, simultaneously, implementing tobacco control 
strategies and programs that assist smoking cessation in Bangladesh.
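The modeling strategy described above, backward elimination guided by the AIC for the binary logistic regression and an exchangeable working correlation for the GEE, can be outlined in a few lines of code. The original analysis was carried out in SPSS and SAS; the Python/statsmodels sketch below is only a hedged illustration of the same ideas, and the file name and variable names (unsuccessful, age_group, psu, and so on) are hypothetical placeholders.

```python
# Hedged sketch of the two modeling approaches described above, using
# statsmodels rather than the SPSS/SAS software actually used in the study.
# Column names and the input file are hypothetical placeholders.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("gats_bd_2009_smokers.csv")   # hypothetical analysis file

# 1) Ordinary logistic regression with AIC-guided backward elimination
#    (simplified: drops the first term whose removal lowers the AIC).
terms = ["C(age_group)", "C(gender)", "C(education)", "C(residence)",
         "C(home_smoking_rule)", "C(residence):C(home_smoking_rule)"]

def fit(term_list):
    formula = "unsuccessful ~ " + " + ".join(term_list)
    return smf.logit(formula, data=df).fit(disp=False)

model = fit(terms)
improved = True
while improved and len(terms) > 1:
    improved = False
    for t in list(terms):
        candidate = fit([x for x in terms if x != t])
        if candidate.aic < model.aic:
            terms.remove(t)
            model, improved = candidate, True
            break

# 2) GEE with an exchangeable working correlation to account for the
#    clustering of respondents within primary sampling units.
gee = smf.gee("unsuccessful ~ " + " + ".join(terms),
              groups="psu", data=df,
              family=sm.families.Binomial(),
              cov_struct=sm.cov_struct.Exchangeable()).fit()

print(model.summary())
print(gee.summary())
```

Comparing the coefficient estimates and standard errors of the two fits gives a quick check of whether the clustering within primary sampling units materially changes the conclusions, which is essentially the comparison reported above.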
Having 21.9 million adult smokers, Bangladesh ranks among the top ten heaviest smoking countries in the world. Correlates of unsuccessful smoking cessation remain unknown. We aimed to identify the correlates of unsuccessful smoking cessation among adults in Bangladesh. We used data from the 2009 Global Adult Tobacco Survey (GATS) for Bangladesh. We compared the socio-demographic, belief about health effects of smoking, and environmental characteristics of current smokers who had a recent failed quit attempt during the 12 months preceding the survey (unsuccessful quitters) with those of former smokers who had quit ≥ 12 months before the survey and had not relapsed (successful quitters). Data were analyzed using a logistic regression model and generalized estimating equations. A total of 1552 smokers (1058 unsuccessful quitters and 494 successful quitters) aged 15 years and older who participated in the survey were included in this study. Among these smokers, 1058 (68%) were unsuccessful quitters. Our analysis showed that older, female, and more highly educated smokers were less likely to quit unsuccessfully. Moreover, those who believed that smoking causes serious illness were also less likely to quit unsuccessfully. For the interaction between place of residence and smoking rules inside the home, we found that among smokers whose homes allowed smoking, those who lived in urban areas were less likely to be unsuccessful in quitting than those who lived in rural areas. Our findings suggest that cessation programs require an integrated approach that takes these findings into account.
48
Dynamics in Transcriptomics: Advancements in RNA-seq Time Course and Downstream Analysis
Profiling of gene expression via high-throughput methods was first achieved in 1992 with the development of Differential Display protocols, followed in 1995 by the implementation of complementary DNA microarrays. Subsequently, several other large-scale techniques were developed, such as Serial Analysis of Gene Expression, Massive Parallel Signature Sequencing, Cap Analysis of Gene Expression and tiling arrays. Finally, the breakthrough of RNA-seq technology now offers scientists greater power, lower costs and new tools to better understand a wide spectrum of scientific and complex medical problems. RNA-seq allows the assessment of the whole transcriptome, including allele-specific expression, gene fusions, non-coding transcripts such as long non-coding RNAs and enhancer RNAs, and the possibility to detect alternatively spliced variants. Compared to the microarray approach, RNA-seq data are highly reproducible and allow the identification of alternative splice variants as well as novel transcripts. Expression or tiling microarrays and capture arrays are still used intensively in biology and medicine for specialized tasks and diagnosis due to their standardized protocols and gold-standard bioinformatics analysis. Several RNA-seq protocols for differential expression or detection of novel transcripts have been developed and can be classified into two main methods: enrichment of messenger RNA or depletion of ribosomal RNA. For eukaryote genomes, the most common and so far standardized protocol is the selection of poly(A) transcripts via oligo-dT beads, enriching non-rRNA fractions. The second category consists of the depletion of ribosomal RNA. Several of these protocols have been compared and reviewed with regard to different applications. When studying dynamic biological processes such as development or drug responses, datasets have to be captured continually in a Time Course (TC) experiment. These data are therefore sampled at several Time Points (TP) in order to recapitulate the whole regulatory network involved, identifying possible regulators and gene switches responsible, e.g., for cyclic behavior or correct differentiation of cells. TC experiments can be classified into three groups: Single-time series investigating only one condition. Here, all time points are compared to the first one, which is considered the control. This approach requires fewer samples, but will not properly control for, e.g., 
varying temperature in the incubator, as the control was not sampled over time. Multi-time series assessing several conditions simultaneously. Here, the TC data sets are compared to a control TC. This approach allows better control of the experiment, because controls are sampled over time in parallel with the other samples. Alternatively, the comparison can be performed directly between the TCs of the different conditions. The drawback of this approach is higher cost, as more samples have to be sequenced and analyzed. Periodicity and cyclic TC consisting of single or multiple time series. Here, a cyclic event of interest is investigated for recurring expression patterns and their differences between conditions. As at least two full cycles should be sampled for each condition, a large number of total samples is required to perform such experiments. Furthermore, differentiating between phases within the cyclic event might be challenging and may lead to "mixed datasets" due to non-uniform cell identities of mixed cell populations. Therefore, synchronization of cells prior to the experiment is important to avoid "mixed datasets". As the complexity of the obtained data increases by at least one dimension per TP of each sample, specific algorithms and methods are required to analyze TC experiments. Some have already been successfully implemented for microarray data; however, only a few have been adapted for RNA-seq data. In the following sections of this review, we will discuss current challenges and available methods as well as promising improvements and extensions of RNA-seq Time Course experiments. Time course experiments follow the same workflow as static RNA-seq experiments, starting with preprocessing and normalization of the data, followed by differential gene expression (DEG) analysis and downstream analysis by clustering and network construction. In this review, we only consider the analysis of RNA-seq TC data, therefore assuming that the data have already been pre-processed. We only consider whole-population RNA-seq data, not single-cell RNA-seq approaches. For a complete overview and comparison of sequencing platforms as well as available tools for mapping reads, the reader is referred to the literature. Well-known biases, such as GC content, gene/fragment length or batch effects, are currently assessed during the quality control step using QC tools like FastQC. Time course experiments introduce additional experimental and computational challenges that have to be addressed and will be discussed further. As in other sequencing experiments, the experimental design is of utmost importance. Setting the sampling rate by defining the number of replicates per time point and the number of TPs is still dictated by relatively high sequencing costs. In the case of microarray experiments, under-sampling has been shown to cause aggregation of effects due to insufficient temporal resolution. Some tools are already available to facilitate sample size calculation for RNA-seq data. These methods calculate a sample size of 20 to 79, or between 8 and 40, in order to detect differential expression. However, such sample numbers are not feasible for many experiments, and most of these approaches do not consider multi-factor experiments. Recent estimations of power and sample size for RNA-seq have been performed on different datasets. This work revealed that 10 replicates under a $10,000 budget constraint already yield maximum predictive power, a number of replicates that could nevertheless still be too high for static and especially time course experiments.
Moreover, choosing a feasible method to analyze the data depends on the experimental setup: whether it is a long or short time course experiment, whether the time course was sampled uniformly, and how many replicates are needed for a reliable and robust final statistical evaluation. Depending on the system investigated, it might also be necessary to synchronize the data in order to establish a uniform starting point, to exclude phase- or patient-specific differences and thereby improve normalization and DEG analysis. So far, no gold-standard method has been established for RNA-seq data analysis, though guidelines have recently been published for some specific applications. The sequencing depth is usually not a problem. A protocol of 100 bp paired-end library preparation coupled with a minimum of three replicates should be established as the minimum requirement for powerful statistics in DEG analysis. When having to make a trade-off between sequencing depth and biological samples, Liu and colleagues showed that adding more replicates increases the predictive power of detecting DEGs to a greater extent than sequencing depth. The quality of the raw data is important for the subsequent bioinformatics analysis. Therefore, a good experimental design, including a statistically relevant number of controls and replicates, is essential for the quality control, mapping and normalization steps. Erroneous designs, including those without replicates, will result in less powerful statistics and an increase of false positive candidates, and will cause unnecessary and enormous costs in downstream analysis and validation experiments. Possible approaches to improve data quality are mentioned in the discussion of this review. Several methods/tools have been developed for microarrays or static RNA-seq analysis. The most recent tools are able to address differences in sequencing depth, outliers and batch effects introduced by library preparation protocols, the sequencing platform and technical variability between sequencing runs. Even if some tools developed for static experiments can be used for TC data, one major issue is that they do not consider correlations of genes between previous and subsequent TPs. Indeed, random patterns, overall time trends in expression or time shifts are therefore not taken into account in the normalization, noise correction and differential expression steps. For example, a drug treatment could induce a slower metabolism in a cell population, resulting in a delay or change in the establishment of gene expression patterns. Such delay effects can be recognized only when all TP data are used for the analysis. Most established methods for DEG analysis are parametric, use count-based input and apply their own normalization approaches to the raw data. The majority of parametric methods apply a negative binomial model to the read counts in order to account not only for the technical variance but also for the biological variance (a minimal numerical illustration of this model is sketched at the end of this article's text). Previously, Poisson distributions were used to correct for the technical variance. The one-parameter Poisson distribution is not able to describe the biological variance, which is higher than the calculated mean expression, making it unsuitable. Therefore, a negative binomial distribution is used, adding a dispersion parameter to be more flexible in accounting for the biological variance and appropriately identifying DEGs. Several non-parametric methods like NOISeq, or more recently NPEBseq and LFCseq, offer an alternative way to normalize and model expression data that do 
not fit negative binomial or Poisson distributions. Nevertheless, these methods are usually more computationally demanding and need a higher number of replicates to perform equally well. Most major methods perform equally well in normalizing the data, but show significant differences in the number of DEGs identified, in accuracy and in power. In this review, we will not discuss each method in detail and we will not make a statement regarding which method to use. These methods were designed for a specific context and might be more appropriate for certain experiments. In conclusion, there is no overall best method for all types of analysis. However, we would like to emphasize the importance of considering the following aspects when choosing a method for analyzing the data to meet the experimental design: How many replicates are needed for this method? Is a simple two-way comparison sufficient or is a more complex multi-factor model needed for DEG analysis? Is it desirable to detect differentially expressed RNA isoforms as well? Time series experiments have been extensively conducted in the past using microarrays, providing algorithms such as spline fitting, Bayes statistics or Gaussian processes to account for temporal aspects of DEG analysis. Moreover, algorithms detecting periodic patterns have also been developed. Most of them have been implemented in pipelines such as STEM, maSigPro, BETR, TIALA and platforms for researchers like PESTS. To date, there are only five tools available that implement RNA-seq TC data for DEG analysis, which we would like to describe in more detail. Next maSigPro is an updated version of maSigPro, an R package on Bioconductor initially developed for microarray TC experiments. The updated version allows the analysis of RNA-seq TC data as well. It uses generalized linear models instead of a linear model in order to allow the modeling of count data. This is achieved by fitting a negative binomial distribution followed by a polynomial regression. In order to be detected as a DEG, the difference of the log-likelihood ratio of the hypotheses has to be greater than a user-defined significance threshold. This ensures a best-fit model for each gene by keeping only significant coefficients. Though Next maSigPro does not offer built-in normalization methods, the package is equipped with functions for clustering and visualization of the processed data. In a comparison with the edgeR package, Next maSigPro can better control the False Discovery Rate (FDR). Candidates identified by both approaches or solely by Next maSigPro have highly significant and well-fitted models, while the majority of the candidates selected only by edgeR do not pass the second significance threshold step. The small number of DEGs not pre-selected by Next maSigPro have a high variance as well as a small fold change. A first drawback of the pipeline is that the threshold for DEG detection is not set automatically according to the data but is user-defined, making it more challenging to indirectly determine an FDR. Furthermore, the user has to define the number of clusters, whereas it would be better if the number of clusters were determined based on the actual data. Finally, replicates are not merged with error bars in the output graph, but data points are plotted one after another. DyNB uses a negative binomial likelihood distribution to model count data, taking the temporal correlation of genes into account. It also corrects for time shifts between replicates and time series by Gaussian processes, introducing 
time-scaling factors. Normalization is performed by variance estimation and rescaling of counts, similar to DESeq, but on the previously calculated Gaussian process function rather than directly on the samples. In the next step, DyNB uses a Markov-Chain-Monte-Carlo (MCMC) sampling algorithm for marginal likelihoods that enables the DEG analysis. A comparison of the DyNB and DESeq candidates showed that DyNB outperforms DESeq in the detection of weakly expressed or high-noise-level genes as well as genes affected by variable differentiation efficiency. A drawback is the implementation in MATLAB®, which makes it less accessible to a broad range of users. Additional drawbacks are: long running times due to MCMC sampling; genes not expressed in one condition are removed; and the test output is a Bayes factor calculated from the ratios of hypothesis probabilities, which is less intuitive than the more common p-value. Finally, according to Jeffreys et al., a Bayes factor value higher than 10 indicates strong evidence of differential expression, though this threshold might not hold true for all types of datasets, and users will have to adapt the filtering to identify their candidates of interest. TRAP is a method that aims to identify and analyze differentially activated biological pathways. In a first step, reads are mapped to a reference genome by the Tophat software and further processed to estimate expression by Cufflinks. In the second step, the DEG analysis is performed by the Cuffdiff software, generating an FPKM output file for each sample. The novelty is the downstream analysis, which directs the DEG candidates from the Tophat/Cufflinks/Cuffdiff pipeline into a KEGG analysis. This approach offers three options: one-time-point pathway analysis, time series pathway analysis or time series clustering. The one-time-point analysis identifies significant pathways for each time point separately, whereas the time series pathway analysis takes all TPs into account. For pathway analysis, two methods are performed and their p-values combined: an Over-Representation Analysis (ORA) using the Gene Ontology database and a Signaling Pathway Impact Analysis (SPIA). Briefly, ORA identifies significant pathways by hyper-geometric tests that compare the ratios of DEGs to the complete number of genes at the total and pathway level. SPIA takes the effect of other genes in a pathway into account. This is achieved by calculating a perturbation factor from the fold change of upstream genes divided by the fold change of downstream genes. Additionally, it introduces a time-lag factor for the time series analysis. For time series clustering, each gene is assigned a label at each time point, depending on whether the log-fold change of FPKM is positively/negatively above a threshold or is otherwise categorized as constant. Clusters are generated by grouping genes with the same label and are further analyzed by ORA using the ratios of pathway genes to total genes and all genes in the cluster. Users can directly start the downstream analysis by providing Cufflinks/Cuffdiff data, avoiding the time-demanding preprocessing steps. The main pipeline performs a pairwise comparison of TPs. Of note, it does not make use of the time series parameter of Cuffdiff, but only takes the temporal character into account in later analysis. For the analysis itself, a possible complication is the conversion of gene name identifiers to match the ones used in the pathway files. Moreover, only the first of possibly several gene name identifiers for a given pathway is used to find matches 
among candidates. In our opinion, the major drawback of the pipeline, similar to DyNB, is that genes that are not expressed in one condition are excluded from further analysis. This is due to an infinite log fold change ratio caused by non-expressed genes, which are assigned an expression level of zero. SMARTS is designed to create dynamic regulatory networks based on time series data from multiple samples by iteratively creating models, extending the DREM method. First, samples are synchronized to a common biological time scale by pairwise alignment followed by sampling of points. This allows a continuous representation, correction of alignment parameters and computation of an error metric in order to create a weighted alignment. A second alignment error is calculated between samples to create a matrix for an initial clustering by spectral clustering or affinity propagation for cases with two or more clusters, respectively. Clustering is calculated on the basis of all genes and contains noise. SMARTS takes advantage of the fact that a certain condition affects only a small number of genes, which are regulated by an even smaller number of transcription factors and upstream pathways. This, in turn, reduces the dimensionality of the data. The clustering is the basis for a first regulatory model that is iteratively adapted to create a final clustering of groups that are co-expressed and regulated throughout the time series. To iteratively improve the regulatory models, static protein–DNA interaction data, such as DNA-binding motifs or ChIP-seq data, are used to define the path of each gene by modeling the transitions between time points applying an Input–Output Hidden Markov Model framework. The regulatory model converges into a final clustering that identifies split time points where a subset of genes that have previously been co-expressed diverges onto another path. The resulting graph offers a view of gene sets and their paths throughout the timeline, illustrating the differences in transcription factors (TFs) at the splits that are most likely responsible for the differences in expression and regulation at subsequent time points. In our opinion, the only drawback of this tool is the requirement of prior knowledge of TF binding to the genes of interest used as input to the pipeline. EBSeq-HMM is an extension of the EBSeq package accounting for ordered data by applying an auto-regressive Hidden Markov Model. EBSeq-HMM identifies dynamic processes and classifies genes according to their state into expression paths, taking dependencies on prior time points into account. The analysis is based on two steps: first, the conditional distribution of the data at each time point, followed by the transition of states over time. Parameter estimation for the conditional distribution is performed using a beta-negative-binomial model. Second, an additional implementation to correct for the uncertainty of read counts of genes with several isoforms is offered. Subsequently, a state for each gene at each time point is determined by applying a Markov-switching auto-regressive model to account for the dependence of expression and state on the previous state. Finally, all the states of a gene are combined and classified into an expression path. The developers also tested EBSeq-HMM together with existing static methods and Next maSigPro on simulated and case-study data. On the simulated data, EBSeq-HMM performed with greater power and F1 scores but had a higher false discovery rate of 4.5%, compared to a maximum of 0.5% for the other methods. On clinical data, 
EBSeq-HMM had a 90% overlap of identified genes with other methods and outperformed these on genes with subtle and consistent changes over time. However, the authors did not make any statement about the genes that EBSeq-HMM was not able to identify. When using EBSeq-HMM, the user has to keep in mind that its purpose is to identify dynamic genes; in theory it also identifies constant genes and clusters them accordingly. In practice, in order to be classified as constant, the previous and following TPs have to have exactly the same mean expression value, with the result that most genes will be classified as up- or down-regulated at the affected TPs, hiding possible non-DEG time intervals of genes. DEG analysis may result in hundreds of putative candidates, if not more, a number that cannot be experimentally validated. Therefore, scientists have tried to reduce the number of candidates by searching for expression patterns and shared pathways to narrow down essential candidates. This field has been extensively researched and improved over the last two decades, offering a great abundance of tools, leading to new scientific questions and simplifying their validation. The purpose of clustering is to statistically group samples according to a certain trait, e.g. gene expression, to reduce the complexity and dimensionality of the data, predict function or identify shared regulatory mechanisms. Depending on the data structure, a fitting clustering method has to be used to account for the specific data. Considerations should include: Was the data transformed or does it consist of read counts? How is it distributed? Is the data originating from static, short or long TC experiments? A plethora of clustering methods have been published, many of them available as R packages on the CRAN Task View page, on the Bioconductor website or in other scripting/programming languages made available on the publishers' web sites. However, we cannot discuss the whole spectrum of these methods. Therefore, we would like to point out certain methods that are specific for TC experiments employed for microarray and RNA-seq data and refer to the aforementioned reviews for the selection of a fitting method. To gain new insights into complex data, one of the most commonly used methods is functional enrichment analysis (FEA). FEA identifies candidates sharing a biological function or pathway by statistical over-representation using annotated databases such as Gene Ontology or KEGG and can easily be performed using freely available web interfaces or R packages such as DAVID, WebGestalt, PANTHER or FGNet. Finally, several commercial software packages also exist, such as Ingenuity or Metacore. Other options are the investigation of direct and indirect protein–protein interactions via the STRING database or via Cytoscape applications. Detailed descriptions, comparisons and an overview of FEA tools can be found in recently published reviews. In the last few years, many algorithms were developed to increase the quality and methodology of existing approaches. A usual procedure is to extend, adapt or update an existing established method. For example, edgeR was updated with multifactor experiments and an observation weights factor to account more robustly for outliers. Combining existing methods and new strategies could offer a great improvement in the quality of analysis, in static as well as in TC experiments. Here, we present novel advancements in the field that might offer improvements to existing methods and pipelines. Major issues at the level of mapping and quantification of reads are ambiguous, 
multi-alignment and exon-junction reads, which are usually discarded at the counting step. Recent approaches such as GIIRA, ORMAN and Rcounts account for multi-mapping reads by introducing a maximum-flow optimization, a minimum-weighted set cover problem of partial transcripts and weighted alignment scores, respectively. These recent improvements allow a better quantification of genes and isoforms, as well as the investigation of repeat elements, which until now has not been very feasible. On the isoform level, WemIQ applies a weighted-log-likelihood expectation maximization to each gene region separately to improve the quantification of isoforms and gene expression. Samples that differ greatly in read counts create a bias at the normalization step due to the adjustment to a common scale that is calculated over all samples. This problem is addressed by the RAIDA algorithm, which accounts for differences in abundance levels rather than modifying the read counts for normalization. Further studies of the SEQC/MAQC-III Consortium elucidated the negative influence of lowly expressed genes on DEG detection. Therefore, filtering out genes with low expression might offer another possibility to increase predictive power. Another problematic aspect of the analysis arises when working with small sample sizes. In such cases, for RNA-seq experiments, the calculation of the dispersion factor of negative binomial methods is less accurate. Therefore, a new shrinkage estimation has been introduced in order to analyze data with few replicates, which was incorporated into the new tool sSeq. Moreover, resampling of at least three biological replicates per time point was shown to improve the identification of oscillating genes without increasing false positive rates. Recently, a new adapted exact test has been developed to increase the power to detect DEGs for experimental designs containing only two replicates. This R package is also able to identify differentially expressed genes that are not abundant. As there is no best-fitting method for DEG analysis so far, we recommend using several tools and comparing and combining the results in order to obtain confident candidates. To increase precision and sensitivity and to reduce the detection of false positive candidates, a combination of statistical tests should be applied. The PANDORA algorithm combines p-values using one of six possible methods, which have been weighted based on the performance of each statistical test. On the other hand, multiple testing and the combination of results involve an increase in the time and resources needed to run the analysis, which might outweigh the gain in statistical power. In the beginning of multi-Omics analysis, RNA-seq data were used to improve the results of other approaches when the initial method reached its limits. With further advancement and availability of technologies, scientists started to combine several Omics data types to ask new scientific questions and to add additional layers of information to their data. Further, a great increase and expansion of databases such as ENCODE, The Cancer Genome Atlas, GEO and KEGG and of analysis platforms have also facilitated access to multi-Omics analysis. Nevertheless, the integration of several Omics datasets still harbors several challenges, such as quality assurance, data/dimension reduction and clustering/classification of combined data sets, which have to be properly addressed and taken into account when designing experiments and performing the analysis. In the following paragraph we would like to highlight methods 
that combine static or TC RNA-seq experiments with other Omics data. These tools can be categorized according to whether they are multi-staged or meta-dimensional approaches, performing the different Omics analyses sequentially or combining several data types into a single analysis. In the past decade, great efforts were undertaken to develop and improve tools combining microarrays and ChIP-seq data. To date, there are several multi-stage tools available to analyze RNA-seq and ChIP-seq, e.g. INsPeCT and metaseq, but only a few integrated meta-dimensional approaches, e.g. Beta, CMGRN and Ismara. Nevertheless, none of the mentioned methods offer specific TC algorithms for analysis; most tools aim to identify targets of transcription factors and create Gene Regulatory Networks, whereas others use methylation or histone modification data to predict regulatory functions. Different approaches and tools for the integration of other Omics data have been extensively reviewed for proteomics, metabolomics and phenotypic data. Indeed, re-analyzing externally obtained data using the same pipelines as for in-house produced data sets is the best approach to guarantee comparable results. In general, more powerful algorithms, which so far have not been implemented due to technical infeasibility, are becoming more and more available. Nevertheless, optimization through parallelization and cloud computing is a major goal for the development of such new tools. As the amount of data produced in each experiment is massively increasing, improved pipelines and algorithms are in demand in order to provide users with a good trade-off between accuracy and the resources needed for their analysis. Recently, two approaches have emerged, namely co-expression analysis and single-cell RNA-seq, that are very promising for improving DEG analysis and offer new application fields such as the study of subpopulations. The assumption behind co-expression analysis is that genes in the same pathway very likely share regulatory mechanisms and should therefore have similar expression patterns. This allows the identification of biological entities that are involved in the same biological processes and has already been successfully applied to microarray data. Moreover, microarray co-expression data have also been integrated with other data types such as microRNA or phenotypic data and used for differential co-expression to identify biomarkers. It has further been shown that co-expression analysis is able to improve the sensitivity of RNA-seq DEG analysis and, more recently, to outperform existing clustering approaches. Similarities and differences of co-expression networks in microarrays and RNA-seq, as well as factors driving variance at each stage of co-expression analysis, have already been investigated. However, no gold standard for RNA-seq co-expression analysis has been established. Single-cell RNA-seq, in contrast to population sequencing, enables access to the heterogeneity of gene expression in cells, which otherwise is averaged out or even lost for small subpopulations of cells in bulk experiments. This heterogeneity in expression arises due to differences in the kinetics of the response to a certain condition or treatment, or in the cell fate decisions of each cell. Single-cell RNA-seq allows studying the subpopulation of interest and investigating mechanisms explaining differences between subpopulations, which might offer advances in drug development, personalized medicine or the creation of differentiation networks. Improvements in protocols and 
sequencing have led to new methods at a rapid rate: STRT, CEL-Seq, Smart-seq, Quartz-seq and microfluidic platforms, enabling scientists to ask new questions. Nevertheless, protocols and methods for single-cell sequencing are not yet completely optimized and still harbor uncertainties such as noise and sequencing and normalization biases, as well as a lack of proper tools for analysis. There is great effort to address these problems. It has recently been reported that explicit calculation of gene expression levels using External RNA Controls Consortium spike-in controls improved normalization and noise reduction. Finally, to date the lack of validated genome-wide data slows down the development of new algorithms, and models can only approximate the real extent of regulation or networks. There are tools to simulate expression data incorporating noise, such as SimSeq, but this noise estimation still does not completely capture a biological situation and again is just an approximation of the whole picture. As more and more genome-wide experiments are conducted, networks created and candidates validated, the data from several sources could be compiled into a database offering frameworks for model validation. To conclude, in the last decades a plethora of new models, systems and networks were created, with the caveat of over-generalization of results in order to fit hypotheses and models. By combining high-throughput data, scientists are now able to correct for this over-generalization by filling gaps with complementary data, allowing fine-tuning and dissection of existing models and networks as well as the emergence of new intuitive, integrative and explorative tools. Furthermore, the integration of several kinds of Omics data remains the biggest challenge, as we have to understand the limitations of each technique before conducting a joint analysis and to develop tools tailored to the specific data types and underlying genomic models for powerful integrative analysis.
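Several of the differential expression tools discussed above rest on the negative binomial model, in which the variance of a gene's counts grows as mu + alpha*mu^2 rather than being equal to the mean. The Python sketch below is a toy per-gene illustration of that idea applied to a short time course, not a re-implementation of any of the published packages: counts are simulated for one gene over five time points with three replicates, and a likelihood-ratio test compares a model with a time effect against an intercept-only model. The dispersion is fixed for simplicity, whereas the real tools estimate it from the data and, for time course designs, also model the temporal correlation.

```python
# Toy per-gene sketch of the negative binomial idea discussed above.
# The simulated counts and the fixed dispersion are hypothetical.
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(0)
time_points = np.repeat([0, 2, 6, 12, 24], 3)           # 5 TPs x 3 replicates
mu = 50 * np.exp(0.05 * time_points)                    # expression drifting upward
alpha = 0.2                                             # NB dispersion (var = mu + alpha*mu^2)
counts = rng.negative_binomial(n=1 / alpha, p=1 / (1 + alpha * mu))

# Full model: log(mean) depends on time; null model: intercept only.
X_full = sm.add_constant(time_points.astype(float))
X_null = np.ones((len(counts), 1))
full = sm.GLM(counts, X_full, family=sm.families.NegativeBinomial(alpha=alpha)).fit()
null = sm.GLM(counts, X_null, family=sm.families.NegativeBinomial(alpha=alpha)).fit()

# Likelihood-ratio test for a time effect on this gene's expression.
lr_stat = 2 * (full.llf - null.llf)
p_value = stats.chi2.sf(lr_stat, df=1)
print(f"LR statistic = {lr_stat:.2f}, p = {p_value:.3g}")
```

In a genome-wide analysis this kind of test would be applied to every gene and the resulting p-values corrected for multiple testing.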
Analysis of gene expression has contributed to a plethora of biological and medical research studies. Microarrays have been used intensively for profiling gene expression during diverse developmental processes, treatments and diseases. New massively parallel sequencing methods, often referred to as RNA-sequencing (RNA-seq), are greatly improving our understanding of gene regulation and signaling networks. Computational methods originally developed for microarray analysis can now be optimized and applied to genome-wide studies in order to gain a better comprehension of the whole transcriptome. This review addresses current challenges in RNA-seq analysis and specifically focuses on new bioinformatics tools developed for time series experiments. Furthermore, possible improvements in analysis and data integration, as well as future applications of differential expression analysis, are discussed.
49
ICN_Atlas: Automated description and quantification of functional MRI activation patterns in the framework of intrinsic connectivity networks
The analysis and interpretation of functional MRI data activation patterns is usually performed in the framework of brain anatomy.In particular, activation clusters are usually described in terms of their extent and centre of gravity coordinates as defined in standard template spaces, e.g. MNI or Talairach.A variety of macro- and micro-structural atlasing approaches have been proposed to relate activation clusters to anatomical landmarks, e.g. automated anatomical labelling or parcellations based on gyral and sulcal structure, or on cytoarchitectonic structure, e.g. the Talairach Demon or the SPM Anatomy toolbox.Another widely used approach to the description of fMRI activation patterns is based on functional localizers."For example, a target area is identified through a separate localisation measurement after which activations of interest are described with respect to the localizer's functional activations.There is some criticism regarding the improper use of functional localizers, especially when used to constrain the analyses per se or due to the risk of circularity.Furthermore, in the context of pathological activity and in particular in view of the spatio-temporal heterogeneity of epileptic activity-related BOLD patterns this approach may be sub-optimal since it may not provide a comprehensive mapping of all relevant activation foci.Recent developments showing the correspondence of maps obtained with resting-state and task-based fMRI may provide a solid background for developing a whole-brain functional networks-based atlasing tool for the interpretation of BOLD patterns derived either from task-based or task-free measurements.Specifically, the pattern of low frequency correlations in the resting brain have been shown to form well identifiable intrinsic connectivity networks or resting state networks.ICNs are spatially segregated areas representing underlying functional connectivity, i.e. 
intrinsic connectivity, which is important for development, maintenance, and function of the brain.As functional units they show synchronized BOLD fluctuations both at rest and while performing specific tasks.These networks have been observed consistently across imaging sessions and between subjects and can essentially be seen as forming two large anti-correlated systems corresponding to task disengagement and task engagement, respectively; the former is the so-called default mode network and the latter is composed of several task-based networks: somatosensory, visual, or attention ICN, etc.Data-driven meta-analyses of task-activation data have shown a strong correspondence between the configurations of RSNs and task-based fMRI co-activations both for low and high independent component analysis model orders.In the field of epilepsy, there is an increasing interest of a functional network-based interpretation of the pathological activity."In the particular case of fMRI localisation of epileptic events and discharges a functionally-derived framework may be more appropriate than an anatomical approach, specifically for the discussion of EEG discharge-related activation and deactivation patterns, given the relationship between activation patterns and the seizure's clinical signs.Several studies employing independent component analysis to derive spatio-temporal components related to epileptic discharges evidenced networked activation/deactivation patterns partly overlapping and coexisting with ICN-related components.There is also evidence for altered connectivity outside the core epileptic networks, affecting the ICNs possibly as an effect of epilepsy.A study of BOLD changes associated with different electro-clinical phases of epileptic seizures has shown a link between involvement of the DMN and loss of consciousness.A recently proposed framework emphasizes the importance of the proportion of change produced by epileptic transients relative to steady-state network connectivity in normal controls.This underlines the necessity to interpret epileptic discharge-related activation with respect to the whole connectome.Here we propose an atlasing tool, called ICN_Atlas, for the interpretation of BOLD maps based on the objective quantification of the degree of engagement of a set of intrinsic connectivity networks.Specifically, we aimed to develop a means to describe activations in the framework of ICN by matching data to atlas templates in a similar fashion as anatomy-based atlases do and to calculate various measures of activation extent and level in relation to the chosen atlas maps."We first present the engagement quantification formalism, followed by a validation study and finally an illustration of the new tool's application in the study of epileptic networks.ICN_Atlas is a collection of Matlab scripts that serves as an extension to the SPM toolbox and, as such, works across multiple platforms.It is an extensible non-commercial package that is freely available at http://icnatlas.com and at http://www.nitrc.org.The aim was to provide a toolbox with atlasing capabilities analogous to previously published anatomical information-based tools such as the 3D Talairach atlas, or the Automated Anatomical Labelling.The novelty of the framework lies in the following: it uses functionally-derived atlas base maps based on ICNs; it outputs a series of estimated activation-based metric values to describe the functional activations based on intrinsic functional connectivity."In brief, ICN_Atlas' input consists 
of a volumetric statistical parametric map representing an fMRI activation pattern, and its output consists of a series of numeric values representing different measures of the map's degree of involvement for each atlas base map (for an overview see Fig. 1). The atlasing algorithm performs labelling of the input map's active voxels according to membership, based on voxel-wise correspondence analysis of the activation map and the atlas base maps, and calculates a series of overlap, activation extent, and activation density metrics based on the labelling. In ICN_Atlas' current implementation, three sets of ICN base atlases are available, based on labelled Gaussianised statistical maps representing ICNs resulting from group-wise resting-state fMRI data and BrainMap Project meta-analysis data, see below. In addition, an integer label map representing the whole brain Automated Anatomical Labelling atlas is also included as an anatomical reference. N.B., the atlasing framework is extensible and other atlases can easily be integrated. The three sets of ICN base atlases are as follows: SMITH10: the 10 adult ICNs based on ICA decomposition of resting-state fMRI data, where d is the dimensionality, representing the constraint on the number of independent spatio-temporal components; BRAINMAP20: the 18 BrainMap co-activation networks and 2 noise/artefact components based on ICA decomposition of the BrainMap Project large-scale neuroimaging experiment meta-analysis data, available at http://brainmap.org/icns/maps.zip; BRAINMAP70: the 65 BrainMap co-activation networks and 5 noise/artefact components based on ICA decomposition of the BrainMap Project large-scale neuroimaging experiment meta-analysis data, available at http://brainmap.org/icns/Archive.zip. Each of the resulting atlas base maps is then saved as a matrix of labels and Z-scores, plus information on the defining space and other descriptive data. The anatomical atlas included with the ICN_Atlas tool, CONN132, is based on the CONN functional connectivity toolbox's combined representation of the cortical and subcortical ROIs from the Harvard-Oxford Atlas and the cerebellar ROIs from the AAL atlas, transformed from 1 × 1 × 1 mm to 2 × 2 × 2 mm resolution using the SPM toolbox to match the spatial characteristics of the functional atlases' base maps. In addition to the labelling scheme, in an attempt to capture the essence of ICN involvement embodied in the input map quantitatively as completely as possible, we considered a range of ICN ‘engagement’ metrics. The metrics were inspired firstly by basic descriptive spatial overlap statistics, and secondly by considering the statistical nature of the input maps; for example, the metric Ii (defined below) represents the ratio of activated ICNi voxels to ICNi volume and is purely spatial, while another, MAi, is the ratio of the mean of the voxel-wise statistical values over the number of activated voxels in ICNi. The metrics fall into the following categories: spatial extent, activation strength, activation density and correlation. Furthermore, the proposed metrics are either ICN-specific or global. A total of 11 ICN-specific metrics and 4 global metrics are implemented in ICN_Atlas and their definitions can be found in Appendix A. In the following, we focus on 4 metrics in order to simplify the presentation. This choice is informed by the results of a Factor Analysis aimed at identifying a parsimonious set of metrics that capture and summarise ICN engagement for a given dataset. The following two metrics are designed to capture the degree of engagement of an ICN in a given input map in purely spatial terms; in words, Ii is the proportion of ICNi that is activated in the input map. The following two ICNi engagement metrics take each voxel's statistical score into consideration; these are designed to better distinguish between two input maps with similar degrees of spatial involvement of ICNi but different activation strengths, each taking into account the input map's values in the ICNs in a different way: for RAN,i the denominator is the sum of the numerator over all ICNs, representing the input map's total statistical ICN score, so that RAN,i is a metric similar to MAN,i but relative to the engagement intensity of all ICNs.
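The display equations defining these metrics did not survive text extraction; the following LaTeX sketch restates the spatial involvement metric from the verbal definition above and gives one plausible reading of the relative activation metric. The exact normalised forms (MAN,i, RAN,i) are those defined in Appendix A of the paper; the summation form of RAN,i below is an assumption based on the description of its denominator.

```latex
% I_i: proportion of ICN_i voxels that are suprathreshold in the input map
I_i = \frac{| A \cap \mathrm{ICN}_i |}{| \mathrm{ICN}_i |},
\qquad A = \{\, v : Z(v) > Z_{\mathrm{thr}} \,\}

% Assumed form of the relative activation metric: the ICN_i statistical
% score divided by the summed score over all ICNs (see Appendix A of the
% paper for the exact normalisation used by the toolbox)
RA_{N,i} = \frac{S_i}{\sum_{j} S_j},
\qquad S_i = \sum_{v \in A \cap \mathrm{ICN}_i} Z(v)
```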
The metrics are applicable either to input maps previously subjected to statistical significance thresholding or to ‘raw’ statistical maps. The former may be more appropriate for involvement metrics where the spatial extent of activation is the determining factor, while the latter can possibly be advantageous for activation metrics depending on the research question, e.g. to compare activation profiles over whole ICNs for different task or behavioural conditions. The toolbox's primary outputs consist of a table containing the values for all 11 ICN-specific and 4 global metrics, and a range of visualization options in the form of bar charts and polar plots, some of which will be illustrated below. This section consists of two parts: 1. Validation, on repeat resting-state fMRI scanning data from 25 healthy volunteers. 2. Demonstrations, of a methodology for the identification of a parsimonious set of ICN_Atlas engagement metrics in a particular fMRI dataset, and two illustrative applications of ICN_Atlas on task fMRI data and fMRI maps of epileptic seizures. We validated the ICN_Atlas atlasing methodology using the New York University (NYU) resting-state test-retest fMRI dataset, which consists of three rs-fMRI scans acquired in twenty-six participants who had no history of psychiatric or neurological illness. The second and third scans were collected between 5 and 16 months following the baseline scan, in a single scanning session 45 min apart. In summary, the validation process consists of the following. First, we performed group and individual-level ICA analyses of the NYU test-retest data. The results of this analysis are sets of group-level and individual ICs that were subjected to atlasing using SMITH10, BRAINMAP20 and BRAINMAP70 as atlas base maps, to evaluate the proposed methodology's robustness in terms of its ability to identify functionally stereotypical ICNs. Second, we assessed ICN_Atlas atlasing repeatability by quantifying ICN engagement at the individual level across the repeat scans in the NYU dataset. Data pre-processing was performed using the spm8 toolbox with the following steps: realignment and unwarp, normalization to MNI space using the spm8 EPI template as target image, and Gaussian spatial smoothing with 6 mm FWHM. The pre-processed NYU dataset was then analysed by means of independent component analysis using MELODIC with the temporal concatenation group ICA approach followed by dual regression, resulting in 1500 individual-level ICs. Data from the three scanning sessions were included in the same group ICA, and the number of resulting group-level independent components was limited to 20. The resulting group-level IC statistical maps were then thresholded at Z > 3 and submitted to ICN_Atlas atlasing using the SMITH10, BRAINMAP20 and BRAINMAP70 atlases.
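As an illustration of the atlasing step just described (labelling suprathreshold voxels by atlas membership and computing spatial involvement), a minimal Python sketch is given below. It is not the toolbox's Matlab/SPM implementation; nibabel is assumed to be available, the file paths and names are placeholders, and the Z > 3 thresholds mirror the values used above.

```python
import numpy as np
import nibabel as nib

def spatial_involvement(input_map_path, atlas_paths, z_thresh=3.0, atlas_thresh=3.0):
    """Fraction of each atlas base map that is suprathreshold in the input map (I_i).

    input_map_path : a statistical map (e.g. a group-level IC Z-map), assumed to be
                     in the same space and resolution as the atlas base maps.
    atlas_paths    : dict mapping base-map names (e.g. 'ICN9') to NIfTI file paths.
    """
    activated = nib.load(input_map_path).get_fdata() > z_thresh   # active voxels of the input map
    involvement = {}
    for name, path in atlas_paths.items():
        icn = nib.load(path).get_fdata() > atlas_thresh           # ICN membership mask
        n_icn = int(icn.sum())
        involvement[name] = float((activated & icn).sum()) / n_icn if n_icn else np.nan
    return involvement

# Hypothetical usage with placeholder files:
# i_values = spatial_involvement('IC9_zmap.nii.gz', {'ICN10': 'smith10_icn10.nii.gz'})
```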
Correspondence to the ICNs was quantified using the metrics Ii, MAN,i and RAN,i, where the index i names the relevant atlas base map; for example, IICN9 represents ICNi Spatial Involvement calculated for ICN9 of the SMITH10 atlas, RAN,BM20-8 represents Normalised Relative ICNi Activation calculated for BRAINMAP20 co-activation network BM20-8, and MAN,BM70-2 represents Normalised Mean ICNi Activation calculated for BRAINMAP70 co-activation network BM70-2. To obtain an overview of the agreement between base atlases we determined whether the highest three engagement values pertain to the same atlas base maps for any given IC. This number was chosen based on the fact that the top 3 values correspond to between 61–99% and 48–95% of the total Ii for SMITH10 and BRAINMAP20 respectively, and between 21 and 80% of the total Ii for BRAINMAP70. Within- and between-session repeatability of the ICs was quantified as the mode of the intra-class correlation coefficient (ICC); ICC was calculated using a formula that does not penalize for systematic differences between scanning sessions (for details of the formulae, see Appendix B). The mode of ICC was calculated over voxel-wise values greater than zero using an 80-bin histogram spanning the interval.
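The mode-of-histogram summary of the voxel-wise ICC maps can be reproduced in a few lines; the sketch below uses the 80 bins stated above and assumes the histogram spans (0, 1], since the interval bounds were lost in extraction.

```python
import numpy as np

def icc_mode(icc_values, n_bins=80, interval=(0.0, 1.0)):
    """Mode of voxel-wise ICC values > 0, taken as the centre of the most
    populated bin of an n_bins histogram over the assumed interval."""
    vals = np.asarray(icc_values)
    vals = vals[vals > 0]                                   # keep only positive ICC values
    counts, edges = np.histogram(vals, bins=n_bins, range=interval)
    k = int(np.argmax(counts))                              # most populated bin
    return 0.5 * (edges[k] + edges[k + 1])                  # bin centre
```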
In this section we describe two demonstrations of the application of ICN_Atlas: firstly, we illustrate the problem of selecting a parsimonious subset of the proposed ICN engagement metrics for a given dataset; secondly, we show the results of two applications of ICN_Atlas, using a task-based dataset and in the field of epilepsy, by quantifying ICN engagement evolution during epileptic seizures. To demonstrate ICN_Atlas' utility on task-based fMRI data, we selected an open access fMRI dataset from the NeuroVault database corresponding to the experiment described in Vagharchakian et al., which aimed to investigate how the language processing networks cope with fast visual and auditory sentence presentation rates. Briefly, neural activations for visual and auditory sentence presentation rates representing 20, 40, 60, 80 and 100 percent sentence durations with respect to a baseline of 5.9 syllables/s presentation rate were collected using fMRI and then analysed using GLM ANOVA with specific linear and non-linear contrasts and exclusive/inclusive contrast masking. Three distinct response profiles were identified, corresponding to: (i) linear increase with stimulus duration, denoted as the ‘Sensory profile’, characteristic of the bilateral sensory cortices; (ii) response collapse for the shortest presentation times, described by the authors as the ‘Post-bottleneck profile’, characteristic of activations in the bilateral superior and middle temporal gyri, left inferior frontal and precentral gyri, bilateral occipitotemporal cortex and visual word form area; and (iii) maximum activation for intermediate rates, denoted as the ‘Buffer profile’, characteristic of activity in the insulae, supplementary motor area bilaterally, anterior cingulate cortex, and left premotor cortex. The authors concluded that these response profiles are consistent with a processing bottleneck that is independent of the sensory limitation. The data available from NeuroVault consisted of simple group-level compression rate vs. baseline contrast maps for each modality and presentation rate, each represented as Z-maps in MNI space according to the available metadata. Here we aimed to show the utility of ICN_Atlas for parametric data by assessing whether atlasing results obtained with anatomical ROI-based atlasing using the CONN132 anatomical atlas for the available maps are consistent with the voxel-wise results published previously, and by evaluating whether the proposed ICN-level engagement metrics for the BRAINMAP20 atlas can enhance the interpretation of the study's results. For the anatomical ROI comparison, we selected the following CONN132 atlas ROIs based on their correspondence with the activation clusters detailed in the original study: the right and left insular cortices; inferior frontal gyrus, pars triangularis, left; inferior frontal gyrus, pars opercularis, left; precentral gyrus, left; superior temporal gyrus, anterior division, right and left; superior temporal gyrus, posterior division, left; lateral occipital cortex, inferior division, right and left; frontal medial cortex; supplementary motor area, left; and Heschl's gyrus, right and left. Atlasing was performed on unthresholded input maps, reflecting the lack of information in the NeuroVault metadata to support appropriate significance thresholding. Nevertheless, for visualization purposes an input map threshold of Z = 3 was also applied (see Fig. 10, below). To illustrate ICN_Atlas' potential utility in relating BOLD changes to functional networks, we quantified ICN engagement during epileptic seizures in a patient with severe epilepsy. The patient underwent simultaneous scalp EEG and video recording and functional MRI scanning, during which 7 spontaneous seizures were captured (see Chaudhary et al. for details of the data acquisition and analysis). The seizures, originating in the left temporal lobe, were classified as typical, meaning that they are associated with clinical manifestations that are well characterised on clinical video EEG recordings. Ictal semiology was characterised by behavioural arrest, orofacial movements, manual automatisms and loss of awareness. The seizure developed from stage II of sleep, with indication that typical semiology did not fully develop given the constraints of the scanner environment. The patient appeared unaware/unconscious during the whole seizure. The ictal onset phase was characterised by a left temporal theta rhythm on EEG and no signs or symptoms. During the ictal established phase the abnormal activity on EEG became widespread. The patient exhibited orofacial movements and some jerks involving his head and hands. We considered that the patient did not only show such elementary motor signs, but probably aborted manual automatisms. During the late ictal phase left temporal slowing was evident on EEG and there was no semiology. As described in Chaudhary et al.
the seizures captured during video-EEG-fMRI were partitioned into three ‘ictal phases’ based on close review of the EEG and video: ‘Early ictal’, ‘Ictal established’ and ‘Late ictal’."The ictal phase-based analysis of the fMRI data is designed to reveal BOLD patterns associated with the specific electro-clinical manifestations characteristic of each phase The BOLD changes associated with each phase were mapped in the form of SPM -maps at a significance threshold of p < 0.001 uncorrected for multiple comparisons with a cluster size threshold of 5 voxels, and co-registered with the patient's anatomical MRI scan and normalised to MNI space.ICN_Atlas was applied using the SMITH10 atlas to the fMRI map obtained for each ictal phase and ICN engagement was quantified for each ictal phase using the metrics Ii, RAN,i and MAN,i which were identified in the factor analysis described above.The components obtained with temporal concatenation group ICA were consistent with previously published ICNs and in particular showed strong similarities with those identified by Zuo et al., although their ranking in terms of percentage of variance explained differed.Thirteen ICs were identified that represent parts or combinations of functionally stereotypical ICNs and therefore labelled functional components; these were IC1, IC3, IC5, IC6-IC9, IC11, IC14, IC15 and IC18-IC20.Based on their spatio-temporal characteristics, 7 components were labelled as noise components, which accounted for 34.98% of the variability present in the data.Concerning the functional ICs, IC1, IC6 and IC18 were found to relate to vision, IC6 also covering the superior parietal cortex and the premotor cortex, IC7 corresponded to the primary motor areas along with the association auditory cortices, and IC8 was related to the primary auditory cortices and the medial frontal, cingulate and paracingulate cortices, and the insula, and parts of the executive-control network.We observed that some ICNs were distributed across ICs, e.g. IC3 and IC5 represented the default mode network, IC9 the fronto-parietal networks corresponding to cognition and language bilaterally, IC11 the executive control and cingulate/paracingulate networks.In addition, similarly to Zou et al.: cerebellar, temporal lobe, temporal pole, posterior insula and hippocampus, brainstem, and ventromedial prefrontal components were also identified.For all ICs and for each metric at least two of the top three engagement values pertained to the same atlas base maps for SMITH10 and BRAINMAP20 while at least one of the top three engagement values pertained to the same atlas base maps for BRAINMAP70 atlasing.Comparison of the matching atlas base maps in the top 3 values across engagement metrics and over all IC showed the following: the average numbers of matching atlas base maps were 2.05, 2.10, and 1.80 for the Ii vs. MAN,i; 2.75, 2.60, and 2.10 for the Ii vs. RAN,I; and 2.10, 2.40, and 1.80 for the MAN,i vs. RAN,i comparisons for SMITH10, BRAINMAP20 and BRAINMAP70, respectively.Taken together, the number of matches is significantly lower for the Ii vs. RAN,I comparison for BRAINMAP70 compared against the other atlases, and also significantly lower for the MAN,i vs. RAN,i comparison for BRAINMAP70 vs. BRAIMAP20.Moreover for SMITH10 the and the comparisons were significantly different, and for BRAINMAP20 the Ii vs. MAN,i and Ii vs. 
RAN,i comparison was significantly different.For the sake of brevity, in the following we summarise the findings by presenting only the highest ICNi Spatial Involvement metric value across all ICN for any given input map; the descriptions of the results for metrics MAN,i and RAN,i can be found in the Supplementary Materials.Ii values for SMITH10, BRAINMAP20 and BRAINMAP70 are plotted in Figs. 4 and 5 and Supplementary Fig. 2 respectively, showing the differing ICN representations in the three atlases.The difference in the total extent of the ICN atlases was reflected in the global spatial engagement metric IT with generally lower involvement for BRAINMAP20 and BRAINMAP70 than for SMITH10, since BRAINMAP atlases cover greater part of the brain; moreover, as BRAINMAP70 can be considered as a subnetwork representation of BRAINMAP20 it is not surprising that their IT results were highly similar.For SMITH10 the temporal lobe and hippocampal components IC14 and IC19 showed low involvement, compared to BRAINMAP20 and BRAINMAP70.Overall, the ICN engagement results of the group ICA matched well their functional role; for SMITH10, for visual components IC1, IC6, and IC18 the highest involvement values were IICN1 = 0.97, IICN3 = 0.45 and IICN2 = 0.61, respectively; for IC3 and IC5, IICN4 = 0.75 and IICN4 = 0.47, respectively; for the sensory-motor and auditory component IC7, IICN6: = 0.84; for the auditory and executive control component IC8, IICN7 = 0.87; for the bilateral fronto-parietal component IC9, IICN10 = 0.82; for cerebellar component IC15, IICN5 = 0.64; for executive control component IC11, IICN8 = 0.56; and for prefrontal component IC20, IICN8 = 0.24.The engagement results for BRAINMAP20 showed a similar pattern, for the visual components IC1, IC6 and IC18 the highest involvement values were IBM20-12 = 0.83, IBM20-7 = 0.75 and IBM20-11 = 0.55 respectively; for IC3 and IC5, IBM20-13 = 0.54 and IBM20-13 = 0.58, respectively; for the sensory-motor and auditory component IC7, IBM20-9 = 0.93; for the auditory and executive control component IC8, IBM20-4 = 0.88; for the bilateral fronto-parietal component IC9, IBM20-18 = 0.85; for cerebellar component IC15, IBM20-14 = 0.62; for executive control component IC11, IBM20-20 = 0.49; and for prefrontal component IC20, IBM20-2 = 0.60.The engagement results for BRAINMAP70 showed a pattern consistent with subnetwork fractionation, when considered against those for BRAINMAP20, in having similarly high involvement values in some of atlas base maps for most ICs while for visual component IC18 there was a single highest involvement value of IBM70-3 = 0.61.For the default mode network, components IC3 and IC5 the highest involvement values were IBM70-61 = 0.82 and IBM70-38 = 0.89, respectively; for the sensory-motor and auditory component IC7, IBM70-35 = 0.98; for the auditory and executive control component IC8, IBM70-52 = 0.98; for the bilateral fronto-parietal component IC9, IBM70-12 = 0.96,; for cerebellar component IC15, IBM70-60 = 0.83; for executive control component IC11, IBM70-17 = 0.74; and for prefrontal component IC20, IBM70-20 = 0.79.The spatial involvement values for the ‘noise’ ICs IC2, IC4, IC10, IC16 and IC17 were all <0.3 for SMITH10, with noise component IC12 and IC13 having the highest values: IICN2 = 0.39 and IICN5 = 0.38, respectively.Similarly, for BRAINMAP20 the involvement values for noise ICs IC2, IC10, IC16 and IC17 were <0.30, with IC4, IC12, and IC13 showing IBM20-3 = 0.31, IBM20-11 = 0.39, and IBM20-5 = 0.32, 
respectively.Consistent with the sub-network representation in BRAIMAP70, the ‘noise’ ICs had wider range of maximum Ii, ranging from IBM70-56 = 0.17 for IC17 to IBM70-58 = 0.78 for IC16.Across all ICs the modes of the within- and between-session intra-class correlation coefficients <ICCW> and <ICCB> were in the range of 0.18–0.65.Of the functional ICs, IC9, IC3 and IC5, and IC1 exhibited the highest repeatability, with =,, and respectively.Most other functional ICs had <ICCW> and <ICCB> values in the ranges while IC19 and IC20 had lower repeatability, similar to most of the noise ICs.Note the high repeatability for noise component IC2 with <ICCW> = <ICCB> = 0.65.The distribution of engagement metric values for individual dual-regressed single-session ICA maps across base maps were similar to those obtained by atlasing of the group ICA maps; for a visual comparison see Fig. 6.At the level of atlasing for every IC and individual base map combination, within- and between-session ICN engagement repeatability varied considerably; nevertheless median values indicated fair-to-moderate agreement.As expected, within-session ICC tended to be higher than the between-session.In summary, median test-retest repeatability for the SMITH10 atlas were for Ii, and and for MAN,i and RAN,i respectively.The results were very similar for the BRAINMAP20 atlas, with test-retest Ii repeatability of, and and for MAN,i and RAN,i respectively; for the BRAINMAP70 atlas, with test-retest Ii repeatability of, and and for MAN,i and RAN,i respectively.We note a small number of negative ICC values, which were found to reflect minimal or null overlap between the ICs and the atlas base maps, as shown in Supplementary Fig. 9.At the base map level, i.e. collapsed across ICs, test-retest ICN engagement repeatability ranged between moderate and very strong, with median = for Ii, while for MAN,i and RAN,i these were and, respectively for the SMITH10 atlas.The results were very similar for the BRAINMAP20 atlas, with test-retest atlas base map repeatability for Ii of, and and for MAN,i and RAN,i respectively; and for the BRAINMAP70 atlas, with test-retest atlas base map Ii repeatability of, and and for MAN,i and RAN,i respectively.Finally, ICN engagement metric reliability calculated over all subjects, atlas base maps, and ICs, showed strong to very strong agreement, with values of: for Ii, for MAN,i and for RAN,i for SMITH10; for BRAINMAP20, the corresponding values were, and; and for BRAINMAP70, the corresponding values were, and.The five metrics identified at the first stage of the factor analysis using the NYU rs-fMRI data were: two spatial involvement metrics: Ii and IRi, and three activation strength-weighted metrics: MAi, MAN,i, and RAN,i.The second-stage factor analysis, performed to limit the number of metrics to three, revealed that Ii and RAN,i, contributed most to the two latent factors, which explained 68% of the variance, and that MAN,i had a high degree of uniqueness."Engagement as estimated by MAN,i was found to match the ‘Sensory profile’ for the visual stimulus modality in the left and right inferior lateral occipital cortex ROIs and for the auditory stimulus modality in the left and right Heschl's gyri.In addition, the MAN,i values for the auditory presentations followed the so-called ‘post-bottleneck profile’ in the left superior temporal gyrus; for visual stimulation the similar effect was observed for the left posterior superior temporal gyrus, the left inferior frontal gyrus, left precentral gyrus, left SMA, 
while a pattern of ICN engagement resembling the ‘buffer profile’ was observed in the insular cortices for visual stimulation.ICN engagement as estimated by MAN,i and Ii showed differential involvement of ICNs depending on stimulus modality and stimulus duration.Stimulus modality was clearly visible in the differential engagement of visual and auditory/language ICNs.Regarding stimulus duration, the individual MAN,i and Ii values were found to be stable or increase slightly for easily understood auditory stimuli, with peak values for the difficult but intelligible and a collapse for the unintelligible stimuli, regardless of stimulus modality.This behavior resembled the phase profile suggested for integrative regions.These parametric changes depending on stimulus duration represented a network-wide behavior, i.e. they were not exclusively driven by a single or a small group of ICNs.As illustrated in Fig. 11, ICN engagement as assessed using the SMITH10 atlas fluctuated across ictal phases.Total spatial involvement was generally low, with a value of 0.017 in the ictal onset phase, doubling to 0.035 in the ictal established phase and decreasing to 0.020 in the late ictal phase.With respect to individual ICNs, we note a high degree of involvement in ICN4, ICN5, ICN8 and in ICN9 and ICN10 during the Early Ictal phase.Significant involvement intensity changes were seen in ICN6 and ICN8 during the Ictal Established phase.The Late Ictal phase was characterised by significantly reduced spatial engagement globally.DMN involvement intensity is maintained throughout the seizures.We now focus on three ICN, namely the DMN, sensorimotor network and executive network, in a top down/semiological interpretation perspective on ICN engagement.The DMN shows a pattern of increasing engagement relative to other ICNs across phases.It ranks 5th in terms of ICN spatial involvement at the Early Ictal phase and shows a pattern of increase and subsequent decrease in the Ictal Established and Late Ictal phases, respectively.Its activation level is roughly constant throughout the phases, but goes from being negligible in intensity relative to globally-observed activation in the Early Ictal phase to approximately 4th in importance in the subsequent phases.The sensorimotor network, is the second most spatially involved network) at the early ictal phase and its activation level grows consistently across phases as does its intensity relative to the whole-brain activation level, becoming the most prominent in the late ictal phase.For the executive network the level of spatial involvement is relatively low in the Early Ictal phase while its activation level is roughly constant throughout the phases similarly to the DMN; however in contrast to the DMN, the executive network becomes very prominent relative to globally-observed activation in the ictal established phase.The main objective of the proposed ICN_Atlas methodology is to provide a quantitative and objective framework to characterize fMRI activation maps in terms of ‘functional engagement’ in contrast to methods based on anatomically defined coverage and in particular those based purely on visual description of fMRI map anatomical coverage.To this effect it seems appropriate to base the quantification on atlases derived from maps obtained ‘functionally’, namely sets of intrinsic connectivity networks derived based on fMRI data.We have addressed the issue of validity in terms of repeatability and reproducibility, by applying a commonly used methodology to extract 
independent components from a publically available longitudinally-acquired resting-state fMRI dataset.The resulting ICNs were then subjected to the proposed atlasing scheme using three ICN base maps, thereby providing an assessment of ICN_Atlas’ robustness in terms of its ability to identify functionally stereotypical ICNs across scanning sessions.The results of this analysis showed that repeatability as measured by the intra-class correlation coefficient is dependent both on the atlased activation maps and the atlas base map used for atlasing.Repeatability for the atlas base maps showed moderate to very strong agreement depending on the metric considered.The overall repeatability calculated by collapsing data across subjects, IC maps, and atlas base maps, showed strong to very strong within- and between-session agreement.The outcome of the repeatability analysis is on par with previous repeatability estimates obtained on the same data with other approaches.To demonstrate the potential utility of ICN_Atlas we applied it to two datasets: firstly, an independently obtained, open access task-based fMRI dataset, selected to show how our tool can capture variations due to parametric modulations; secondly, we also wanted to demonstrate ICN_Atlas’ potential utility in clinical research by illustrating its application to fMRI data in one of own areas of expertise, namely fMRI of human epileptic activity.Conceiving ICN_Atlas as a descriptive tool implies data reduction: from a whole-brain functional map to a set of numbers of a size that that facilitate comprehension and communication."We therefore considered the issue of the atlas' output, in particular the quasi-infinite number of conceivable engagement metrics. "Starting with a wide-ranging set of ICN engagement metrics devised based on general considerations of fMRI maps' spatial and activation intensity, we performed a factor analysis as a rational basis to select a reduced set of metrics; we chose three as a desirable number of metrics to estimate and report on, keeping in mind that this number is multiplied by the number of ICNs in the base atlas, which ranges from 10 to 70 in the three used in this work, as the tool's total output. "We believe that a limited set of metrics, i.e. 30 for the SMITH10 atlas, per fMRI map is manageable at this very early stage of the tool's application.Future similar analyses on other datasets may reveal a pattern which helps us identify an optimal set of metrics; such a consensus would be beneficial as it would help standardising the methodology.The atlases we chose for this validation study and initial demonstrations represent two very different approaches for describing intrinsic connectivity networks: The SMITH10 atlas is based on resting-state fMRI data, while the BRAINMAP20 and BRAINMAP70 are based on ICA decomposition of task-based fMRI data.It has previously been shown that the SMITH10 and BRAINMAP20 atlases yield highly similar results for ten well-matched ICNs, but more recently Laird et al. 
showed that there are 8 additional ICNs that can be reliably derived from task-based data.The greater number of functional components in BRAINMAP20 results in greater brain coverage, a fact reflected accurately in the global engagement metric values we obtained.Maps obtained with increased ICA dimensionality tend to show the expected subnetwork fractionation with respect to the networks seen at lower dimensionality, without significantly affecting global ICN engagement.The cognitive domain based colouring of ICN_Atlas output further supports the similarities between the base maps.It has previously been shown that ICNs obtained with low model order ICA represent large-scale functional networks, while higher model orders lead to subnetwork fractionation.While the SMITH10 and BRAINMAP20 atlases represent well-documented large-scale functional network obtained for model order d = 20, what model order would be the best suited for ICN subnetwork-based description of functional activations remains an open question.It has been shown that ICA model order 70 can lead to robustly detectable components; furthermore, model orders of 60–80 have been shown to: sufficiently separate signal sources; be repeatable; not over-fit the data; and show significant changes in volume and mean Z-score for the evaluation of ICNs.This was further corroborated by hierarchical clustering analysis on BrainMap metadata matrices, i.e. matrices that were designed to quantify the relationship between ICs and behavioural domains or paradigms, where the quality of hierarchical clustering was found to be highest for ICA model orders d = 20 and d = 70, leading to a more clear-cut correspondence between functional properties and ICNs.Based on these observations, the BRAINMAP70 atlas seems to provide an appropriate description of ICNs on a subnetwork level.Our comparisons of the two lower-dimensionality atlas base maps, SMITH10 and BRAINMAP20, have shown contrasting quantitative functional map descriptions, for example in relation to the temporal lobes, where there is a specific limbic and medial-temporal map in the BRAINMAP20 atlas which has a minimal overlap with the auditory base map of the SMITH10 atlas."Furthermore, we note that the SMITH10 ICNs do not cover the hippocampi, which may limit this specific base atlas's applicability to data from patients with temporal lobe epilepsy for example.It is noteworthy that the anatomical coverage of the BRAINMAP70 atlas is similar to that of BRAINMAP20, as reflected by global engagement metric IT.Given the choice of base atlases presented here, all derived from data collected in predominantly healthy adults, one could argue that the utility of ICN_Atlas is limited to experimental data obtained on neurologically ‘typical’ adults.Indeed, the optimal atlas depends on the population investigated, and no pre-calculated atlas can be considered perfect for all purposes.Still, the Talairach and Tournoux atlas is based on a single 60 years old female, and the AAL atlasis based on the Colin-27 brain template, yet the former is still widely used for neurosurgical planning in non-neurotypical patient, and the latter is widely used in fMRI ROI analyses for both neurotypical and –atypical subjects, and even a high proportion of the CONN132 atlas ROIs are based on it.Moreover, there is no widely accepted standard spatial template space for children, and therefore pediatric rs-fMRI analyses can be performed either in the MNI template space, or age and study specific templates can be created.Therefore 
the choice of atlas can be seen as one between generalizability and universality, vs specificity.Nevertheless, the ICN_Atlas framework is designed to accommodate multiple atlases, including any derived from pathological data.For example one could envisage the use of a study-specific ICN_Atlas base map creating an ICN template with group ICA from the joint patient-control data, and then co-registering is to any spatial template image, and then converting it to ICN_Atlas base map format.We chose the NYU-TRT data for our validation study because it is substantial in size, longitudinal, open-access and free-to-use, and well characterised).The results of our TC-GICA analysis are similar to Zuo et al., with the main difference being the component ordering based on the ranking of the percentage of variance explained."This may be due to the different motion correction algorithms applied: SPM's algorithm in our case vs. FSL's in Zuo's.We note that although motion correction is a well-known problem in fMRI data analysis, especially for resting-state fMRI, to date no methodological consensus has emerged.We also observed slightly different functional partitioning of the obtained ICs compared to those described by Zuo et al., in which IC3 and IC5 represent the default mode network and IC9, the fronto-parietal networks corresponding to cognition and language bilaterally.These differences may also be related to the different pre-processing pipelines used.Nevertheless, the similarity of our voxel-wise ICC results with those described by Zuo et al., especially given the fact that ICs corresponding to intrinsic connectivity networks have higher ICC values than those corresponding to noise is reassuring.Concerning noise component IC2, its high degree of repeatability is not surprising given that it corresponds to the venous sinuses, an anatomically defined and therefore spatio-temporally stable entity.Reassured by the above results we went on to assess atlasing repeatability for each metric at three levels: individual atlasing steps for every IC - individual atlas base map combination; atlas base maps; and global, i.e. across ICs and atlas base maps.The results showed that repeatability is dependent both on the atlased activation map and the atlas base map used for atlasing.This finding is not unexpected, since activation maps have highly variable spatial distribution, hence there may be very limited or no overlap with some atlas base maps depending on the specific activation pattern which can lead to elevated variability, especially in the border regions of activation clusters thereby influencing the atlasing output due to a small number of voxels with values close to statistical significance.Since our calculation of ICC at the atlas base map level, i.e. 
collapsing across ICs, reduces the impact of this IC-derived variability, it can be considered a more reliable assessment of the utility of the atlasing tool itself than the level of individual atlasing steps. At this level, the ICC values showed moderate to very strong agreement, on a par with the voxel-wise atlasing results, and similarly with the strong to very strong agreement observed at the global level. Regarding metric reliability we note the markedly lower value for MAN,i, reflecting the maps' greater inter-session variability in terms of overall activation level, an effect which is compensated for in the corresponding relative metric, RAN,i. This observation suggests that the latter should be favoured in applications. To demonstrate ICN_Atlas' utility on task-based fMRI data we selected an independent, task-based, open-access data set containing parametrically modulated data. Using these data we were able to demonstrate parametric modulation effects in the atlasing output, reflecting task difficulty both for auditory and visual sentence presentation, which are compatible with previously published results and the previously proposed model of a temporal bottleneck in the language comprehension network that is independent of sensory limitation. As there was no information in the NeuroVault metadata to support proper significance thresholding, we opted for performing atlasing both on unthresholded input maps and using an arbitrary threshold, to emphasize the flexibility of ICN_Atlas as a research tool. Indeed, both approaches produced similar results for this data set. Direct ROI-by-ROI comparison of the results was not possible due to the limited available data and the different nature of the ROIs: in Vagharchakian et al. they were derived using GLM ANOVA, while those used in CONN132 were derived from anatomical landmarks. Indeed the CONN ROIs were much larger than the originally reported clusters, resulting in some of them including clusters with different response profiles; this means that ICN_Atlas provides a different, more integrative, level of description, which is even more pronounced at the level of ICNs. This is clearly visible in the engagement profiles we obtained: while on the anatomical ROI level with the CONN132 atlas the sensory, post-bottleneck, and buffer response profiles were all identifiable, on the ICN level with the BRAINMAP20 atlas the response profile resembled the phase profile suggested for integrative regions, both for the Ii and the MAN,i metrics. The latter represented network-wide behavior not exclusively driven by a single or a small group of ICNs, as shown by the fact that overall engagements followed the same response characteristic. Moreover, despite the dominant integrative response profile, the stimulus modality could still be identified from the ICN_Atlas output, and there were visible differences in the engagement dynamics of the BRAINMAP20 ICNs, but their detailed analysis is outside the scope of this paper. Note that the response profiles were identified visually, as it was not possible to characterize ICN_Atlas output on an ROI-by-ROI basis using correlation-based statistics due to the small number of data points available. We obtained results with ICN_Atlas using the dataset of a patient who had repeated seizures during resting-state fMRI scanning, confirmed on simultaneously recorded EEG and video. The results showed significant and varying engagement of a range of ICNs in the different epileptic phases. Specifically, there is a degree of correspondence between the patterns of ICN
engagement in this seizure and ictal semiology.Our observation of activation of the DMN during the ictal established phase is consistent with observation of disturbance in normal level of consciousness.In turn, DMN activation is not normally associated with activation of the sensorimotor network and associated manifest motor activity nor with activation of the executive and fronto-parietal networks.An implicit observation that results from this particular application of ICN_Atlas is in relation to the eminence of the Default mode network in research.Whilst there has been a level of interest focused on the DMN in epilepsy imaging studies, this may in part be accounted more by its historical pre-eminence in the field of functional imaging than some intrinsic a priori clinical relevance even though fluctuations in awareness and or consciousness are an important consideration in epilepsy.We suggest that use of ICN_Atlas will help to widen the investigation of the role of other intrinsic connectivity networks in Epileptology.Note however, that the importance of the mesial temporal lobe structures in epilepsy highlights a limitation of the SMITH10 atlas in this field of application.ICN_Atlas not only addresses the issue of interpretation bias but also introduces objectivity in providing a standardised approach to the characterization of epileptic networks in the clinical context.We refer to the fact that neuroimaging networks are often labelled and referenced generally in a purely visual and qualitative manner, relying on the investigators knowledge of functional localizers, or that of basic functional neuroanatomy as it is evident from the taxonomy of brain activation databases.Our approach raises the question: To what extent do these patterns of intrinsic connectivity networks activation manifest in seizure semiology?,This is a question which should be addressed on an individual and group level.Whilst the illustrative case study provides notional correlation with manifest semiology as it can be understood in terms of network engagement, it raises interesting questions as to the impact of seizures on normal connectivity and cognition, including executive function in relation to normal levels of consciousness.We will further address these issues in future studies.Furthermore, ICN_Atlas allows for more sophisticated analyses than currently performed via quantitative assessment of ICN engagement and thus may add to the debate in relation to the neurobiological nature of seizure networks.For example on a descriptive level, ICN engagement in terms of voxel numbers as well as sum of statistical values, may reflect a predominance of specific ICNs.Such findings can notionally address questions, raised in the literature with respect to the interpretation of BOLD changes found in deeper structures, i.e. 
whether they reflect normal network activation consequent to ictal activation or indeed widespread underlying abnormalities.A better understanding of the intrinsic connectivity network composition of clusters in the ictal BOLD maps is likely to improve the interpretation of epileptic activity and therefore improve localisation, particularly in comparison to other relevant investigation and descriptions of ICNs."The current version of ICN_Atlas employs base atlases based on group data, which can be considered as first-degree approximations of each network's representations as independent components, hence they do not reflect inter-individual variability in the networks.Indeed, atlases obtained from meta-ICA decomposition of group-level ICA data are fundamentally different from maps obtained with ICA decomposition of individual data.This criticism applies to all methods that base their interpretation on these atlases.A theoretical solution to this issue would be a probabilistic base atlas based on individual ICN and/or activation data that may be better suited to represent single-subject activation patterns.We envisage the creation of a probabilistic version of ICN_Atlas in which the metrics take into account base atlas voxel weightings.The three ICN base atlases currently provided with ICN_Atlas offer a relatively coherent framework for the description of activations with respect to ICNs.One could envisage that the use of other, custom, base atlases could lead to inconsistencies of description that may hinder comparison across studies.These inconsistencies may nevertheless be reduced by ‘cross-atlasing’ the atlas base maps, e.g. as we presented the comparison of SMITH10 vs BRAINMAP20, and BRAINMAP20 vs BRAINMAP70 in Supplementary Figs. 3 and 4.We have demonstrated ICN_Atlas’ utility and flexibility for the description of group-level task-based parametric fMRI data both on unthresholded input maps, and using an arbitrary threshold; with both approaches having produced similar results for this data set.It is at the discretion of the user to set threshold values, nevertheless there is a default threshold for the atlas base maps set to Z = 3, while the simplest recommended approach for the input maps is to use conventional model-based statistical thresholding."We have demonstrated ICN_Atlas' utility for the description of single-subject epileptic activities derived from EEG-fMRI data.Based on these results we can safely conclude that it can easily and effectively be used as a comparative tool in clinical studies.Nevertheless, it is important to recognise the limitations imposed by standardised and normative tools such as ICN_Atlas in single subject analyses.Whilst speed and standardisation is advantageous even in the clinical context, investigators have to be mindful of the fact that ICN-based assessment of individual activation maps based on each individual patients’ own intrinsic connectivity networks may be advantageous at least in principle.Finally, ICN_Atlas at its current state can be considered a data summarizer.In the current work we have not considered how the outputs can be analysed in order to discover new neuroscientific facts, beyond the factor analysis to identify a reduced set of metrics.We think that the simplest presentation is necessary at this stage, and that this relative functional simplicity and transparency may help the tool being adopted.The scope and spirit of this paper being confined to the presentation, validation and limited demonstration of the utility of our tool, 
we did not wish to introduce too great a bias in the way it could be applied or how the results should be interpreted.While this can be considered a shortcoming, we believe that the open framework we propose takes into account a degree of uncertainty on the exact nature and extent of the intrinsic connectivity networks.By offering the users the option of choosing a ‘base atlas’ of their own preference, we offer the scientific community the possibility of discovering, or agreeing on, the most suitable or optimal atlas for a given specific purpose, or perhaps a large range of applications.This is equally true for the flexible thresholding options implemented for the analyses, which allow for data input derived from different sources besides the SPM toolbox even without statistical information encoded in the file headers."ICN_Atlas is a just a tool and, with every other tool, it is the users' responsibility to adhere to proper analysis standards.The extensible nature of ICN_Atlas provides opportunity to include atlas base maps derived from different sources, e.g. probabilistic anatomy, pediatric ICNs, multi-modal anatomical parcellations, or even study-specific functional localizers.These extensions could help fine tune the toolbox for the investigators specific needs.On the same token, as the toolbox expects its input to be in the same anatomical space as the data, species specific atlas base maps can also be used for the processing of animal-derived data.Overcoming the current limitation of being a data summarizer would require the implementation of in-depth analysis approaches.These could include statistical inference for group comparisons, function decoding based on e.g. the BrainMap database using their taxonomical meta-data labelling scheme for reverse inference, etc.We have already demonstrated the utility of factor analysis for identifying the most relevant metrics from the wide range of possible output parameters, but including factor analysis for group-level processing may shed light for differential importance of metrics depending on the research question, or the clinical group investigated.We developed ICN_Atlas with EEG-fMRI in our focus of attention, but the toolset is not limited nor to this acquisition method, neither for the discussion of epileptic activities.Indeed, the approaches to analysis discussed above may provide quantitative assessments of activation data in relation to a range of neuroscientific and clinical questions.Regarding the study of epilepsy-related activations: it is evident that there are significant differences between ictal phases, and as reflected by the metric values.In a departure from the quest for localisation by virtue of cluster classification in terms of statistical significance, ICN_Atlas provides a description of intrinsic connectivity network engagement that lends itself to a depiction of activations in terms of functional significance and could be a potential contributor to the current pre-surgical cluster interpretation in EEG-fMRI studies as well as providing information on semiology in such studies."ICN_Atlas provides a fast, flexible and objective quantitative comparative approach for characterizing fMRI activation patterns based on functionally-derived atlases of the investigator's choice.It can be applied to activation studies of any nature, providing objective, reproducible and meaningful descriptions of fMRI maps.Based on the presented case demonstration it may open new avenue of research into the cognitive aspects of a range of 
neurological conditions.
Generally, the interpretation of functional MRI (fMRI) activation maps continues to rely on assessing their relationship to anatomical structures, mostly in a qualitative and often subjective way. Recently, the existence of persistent and stable brain networks of functional nature has been revealed; in particular these so-called intrinsic connectivity networks (ICNs) appear to link patterns of resting state and task-related state connectivity. These networks provide an opportunity of functionally-derived description and interpretation of fMRI maps, that may be especially important in cases where the maps are predominantly task-unrelated, such as studies of spontaneous brain activity e.g. in the case of seizure-related fMRI maps in epilepsy patients or sleep states. Here we present a new toolbox (ICN_Atlas) aimed at facilitating the interpretation of fMRI data in the context of ICN. More specifically, the new methodology was designed to describe fMRI maps in function-oriented, objective and quantitative way using a set of 15 metrics conceived to quantify the degree of ‘engagement’ of ICNs for any given fMRI-derived statistical map of interest. We demonstrate that the proposed framework provides a highly reliable quantification of fMRI activation maps using a publicly available longitudinal (test-retest) resting-state fMRI dataset. The utility of the ICN_Atlas is also illustrated on a parametric task-modulation fMRI dataset, and on a dataset of a patient who had repeated seizures during resting-state fMRI, confirmed on simultaneously recorded EEG. The proposed ICN_Atlas toolbox is freely available for download at http://icnatlas.com and at http://www.nitrc.org for researchers to use in their fMRI investigations.
50
Healthy ageing through internet counselling in the elderly (HATICE): a multinational, randomised controlled trial
Cardiovascular disease is the leading cause of morbidity and mortality worldwide, and is strongly related to unhealthy behaviours.1,2,Despite widespread preventive programmes, cardiovascular disease risk factors, including hypertension, hypercholesterolaemia, smoking, diabetes, unhealthy diet, obesity, and physical inactivity, remain highly prevalent.3,4,Long-term adherence to lifestyle and medication regimens remains a serious challenge and target values for cardiovascular risk management are often not reached because of both patient and doctor factors.5,6,This gap between evidence and practice leaves room for substantial improvement.7,Optimisation of cardiovascular risk factors might also contribute to the prevention of cognitive decline and dementia, which can be an extra motivator to increase adherence.8,Self-management might empower individuals and improve adherence to lifestyle change and pharmacological prevention programmes to reduce risk of cardiovascular disease.9,Increasing global access to the internet facilitates delivery of preventive interventions without the need for frequent face-to-face contact, creating the potential for scalability at low cost across a variety of health-care settings.10,Previous meta-analyses showed modest, but consistent, beneficial effects of coach-supported eHealth interventions on individual cardiovascular risk factors, but sustainability over time is an important challenge.11–13,Because effects of preventive interventions require long-term risk factor improvement, studies evaluating whether effects are sustainable beyond 12 months are needed.Despite rapidly increasing internet use in older populations, little is known about the feasibility and effectiveness of eHealth interventions in older people, who are often at increased risk of cardiovascular disease.In the healthy ageing through internet counselling in the elderly trial we investigated whether a coach-supported interactive internet intervention to optimise self-management of cardiovascular risk factors in older individuals can improve cardiovascular risk profiles and reduce the risk of cardiovascular disease and dementia.Evidence before this study,In a recent systematic review we concluded that web-based interventions in older people can be moderately effective in reducing individual cardiovascular risk factors, particularly if blended with human support, but that effects decline with time.We updated our systematic review, from inception to July 24, 2019, in MEDLINE, Embase, CINAHL, and the Cochrane Library with search terms designed to capture all systematic reviews and trials using web-based interventions on self-management of cardiovascular risk factors to reduce the risk of cardiovascular disease in older people.Search terms included all cardiovascular risk factors, cardiovascular disease and web-based interventions.We found three systematic reviews and meta-analyses, one on hypertension only, and two on primary and secondary prevention of cardiovascular disease.For participants with and without a history of cardiovascular disease, web-based interventions might improve different individual risk factors of people from midlife onwards, but it is not clear whether these effects are sustainable.The evidence for an effect on cardiovascular outcomes is inconsistent.Increasing internet access across the globe has considerable potential for improving cardiovascular risk management to reduce the global burden of cardiovascular disease.Added value of this study,To the best of our knowledge, this is 
the largest trial on web-based, multicomponent cardiovascular risk self-management in older people in primary care to date.We show that this type of intervention is feasible in different health-care systems in three European countries.Our intervention had a small but sustained effect on a composite score of systolic blood pressure, LDL, and body-mass index over 18 months, with consistent improvements on individual risk factors, although effects were modest and only significant for BMI.Using pre-specified subgroup analyses we identified that the younger age group and those with the lowest educational attainment might benefit most.Whether these effects will translate into a reduction of incident cardiovascular disease when implemented on a larger scale, over longer periods of time, is unclear.Implications of all the available evidence,Our study provides evidence that coach-supported self-management of cardiovascular risk using eHealth is feasible in older people and could reduce the risk of cardiovascular disease.This type of intervention might be most effective when targeting people at increased risk, who are not enrolled or insufficiently controlled in existing care programmes.Our results show that a coach-supported interactive internet intervention to optimise self-management of cardiovascular risk factors in older individuals is feasible with sustainable engagement, and resulted in a modest reduction of cardiovascular risk after 18 months.This effect was largely driven by a significant reduction in BMI, with point estimates for all components of the primary outcome, and most self-reported lifestyle risk factors also in favour of the intervention.There were consistent small improvements in risk of cardiovascular disease as estimated with the SCORE-OP and risk of dementia as estimated with the CAIDE score.Although this trial was not powered to detect an effect on clinical outcomes, the incidence of stroke was lower in the intervention group than the control group.There was no effect on total cardiovascular disease, and no serious adverse events occurred.Previous studies have shown beneficial effects of blood pressure treatment in older people.23,Effects of lifestyle interventions on other risk factors in older people are less consistent, but those targeting physical exercise might be beneficial up to high age.24,Despite our inclusion criteria, our study population might have had limited room for improvement.The low response rate to the initial invitation and a high percentage already taking statins and antihypertensives is likely to reflect participation of motivated people concerned about their health.Many of the participants had a history of diabetes or cardiovascular disease and these people are more likely to partake in a cardiovascular risk-reduction programme, leaving limited room for further improvement beyond usual clinical care, which has intensified in recent years in most European countries.Therefore, when implemented in a population with higher cardiovascular risk and less access to prevention programmes, the potential beneficial effect might be larger.The intervention platform was carefully designed using an iterative process involving the end users throughout development, leading to good usability, as confirmed in a qualitative substudy.17,18,25,Our pragmatic multicomponent approach makes it difficult to disentangle effects of different components of the intervention, particularly to differentiate between the effects of the application itself and of the coach.A limitation of 
our primary outcome is the difficulty to establish its clinical relevance.However, we deemed a composite Z score of three relevant and objectively measurable risk factors most appropriate to reflect the effect of our intervention on overall cardiovascular disease risk in our mixed population of primary and secondary prevention.The observed treatment effect of 0·05 was smaller than the effect size of 0·06 on which our sample size calculation was based, but was nonetheless significant.There could be several reasons for this, including a slightly higher sample size than needed according to the sample size calculation, lower drop-out rate than expected, and the absence of an anticipated loss of power due to clustering in participating couples.The result is consistent with a modest reduction in cardiovascular disease risk, as measured with the SCORE-OP, and dementia risk, as measured with the CAIDE score, further strengthening the potential relevance of this finding.Uptake of the intervention was reasonable, with a median of almost two logins per month, almost all participants setting at least one goal, and the majority of participants using the platform during the full study period.The increasing effect size with every additional goal set during the study supports the notion of a dosage–effect relationship and the additional potential for a larger effect if the participant had interacted more frequently with the application and the coach.An embedded qualitative study25 indicated that interaction with the coach in person at baseline and during the study was pivotal.This is in line with previous reports suggesting intensive counselling interventions can be effective in reducing cardiovascular risk and disease,26,27 whereas less intensive interventions are not.28,Estimation of the potential effect of this intervention in other settings and countries might depend on contextual information, including cultural aspects and the fit of the intervention with the local health-care system.The slightly higher drop-out in the intervention group needs further exploration, because it might suggest burden associated with the intervention, although those who dropped out in the intervention group hardly differed from those in the control group at baseline and multiple imputation of missing values did not change the results.The contrast between intervention and control group was largest in Finland.In the Netherlands and France, a combination of Hawthorne effects and the initiation of treatment by the GP in response to baseline measurements could have led to improvements in the control group, limiting overall study contrast.The lower frequency of GP visits and higher frequency of emergency room visits in Finland might reflect a different health-care structure and could explain the lack of improvement in the control group.Previous research showed that there seems to be little room for improvement in high-income settings with a digital approach in patients with a high cardiovascular disease risk, even with good uptake, because most people already participate in cardiovascular disease prevention programmes.29,Especially in old age, achieving further lifestyle changes might be challenging.Prespecified subgroup analyses in our study suggest the largest effect in the younger age group and in those with the lowest level of education.These groups had a higher baseline risk, yielding a larger room for improvement.The effect size was also larger in those who were adherent to the intervention.Taken together, this suggests 
that targeting high-risk populations with more efforts to stimulate engagement might be effective and needs testing.Absence of clinical effects on cognition or depressive symptoms does not preclude potential long-term effects on these parameters.This is supported by the significant reduction on the CAIDE risk score.The effect of the intervention on incident stroke should be interpreted with caution because absolute numbers are small and this was a secondary outcome.We decided to design a generic, scalable, and cheap intervention, implementable across a range of health-care settings.With rapidly increasing internet literacy in most parts of the world, including in older people, an eHealth approach is likely to become less of a barrier in the near future.A potential limitation of our approach is that it was not embedded in, or aligned with, the local primary care systems.For example, in the TASMINH4 study,30 in which GPs were actively involved in the intervention, self-monitoring of blood pressure with and without tele-monitoring was more effective, with substantially decreased systolic blood pressure values after 12 months.Furthermore, this study used more frequent measurements and reminders, which might have additionally stimulated engagement and adherence.However, such a study design might not be feasible in large parts of the world with underdeveloped primary care systems.Major strengths of our study are the large sample size, the blinded outcome assessment, the multicomponent approach including several modifiable risk factors, and considerable study duration for an eHealth study, documenting sustained engagement with the intervention.The low overall drop-out and the high level of complete data collection further increase the robustness of our findings, while execution in three countries improves the generalisability of its results.Small but sustained improvements of common risk factors over 18 months, such as those detected in our study, might favourably affect the rate of incident cardiovascular disease at the population level long term.Further development of eHealth and mobile health applications could offer opportunities for broad implementation at low cost in a variety of settings, including low-income and middle-income countries, where internet access is rapidly increasing.Embedding interventions in local health-care infrastructures might enhance adoption and effectiveness.eHealth interventions offer the opportunity to scale up and do larger implementation trials with clinical outcomes, including incident cardiovascular disease, cognitive decline, and mortality.Coach-supported self-management of cardiovascular risk factors using an interactive internet-based intervention is feasible in an older population at increased risk of cardiovascular disease and was associated with a modest improvement of cardiovascular risk profile without any indication of adverse events.When implemented at the population level, this could provide a low-cost way of reducing the burden of cardiovascular disease.The effect might be largest in those with considerable room for improvement and who actively engage in self-management.Large-scale implementation research and adaptation to different high-risk populations is warranted to confirm sustainability and effects on clinical outcomes including cardiovascular disease, dementia, and mortality.The HATICE consortium is in principle inclined to share data collected in the HATICE trial with external researchers.This will concern the data dictionary and 
de-identified data only.The study protocol and the statistical analysis plan are published in the appendix.Data will not be made available to any commercial party.Researchers from academic institutions interested in the use of the data of HATICE, will be asked to write a short study protocol, including the research question, the planned analysis and the data required.The scientific committee of the HATICE consortium will then evaluate the relevance of the research question, the suitability of the data, and the quality of the proposed analysis.Based on this, the committee will provide the data or reject the request.Analysis will then be done in collaboration with and on behalf of the HATICE consortium.A data access agreement will be prepared and signed by both parties.Any analysis proposed which is already in the HATICE analysis plan and planned to be done by members of the HATICE consortium will either be rejected, or proposed to be done as a collaborative effort, to be determined on a case-by-case basis.The HATICE trial was a pragmatic, multinational, multicentre, investigator-initiated, randomised controlled trial using an open-label blinded endpoint design, with 18 months intervention and follow-up.Details of the study design have been published previously.14,Participants were eligible if they were community dwelling, aged at least 65 years, had two or more cardiovascular risk factors, or a history of cardiovascular disease or diabetes, or both, and had access to the internet using a laptop, desktop computer, or tablet.Exclusion criteria were prevalent dementia, computer illiteracy and any condition expected to hinder successful 18-month follow-up.The full study protocol is provided in the appendix.Recruitment took place in the Netherlands, Finland, and France from March 9, 2015, to Sept 20, 2016.Detailed recruitment and enrolment procedures in each country are described in the appendix.Medical ethical approval was obtained from the medical ethical committee of the Academic Medical Centre, the Northern Savonia Hospital District Research Ethics Committee, and the Comité de Protection des Personnes Sud Ouest et Outre Mer.All participants gave written informed consent.After completion of the baseline assessment, participants were individually randomly assigned in a 1:1 ratio using a central, computer-generated sequence, which was linked to the online case record form.In case of spouse or partner participation, both participants were automatically allocated to the same treatment group to prevent contamination.All participants were informed about randomisation to one of two internet platforms, without further details on the contents of the platforms.Complete masking of participants and the coaches delivering the intervention was not possible because of the nature of the intervention.An independent assessor unaware of treatment allocation did the final assessment, including outcome assessment.The primary outcome consisted solely of objectively measurable parameters.Intervention group participants received access to a secure internet-based platform with remote support from a coach trained in motivational interviewing and lifestyle behaviour advice, based on the stages of change model.15, "The platform was designed to facilitate self-management of cardiovascular risk factors by defining health priorities, goal setting, monitoring progress with feedback, and a combination of automated and personal feedback from the coach, based on Bandura's social-cognitive theory of self-management and 
behavioural change, and was described in detail elsewhere.16,17",After developing a conceptual framework, the platform was designed in an iterative process engaging end users, which included an 8-week pilot study with 41 participants.The main components of the intervention are described in the panel.All advice was according to European and national guidelines for the management of cardiovascular risk factors.18,Coaches motivated participants via a computer messaging system to set at least one goal to improve a cardiovascular risk factor, encouraged them to interact with the platform, set additional goals over time, and provided motivating feedback.The full coaching protocol is provided in the appendix.Participants allocated to the control condition had access to a static platform, similar in appearance, with limited general health information only, without interactive components or a remote coach.After telephone screening, eligible participants were invited in person.During the screening visit, blood pressure and anthropometrics were assessed.Full study logistics and procedures are provided in the appendix.Medical history and medication use were registered.Mini Mental Status Examination was used to screen for cognitive impairment.Before the baseline assessment, participants were invited to fill out a series of online questionnaires, mainly for secondary outcome assessments.Symptoms of depression were assessed using the 15-item Geriatric Depression Scale, anxiety with the Hospital Anxiety and Depression Scale, diet with the Mediterranean diet adherence screener, disability and functioning with the late-life function and disability instrument, self-efficacy with the Partners in Health questionnaire, and physical activity with the Community Health Activities Model Program for Seniors questionnaire.Blood was drawn for assessment of lipids, glucose, and glycosylated haemoglobin A1c.2 weeks after the screening visit, the baseline visit took place, with assessment of physical fitness with the Short Physical Performance Battery and cognitive functioning with the Stroop colour–word test, Trail Making Test A and B, Rey Auditory Verbal Learning test, and semantic fluency test.All measurements were repeated at 18 months.Any finding requiring medical attention, such as an elevated blood pressure, abnormal laboratory values or signs of cognitive impairment or depression led to the advice to visit their general practitioner.Participants in both conditions received a 3-monthly online questionnaire about the occurrence of adverse events and clinical outcomes.At 12 months, a telephone call to all participants was scheduled for assistance with self-reported outcome assessment questionnaires, and in the intervention group only, with a motivational conversation to enhance adherence and address potential challenges with goal-setting and lifestyle improvement.The primary outcome was the change from baseline to 18 months on a composite score of systolic blood pressure, LDL cholesterol, and body-mass index.For each of the three parameters at baseline and at the 18-month visit, the baseline means and SDs combined were used to calculate Z scores.The Z scores were then averaged for the baseline and the 18-month visit separately, leading to the composite Z score for the respective visits.We decided on this primary outcome on the basis of the following considerations: we deemed a composite outcome appropriate to capture the potential effect of our multidomain intervention; our mixed population of primary and secondary 
prevention precludes the use of a single existing cardiovascular risk score; including only objectively measurable parameters reduces the risk of reporting bias; and weighing of risk factors was considered not appropriate, because the exact weight of each risk factor was unknown in this population.Full considerations for this primary outcome have been detailed previously.14,The main secondary outcomes were the difference at 18 months in systolic blood pressure, LDL cholesterol, BMI, HbA1c, physical activity, dietary intake, smoking cessation, estimated 10-year cardiovascular disease risk based on the Framingham cardiovascular disease risk score and the Systematic Coronary Risk Estimation-Older People,19 and dementia risk as measured with the Cardiovascular risk factors, Ageing and Incidence of Dementia score.20,Other outcomes reflecting cardiovascular disease risk included difference in level of physical activity, dietary intake, and smoking cessation.Clinical outcomes included disability, physical functioning, cognitive functioning, depression and anxiety, incident cardiovascular disease and mortality.GP consultations, emergency room visits and hospital admissions were registered.Process evaluation outcomes to assess the intervention delivery were determined post hoc and include login frequency, number of messages exchanged between coach and participant and number of goals set.Independent, blinded-outcome adjudication committees in each country evaluated all clinical outcomes on the basis of available clinical information.We based our sample size calculation on the effect sizes of the HATICE primary outcome as observed in the preDIVA21 and FINGER22 trials after 24 months of follow-up.With 80% power, a 0·05 two-sided significance level, accounting for an estimated 14% attrition, an intracluster correlation coefficient of 0·25 for an anticipated 17·5% participants in couples, and an effect size of 0·06 the required sample size was estimated to be 2534 participants.17,We decided on this target effect size because the difference on this composite outcome after 2 years between those who did and did not develop cardiovascular disease or dementia during a mean of 6·7 years of follow-up in the preDIVA trial was 0·06.The statistical analysis plan was completed and published at ISRCTN on June 27, 2017 before unblinding of the data on March 31, 2018.All analyses were completed by the study group and verified by an independent epidemiologist.All analyses were according to the intention-to-treat principle for participants with available data for each outcome.For the primary analysis, we used a general linear model.Accounting for correlations between partners using a random intercept was evaluated, but not included in the final model because this resulted in a worse model fit.We additionally did a per-protocol analysis, including only those who logged onto the platform in at least 12 out of 18 months study participation, and who set at least one goal or entered one or more measurements.We did predefined subgroup analyses for country, sex, age group, educational level, prevalent cardiovascular disease and diabetes, or both, partner participation, participation in a cardiovascular risk management programme, and level of self-efficacy.Sensitivity analyses were done excluding 53 participants who did not have a masked final assessment, excluding those who had switched coach during follow-up, and using multiple imputation by chained equations to evaluate the effects of missing data.General linear models were 
also used for analysis of secondary outcomes, both for change scores for continuous or binary outcomes.For parameters assessed at baseline, and months 12 and 18 we used multiple-measurements general linear models.We used standard Cox proportional hazard models with time since inclusion as timescale to analyse the effect on incident cardiovascular disease and mortality, for which participants were censored at time to event or last available follow-up.This trial is registered with the ISRCTN registry, 48151589, and is completed.The funders of the study had no role in study design, data collection, data analysis, data interpretation, or writing of the Article.All authors had full access to all data in the study and the corresponding author had final responsibility for the decision to submit for publication.Of the 45 466 people invited, 4857 were interested and screened for eligibility.1818 were excluded as ineligible and 242 did not wish to proceed.Of the 2797 who were eligible, 72 declined to participate further and one requested for the data to be withdrawn, leaving 2724 participants at baseline.Of these, 1389 were allocated to the intervention group and 1335 to the control group.The groups were generally well balanced at baseline.After a mean follow-up of 17·7 months, data on the primary outcome were available for 2398 participants.Participants not completing the study were slightly older, had lower educational attainment, and more often participated with their partner.In the intervention group, the composite score of systolic blood pressure, LDL, and BMI improved by 0·09 versus 0·04 in the control group, resulting in a mean difference of −0·05 in favour of the intervention.Prespecified sensitivity analysis showed that the effect was slightly larger in those who were adherent to the intervention with a mean difference of −0·06.Results from prespecified subgroup analyses are shown in figure 3 and show that the effect was largest among participants who were Finnish, younger than age 70 years, and had the lowest education.Results of post-hoc subgroup analyses by country, and by age, education, and cardiovascular risk are provided in the appendix.The high degree of similarity between those who dropped out in the intervention and control groups suggests no selective drop-out occurred.Sensitivity analyses using multiple imputed data did not affect the main finding.The effects of the intervention on secondary outcomes are provided in table 2.Comparing the change in individual components of the primary outcome in the intervention versus the control group, systolic blood pressure declined 1·79 versus 0·67 mm Hg, BMI declined 0·23 versus 0·08 kg/m2, and LDL declined 0·12 versus 0·07 mmol/L.The effect on all three components of the primary outcome was largest in Finland.There were no major differences in self-reported lifestyle outcome measures, except for smoking cessation, which was reported by 24 intervention participants versus 16 control participants.The mean number of risk factors that improved was 2·9 in the intervention group versus 2·7 in the control group.The 10-year risk of cardiovascular mortality as expressed by the SCORE-OP was reduced by 0·32% in the intervention versus 0·14% in the control group.The 20-year risk of dementia as expressed by the CAIDE score decreased by 0·19 in the intervention group versus 0·04 in the control group.Symptoms of anxiety decreased more in the intervention than the control group.There were no significant differences on symptoms of depression, or any of the 
cognitive tests.Stroke incidence was lower in the intervention group versus the control group.There were no significant differences in the incidence of other cardiovascular disease, dementia, and mortality, or in health-care use as measured by hospital visits, hospital admissions, and GP visits.The total number of logins was 59 441 in the intervention group versus 17 014 in the control group.The median number of logins in the intervention group was 1·8 times per month, compared with 0·7 times per month in the control condition.In the intervention group, 25 356 messages were sent between coaches and participants: 114 of the 1189 participants who completed the primary outcome sent zero messages, 403 sent one to five messages, 345 sent six to ten, and 327 sent more than ten messages.Participants in the intervention group set a median of one goal.Most goals were set on weight loss.The effect size of the primary outcome increased with every additional goal set during the study.Participation in lifestyle groups was low in all three countries: 15 of 1471 in the Netherlands, 106 of 885 in Finland, and 23 of 368 in France.In the intervention group, more people started lipid-lowering drugs than in the control group.
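For reference, the composite primary outcome used throughout these results can be written schematically as follows. This is our restatement of the description in the Methods rather than the trial's formal definition (the published statistical analysis plan remains authoritative), with μ_x and σ_x denoting the baseline mean and SD of the study population for risk factor x:

Z_x = (x − μ_x) / σ_x, for x ∈ {systolic blood pressure, LDL cholesterol, BMI}
Z_composite = (Z_SBP + Z_LDL + Z_BMI) / 3
primary outcome = Z_composite at 18 months − Z_composite at baseline,

compared between the intervention and control groups with a general linear model. Because higher values of all three risk factors are unfavourable, a more negative change — and a negative between-group difference such as the −0·05 reported above — favours the intervention.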
Background: Although web-based interventions have been promoted for cardiovascular risk management over the past decade, there is limited evidence for effectiveness of these interventions in people older than 65 years. The healthy ageing through internet counselling in the elderly (HATICE) trial aimed to determine whether a coach-supported internet intervention for self-management can reduce cardiovascular risk in community-dwelling older people. Methods: This prospective open-label, blinded endpoint clinical trial among people age 65 years or over at increased risk of cardiovascular disease randomly assigned participants in the Netherlands, Finland, and France to an interactive internet intervention stimulating coach-supported self-management or a control platform. Primary outcome was the difference from baseline to 18 months on a standardised composite score (Z score) of systolic blood pressure, LDL cholesterol, and body-mass index (BMI). Secondary outcomes included individual risk factors and cardiovascular endpoints. This trial is registered with the ISRCTN registry, 48151589, and is closed to accrual. Findings: Among 2724 participants, complete primary outcome data were available for 2398 (88%). After 18 months, the primary outcome improved in the intervention group versus the control group (0.09 vs 0.04, respectively; mean difference −0.05, 95% CI −0.08 to −0.01; p=0.008). For individual components of the primary outcome, mean differences (intervention vs control) were systolic blood pressure −1.79 mm Hg versus −0.67 mm Hg (−1.12, −2.51 to 0.27); BMI −0.23 kg/m2 versus −0.08 kg/m2 (−0.15, −0.28 to −0.01); and LDL −0.12 mmol/L versus −0.07 mmol/L (−0.05, −0.11 to 0.01). Cardiovascular disease occurred in 30 (2.2%) of 1382 patients in the intervention versus 32 (2.4%) of 1333 patients in the control group (hazard ratio 0.86, 95% CI 0.52 to 1.43). Interpretation: Coach-supported self-management of cardiovascular risk factors using an interactive internet intervention is feasible in an older population, and leads to a modest improvement of cardiovascular risk profile. When implemented on a large scale this could potentially reduce the burden of cardiovascular disease. Funding: European Commission Seventh Framework Programme.
51
Using SPIN for automated debugging of infinite executions of Java programs
The complexity of current software development is pushing programmers towards more automated analysis techniques, instead of the traditional interactive or post-mortem debuggers. For instance, unit testing allows the execution of test cases against a program, checking parts of the code such as single methods or classes in isolation. Runtime monitoring tools usually carry out some controlled executions of instrumented code on real or emulated target platforms (Kraft et al., 2010). Model checking can produce and inspect all possible execution traces of a program, checking the presence or absence of failures along each trace. In the case of a failure, this technique records a trace to replay the failed execution. To overcome some of the shortcomings of these automatic methods when used in isolation and to extend their domain of application, there have been several proposals that combine a few of them. This paper discusses an approach to automated software debugging through the combination of model checking and runtime monitoring. We focus on its application to analyze the executions of a given reactive and/or concurrent Java program. Model checking allows the software developer to describe correctness properties with specification languages such as Temporal Logic. The properties can represent safety requirements, like □p and p U q, or liveness properties expressed with formulas such as ◊p, ◊□p, and □◊p, with p and q being any kind of proposition or even temporal formulas. The most common use of Linear Temporal Logic (LTL) is to express complex liveness behaviors of infinite traces, which are the traces produced by reactive and/or concurrent software. In order to check whether or not a program satisfies an LTL formula, model checking algorithms were designed to produce the whole execution graph of a concurrent program and to efficiently detect execution traces violating a formula, presenting these traces as counter-examples. Counter-examples provide the sequence of instructions leading to the error, and they are the main source of information used to debug the program. When we do not wish to produce all traces or check liveness properties, other less expensive approaches, like the use of runtime monitors, can be used to check only the subset of LTL representing safety properties. Other monitor-based approaches adapt the semantics of full LTL to finite executions, as done by Java PathExplorer. The original design of Java PathExplorer only considered finite executions, and to the best of our knowledge, the extension for infinite traces is still not available. Tools such as Verisoft and CMC avoid storing the states of the program during monitoring, so they can perform a partial analysis of very large systems with little memory consumption. Unfortunately, this stateless approach does not permit the analysis of LTL for infinite traces. The analysis of an LTL formula along one or several potentially infinite execution paths cannot be carried out with standard monitors, but requires storing the states of the program and the use of automata-based algorithms, such as those for Büchi automata, to recognize special cycles. Stateful approaches, like the one implemented in Java PathFinder, keep a stack with the current execution trace to control backtracking, to produce counter-examples and to check cycles, so they could check LTL on infinite traces. However, at the time of writing this paper, the extension for checking LTL formulas can only detect a few program events. In the following sections we expand on the current status of LTL verification with JPF in a comparison with 
our proposal.In this paper we propose a method to convert a Java execution trace into a sequence of states that can be analyzed by the model checker Spin.We use runtime monitoring to generate just the Spin oriented execution paths from real software, thereby allowing the formulas to be evaluated by Spin.Our work focuses on two major issues of software model checking, analysis of infinite executions and efficient abstraction of execution paths.As Spin implements the analysis of LTL formulas by translation to Büchi automata, thanks to our method to feed Java executions to Spin as input, we can check the formulas on Java programs with infinite cycles.Furthermore, the Spin stuttering mechanism for dealing with finite execution traces allows us to deal with any kind of program without redefining the original LTL semantics.In order to address the second issue, the abstraction of execution paths, our conversion of Java traces into Spin oriented traces is based on two efficient abstraction methods of the full state of the program.The counter projection abstracts the Java state by preserving the variables which appear in the LTL formula and adding a counter to distinguish the rest of the state.As we do not keep all the information, the counter projection is very efficient at the cost of being useful only for finite executions.The hash projection abstracts each Java state with the variables in the formula plus a hash of the whole state.The way of constructing the hash makes the probability of conflict for two different states negligible, so we can rely on the Spin algorithm to check LTL based on cycle detection.The paper provides a formal study of the correctness of both abstraction methods.We have implemented the proposed approach in TJT, a tool that combines runtime monitoring and model checking and allows Java application developers to debug programs by checking complex requirements represented with temporal logic in a transparent way: the actual Java program is analyzed on the final target platform without additional modifications by the user, while the test execution is managed in the usual integrated development environment.Specifically, we combine the Spin model checker and the runtime debugging API Java Debug Interface.Checking each execution means evaluating a temporal formula representing a failure, over the observable states in the program.Such observable states are provided for Spin by a runtime monitoring module built on top of the JDI support in the Java virtual machine.Both modules are integrated as a new Eclipse plug-in for automatic debugging.TJT stores the failed executions, so that the programmer can later replay them in Eclipse to locate and fix the bugs.In summary, our method for combining Spin with runtime monitoring offers several advantages to Java developers:Linear time temporal logic is a compact and rich formalism to represent both failures and desirable behaviors regarding a temporally ordered sequence of events and/or conditions to be checked along one execution.Checking the LTL formula naturally considers the history of the execution, providing clear advantages compared with the isolated evaluation of invariants, assertions or just the values returned by methods.Model checking algorithms record the history of the failed execution, which we then use to implement a controlled replay to locate and fix the bugs.The support for model checking permits the analysis of potentially infinite executions, which may have two origins.On the one hand, they are produced by reactive 
software, such as servers or daemons, which are always running and responding to interactions with an environment. On the other hand, bugs may introduce infinite loops that should not happen. In both situations, model checking can be used to locate the cycles and to decide whether they should be considered failures. The use of runtime techniques removes the extra work required to produce model-checking-oriented models and makes it possible to start the debugging work directly on the programmer's code. We do not intend to perform "full" model checking of Java programs, as Java PathFinder does. Full model checking requires a specific virtual machine to control the Java execution in order to carry out the exploration of all possible execution paths, which is time- and memory-consuming. Our approach consists of using only some features of model checking to obtain a lightweight automated debugging method that helps the programmer locate errors. Potential errors are described with temporal logic, and we use the capability of model checking to check the temporal logic formula on "one execution path", even if the execution path corresponds to the infinite behavior of a reactive program. This execution path is naturally produced by the execution of the program in the real environment, with the standard Java virtual machine. This is a cost-effective application of model checking to program traces that is nevertheless useful for finding faults in concurrent programs and debugging their causes. The use of LTL formulas and a reduced set of variables of interest produces traces as counterexamples, which are very valuable when locating bugs. This is especially relevant in concurrent programs, where it is usually more important, and more difficult, to find the interleavings of actions that produce faults than to find failure-inducing test inputs. While the analysis of the traces for finding the root cause is still a manual process, the formulas and the selected variables of interest significantly reduce the size of the traces to analyze. This paper is an extension of previous work of ours presented in. In particular, the description of our approach, its implementation, the experimental results and the comparison with related work have been significantly expanded. The rest of the paper is organized as follows. Section 2 introduces the use of model checking for debugging Java programs using a real example. Section 3 presents the architecture of TJT for combining model checking and Java runtime monitoring. The formalization of the abstraction approach and the preservation results are presented in Section 4. Experimental results of the case studies are summarized in Section 5. In Section 6 we compare our tool with related proposals. Finally, Section 7 presents some conclusions and points of interest for future work. In this section we outline how model checking can be applied for the debugging of real Java programs, as the motivation for the development of our tool. We introduce a real example and show several tests where the use of LTL formulas would be useful. Then we introduce the semantics of the LTL formulas that we consider in our approach, i.e. 
the usual one for infinite traces. Finally, we discuss how Spin performs the analysis of LTL formulas, translated into Büchi automata. To illustrate our proposal we use an FTP server. This server understands the usual commands and can handle several concurrent user connections. We show several tests that a programmer may want to perform on the code of the server, using LTL formulas in which the variables and events of the program can be referenced. The formulas are presented in a formal notation, but using helper functions, such as "loc" for checking the program counter location, that are available in our tool. It is worth noting that the three formulas discussed below represent liveness properties to be evaluated on potentially infinite executions, and they cannot be handled by the other runtime checkers cited in Sections 1 and 6. The code in Fig. 1 corresponds to the main loop in the server. The programmer may want to check whether the program variables in the loop are correctly cleared between client connections. For instance, to check that the incoming variable is set to null after each iteration we could use a formula of the shape sketched at the end of this subsection (first property). This formula states that, after reaching line 285 of the FTPServer.java file (shown in Fig. 1), the incoming variable should be null at some point in the future. Fig. 2 shows the method that handles CWD commands. If a client performs an erroneous request the operation should fail, but the server should recover from the exception and return the appropriate error code. The programmer may want to check whether this code is reachable when a client misbehaves, using a reachability formula (second property in the sketch below). Testing these and other properties requires the use of controlled mock clients as part of the test fixture. In this case, the client tries to send several CDUP commands, which are executed as CWD commands in the server and should lead to the behavior described above. The final condition that we want to check deals with thread scheduling and fairness. In addition to synchronization problems, multithreaded programs are prone to fairness issues: some of the threads may take all the CPU time, leaving others starving. The programmer may want to check whether this is a possible outcome under the default scheduling employed by the JVM or under other schedulings that may be forced in the execution. For instance, we can check the fairness between two clients that compete with each other to interact with the FTP server in a loop (third property in the sketch below). For clarity, in this fairness formula we used boolean propositions, such as req1, instead of referencing program variables as in the first two formulas. These propositions refer to auxiliary boolean variables in the FTP clients, i.e. the clientFTP and clientFTP2 classes.
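Sketched in the notation used above, the three properties have roughly the following shape. This is our illustrative reconstruction, not the exact formulas of the original article: the loc helper, line 285, the incoming variable and the proposition req1 come from the descriptions above, whereas Lerr (a placeholder for the line of the error-handling code in Fig. 2), req2, served1 and served2 are names introduced here for the example:

(1) □ ( loc(FTPServer.java, 285) → ◊ (incoming == null) )
(2) ◊ loc(FTPServer.java, Lerr)
(3) □ ( req1 → ◊ served1 ) ∧ □ ( req2 → ◊ served2 )

Property (1) says that every visit to line 285 is eventually followed by a state in which incoming is null; property (2) says that the error-handling code is reachable in some execution; property (3) is one plausible formulation of freedom from starvation, stating that every request issued by either client is eventually served.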
In this section, we give a formal characterization of LTL formulas for Java, like the three examples used above. Let Prog be a Java program and Var an enumerable set of variable names used by Prog. The variables' names may be recursively constructed by appending the name of class members to object identifiers. For instance, if o is a reference to an object of class C, and f is an instance variable of C, o.f is the name of the variable recording the value of field f in the object instance o. Here States denotes the set of program states, and States^ω the set of all possible infinite sequences of elements from States, called traces. Note that the implication operator "→" is usually omitted in the satisfaction rules and transformed into a combination of negation and disjunction. In what follows, we use the same LTL semantics as Spin, without the next operator, as usual. Note that in t ⊨ f, t may be a prefix of a complete Java trace, i.e. it may not be necessary to generate the whole trace in order to check the satisfaction of a property.
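For reference, the satisfaction relation over infinite traces assumed here — the standard semantics that Spin implements — can be summarized as follows, writing t = σ0 σ1 σ2 … for a trace and t^i for its suffix starting at σi:

t ⊨ p iff the proposition p holds in σ0
t ⊨ ¬f iff t ⊭ f
t ⊨ f ∨ g iff t ⊨ f or t ⊨ g
t ⊨ ◊f iff t^i ⊨ f for some i ≥ 0
t ⊨ □f iff t^i ⊨ f for every i ≥ 0
t ⊨ f U g iff t^i ⊨ g for some i ≥ 0 and t^j ⊨ f for every 0 ≤ j < i

As noted above, f → g is treated as an abbreviation of ¬f ∨ g.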
Spin is a well-known model checker for analyzing models of software and other complex systems defined with the Promela language. Promela contains constructs for describing concurrent and non-deterministic behavior which, combined with the right tool, make it easier to discover unexpected events or interactions that could be difficult to find with the traditional debugging tools available for programming languages. A Promela model produces a set of possible executions called execution traces or paths. The role of Spin is to look for traces that satisfy or violate a given set of properties. Properties include deadlocks, assertions, code reachability or non-progress loops. However, the most interesting set of properties are complex requirements described with linear temporal logic. Spin implements the algorithms by Vardi and Wolper to check LTL properties, which are based on the translation of the negated LTL formula into a Büchi automaton. A Büchi automaton is defined as a standard automaton that recognizes states of the program to be analysed, but with the addition of accepting (final) states that restrict the set of executions allowed by the automaton. In particular, we say that one execution of the program violates the original LTL formula if the corresponding Büchi automaton visits at least one of the accepting states infinitely often. This method is well suited to checking LTL liveness properties in infinite program executions, and has been adapted in Spin to be used for finite executions as well. Fig. 4 contains a simplified graphical representation of the Büchi automaton generated from the fairness formula of the examples above. This automaton is executed synchronously with the Java trace, inspecting the Java states to decide which transition must be taken, and stopping when no transition is possible. A trace is accepted if it contains a finite sequence of states, including an accepting state, that repeats infinitely often. Accepting states are represented with a double circle in the figure. Using this automaton to recognize a given Java execution trace, Spin could find a violation of that formula, i.e. an execution where one client makes the other starve. The violation would include the instructions executed in the program, up to the point where the error was found. For instance, a simplified trace for a violation of this formula, only including the locations where the variables from the formula change their value, is shown in Fig. 3. These variables are initialized on line 25 in both files, they change on line 270 to indicate that a request has been issued, and change again on lines 275 and 280, when said request has been satisfied. Other variables can be included in the trace if requested, as well. Given a Promela model, Spin performs an exhaustive exploration of its state space. Full-state on-the-fly explicit model checking, as implemented in Spin, requires two main data structures to manage model states: the stack and the hash table. While performing a depth-first search, Spin stores the states of the current path in the stack. This allows Spin to backtrack to a previous state and also to find cycles, both in the model under verification and in the Büchi automaton which represents the temporal property. The hash table is used to store all unique states visited while exploring the model, so that Spin does not explore the same path twice. The model checking algorithm requires the full representation of each state to be included in both data structures. This might pose a problem for large models, where the number of states to be stored can be higher than 10^20. In order to deal with such large models, Spin has been extended with several optimization techniques, some of which can be used in TJT. Hash-compact reduces the use of memory by compressing the representation of the states without losing information. Bit-state hashing represents states as single bits in a hash table, which may lead to a partial analysis of the model in some cases. Currently, work is being carried out in order to obtain parallel versions of Spin that preserve most of these optimizations. Finally, there are other strategies that deal with scalability, such as the automatic transformation of the models to implement abstraction methods or the abstract matching proposed in. This section gives an overview of our approach for debugging Java programs using model checking and runtime monitoring. The main idea is to make Spin handle the states produced by Java instead of the states produced by a regular Promela model. In the standard use of Spin, states are produced by the execution of Promela specifications. Such states include all the local and global variables in the Promela specification and other information, such as the program counters of the processes or the contents of the communication channels. The entire state space generated from the Promela code is managed with the stack and hash table in order to check properties such as deadlocks and LTL formulas. In our particular use of Spin, states are produced by the execution of Java programs. However, in order to reuse Spin features transparently, we still use a special Promela specification that is able to transform sequences of Java states into sequences of Promela-like states. Thereby, we can check complex correctness requirements, like LTL properties, on the Java execution.
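Schematically, and in our own notation rather than the article's, the idea is that a Java execution σ0 σ1 σ2 … is turned, state by state, into a Spin-visible trace ρ(σ0) ρ(σ1) ρ(σ2) …, over which the Büchi automaton obtained from the negated formula φ is run. The two projections ρ introduced earlier in the paper can then be written as

ρ_cnt(σj) = 〈 vars_φ(σj), j 〉 (counter projection: the values of the variables occurring in φ, plus a counter)
ρ_hash(σj) = 〈 vars_φ(σj), h(σj) 〉 (hash projection: the same values, plus a hash h of the whole state)

where vars_φ(σj) denotes the values in σj of the variables that appear in φ. The counter keeps projected states trivially distinct, which is cheap but restricts the analysis to finite executions, whereas the hash makes it possible to recognize repeated states — and hence cycles — with a negligible probability of collision.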
Fig. 6 shows an overview of the architecture and the workflow of our tool TJT, which is divided into three modules: the model checking module, the runtime monitoring module and the Eclipse plug-in. The programmer must supply two inputs in this workflow: the main entry point of the Java program being analyzed and an XML file with the test specification. This main entry point may be the real one from the program, or a specific main method associated with a particular test scenario. The test specification includes the correctness requirement: a complex behavior described in a formalism supported by Spin, such as an LTL formula. The user must also declare the objective of the formula, i.e. whether it represents a behavior that should hold for all traces, for no trace, or whether it is enough for some trace to satisfy it. This specification also contains additional information for carrying out the tests, like the program parameters, and their ranges, for generating test inputs. The model checking module, implemented with Spin and a special Promela template, creates a series of Java Virtual Machines to execute the Java program with all the values considered for the configuration variables. The executions are actually launched and monitored by the runtime monitoring module, which detects the events that are relevant for checking the LTL formula. Each event triggers the creation of a Java state that is sent to the model checking module. Spin processes the information reported by the monitoring module for each execution of the program, and checks the LTL formula. When Spin detects that a Java execution does not match an LTL formula and its objective, it sends information to the Eclipse plug-in in order to show the steps that have led to the failed execution. In the following section we discuss model checking in detail, focusing on efficient methods for abstracting the Java states. Each Java execution is carried out on the target platform under the control of our runtime monitoring module, which has been implemented in Java using JDI. The monitor and the program being tested run in different JVMs. JDI offers an event-based framework, where the application can be notified of certain events in a remote JVM, such as breakpoints, exceptions, changes in object fields or thread states. The monitoring module watches the events relevant to the specified property and sends the information to the model checking module (a minimal sketch of such a JDI-based monitor is given at the end of this subsection). At present, our tool can check LTL properties on finite and infinite traces, assertions, and deadlocks. The LTL property can reference class variables present in the Java program, thrown exceptions or breakpoints set at specific locations in the code. When Java executions are finite, we take advantage of the stuttering mechanism implemented in Spin, and we assume the semantics derived from considering the last state of the trace repeated forever. So, there are no limitations to using the LTL formulas supported by Spin. In addition, deadlocks can be detected by the monitoring module by checking the status of each thread before processing each event.
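The following is a minimal, self-contained sketch of the kind of JDI-based field monitoring described above. It is illustrative only and is not TJT's actual implementation: the class name FieldWatchMonitor, the monitored class FTPServer, the field incoming and the debug port are assumptions made for the example, and the state projection and the socket to Spin are reduced to a print statement.

import com.sun.jdi.Bootstrap;
import com.sun.jdi.Field;
import com.sun.jdi.ReferenceType;
import com.sun.jdi.VirtualMachine;
import com.sun.jdi.connect.AttachingConnector;
import com.sun.jdi.connect.Connector;
import com.sun.jdi.event.ClassPrepareEvent;
import com.sun.jdi.event.Event;
import com.sun.jdi.event.EventSet;
import com.sun.jdi.event.ModificationWatchpointEvent;
import com.sun.jdi.request.ClassPrepareRequest;
import com.sun.jdi.request.EventRequestManager;
import com.sun.jdi.request.ModificationWatchpointRequest;
import java.util.Map;

public class FieldWatchMonitor {
    public static void main(String[] args) throws Exception {
        // Attach to a JVM started with:
        //   -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=8000
        AttachingConnector connector = null;
        for (AttachingConnector c : Bootstrap.virtualMachineManager().attachingConnectors()) {
            if ("dt_socket".equals(c.transport().name())) {
                connector = c;
            }
        }
        if (connector == null) {
            throw new IllegalStateException("no dt_socket attaching connector available");
        }
        Map<String, Connector.Argument> arguments = connector.defaultArguments();
        arguments.get("hostname").setValue("localhost");
        arguments.get("port").setValue("8000");
        VirtualMachine vm = connector.attach(arguments);
        EventRequestManager erm = vm.eventRequestManager();

        // Be notified when the class of interest is loaded, so that a watchpoint
        // can be installed on the monitored field (assumes FTPServer has not been
        // loaded yet when the monitor attaches).
        ClassPrepareRequest prepare = erm.createClassPrepareRequest();
        prepare.addClassFilter("FTPServer");
        prepare.enable();

        while (true) {
            EventSet events = vm.eventQueue().remove();
            for (Event event : events) {
                if (event instanceof ClassPrepareEvent) {
                    ReferenceType type = ((ClassPrepareEvent) event).referenceType();
                    Field field = type.fieldByName("incoming");
                    if (field != null) {
                        ModificationWatchpointRequest watch =
                                erm.createModificationWatchpointRequest(field);
                        watch.enable();
                    }
                } else if (event instanceof ModificationWatchpointEvent) {
                    // A tracked variable changed: this is where a tool such as TJT
                    // would project the state and forward it to the model checker.
                    ModificationWatchpointEvent m = (ModificationWatchpointEvent) event;
                    System.out.println(m.field().name() + " := " + m.valueToBe());
                }
            }
            events.resume(); // let the target program continue
        }
    }
}

Running such a monitor requires the JDK's built-in JDI implementation (module jdk.jdi) and a target JVM started with the jdwp agent shown in the first comment; TJT additionally launches and controls the target JVMs itself, as described above.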
TJT analyzes each program trace independently. Different traces can be generated by providing information about the input parameters of the program, which will generate different test inputs. These test inputs are currently passed on to the main method of the program as command-line arguments. A main method developed specifically for a test may use these arguments to set different parameters in the program, or to execute slightly different test scenarios. In addition, the program may be run more than once with the same test input, in order to produce different schedulings for threaded programs. In connection with this, we are experimenting with the automatic insertion of calls to methods that alter thread scheduling, e.g. yield and sleep, to cover a greater range of program schedulings. We have also developed an Eclipse plug-in to make executing tests and reviewing their results more user-friendly. The plug-in includes a form-based editor for creating test specification files, instead of writing error-prone XML code. This includes selecting fields to be monitored, setting breakpoints, writing the LTL property to be checked and declaring the test input parameters. Once the specification has been finished, it can be executed within Eclipse and its progress tracked in the TJT console. After the test has finished, a dedicated view shows the erroneous traces that were found, i.e. the execution paths that led to a property violation. Clicking on a trace line takes the user to the corresponding Java line of code. A screenshot of the plug-in is shown in Fig. 7. The tool TJT and several examples can be downloaded from. This section explains our approach, which uses Spin as the core of the model checking module of our debugging tool for Java. In addition to the capability to check properties with Büchi automata, Spin also allows embedding C code in the Promela models, using c_code blocks. These blocks are executed atomically by Spin and may interact with global state variables or call external library functions. The c_expr construct allows the evaluation of a C expression free from side effects, e.g. to use it as a loop condition. Furthermore, C variables can be treated as if they were part of the global state. Using c_track, existing C variables can be tracked and included in the global state, even as unmatched variables, i.e. they are stored in the stack but not in the hash table. Unmatched variables are restored when backtracking, but they are not taken into account when deciding whether two states are equal. Note that states in the stack contain all the information, whereas the hash table contains only part of the information. We take advantage of these C-oriented features to connect Spin with the JDI-based monitor, to represent the Java states in Spin, and to implement our abstraction methods for Java states, explained in Section 3.1.3. As explained above, while Spin is generally used to check program specifications written in its own Promela language, TJT uses a special Promela specification, part of which is shown in Fig. 
8, to drive all the automatic debugging work.Such Promela code contains the logic to generate the values for the configuration variables that produce different executions, to communicate with the runtime monitoring module and to check whether a Java execution fails.The code is automatically generated using an initial Promela template and the information provided by the user in the correctness specification file.When an LTL formula is present in this file, it is translated into a Büchi automaton, and then included in the resulting Promela specification as a never claim definition.If the formula represents a behavior that must be satisfied in all traces, it is negated first, in order to find counterexamples.The execution of this Promela code by Spin is summarized in Algorithm 1.This algorithm shows how Spin produces and inspects several Java traces, depending on the potential values for the configuration variables in the correctness specification file.For each combination of input values, Spin launches a new execution and then enters a loop to collect the sequence of Java states for that execution, checking the LTL formula and reporting failed executions to the Eclipse plug-in.This is described in more detail in the following sections.TJT main loop: Spin executing the Promela code in Fig. 8.The loop represented in Algorithm 1 actually corresponds to the execution of lines 17 to 22 of the code in Fig. 8.The first two functions, initialization and createSocket are executed as if they were a single instruction, using the c_code mechanism.They create all the data and communication structures needed to connect the model checking and the runtime monitoring modules.Note that the communication is done with a socket, so if necessary, e.g. to increase performance, they can run in different computers.To ensure interoperability of these two modules in different nodes, we also use standard XDR-based encoding for the data transferred in this socket.Each Java execution corresponds to a possible combination of values for configuration variables defined by the user in the correctness requirement.The generation of one combination of values is done by generateConfig.The function execute launches the Java program being tested with the given configuration values as test input under the supervision of the monitoring module.When the current Java execution finishes, Spin backtracks to generateConfig to select another set of values for the configuration variables.Then, execute is again called to run the program under the new test input.This backtracking-based process continues until no more combinations are possible.The result is the exploration of all the Java executions defined by the programmer in the test specification file.The current Java execution trace is reconstructed in Spin thanks to the getNextState function partially shown in Fig. 
9.The next state is either read from the socket with the runtime monitoring module or retrieved from a list of already visited states in the case of backtracking, as will be explained below.For each new state, we check events such as program termination or assertion violations, which are also communicated through the socket.The current list of failure-related events includes a dozen cases.The most interesting analysis is checking LTL properties.Spin checks each execution path using a double depth-first search algorithm that maintains a stack of program states.The state of the Büchi automaton, which is used to track the satisfaction of an LTL property, is also stored as part of the global state.Each state si handled by Spin is composed of three components 〈j, ρ(σj), bi〉, where bi is the state of the Büchi automaton, which is executed synchronously with the system, σj is the current Java state provided by the runtime monitoring module, and ρ is a projection function used to simplify the Java states before being analyzed by Spin.The fact that the indexes of si and σj are not necessarily equal will be explained below.Although the execution of a Java program results in a linear sequence of states, the addition of the Büchi automaton representing the LTL formula may result in several branches that must be explored exhaustively.To support this, variable values received from the runtime monitoring module are first stored in a Java trace stack, and then retrieved from there, as needed.Therefore, if Spin backtracks during the search, the Büchi automaton will produce new states but the Java states will be a replay of the previously visited states.Note that we have acknowledged this in Fig. 6 by not necessarily showing the same subindex for the whole state si and the corresponding Java part ρ(σj).The main drawback that usually has to be taken into consideration when applying model checking to programming languages is state space explosion: states may be too large and too many to be stored in memory.Apart from taking advantage of some of the Spin optimization methods described in Section 2, our tool TJT deals with these problems with several novel techniques.The first one consists of selecting the Java states to be sent to the model checking module: we only send those states produced after relevant events in the Java execution.These events include exceptions, deadlocks, updates of designated variables, method entry and exit, interactions with monitors, breakpoints, and program termination.The second optimization consists of abstracting the Java state when it is converted to a Spin state.The simplest abstraction method generates a Spin state with only some of the variables of the current Java state.These variables are stored in C variables, which are tracked by Spin and form part of the global state.In this case, the runtime monitoring module only sends the ρ(σj) part of the original Java state σj.These selected variables are those that are relevant for checking the user requirements, like the LTL formula.We include one additional variable, the index j, in the Spin state, which is useful to retrieve the appropriate Java state ρ(σj) when backtracking, as shown in Fig. 9.
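To make this first abstraction concrete, the sketch below shows one possible shape for a projected state: it keeps only the variables referenced by the correctness requirement together with the trace index j and, mirroring the role of unmatched c_track variables, it excludes the index from state comparison. This is an illustrative Java sketch rather than TJT's actual data structure; the class and field names are hypothetical.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Objects;

// Hypothetical projection of a full Java state onto the variables named in the
// property, plus the index used to recover the full state when Spin backtracks.
final class ProjectedState {
    final int traceIndex;                      // position j in the Java trace stack
    final Map<String, Object> monitoredValues; // only the variables referenced by the LTL formula

    ProjectedState(int traceIndex, Map<String, Object> monitoredValues) {
        this.traceIndex = traceIndex;
        this.monitoredValues = new LinkedHashMap<>(monitoredValues);
    }

    // Two projected states are considered equal if the monitored variables agree;
    // the index is kept but ignored, like an unmatched c_track variable in Spin.
    @Override
    public boolean equals(Object other) {
        return other instanceof ProjectedState
                && monitoredValues.equals(((ProjectedState) other).monitoredValues);
    }

    @Override
    public int hashCode() {
        return Objects.hash(monitoredValues);
    }
}
```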
An alternative, and more complex, abstraction method consists of building an optimized Spin state with all the information in the Java state.Note that this information may be huge, and includes all the variables in objects, thread state and static variables (in this case, the projection ρ(σj) and the original Java state σj would carry the same information).This state is optimized in two steps.First, we collect strings representing the Java hash value for all objects, threads and static variables.Then, we apply the hashing algorithm MD5 to a canonical concatenation of these strings.The result is extremely efficient in both processing time and size of the final state.This abstraction method, called state hashing, is suitable for checking cycles in Spin, and it can be used to detect cycles in the Java program and to check LTL liveness formulas in infinite executions of Java programs.Both abstraction methods are implemented making use of the Promela features for embedded C code, as explained above.The next section is dedicated to describing and proving the correctness of these abstraction methods supported by TJT.In this section we formalize the Java state abstractions mentioned in the previous section, which enable the analysis of infinite Java execution traces.Fig. 10 shows the projection ρV of a trace.Observe that V divides each state σi into two parts: the part concerning the variables of V in state i, and the rest.The projection simply takes the first part from each state and ignores the rest.The effect of this projection is similar to that of the “cone of influence” technique.However, while this technique simplifies the code to include only variables which are in the set V before executing it, we execute the program as is and then simplify the generated trace.We do not automatically include variables not in V, though.As a general result of this definition of projection, if all the variables required for evaluating an LTL formula are present in the projection, the evaluation of the formula is not affected.Let f be an LTL formula and let us denote the set of variables in f as var(f).As described in Section 3, temporal formulas can be used in debugging with different use cases.In contrast to model checking, testing works with a subset of program traces instead of every possible trace.Test cases may pass when a property is checked in all, some or none of the given traces.Thus we extend ⊨ for sets of traces and the ∀ and ∃ quantifier operators.Due to the elimination of most program variables in the projected states, it is very likely that a projected trace ρV contains many consecutive repeated states.This represents a problem for the model checker, since it can erroneously deduce that the original trace has a cycle due to the double depth-first search algorithm used by Spin to check properties.Note that this does not contradict Proposition 1, since in this result we do not assume any particular algorithm to evaluate the property on the projected trace.In the following sections, we use the relation ⊨s to distinguish between the LTL evaluation carried out by Spin through the DDS algorithm, and the satisfaction relation ⊨ defined above.To correctly eliminate consecutive repeated states in traces, we propose two different techniques that we discuss in the following subsections, along with the corresponding preservation results.A simple solution is to add a new counter variable count to the set of visible variables V.This counter is increased for every new state, thus removing the possibility that Spin erroneously finds a non-existing cycle.Observe that this also precludes Spin from detecting real cycles present in the Java program.This case will be discussed in the following subsection.
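Before the formal definitions, the sketch below contrasts the two projections used to cope with consecutive repeated states: the counter projection just introduced, and the MD5-based hash projection described earlier and formalised below. It is an illustrative Java sketch with hypothetical names; in TJT both projections are realised through Spin's embedded C machinery rather than in Java.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical helpers for the two ways of disambiguating repeated projected states.
final class ProjectionSketch {
    private int count = 0;

    // Counter projection: a strictly increasing counter joins the visible variables,
    // so no two states of the same trace can ever match and no cycle (spurious or
    // real) can be closed by Spin.
    Map<String, Object> counterProject(Map<String, Object> monitoredValues) {
        Map<String, Object> projected = new LinkedHashMap<>(monitoredValues);
        projected.put("count", count++);
        return projected;
    }

    // Hash projection: the whole state, represented by canonically ordered identity
    // strings, is reduced to a 128-bit MD5 digest, which keeps the stored state small
    // while still allowing genuine cycles to be recognised.
    static byte[] hashProject(List<String> canonicalStateStrings) {
        try {
            MessageDigest md5 = MessageDigest.getInstance("MD5");
            for (String s : canonicalStateStrings) {
                md5.update(s.getBytes(StandardCharsets.UTF_8));
                md5.update((byte) 0); // separator so {"ab","c"} and {"a","bc"} differ
            }
            return md5.digest();      // 16 bytes identifying the abstracted state
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("MD5 digest not available", e);
        }
    }
}
```

Under the hash projection, two projected states are considered equal exactly when their digests coincide, with the (very small) collision probability discussed below.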
We extend the notion of trace projection given in Definition 1, by adding the state counter variable as follows:In this section, we assume that Java states have a canonical representation, which makes it possible to safely check whether two states are equal.We know that the canonical representation of states in languages that make an intensive use of dynamic memory is not trivial.We are currently evaluating an extension of the memory representation described in.But a detailed explanation of this extension would exceed the goals of this paper.Besides, in this section, the actual representation is not relevant for the results obtained.We only need to assume that given two logically equal Java states σ1 and σ2, there exists a matching algorithm able to check that they are equal.We use a proper hash function h : State → int to represent each state in the projected trace.It is worth noting that, as not all of the Java states σ have to be stored, we may assume that function h is very precise, producing a minimum number of collisions.That is, h(σ1) = h(σ2) ⇒ σ1 = σ2, with a high degree of probability.To put this into perspective, we present a very brief study of the MD5 hash function, which we used in our implementation.This function transforms the given input into a 128-bit digest.Thus, there are 2^128 possible values of this function.We are interested in the likelihood of a birthday attack, i.e. the probability of a collision between any two states belonging to the same trace or, conversely, in the number of different states that could be generated before a collision is found with a given probability.For instance, if we assume that a probability of 10^−12 is enough for our analysis, this number is approximately 2.6 × 10^13.Given a state size of 64 bytes, about 1.5 × 10^6 gigabytes of memory would be required to store this number of states.This is well beyond what current computers can hold, and therefore computationally infeasible.Thus, we conclude that such a hash function is adequate for our uses.Now, we extend the notion of state projection given in Definition 1, by adding the codification of the whole state as follows:We now discuss how the results are preserved regarding the satisfaction of temporal properties in Java and in the projected traces.Here we assume that the algorithm for checking the satisfaction of a property uses the double depth-first search algorithm as implemented by Spin.We focus on the preservation of results using the counter and hash projections as described in Definitions 3 and 4, respectively, which were introduced to deal with cycles as required by the model checking algorithm implemented by Spin.This theorem asserts that any temporal formula which is satisfied in the original Java trace t is also satisfied in the hash projection of the trace.The converse, while generally true for practical purposes, is limited by the quality of the hash function h.In addition to projecting the variables in f, as established in Proposition 1, the hash projection includes a variable computed by h that identifies the global state and is used to detect cycles in the trace.In this section we propose an optimization approach to minimize the number of states of the projected trace that need to be generated and transferred to Spin.To do this, we slightly modify the transition relation → defined above by labeling transitions as follows.A Java trace is now given by a sequence of states,That is, we only project to
Spin those states where some visible variable has just been modified.However, this definition of folding is not enough to allow a precise cycle detection, which was the main reason for introducing the hash projection.If an infinite cycle is located in the folded states, Spin will not be informed of any new Java state, and thus Spin will not be aware that the Java program is going to loop endlessly in those states.An implementation may choose to use a timer as a limit instead of a state counter, which may be more practical and would not affect the results given below.This projection may be further refined in the implementation with an adaptive limit, e.g. a limit which decreases progressively.Although we could define a limited folded counter projection in a similar fashion, there would be no benefit in doing so, since the counter prevents any kind of cycle from being detected.We now show how the results are preserved with these projections.This result is not affected by the folding in the projection, because the folded states are not required to evaluate the boolean tests of the temporal formula, and cycle detection is not affected since it is not supported by the counter projection, as discussed in Section 4.2.3.Again, this result is not affected by the folding in the projection thanks to the limit, which covers the detection of cycles in folded states.If there is a cycle in an infinite sequence of states the transition labels of which are Mi∩ V = ∅, the limited folding only removes a subset of the states.Since a cycle is by definition a finite sequence of states, it is guaranteed that eventually two equal states will be projected, and thus the cycle will be detected.TJT has been successfully used with real Java applications.In this section, we show some example properties evaluated on public open source Java projects, some of which were also evaluated in.These applications include three servers: FTP, NFS and HTTP.It is worth noting that, in order to test these servers, we had to implement mock clients to simulate the behavior required by each test.In addition, we studied the elevator problem, a typical example of control software.The temporal formulas used in each test have been gathered together in Table 1.Note that all formulas, with the exception of F4 and F6, represent liveness properties, and, in the case of programs with infinite executions, they can only be analyzed with runtime checkers that implement mechanisms like the cycle detection considered in this paper).FTPD server.The first application is the FTP server described in Section 2.1.We tested the three formulas from Section 2.1, shown in Table 1 as formulas F1, F2 and F3, and two additional ones.Of all these formulas, F2 was prepared to uncover a manually introduced bug, while the rest were analyzed over the normal server.F4 deals with security in the server, checking that no user is able to perform a STOR operation without being authenticated first.Finally, formula F5 is a twist on F1, but using the temporal operator “until”.Using this formula we check that, at some point, the variable incoming, which holds the socket just opened for attending the client, should be non-null until a specific thread is created for attending that client.Elevator problem.The next application is a typical example of concurrency: a shared resource and several clients trying to use it at the same time.This example has been implemented in Java using locks and conditions.Figs. 
14 and 15 show part of the elevator and client code, respectively.We can use a temporal formula to check that the elevator does not wait for clients if it is not free.For the sake of illustration, we tested this formula to debug an incorrect implementation, where the wait condition of the server is wrong.We also used this incorrect implementation to test the opposite condition.In both cases, the generated traces led us to this manually introduced error.NFS server.We have also debugged an NFS server implemented in Java using a client that tries to mount a directory provided by the server.We can test some conditions related to an incorrect or unauthorized request, by checking whether some internal error fields are updated or specific exceptions are thrown.Jibble Web Server.Finally, we also studied a Java web server that manages HTTP requests.One possible condition to check would be that the server throws the right exception if it is launched from an incorrect root directory.In addition, we can test whether the server fails to start on any port from a given range, which can be specified as a test input parameter.All executions should thus avoid the location of the exception that would be thrown in this case.We performed some of the aforementioned tests using the folded counter projection.It is worth noting that not all of the proposed tests can be carried out using this projection.Formulas that would require cycle detection cannot be checked, as per Proposition 2.However, formulas can still be checked when the cycle that Spin's stuttering mechanism creates using the last state is enough for detecting every accepting cycle in the never claim automaton generated from the formula.Most of the programs we are debugging are infinite, i.e. they are servers with an infinite reactive loop, and this cannot be checked with finite resources and this projection.We modified these programs to produce finite versions that could be checked with both projections, for the purpose of comparison.The results are summarized on the left hand side of Table 2, averaged over a series of test executions.The third column shows the number of projected Java states, while the fourth column indicates the number of state transitions in Spin.The next two columns show the size of a Spin state and the total time of analysis.The last column of the table shows an approximation of the size of the Java states, before any projection.This number only takes into account the size of objects allocated in the heap.Since the size of the heap changes dynamically, we report the maximum value that we observed during the execution of each program.The size of states in Spin after the projection is influenced by several factors.First, Spin has an overhead of 16 B for a Promela specification with a single process, and a Büchi automaton adds another 8 B.
Then, a step integer variable is added to track the Java state that is retrieved in each state.The variables used for generating test inputs, if any, are added to the global state as well.Furthermore, the counter projection requires an additional variable as described in Section 4.2.1.It is worth noting that the variables being monitored are not part of the global state, but are kept in a separate data structure, in order to support backtracking with minimal impact in the size of Spin states.TJT also has a deadlock detection algorithm.Although the purpose of formula F7 was to detect the incorrect wait condition, it uncovered a deadlock in each execution of the program detected by the monitoring module.Although deadlocks may be detected while using any property, omitting the temporal formula is recommended when specifically searching for them in order to prevent the trace being terminated early due to a specified property.We also evaluated the hash projection, using all the properties described above.Thanks to its cycle detection capabilities, we could use it for more tests than the counter projection.The right hand side of Table 2 shows these results.The table shows that, compared with the counter projection, the hash projection is generally slower, due to the computation penalty associated with visiting the whole Java program state and computing its hash.As these results suggest the tests with more Java states are the ones where the test time increases the most.Also, the difference between the size of Spin states between the counter and the hash projections is constant: the hash projection adds a variable to store the hash of each state, but removes the counter variable.Although the size of the hash proper is 16 B, in our implementation it is stored as a 32 B character array, which explains the total difference of 28 B.Furthermore, we included some additional tests that required true cycle detection, which is only possible under the hash projection.First, we tested Formula 3 from Section 2.1 on the FTP server.In addition, we also tested a simple Java program that deals with lists in an infinite loop.The program adds elements to the list and then removes them, and we checked that the list ended up with exactly one element an infinite number of times.To end our experiments, we performed a small comparison between TJT and the LTL extension for JPF, using the hash projection.Although both tools are based on model checking and can test LTL properties, their scopes are different.JPF performs an exhaustive search over the complete space state of the program, while TJT analyzes a range of execution traces.We compared both tools with some examples available with JPF-LTL, summarizing the results in Table 3.In this table, the first two columns show the name of the example and the formula being analyzed, “Transitions” are the number of state transitions traversed, and “Time” the total time required to check the formula against the program.It is worth noting the disparity in time and space required for the analysis of the second formula with TJT, compared to the other two.This program deals with random number generators, and the property requires cycle detection.Although checking whether or not a single trace violates the property is relatively quick, the first few traces generated and analyzed by TJT did not violate the property.Thus, when a violating trace was generated, the cost of the analysis had accumulated the analysis of the previous traces.The most notable tools for analyzing Java programs 
using some variant of full-state model checking are Bandera and Java PathFinder.Bandera is a model-extraction-based tool that requires the Java program to be transformed into a model composed of pure Promela plus embedded C code.This model is optimized by applying a data abstraction mechanism that provides an approximation of the execution traces.As Bandera uses Spin as the model checker, it can check LTL on infinite traces and preserve correctness results according to the approximation of the traces.Compared with Bandera, TJT only checks a set of traces.However, TJT uses runtime monitoring to avoid model transformation, and its two abstraction methods guarantee the correctness of the results.Java PathFinder is a complete model checker for Java programs that performs a complete coverage of a program, while our testing tool does a partial analysis of the program.In addition, thanks to a matching mechanism, JPF does not revisit the same execution path twice, while TJT analyzes each trace in isolation without checking whether several traces share already visited states.However, due to our integration approach, we can still gain some advantages from reusing the well-known model checker Spin, instead of building a new one from scratch.Some realistic Java examples of reactive software are not suitable for verification by JPF.For instance, we tried analyzing our elevator problem with JPF, but it ran out of memory after 58 minutes.The verification of LTL with JPF-LTL in JPF is still under development and has a limited visibility of the program elements for writing the formula.At the time of writing, JPF-LTL only considers entry to methods in the propositions, and it requires the user to explicitly declare whether the formula should be evaluated for infinite or finite traces.TJT allows a richer set of propositions to be used in the formulas and, due to the stuttering semantics used by Spin, the user does not need to declare whether the trace is finite or not.The specification of LTL properties to analyze programming languages at runtime has been proposed by other authors, whom we discuss in the rest of this section.Probably the most complete overview of these approaches can be found in a paper by Bauer et
al consider the runtime verification of LTL and tLTL with a three-valued semantics suitable to check whether or not a partial observation of a running system meets a property.They generate deterministic monitors to decide the satisfaction of a property as early as possible.They use these three-values as a way to adapt the semantics of LTL to the evaluation of finite traces.The authors write that “the set of monitorable properties does not only encompass the safety and cosafety properties but is strictly larger”.However, the general case of liveness properties for infinite traces is not considered.Compared with our work, they develop the foundations to create monitors to support the new semantics of LTL for infinite traces, while our work relies on the already existing algorithms and tools to check Büchi automata for infinite traces.Java PathExplorer, developed by Havelund and Roşu, uses the rewriting-logic based model checker Maude to check LTL on finite execution traces of Java programs.The authors provide different semantics for LTL formulas in order to avoid cycle detection.Java PathExplorer also supports the generation of a variant of Büchi automata for finite traces developed by Giannakopoulou and Havelund.We share with Java PathExplorer the idea of using the model checker to process the stream of states produced by Java.However, our use of Spin allows us to check infinite execution traces.The tool Temporal Rover can check temporal logic assertions against reactive systems at runtime.The author considers that both finite and infinite traces are possible.However, only finite traces are evaluated, and a default fail value is returned for formulas like ◊p when p has not been satisfied at the end of the trace and there is no evidence that the program has terminated.TJT can provide a conclusive verdict when inspecting the infinite trace.Bodden uses AspectJ to implement a method to evaluate LTL, inserting pieces of Java code to be executed at points where the behavior specified by the formula is relevant and must be evaluated.This method is useful to check only safety properties.d’Amorim and Havelund have developed the tool HAWK for the runtime verification of Java programs, which allows the definition of temporal properties with the logic EAGLE.In addition, the user must supply a method that must be called when the program terminates in order to produce a finite trace.FiLM also gives a specific semantics to LTL to check both safety and liveness in finite traces.However, in the case of liveness, manual inspection is required when the tool reports a potential liveness violation.All these tools for runtime monitoring of LTL are focused on finite traces.The main difference with TJT is the support of cycle detection due to the way in which the states are abstracted and stored, and the use of Büchi automata.Note that we have not included further experimental comparison of TJT with some of these runtime monitoring tools due to the lack of comparable public examples, or of the tools themselves.We have presented the foundations of TJT, a tool for checking temporal logic properties on Java programs.This tool is useful for testing functional properties on both sequential and concurrent programs.In particular, we explained how the use of Büchi automata combined with storing the states from runtime monitoring can be used to check liveness properties in non-terminating executions of reactive programs.Our tool chain includes the model checker Spin and JDI, which are integrated in the well known 
development environment Eclipse.The use of JDI instead of instrumented code makes it possible to detect deadlocks and provides wider access to events in the execution of the program, while being completely transparent.Our current work follows several paths.One is to apply static influence analysis to automatically select the variables relevant to the given property, as we proposed in.The second one is to implement methods to produce more schedulings in multithreaded programs for the same initial state.Finally, we plan to take advantage of multicore architectures to speed up the analysis, due to the already decoupled interaction between the Spin and JDI modules.
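To give a flavour of the kind of event-driven observation that JDI makes possible without instrumenting the program, the sketch below watches modifications of a single field in an already-attached target JVM and resumes it after each event set. It is a minimal, hypothetical sketch rather than TJT's monitoring module; attaching to the target JVM, deadlock detection and the socket connection to Spin are all elided.

```java
import com.sun.jdi.Field;
import com.sun.jdi.ReferenceType;
import com.sun.jdi.VirtualMachine;
import com.sun.jdi.event.Event;
import com.sun.jdi.event.EventSet;
import com.sun.jdi.event.ModificationWatchpointEvent;
import com.sun.jdi.event.VMDeathEvent;
import com.sun.jdi.request.EventRequestManager;

// Hypothetical watcher: report every assignment to one field of the target program.
final class FieldWatcherSketch {
    static void watchField(VirtualMachine vm, ReferenceType type, String fieldName)
            throws InterruptedException {
        EventRequestManager erm = vm.eventRequestManager();
        Field field = type.fieldByName(fieldName);            // field referenced by the property
        erm.createModificationWatchpointRequest(field).enable();

        boolean running = true;
        while (running) {
            EventSet events = vm.eventQueue().remove();       // blocks until the next event set
            for (Event event : events) {
                if (event instanceof ModificationWatchpointEvent) {
                    ModificationWatchpointEvent mw = (ModificationWatchpointEvent) event;
                    // A real monitor would project this value and send it to the model checker.
                    System.out.println(fieldName + " := " + mw.valueToBe());
                } else if (event instanceof VMDeathEvent) {
                    running = false;                          // target program terminated
                }
            }
            events.resume();                                  // let the target JVM continue
        }
    }
}
```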
This paper presents an approach for the automated debugging of reactive and concurrent Java programs, combining model checking and runtime monitoring. Runtime monitoring is used to transform the Java execution traces into the input for the model checker, the purpose of which is twofold. First, it checks these execution traces against properties written in linear temporal logic (LTL), which represent desirable or undesirable behaviors. Second, it produces several execution traces for a single Java program by generating test inputs and exploring different schedulings in multithreaded programs. As state explosion is the main drawback to model checking, we propose two abstraction approaches to reduce the memory requirements when storing Java states. We also present the formal framework to clarify which kinds of LTL safety and liveness formulas can be correctly analysed with each abstraction for both finite and infinite program executions. A major advantage of our approach comes from the model checker, which stores the trace of each failed execution, allowing the programmer to replay these executions to locate the bugs. Our current implementation, the tool TJT, uses Spin as the model checker and the Java Debug Interface (JDI) for runtime monitoring. TJT is presented as an Eclipse plug-in and it has been successfully applied to debug complex public Java programs. © 2013 The Authors.
52
A catchment-scale method to simulating the impact of historical nitrate loading from agricultural land on the nitrate-concentration trends in the sandstone aquifers in the Eden Valley, UK
Excessive nitrate concentrations in water bodies can cause serious long-term environmental issues and threaten both economy and human health.Nitrate in freshwater remains an international problem.Elevated nitrate concentrations in groundwater are found across Europe.For example, the European Environment Agency reported that the proportion of groundwater bodies with mean nitrate concentration > 25 mg L− 1 in 2003 were 80% in Spain, 50% in the UK, 36% in Germany, 34% in France and 32% in Italy.Despite efforts made under the EU Water Framework Directive by 2015 to improve water quality, there is still a continuous decline in freshwater quality in the UK.For example, nitrate concentrations are exceeding the EU drinking water standard) and have a rising trend in many rivers and aquifers.It is estimated that about 60% of all groundwater bodies in England will fail to achieve good status by 2015.Agricultural land is the major source of nitrate water pollution.Point source discharges have been estimated as contributing < 1% of the total nitrate flux to groundwater in the UK.Agricultural yields are increased by the addition of nitrogen in fertilisers, but this leads to nitrate leaching into freshwaters.Nitrate concentrations in groundwater beneath agricultural land can be several to a hundred-fold higher than that under semi-natural vegetation.During the last century, the pools and fluxes of N in UK ecosystems have been transformed mainly by the fertiliser-based intensification of agriculture.In response to this growing European-wide problem, the European Commission implemented the Nitrates Directive to focus on delivering measures to address agricultural sources of nitrate.In the freshwater cycle, nitrate leached from soil is subsequently transported by surface runoff to reach streams or by infiltration into the unsaturated zone.Nitrate entering the groundwater system is then slowly transported through the USZs downwards to groundwater in aquifers.Recent research suggests that it could take decades for leached nitrate to discharge into freshwaters due to the nitrate time-lag in the USZs and saturated zones.This may cause a time-lag between the loading of nitrate from agricultural land and the change of nitrate concentrations in groundwater and surface water.For example, Dautrebande et al. 
found that the anticipated decrease in nitrate concentrations in the aquifer following the reduction of nitrate loading from agricultural land was not observed.However, current environmental water management strategies rarely consider the nitrate time-lag in the groundwater system.The Eden catchment, Cumbria, UK is a largely rural area with its main sources of income being agriculture and tourism."The Environment Agency's groundwater monitoring data show that some groundwater exceeds the limit of 50 mg L− 1 in the Eden Valley.In recent years, the increasingly intensive farming activities, such as the increased application of slurry to the grazed grassland, have added more pressures on water quality in the area.Efforts have been made to tackle agricultural diffuse groundwater pollution in the area.For example, the River Eden Demonstration Test Catchment project was funded by the Department for Environment, Food & Rural Affairs to assess if it is possible to cost-effectively mitigate diffuse pollution from agriculture whilst maintaining agricultural productivity.The Environment Agency defined Groundwater Source Protection Zones in the Eden Valley to set up pollution prevention measures and to monitor the activities of potential polluters nearby.However, without evidence of the impact of nitrate-legacy issues on groundwater quality, it is difficult to evaluate the effectiveness of existing measures or to decide whether additional or alternative measures are necessary.So a key question for nitrate-water-pollution management in the area is how long it will take for nitrate concentrations in groundwater to peak and then stabilise at an acceptable level) in response to historical and future land-management measures.Therefore, it is necessary to investigate the impacts of historical nitrate loading from agricultural land on the changing trends in nitrate concentrations for the major aquifers in the Eden Valley.Wang et al. studied the nitrate time-lag in the sandstone USZ of the Eden Valley taking the Bowscar SPZ as an example.Outside of the study area, efforts have been made to simulate nitrate transport in the USZ and saturated zone at the catchment scale."For example, Mathias et al. 
used Richards' equation to explicitly represents fracture–matrix transfer for both water and solute in the Chalk, which is a soft and porous limestone.Price and Andersson combined a simple USZ nitrate transport model with fully-distributed complex groundwater flow and transport models to study nitrate transport in the Chalk.These catchment specific models, which require a wide range of parameters and are computationally-demanding, are of limited value for application to catchment-scale modelling for nitrate management.There is a need to develop a simple but still conceptually feasible model suitable for simulating long-term trend of nitrate concentration in groundwater at the catchment scale.In addition, the nitrate transport in low permeability superficial deposits has rarely been considered in existing nitrate subsurface models.Low permeability superficial deposits, however, overlay about 20.7% of the major aquifers in England and Wales, and 54% of the Permo-Triassic sandstones in the Eden Valley.Based on a simple catchment-scale model developed in this study, the impact of historical nitrate loading from agricultural land on the nitrate-concentration trends in sandstones of the Eden Valley was investigated.By considering the major nitrate processes in the groundwater system, this model introduces nitrate transport in low permeability superficial deposits and in both the intergranular matrix and fractures in the USZs.Nitrate transport and dilution in the saturated zone were also simulated using a simplified hydrological conceptual model.The Eden Catchment lies between the highlands of the Pennines to the east and the English Lake District to the west.The River Eden, which is the main river in the catchment, runs from its headwaters in the Pennines to the Solway Firth in the north-west.The area is mainly covered by managed grassland, arable land and semi-natural vegetation.Carboniferous limestones fringe much of the Eden Catchment and have very low porosity and permeability, thus making a negligible contribution to total groundwater flow.Therefore, their storage and permeability rely almost entirely on fissure size, extent and degree of interconnection.They only constitute an aquifer due to the presence of a secondary network of solution-enlarged fractures and joints.Ordovician and Silurian intrusion rocks form the uplands of the Lake District and can also be found in the south-east of the catchment.The Eden Valley in the central part of the catchment consists of thick sequences of the Permo-Triassic sandstones, i.e. St Bees Sandstones and Penrith Sandstones.The early Permian Penrith Sandstone Formation dips gently eastwards and is principally red-brown to brick red in colour with well-rounded, well-sorted and medium to coarse grains.According to Waugh, these Penrith Sandstones, which were deposited as barchans sand dunes in a hot and arid desert environment, can be divided into three zones, i.e. ‘silicified Penrith Sandstones’, ‘non-silicified Penrith Sandstones’ and ‘interbedded Brockram Penrith Sandstones’.The sandstones in the northern part of the formation are tightly cemented with secondary quartz, occurring as overgrowths of optically continuous, bipyramidal quartz crystals around detrital grains.The sandstones in the rest of the formation are not silicified, but the sandstones in the southern part are interbedded with calcite-cemented alluvial fan breccias.The study of Lafare et al. 
showed that the borehole hydrographs within the ‘silicified’ zone are characterised by small amplitudes of seasonality in groundwater levels.This indicates that silicified sandstones prevent the aquifer responding efficiently to localised recharge.However, the non-silicified sandstones in the middle and southern parts of the Penrith Sandstone show a greater relative variability of the seasonal component.The Eden Shale Formation mainly consists of mudstone and siltstone and is an aquitard that confines the eastern part of the Penrith Sandstone aquifer.St Bees Sandstone formation conformably overlies the Eden Shale Formation and occupies the axial part of the Eden Valley syncline.The formation consists of very fine to fine-grained, indurated sandstone.The borehole hydrographs in the St Bees Sandstones showed that they are more homogeneous than the Penrith Sandstones and tend to act as one aquifer unit.This study focused on these Permo-Triassic sandstones that form the major aquifers in the catchment.The ranges of transmissivity, storage coefficient and porosity of the Permo-Triassic sandstones are 8–3300, 4.5 × 10− 8–0.12 and 5–35 respectively in the study area.> 75% of the bedrocks in the Eden Catchment are covered by Quaternary superficial deposits.Only the Lake District and escarpment of the Northern Pennines have extensive areas of exposed bedrock.Glacial till is the most extensive deposit in the catchment.It is typically a red-brown, stiff, silty sandy clay to a friable clayey sand with pockets and lenses of medium and fine sand and gravel and cobble grade clasts.The presence of glacial till has the potential to cause increased surface runoff and reduced groundwater recharge.According to BGS 1:250 000 bedrock and superficial geological maps, glacial till covers about 46% of the Eden Catchment and 54% of the Permo-Triassic sandstones in the Eden Valley.The NTB model was initially developed to simulate nitrate transport in the USZs and to estimate the time and the amount of historical nitrate arriving at the water table at the national scale.It requires the datasets of a uniform nitrate-input-function, estimated USZ thickness, and lithologically-dependent rates of nitrate transport in the USZs.More details about the NTB model are provided by Wang et al.However, the NTB model was extended in the following ways to simulate the average nitrate concentration in an aquifer zone for the catchment-scale study.The single nitrate-input-function derived in the study of Wang et al. has been validated using mean pore-water nitrate concentrations from 300 cored boreholes across the UK in the British Geological Survey database.It reflects the trend in historical and future agricultural activities from 1925 to 2050.For example, a rapid rise of 1.5 kg N ha− 1 year− 1 nitrogen loading was caused by increases in the use of chemical-based fertilisers to meet the needs of an expanding population.The nitrate loading in the UK peaked in 1980s and then started to decline as a result of restrictions on fertiliser application in water resource management.It was assumed that there would be a return to nitrogen-input levels similar to those associated with early intensive farming in the mid-1950s, i.e. 
a constant 40 kg N ha− 1 loading rate.However, this single-input-function only generated a national average, rather than a spatially distributed input reflecting historical agricultural activities across a region.Therefore, a spatio-temporal nitrate-input-function was introduced for this catchment-scale study to represent nitrate loading across the Eden Valley from 1925 to 2050.The NEAP-N model, which has been used for policy and management in the UK, predicts the total annual nitrate loss from the agricultural land across England and Wales.It assigns nitrate-loss-potential coefficients to each crop type, grassland type and livestock categories within the June Agricultural Census data to represent the short- and long-term increase in nitrate leaching risk associated with cropping, the keeping of livestock and the spreading of manures.The NEAP-N data in 1980, 1995, 2000, 2004 and 2010 from the Department for Environment, Food & Rural Affairs were used in this study.The trend of nitrate loading from the single nitrate-input-function was used to interpolate and extrapolate the data for the years other than the NEAP-N data years.The section of results shows some examples of the spatio-temporal nitrate-input-functions derived in this study.A simplified hydrogeological conceptual model was developed to simulate nitrate transport and dilution processes in the groundwater system at the catchment scale as follows:Water and nitrate are transported by intergranular seepage through the matrix and by possible fast fracture flow in the USZs,Groundwater recharge supplies water to the Permo-Triassic sandstones as an input,The thickness of glacial till affects the amount and timing of recharge and nitrate entering the groundwater system,Groundwater in the Permo-Triassic sandstones flows out of the Eden Valley via rivers in the form of baseflow as an output,Groundwater is disconnected from rivers where glacial till is present,The year by year total volume of groundwater for an aquifer in a simulation year is the sum of the groundwater background volume and the annual groundwater recharge reaching the water table.Groundwater recharge and baseflow reach dynamic equilibrium whereby the amount of recharge equals that of baseflow in a simulation year,Nitrate entering the Permo-Triassic sandstones is diluted throughout the Voltotal in a simulation year,The velocity of nitrate transport in the Permo-Triassic sandstones is a function of aquifer permeability, hydraulic gradient and porosity,The transport length for groundwater and nitrate can be simplified as the three-dimensional distance between the location of recharge and nitrate reaching the water table and their nearest discharge point on the river network.More details about nitrate transport and dilution in the groundwater system will be described in the following sub-sections.Low permeability superficial deposits not only control the transfer of recharge and soluble pollutant to underlying aquifers but also affect the locations of groundwater discharge to surface waters.It was assumed in the original NTB model that the presence of low permeability superficial deposits stops recharge and nitrate entering aquifers.This simplification is sensible to reduce the number of parameters for a national-scale study.However, field experiments in the Eden catchment undertaken by Butcher et al. 
showed that the thickness of low permeability glacial till affects the amount of water and nitrate entering the groundwater system.It was found that glacial till can be relatively permeable when its thickness is < 2 m, within which the superficial deposits are likely to be weathered and fractured.Therefore, the spatial distribution of the thickness of glacial till exerts a strong influence on groundwater recharge processes to the underlying USZs in the study area.In the study area, more than half of the Permo-Triassic sandstones are overlain by glacial till as described above.According to Lawley and Garcia-Bajo, about 59% of glacial till has a thickness < 2 m. Therefore, it is important to consider the water and nitrate transport in glacial till for this catchment-scale study, rather than adopt the same assumption as the national-scale study of Wang et al.The thicker the glacial till, the less water can be transmitted through it, thus reducing recharge rates; the reduction of recharge will be diverted to surface runoff.The following sub-section describes how the reduction of recharge is estimated.A soil-water-balance model, SLiM, was used to estimate distributed recharge in this study using the information on weather and catchment characteristics, such as topography, land-uses and baseflow index.Based on the BGS database of the spatial distribution of the thickness of superficial deposits, glacial till was divided into five thickness classes, i.e., 0–2 m, 2–5 m, 5–10 m, 10–30 m, and > 30 m.A parameter was introduced into the soil-water-balance model to represent the reduction of recharge, or the increase of runoff, for the five thickness classes of glacial till.Monte Carlo simulations were undertaken to calibrate the recharge model for this study.The parameter of reduction of recharge was randomly sampled within 0–100% for each thickness class, and the modelled results were compared with the surface component of observed river-flow data for 19 gauging stations in the study area (a schematic sketch of this sampling procedure is given below).Scatter plots of RRch against the performance of the recharge model from the MC simulations were produced to identify the reduction of recharge for each thickness class of glacial till.The results section provides more details.The reduction of recharge affects the amount of both water and nitrate entering the groundwater system as described below.According to the ‘piston-displacement’ mechanism, water and nitrate are displaced downwards from the top of the USZs.Therefore, instead of travelling through the USZs, water and nitrate reaching the water table are displaced from the bottom of the USZs.This explains why observations show that the water table responds to recharge events at the surface on a time-scale of days or months, whilst the residence time for pollutant fluxes in the USZs is in the order of years.According to the study of Lee et al., the value of Rpq is controlled by several factors, such as the thickness of the USZs, moisture content and fractures in the USZs, and rainfall densities.The way of identifying Rpq will be described in the model construction section.Eqs.– were used only when an aquifer cell does not overlap with a river cell.If aquifer cells spatially overlap with river cells, the nitrate travel time in them was assigned to be zero without using these equations.
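As a schematic illustration of the Monte Carlo calibration of the recharge-reduction parameter described above, the sketch below randomly samples a reduction value for each of the five glacial-till thickness classes and scores every sample with a Nash-Sutcliffe efficiency (NSE) against observed surface flow, keeping the best-performing set. It is a hypothetical Java sketch, not the SLiM or NTB code; runModel stands in for the soil-water-balance model, and both it and the observations are placeholders.

```java
import java.util.Arrays;
import java.util.Random;
import java.util.function.Function;

// Hypothetical Monte Carlo search for the recharge-reduction parameter RRch.
final class RRchCalibrationSketch {
    // Nash-Sutcliffe efficiency: 1 = perfect fit, 0 = no better than the observed mean.
    static double nashSutcliffe(double[] observed, double[] modelled) {
        double mean = Arrays.stream(observed).average().orElse(0.0);
        double num = 0.0, den = 0.0;
        for (int i = 0; i < observed.length; i++) {
            num += Math.pow(observed[i] - modelled[i], 2);
            den += Math.pow(observed[i] - mean, 2);
        }
        return 1.0 - num / den;
    }

    // runModel maps one candidate RRch set (five values in 0-1) to simulated surface flow.
    static double[] calibrate(double[] observedFlow,
                              Function<double[], double[]> runModel,
                              int samples, long seed) {
        Random rng = new Random(seed);
        double bestScore = Double.NEGATIVE_INFINITY;
        double[] best = null;
        for (int run = 0; run < samples; run++) {
            double[] rrch = new double[5];                          // one value per thickness class
            for (int c = 0; c < 5; c++) rrch[c] = rng.nextDouble(); // sampled within 0-100%
            double score = nashSutcliffe(observedFlow, runModel.apply(rrch));
            if (score > bestScore) {
                bestScore = score;
                best = rrch.clone();
            }
        }
        return best;                                                // parameter set with the highest NSE
    }
}
```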
Based on the BGS 1:250 000 bedrock geological map, the Permo-Triassic sandstones in the Eden Valley were divided into four aquifer zones, i.e. ‘St Bees Sandstones’, ‘silicified Penrith Sandstones’, ‘non-silicified Penrith Sandstones’ and ‘interbedded Brockram Penrith Sandstones’, as described previously.These aquifer zones were then discretised into 200 m by 200 m cells.The St Bees Sandstone formation is separated from the Penrith Sandstones by the impermeable Eden Shale Formation.Since the groundwater flow in the study area is dominated by flow to the River Eden, the groundwater-flow direction in the Penrith Sandstones is almost parallel to the boundaries between the aquifer zones of the Penrith Sandstones.This indicates that the groundwater interaction between aquifer zones is limited and can be ignored in this study.In order to determine the water-table-response time to rainfall events, Rpq, the cross-correlation method, which is a time series technique, has been adopted in this study.Cross-correlation has been used to reveal the significance of the water-table response to rainfall.Datasets used for this calculation include the time series of monthly rainfall from the Meteorological Office Rainfall and Evaporation Calculation System, and groundwater level in the Skirwith Borehole in the study area.Rpq was set to the period of time over which there is a correlation between groundwater level and rainfall at the 95% confidence level, assuming that it is homogenous in the study area.Fig. 3 shows that the vertical bars are above the 95% confidence level for 15 months, thereby indicating that it takes 15 months for the groundwater level in the Permo-Triassic sandstones in the study area to fully respond to a monthly rainfall event.The information on glacial till thickness ThicknessDRIFT,i was derived using the BGS 1:250 000 superficial deposits geological map and the BGS database of the thickness of superficial deposits.According to the field experiments undertaken by Butcher et al., the nitrate velocity in glacial till VDRIFT,i is 0.6 m year−1, whilst nitrate travels, on average, 3.5 m year−1 in the USZs of the Permo-Triassic sandstones.The river network derived from the gridded Digital Surface Model was used to calculate the distance to river points for each cell.The BGS 1:250 000 superficial geological map was also used to identify the locations where aquifers are disconnected from rivers due to the presence of glacial till.These locations were not considered when calculating the distance to the river.The hydraulic gradient Gi was calculated using the long-term-average groundwater levels GWLi and river levels RLi derived from the NextMap DSM data in the study area.The USZ thickness ThicknessUSZ,i was calculated using GWLi and the NextMap DSM data in the study area.The yearly distributed recharge estimates from the calibrated recharge model mentioned above were used to simulate the nitrate-transport velocity in the Permo-Triassic sandstone USZs and the groundwater volume Voltotal, respectively.Monte Carlo simulations were also undertaken to calibrate the extended NTB model developed in this study.In this study, the MC parameters include Φaquifer, Syaquifer, Rfaquifer, Taquifer, Daquifer and RFF.These MC parameters were randomly sampled within a finite parameter range to produce one million parameter sets.The upper and lower bounds of the range for each parameter were defined based on literature, observed results or expert judgment.For example, the aquifer properties of active groundwater depth, porosity, transmissivity and specific yield were based on the collation of Allen et al., assuming that they are homogenous in each aquifer
zone.Performing MC simulations is a computer-intensive task especially when multiple parameters are involved.Therefore, it is good practice to reduce the number of parameters for MC simulations by fixing some parameters using available information on the aquifer zones.Parameters that can be identified or calculated based on existing datasets, methods and hydrogeological knowledge from hydrogeologists were fixed.These fixed parameters of this model include Ai, qi, Rpq, GWLi, RLi, ThicknessDRIFT , i, VDRIFT , i, ThicknessUSZ , i, Gi, and etc.For example, the time-variant distributed recharge was estimated using the SLiM model.The section of model construction describes the parameterisation of some of these fixed parameters.A zero NSEscore indicates that modelled data are considered as accurate as the mean of the observed data, and a value of one suggests a perfect match of modelled to observed data.The model with the highest NSE score in a set of MC simulations is deemed to have the optimum parameter set.The NSE score was also used in calibrating the recharge model mentioned above.In the second MC simulation, the observed nitrate concentrations were partitioned into two sets of 70% for MC simulation and 30% for validation.The average value of the NSE scores in the calibration of the NTB model for four aquifer zones is 0.48, whilst that for aquifer zones in the validation is 0.46.This indicates that the risk of overfitting in calibrating the NTB model is limited.A spatio-temporal nitrate-input-function was derived using a combination of NEAP-N predictions and the single nitrate-input-function as mentioned above.It contains a nitrogen loading map for each year from 1920 to 2050.Fig. 4 shows some examples of time series of nitrate loading at locations randomly selected within the study area.The actual nitrate loading for a cell depends on land-use type, livestock density and the measures of farming activities at this location.In general, it shows that the improved grassland and arable land-uses have higher nitrate loading than the woodland land-use type.The low nitrate loading between 1925 and 1940 reflect the pre-war low level of intensification with very limited use of non-manure-based fertilisers.The gradual rise of nitrate loading from 1940 to 1955 was the result of the intensification of agriculture during, and just after, World War II.Nitrate loading reached its peak value during the 1980s after a rapid rise due to increases in the use of chemical based fertilisers, and then started to decline as a result of restrictions on fertiliser application.Fig. 
5 shows the spatial distribution of nitrate loading in some years.Generally, the western part of the ‘silicified Penrith Sandstones’ and the eastern and northern parts of the ‘St Bees Sandstones’ have higher nitrate loading than the rest of the study area.A sensitivity analysis was conducted when carrying out MC simulations to calibrate both the recharge model and the extended NTB model.The purpose was to determine which parameters contribute most to the model efficiency, and which of them are identifiable within a specific range linked to known physical characteristics of different hydrological or hydrogeological processes.Each MC run was plotted as a dot in the scatter plots to show the model performance of a MC run in the vertical axis when using a parameter value on the horizontal axis.In each scatter plot, many MC runs form a cloud of dots to represent a response surface that indicates how the model performance changes as each parameter is randomly perturbed.As mentioned above, the reduction of recharge was identified for each thickness class of glacial till through the MC simulations of the groundwater recharge model.It shows that the recharge model is sensitive to the parameter of RRch for all five thickness classes of glacial till.The performance of the recharge model reached its highest when RRch of the 0–2 m thickness class was set to 55.6%.However, the recharge model produced better results when RRch for rest of thickness classes, i.e. 2–5 m, 5–10 m, 10–30 m and > 30 m, were close to 100%.This indicates that about 44% of water and nitrate can travel through the thin glacial till with a thickness of < 2 m, whilst no water and nitrate enters the underlying Permo-Triassic sandstones when the thickness of overlying glacial till is larger than 2 m.This is consistent with the findings from the field experiments in the study area undertaken by Butcher et al.In the first set of MC simulation, specific yield Syaquifer, porosity Φaquifer and the retardation factor Rfaquiferwere initially varied together resulting in a group of behavioural runs.In order to clearly demonstrate parameter sensitivity, further MC simulations were undertaken by varying one parameter at a time whilst fixing the other two parameters.The values of fixed parameters were determined by a set of parameter values chosen from one of the behavioural models from the initial MC runs.Fig. 
6 shows that the extended NTB model is very sensitive to these parameters in all four aquifer zones.These parameters, therefore, can be easily identified based on the shapes of response surfaces in these scatter plots.The optimum parameter values, which are denoted by black dots, result in the minimum bias in the MC simulations.In the second set of MC simulations for calibrating the extended NTB model against the observed nitrate concentrations, it was found that the model is sensitive to the depth of active groundwater Daquifer and transmissivity Taquifer, but slightly less sensitive to the ratio of fracture flow RFF in the USZs.The annual nitrate concentrations from 1925 to 2150 for four aquifer zones in the Eden Valley were simulated based on the calibrated extended NTB model.The NTB-model performance was evaluated using the simulated nitrate concentrations from the best model and all observed data.Comparison of modelled values with the observed range shows that the NTB model may overestimate or underestimate the nitrate concentrations.The NTB model produces the best performance in the 'St Bees Sandstones' with an NSE value of 0.68 and a Root Mean Square Error value of 2.8 mg L−1 (which is 9.9% of the range of observed nitrate concentrations).In the 'non-silicified Penrith Sandstones', the NSE and RMSE values are 0.29 and 5.9 mg L−1 respectively.The performance of the NTB model in the other aquifer zones of the study area lies between these two.Visual inspection also shows that the modelled time series of nitrate concentrations are in good agreement with the observed data, and can well reflect trends in the observed data in the study area.Therefore, this is acceptable for such a modelling study that focuses on the long-term trends in the annual average nitrate concentrations in aquifer zones.Spatially distributed recharge values between 1961 and 2011 were used in simulating nitrate concentrations.The spatially distributed long-term-average recharge values were used for years outside of the period from 1961 to 2011, thus resulting in less fluctuation in the modelled nitrate concentrations from year to year.The results show that the nitrate concentration in the 'St Bees Sandstones' keeps rising until the peak value of 36 mg L−1 is reached at the nitrate-concentration-turning-point year 2021.It was estimated that the nitrate concentration in the 'silicified Penrith Sandstones' will have reached its peak value of 43 mg L−1 in the year 2051.The nitrate concentration in the 'non-silicified Penrith Sandstones' is at its peak value that will last from 2015 to 2034.After reaching peak values, the nitrate concentrations in these three aquifer zones will decline to stable levels.It was also found that the nitrate concentration in the 'interbedded Brockram Penrith Sandstones' has passed its peak and started to level off after a slight decrease.An extended NTB model was developed in this study to assess long-term trends of nitrate concentrations in aquifer zones at the catchment scale.The extended NTB model fits the purpose of this study, and the modelled nitrate concentrations can well reflect the trends in the observed data of the study area.The results show that the nitrate concentration in the 'interbedded Brockram Penrith Sandstones' has passed its peak value and is declining slightly.The nitrate concentrations in 'St Bees Sandstones' and 'silicified Penrith Sandstones' are rising until their nitrate-concentration-turning-point years are reached, whilst that in the 'non-silicified Penrith Sandstones' is at 
its peak value and will start to decrease from 2034.However, nitrate concentrations in all four aquifer zones will eventually level off with values below the EU drinking water standard.The nitrate-concentration trends in aquifers are controlled by many factors in each cell of the modelling area, such as recharge, nitrate loading, glacial till thickness, USZ thickness, USZ fracture flow and aquifer properties.However, the thickness of the USZs may partially explain the different nitrate-concentration trends in these four aquifer zones.Since the 'interbedded Brockram Penrith Sandstones' has an average USZ thickness of 2 m, which is much lower than that in 'St Bees Sandstones', 'silicified Penrith Sandstones' and 'non-silicified Penrith Sandstones', its USZ has a much shorter nitrate time-lag than that in the other three USZs.Therefore, it will be easier for the peak nitrate loadings in the 'interbedded Brockram Penrith Sandstones' than for those in the other three aquifer zones to reach the water table.These results are valuable for the management of both groundwater and surface water in the study area.Groundwater is essential for maintaining the flow of many rivers in the form of baseflow when rivers are connected with high permeability aquifers.Nine river gauging stations in the Permo-Triassic Sandstones in the Eden Valley have an average baseflow index of 43%.This indicates that the nitrate remaining in the Permo-Triassic Sandstones will affect the long-term quality of surface water and hence the ecological quality in the study area.Since there is limited information about the fracture flow in the Permo-Triassic Sandstone USZs of the study area, MC simulations were undertaken to better understand the fracture flow in these USZs.As mentioned in the sensitivity analysis section, the ratio of fracture flow in the USZs RFF is identifiable for each aquifer zone.It was found that the optimum RFF values for the USZs of 'St Bees Sandstones', 'silicified Penrith Sandstones', 'non-silicified Penrith Sandstones' and 'interbedded Brockram Penrith Sandstones' are 0.035%, 19.4%, 0.134% and 0.002% respectively.This indicates that 19.4% of water and nitrate travel through fractures in the USZ of 'silicified Penrith Sandstones', whilst > 99.8% of water and nitrate is transported by intergranular movement through the matrix in the other three USZs.Fig. 
1 shows that there are more faults cutting through the sandstones in the ‘silicified Penrith Sandstones’ than other aquifer zones where the density of faults is low or faults have developed along the aquifer boundary.This may partially explain why the ‘silicified Penrith Sandstones’ have higher values of RFF than other three aquifer zones.The approach developed in this study can represent the major hydrogeological processes in the groundwater system using a simplified conceptual model.It can be used to investigate the impact of historical nitrate loading from agricultural land on the long-term trends of nitrate concentrations in aquifers at the catchment scale.Although it was demonstrated by its application in the Eden Valley, UK, the methods for preparing most of the model parameters have been described.Therefore, this simple approach is reproducible by other scientists to address nitrate-legacy issues faced by many countries.However, a number of limitations should be considered when interpreting the modelled results in this study.First, the seasonal interactions between aquifers and rivers have been ignored in this modelling study with an annual time-step.The magnitude of water exchange between aquifers and rivers is governed by aquifer-river connectivity, the permeability of the riverbed, and the seasonal fluctuations of groundwater levels and river stages.Water and nitrate in rivers could infiltrate into aquifers via hyporheic zone due to the fast rise of river stage after storms in dry seasons.Doussan et al. found that nitrate concentrations are consistently reduced when river water infiltrates into the first few metres of bed sediments.The denitrification rates in hyporheic zones are affected by temperature, sediment grain size, and dissolved oxygen.However, these detailed processes in hyporheic zones have been simplified in this study, which focused on the annual average nitrate concentrations in aquifer zones.In general, groundwater eventually flows to surface water and maintains the flow of many rivers.Groundwater flow in the study area is dominated by flow to the River Eden.Therefore, groundwater recharge can be treated as the input of the groundwater system in the Eden Valley; and the output of the system is the baseflow entering the River Eden.It was assumed that the input and output of the Permo-Triassic sandstone aquifers in the Eden Valley reach a dynamic balance on an annual basis.As mentioned previously, the aquifers are disconnected from rivers where glacial till is present in the study area.This method, however, can be improved to represent the detailed water and nitrate interactions between aquifers and rivers when simulating seasonal or monthly nitrate concentrations in the future.Denitrification, which is generally facilitated by the absence of oxygen, is a microbial process in which nitrate is progressively reduced to nitrogen gas.Denitrification is considered to be the dominant nitrate attenuation process in the subsurface system.However, Rivett et al. suggested that denitrification only accounts for a loss of 1–2% in the USZs in general.Butcher et al. 
found no evidence for denitrification and concluded that denitrification is not a significant process controlling nitrate concentrations in the USZs and unconfined aquifers of the Permo-Triassic Sandstones in the Eden Valley.Furthermore, evidence from the porewater chemistry of the cored borehole in the study area showed that oxidation of ammonium to nitrite occurs instead in the USZs.In spite of this, detailed information on the distribution of nitrification in the study area is unavailable.Therefore, it was assumed that nitrate is conservative and its biogeochemical processes in the groundwater system have been ignored in this study.However, the attenuation factor ATT has been introduced into the model and it can be parametrised when more information on nitrate attenuation in the groundwater system is available.The long-term annual average recharge between 1961 and 2011 has been used to simulate nitrate concentrations in the future because the agreed future change scenarios were unavailable for this study.Consequently, nitrate concentrations estimated after 2011 should be treated with caution and are simply indicative of the most likely peak average nitrate concentrations.When future recharge values under different climate change scenarios become available, better simulations can be performed.Since there were no time-variant groundwater-level data available for this study, it was assumed that the groundwater background volume Volbackground remains the same in each simulation year.When the information on time-variant groundwater flow in the study area becomes available in the future, the model can be improved to produce a better estimation of the time-variant total volume of groundwater and hence the nitrate concentrations.It was assumed that the future nitrogen-fertiliser-application rate will be reduced to the level in the mid-1950s, i.e. 
40 kg N ha−1 when deriving the spatio-temporal nitrate-input-function using the NEAP-N data and the single nitrate-input-function.This explains the levelling off of nitrate concentrations after they pass their peak values in the four aquifer zones of the study area.Once a set of agreed scenarios of future nitrate loading becomes available, the model will be able to better estimate nitrate concentrations into the future.In addition, the lateral migration of nitrate in the USZs was ignored in this study due to the lack of information on this.Finally, hydrodynamic dispersion of nitrate in the unsaturated zones due to both mechanical dispersion and diffusion will occur, but was not accounted for in this study.All these simplifications, however, facilitated the modelling procedures for this simple approach, which requires relatively modest parameterisation but provides a framework for better informing land-management strategy that needs to take better account of nitrate-legacy issues.This paper presents a catchment-scale approach to modelling the long-term trends in nitrate concentrations in aquifers.It requires relatively modest parameterisation and runs on an annual time-step.Nevertheless, it captures the key hydrogeological processes at the catchment scale, albeit in simplified ways.For example, the spatio-temporal nitrate loading from agricultural land has been derived; the impact of low permeability superficial deposits on nitrate transport was considered; a simple methodology was introduced to represent the nitrate transport via both the intergranular matrix and fractures in the USZs; and a simplified hydrological conceptual model was developed to simulate nitrate transport and dilution in aquifers.The model provides useful estimates of present and future nitrate concentrations in aquifers.These results can help policymakers understand how the historical nitrate loading from agricultural land affects the evolution of the quality of groundwater and groundwater-dependent surface waters.It also helps develop time frames to assist in understanding the success of programmes of measures at the catchment scale.This model is also valuable for evaluating the long-term impact and timescale of different scenarios introduced to deliver water-quality compliance, such as changes of land-use and fertiliser-application rate under future climate-change impacts.However, the assumptions of the model should be considered when using the modelled results to solve localised nitrate problems.More complexities could be introduced into the model in the future when it is applied to other areas with complicated hydrogeological settings to better represent the nitrate processes in the groundwater system.
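To make the calibration procedure described above more concrete, the short Python sketch below illustrates an NSE-scored Monte Carlo search of the kind used to identify the behavioural parameter set. It is a minimal illustration only: the nitrate model is a placeholder function, and the parameter names, prior ranges and synthetic observations are assumptions rather than values taken from the extended NTB model.

```python
import numpy as np

def nse(observed, modelled):
    """Nash-Sutcliffe efficiency: 1 is a perfect match, 0 means the model is
    only as accurate as the mean of the observations."""
    observed = np.asarray(observed, dtype=float)
    modelled = np.asarray(modelled, dtype=float)
    return 1.0 - np.sum((observed - modelled) ** 2) / np.sum((observed - observed.mean()) ** 2)

# Hypothetical prior ranges for the free parameters varied in the MC runs
# (all other parameters would be fixed from existing datasets, as described above).
PARAM_RANGES = {
    "specific_yield": (0.05, 0.30),
    "porosity":       (0.10, 0.35),
    "retardation":    (1.0, 5.0),
}

def run_nitrate_model(params, years):
    """Placeholder standing in for the extended NTB model: returns modelled
    annual average nitrate concentrations (mg/L) for the given years."""
    t = years - years[0]
    return (20.0 + 50.0 * params["specific_yield"] - 10.0 * params["porosity"]
            + 5.0 * np.sin(t / (5.0 * params["retardation"])))

def monte_carlo_calibration(observed, years, n_runs=10000, seed=42):
    """Randomly sample parameter sets, score each run with NSE against the
    observations and keep the highest-scoring (behavioural) set."""
    rng = np.random.default_rng(seed)
    best = {"nse": -np.inf, "params": None}
    for _ in range(n_runs):
        params = {name: rng.uniform(lo, hi) for name, (lo, hi) in PARAM_RANGES.items()}
        score = nse(observed, run_nitrate_model(params, years))
        if score > best["nse"]:
            best = {"nse": score, "params": params}
    return best

# Synthetic stand-in for observed annual nitrate concentrations; in the study the
# observations would be split roughly 70/30 between calibration and validation.
years = np.arange(1990, 2012)
observed = 25.0 + 3.0 * np.sin((years - 1990) / 5.0)
print(monte_carlo_calibration(observed, years, n_runs=500))
```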
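In the same spirit, the dual-porosity treatment of the unsaturated zone can be sketched by routing an annual nitrate loading series to the water table as the sum of a fast fracture component (assumed to arrive within the year) and a matrix component delayed by the unsaturated-zone travel time. The 19.4% fracture-flow ratio is the calibrated value reported above for the 'silicified Penrith Sandstones'; the unsaturated-zone thickness and matrix velocity used here are illustrative assumptions, not values from the study.

```python
import numpy as np

def nitrate_at_water_table(loading, usz_thickness_m, matrix_velocity_m_per_yr, rff):
    """Split each year's nitrate loading between fracture flow (no lag) and
    intergranular matrix flow delayed by the unsaturated-zone travel time."""
    lag_years = int(round(usz_thickness_m / matrix_velocity_m_per_yr))
    loading = np.asarray(loading, dtype=float)
    arrived = np.zeros_like(loading)
    for t, load in enumerate(loading):
        arrived[t] += rff * load                           # fast fracture component
        if t + lag_years < len(loading):
            arrived[t + lag_years] += (1.0 - rff) * load   # lagged matrix component
    return arrived

# Illustrative values only: a 30 m thick unsaturated zone, a matrix velocity of
# 1.5 m/yr and the calibrated fracture-flow ratio of 19.4% (0.194).
loading = np.linspace(20.0, 80.0, 60)   # synthetic rising nitrate loading series
print(np.round(nitrate_at_water_table(loading, 30.0, 1.5, 0.194)[:30], 1))
```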
Nitrate water pollution, which is mainly caused by agricultural activities, remains an international problem. It can cause serious long-term environmental and human health issues due to nitrate time-lag in the groundwater system. However, the nitrate subsurface legacy issue has rarely been considered in environmental water management. We have developed a simple catchment-scale approach to investigate the impact of historical nitrate loading from agricultural land on the nitrate-concentration trends in sandstones, which represent major aquifers in the Eden Valley, UK. The model developed considers the spatio-temporal nitrate loading, low permeability superficial deposits, dual-porosity unsaturated zones, and nitrate dilution in aquifers. Monte Carlo simulations were undertaken to analyse parameter sensitivity and calibrate the model using observed datasets. Time series of annual average nitrate concentrations from 1925 to 2150 were generated for four aquifer zones in the study area. The results show that the nitrate concentrations in ‘St Bees Sandstones’, ‘silicified Penrith Sandstones’, and ‘non-silicified Penrith Sandstones’ keep rising or stay high before declining to stable levels, whilst that in ‘interbedded Brockram Penrith Sandstones’ will level off after a slight decrease. This study can help policymakers better understand local nitrate-legacy issues. It also provides a framework for informing the long-term impact and timescale of different scenarios introduced to deliver water-quality compliance. This model requires relatively modest parameterisation and is readily transferable to other areas.
53
Temporal dynamics in local vehicle ownership for Great Britain
This study on vehicle ownership is set in the context of the island of Great Britain, which is part of the United Kingdom.Here vehicle ownership grew rapidly in the 20th century, as illustrated by the trends in Fig. 1 which shows the shift in vehicle ownership levels since the early 1950s.After previous rapid increases in vehicle ownership, the 21st century shows a more stable pattern, with one-vehicle households accounting for around 45% of all households and households owning two or more vehicles accounting for a further 30%.The causes of this transition are well studied in the literature.Income is often cited, with households choosing to spend rising incomes on acquiring their first or subsequent vehicles.Household composition is also important, linked to life stage changes, e.g. the birth of the first or second child, a parent re-entering the work force, a late teenage child desiring independence and finally mobility and health issues in later life.With regard to wider factors, urban sprawl has tended to encourage vehicle ownership, although in Great Britain this has occurred to a lesser degree than in similar English speaking nations, particularly those outside Europe.Environmentally, the UK's adoption of the recommendations of the Intergovernmental Panel on Climate Change in its Climate Change Act has necessitated policy interventions to try and mitigate the environmental impacts of vehicle use.These have included a gradation in the amount of vehicle duty, now levied by engine size and fuel type, and the pricing mechanism of a fuel duty on petrol and diesel.Having set the policy and historic context of vehicle ownership in Great Britain, the research question for this study is concerned with whether there is a spatial dimension to this changing pattern of ownership in Great Britain.In particular, we ascertain whether the likelihood of a neighbourhood changing its level of ownership relative to others is influenced by its surrounding neighbours and, if so, how strong this influence is.This is the first study to use the technique of spatial Markov chains to quantify the strength and duration of this relationship.Thus, whilst this study is not an attempt to model the determinants of vehicle ownership, it answers the question as to whether models that attempt to explain such ownership need to explicitly take account of neighbourhood effects in their specification.The following section provides a brief summary of the existing literature in the area of vehicle ownership modelling.Section 3 introduces the analysis technique of spatial Markov chains.Section 4 describes the data used in this study, the United Kingdom decennial censuses from 1971 to 2011.Section 5 presents the results of the analysis and is followed by Section 6 which provides a discussion of the results.Vehicle ownership is commonly examined using either individual disaggregate data or summary aggregate data.Access to disaggregate individual data, particularly if it is of a panel or longitudinal nature, allows the vehicle owning decisions of the individuals or households to be placed in the historic and contemporary context of their circumstances, e.g. 
the acquisition or disposal of a motor vehicle can be related to changes in household characteristics.Such data is, however, expensive to collect, has issues associated with being potentially disclosive of the individual's identity and, since it is 'merely' a sample of the population, cannot provide a complete picture.Most importantly here, the geographic coverage of such surveys is patchy, making local variations in any relationship difficult to identify.Recognising this shortcoming, adaptations of traditional econometric models to incorporate a spatial component into panel type data are beginning to be discussed and reported.Aggregate data, by contrast, has the advantages that it is not generally disclosive; it may also be constructed from a number of sources, e.g. governmental, commercial or administrative; and it has the potential to provide a complete picture of a population under study.Periodic population censuses are common sources for such aggregate data and with the comprehensive coverage of such data, it becomes possible to examine local variations in aspects such as transport, travel and vehicle ownership.It is this aggregate form of vehicle ownership data that is used in this study.Whilst this study is not concerned with the underlying mechanisms motivating vehicle ownership, merely with their outcome in a spatial context, it is worth highlighting some of the relevant spatial mechanisms so that it is possible to better understand these results.Firstly vehicle ownership has a utility in its own right; it confers the ability of individuals to travel to a destination of their choice, at a time of their choice, to receive a service.There are, however, impedances associated with this choice, primarily time, cost and congestion, which need to be judged against alternatives.The availability and quality of public transport varies by place.In urban areas alternative public transport in the form of metro, trams, trains or buses is available.In sub-urban areas sustainable modes become more attractive, such as walking or cycling.In rural areas alternatives become less attractive, public transport is poor and distances to the nearest workplaces, schools, shops or health clinics may be large.Vehicle ownership is also used as a proxy for other things.The most common is its use in some facet of area classification or the calculation of a deprivation measure where it is acting as a proxy for income.However this relationship between vehicle ownership and income is not straightforward.Typically these two are positively related but, as highlighted already, in rural areas a vehicle becomes almost an essential item, the encumbrance of which may cause strains on household budgets.Conversely, in affluent urban areas with good quality public transport infrastructure, such as central London, the need for a private vehicle becomes almost unnecessary and potentially burdensome.Whilst the above studies have identified how individual characteristics and ideas influence car ownership, there is also a case to be made to consider place and it is this aspect that is the most germane to this study.Pioneering work by Schelling showed that only subtle tendencies towards a desire to live with people of a similar background could, over time, produce dramatic segregation effects.Since neighbourhoods are largely shaped by their inhabitants, if you live in a neighbourhood with high car ownership then you may be more likely to purchase a car or if you are re-locating and have a desire to remain or become a car owner then 
neighbourhoods with high car ownership will be more attractive.Looking beyond the immediate neighbourhood, if Tobler's first law of geography is to be believed, the influence is not just from the immediate neighbourhood but also surrounding neighbourhoods.Historically, aggregate models of vehicle ownership fail to explicitly take account of this influence and do not adopt a specification that accounts for spatial autocorrelation - although more recently a greater number of models that incorporate spatial relationships have begun to appear in the literature.If the importance and significance of place could be established, then there would be a strong case for such models to recognise this influence.It is the strength of this place influence that this study is aimed at establishing.To explore the local dynamics of vehicle ownership, recent methods of spatial distribution dynamics are employed.The ownership rate is viewed through the lens of discrete Markov chains and the transitions of neighbourhoods across levels of ownership over time.The point of departure is the classic discrete Markov chain framework where ri,t, the ownership rate in area i at time period t, is discretised into one of K states.More specifically, state boundaries are taken as the quintiles of ownership over all spatial units in a given time period, such that xi,t = j ⇔ Qj−1,t < ri,t ≤ Qj,t, with Qj,t (denoted qj) the jth quintile threshold for period t.Here, we assume the chain is temporally homogeneous, meaning the transition probabilities are time invariant.Whilst the classic discrete Markov chain is a flexible framework for modelling transitional dynamics, it has some potential limitations when applied in a spatial context.A key assumption is that the time series of transitions for each spatial unit provides independent information on the dynamics of change.However, this independence assumption rules out any interdependencies between the changes in one area and those in the surrounding area.To the extent that such interactions are at work, the classic discrete Markov chain would obscure the role of spatial spillovers in the dynamics.The specific tests to employ include a likelihood ratio test and a χ2 test, which have an asymptotic chi-square distribution under the null hypothesis.Rejection of the null hypothesis in favour of the alternative, that the transition probabilities are different, leads to the question of how the long run dynamics of vehicle ownership rates may be impacted by neighbourhood context.To answer this, estimates of the steady state distributions as well as the first mean passage times can be obtained for each of the conditional chains by substituting the estimated conditional probability transition matrix into Eqs. 
and, respectively.This study uses data from the 1971, 1981, 1991, 2001 and 2011 United Kingdom censuses.The measurement of interest is the level of vehicle ownership, expressed as the mean number of vehicles per household, which is a continuous measure bounded below by zero.Data published in the National Travel Survey shows that nationally over the past 20 years this values has remained fairly stable at around 1.50 cars or vans per household, although with some regional variations; for example it is much lower in London, at around 0.80 cars or vans per household.The advantage of studying the island of Great Britain is that there are no unknown edge effects, where information on neighbours is missing,so wi , b is complete.The significant challenge associated with using these five censuses however is that there is little consistency in the output geography, making the comparison of how ownership levels change in a neighbourhood over time problematic.This inconsistency is understandable; over time, as places change the geography also needs to change to remain relevant.What is therefore required is a way to impose a consistent and meaningful geography on all five censuses.The reporting geographies with the smallest populations are enumeration districts in the 1971, 1981 and 1991 censuses and Output Areas in 2001 and 2011.The EDs are an operational geography, defined as an area allocated to an enumerator to hand out and collect census forms, and also the EDs used in 1971 are not necessarily the same as those used in 1981 or 1991.The more recent OAs are a statistical geography designed using detailed census outputs to create homogeneous areas for the dissemination of census counts.Again, the EDs and OAs are not a consistent geography across the five censuses; this is illustrated in Fig. 
2 which shows the 1981 census ED and 2011 census OA boundaries and population centroids for the same geographic area in northern England.To carry out this analysis what is needed is a consistent geography so that the level of vehicle ownership in the exact same area can be measured between two censuses.Additionally the areas need to be: able to provide reliable estimates; of a size to reveal the potential neighbourhood effects that are under investigation; conceptually robust and capable of being estimated for all the censuses.The geography adopted for this purpose is the 2011 Middle layer Super Output Areas in England and Wales and the Intermediate Zones in Scotland.Taking point , the technique used to provide estimates of household vehicle ownership involves the creation of a look-up from a small population geography to a larger population geography.The typical number of EDs or OAs allocated to each MSOA IZ in this lookup varies by time and space but is typically 15 to 25, which means that the estimate of ownership is based on a reasonable number of observations.Whilst being large enough to capture a reasonable number of EDs or OAs, the MSOA/IZs are still of a suitable size to embody a neighbourhood as required by point .With regard to point , the MSOAs IZs are a statistical geography designed to cover an area which has a degree of homogeneity and are largely consistent for the latter 2001 and 2011 censuses.There are no equivalent middle layer geographies for the earlier three censuses.The final point is trivial given the availability of GIS files that define the PWC of the OAs and EDs and the boundaries of the MSOAs IZs.The counts of households and vehicles in each ED or OA are allocated to the MSOA IZ that its PWC falls within.There are 8480 MSOAs and IZs in Great Britain and after the geo-conversion process, the distribution of the size of these MSOAs/IZs and number of households is shown in Table 1.This approach provides us with a consistent temporal and spatial representation of vehicle ownership in Great Britain for the five decennial census years.Fig. 4 shows the map for the level of vehicle ownership rates, ri,t, for the census years 1971, 1991, 2001 and 2011.Before the spatial Markov matrices are estimated for these data, it would be useful to assess the degree of spatial correlation present in these cross-sectional data sets.This is done by calculating Moran's I statistic for each data set and Fig. 
3 shows how this varies over a spatial range in each census.There is a clear pattern for the spatial correlation to diminish as the distance increases and also there is greater spatial correlation in the more recent census data.This suggests that there is a greater spatial polarisation in rates of vehicle ownership in more recent years.The rates of vehicle ownership in each census are discretised into quintiles with the thresholds derived separately for each census year and the boundaries between these quintiles are shown in Table 2.This approach allows the rate of ownership for an area to be put in a national context and the relative change in ownership levels between censuses to be captured, rather than just change in the rate of ownership.Recall that these values represent ten year inter census periods, so an MSOA IZ in the lowest quintile will take, on average 116 years to transition to the second lowest quintile and 675 years to transition to the highest level of ownership.An MSOA IZ in the highest quintile will take 90 years to transition to the second highest quintile.These are long time spans.In section 3.2 a pair of statistics was referenced that tests whether the probability transition matrices are different amongst the levels of ownership.The results of these tests of are shown in Table 3.Both the likelihood ratio and the χ2 tests are significant at the 0.1% level which provides supporting evidence to conclude that the transition probabilities are different for each quintile of ownership.This justifies the estimation of the transition matrices and ergodic values for the spatial Markov chains.Inspecting those MSOAs IZs in the lowest quintile with surrounding neighbours also in the lowest quintile, it is seen that they will take on average 141 years to move to the second lowest quintile and 432 years to move to the middle quintile.Those in the lowest quintile but with surrounding neighbours who are in the middle quintile move to the second quintile after just 80 years and to the middle quintile after 164 years.Looking at movements down the levels of ownership, those with surrounding neighbours in the highest quintile who are in the highest level of ownership take 113 years to move down to the second highest level and 313 to move down to the middle level.The overall pattern in matrix M6 is that transitions to higher levels of ownership are slower when neighbours of the MSOAs IZs have low ownership levels but quicker when the neighbours have higher levels.Also transitions down from high levels take longer when the neighbours are also in the higher levels, but quicker when the neighbours are in the lowest levels.Thus neighbours can be seen to exert a ‘drag effect’ that inhibits the movement of an MSOA IZ away from the surrounding neighbourhood level of ownership.Whilst the time spans for the non-spatial Markov chains was seen to be long, these spatial time spans are very long and attest to how stable the relative level of ownership in neighbourhoods is likely to remain in the future.Econometric models examining vehicle ownership do not always account for the potential influence of surrounding neighbourhoods.If this neighbourhood influence is real and neglected, such models will be ill-specified and any parameters estimated will be unreliable, biased or non-transferable.This study has applied a novel method to gauge not just the repeated cross-sectional extent of this influence from such neighbourhoods but also capture the temporal strength over a 40 year time span.The key findings 
and contributions of this study are that: a comparison of the steady state probabilities and transition times in the non-spatial and spatial cases clearly demonstrates that neighbourhood context influences the transition of neighbourhoods through the levels of vehicle ownership; the duration to transition between the extreme levels of ownership is seen to be long when the neighbourhood context is different to that at the end of the transition; and the incorporation of spatial effects into models of behaviour are therefore likely to produce substantially different estimates and conclusions.In the context of the existing literature, this is one of the few studies that has estimated the strength of the spatial relationships in vehicle ownership over time using aggregate data.The aggregate nature of the data provides a comprehensive picture of area level vehicle ownership rather than the partial picture obtained from sparser dis-aggregate data.The time span of this study is also long, covering 40 years and during this time the UK economy has experienced periods of economic stability; rapid growth; and deep recession, allowing us to provide estimates that are not influenced by the short-term economic cycles that some individual panel type data may capture.Whilst there are no directly comparative studies to put these GB results in context, it is possible to replicate this study in other geographic contexts, given access to spatially consistent counts or estimates of vehicle ownership over time.This would provide some context for the dynamics described here for GB.As highlighted in the introduction, a potential criticism is that this is a descriptive assessment of the spatio-temporal dynamics of vehicle ownership and not an attempt to model the determinants of vehicle ownership.It has, however, quantified how the dynamics in local vehicle ownership are influenced by both neighbourhood and the passage of time in one estimation consistent framework.These estimates have been obtained without the reliance on any statistical assumptions regarding the form of the data or model.This knowledge on the importance of spatial dependence provides an encouragement to those who are advancing the area of spatio-temporal modelling.Whilst the time span of the data used in this study is long, it is rather course, relying on the decennial nature of the UK censuses.More frequent observations within this time span would allow a more dynamic relationship to be captured, e.g. 
the example of Australia which has had quinquennial censuses since 1961.Also the technique relies on the usual Markov assumptions with regard to the Markov process and the time homogeneous nature of the process.In terms of policy relevance, the findings of this study are also important for those who are interested in setting policies or making long term investment plans where such decisions are influenced by likely vehicle ownership rates.A neighbourhood with relatively low levels of ownership situated in the context of surrounding neighbourhoods with low ownership is likely to remain low.Thus when developing retail, health or community services to serve such communities, car parking provision is not so critical.But a neighbourhood of low ownership whose surrounding neighbours have middle or high levels of ownership will not stay at this level long and will transition to higher levels, meaning the demand for vehicle parking at local centres and local traffic is likely to increase.Planning guidelines in England require that planners take into account 'local car ownership' in setting guidelines for parking standards for both non-residential and residential developments.Studies have also demonstrated how local residential car ownership influences neighbourhood design and travel patterns, either through the desire for car-free environments or environments with constrained parking provision.Finally, aside from these practical aspects, there are instances where vehicle ownership is used as a proxy for the nature of society, e.g. through the calculation of deprivation measures; here, then, neighbourhoods will tend towards homogeneity too.This presents important challenges, both practically and methodologically.This is important since such deprivation measures often influence the apportionment of resources aimed at tackling disadvantage.The following are the supplementary data related to this article.Example geo-conversion data from EDs or OAs to MSOA E02002412 for 1971, 1981, 1991, 2001 and 2011.Spatial distribution of vehicles per household rates for 1971, 1981, 1991, 2001 and 2011.Spatial transition probabilities.Animation of spatial distribution of vehicles per household rates for 1971, 1981, 1991, 2001 and 2011.Supplementary data to this article can be found online at http://dx.doi.org/10.1016/j.jtrangeo.2017.05.007.
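To make the Markov machinery concrete, the following Python sketch reproduces the main steps on synthetic data: ownership rates are discretised into quintile states recomputed for each census year, a transition probability matrix is estimated from inter-census moves, and the steady-state (ergodic) distribution and mean first passage times are derived from it. The synthetic rates are placeholders rather than the census estimates used in the study, and the helper names are illustrative.

```python
import numpy as np

K = 5  # quintile states

def discretise(rates_by_year):
    """Assign each area's ownership rate to a quintile state (0..4), recomputing
    the quintile thresholds separately for every census year."""
    states = np.empty(rates_by_year.shape, dtype=int)
    for t in range(rates_by_year.shape[1]):
        q = np.quantile(rates_by_year[:, t], [0.2, 0.4, 0.6, 0.8])
        states[:, t] = np.searchsorted(q, rates_by_year[:, t], side="left")
    return states

def transition_matrix(states):
    """Estimate the K x K transition probability matrix by counting inter-census moves."""
    counts = np.zeros((K, K))
    for t in range(states.shape[1] - 1):
        for i, j in zip(states[:, t], states[:, t + 1]):
            counts[i, j] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def steady_state(P):
    """Ergodic distribution: the left eigenvector of P associated with eigenvalue 1."""
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmax(np.real(vals))])
    return pi / pi.sum()

def mean_first_passage_times(P):
    """Mean first passage times (in inter-census steps) via the fundamental matrix."""
    pi = steady_state(P)
    Z = np.linalg.inv(np.eye(K) - P + np.outer(np.ones(K), pi))
    M = (np.outer(np.ones(K), np.diag(Z)) - Z) / pi   # M[i, j] = (Z[j, j] - Z[i, j]) / pi[j]
    np.fill_diagonal(M, 1.0 / pi)                      # mean recurrence times on the diagonal
    return M

rng = np.random.default_rng(0)
rates = 0.5 + np.cumsum(rng.random((8480, 5)) * 0.1, axis=1)  # synthetic vehicles-per-household rates
P = transition_matrix(discretise(rates))
print(np.round(P, 3))
print(np.round(10 * mean_first_passage_times(P), 0))  # x10 to express in years (decennial steps)
```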
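The spatial variant conditions each transition on the ownership context of the surrounding neighbourhoods, summarised by the spatial lag of the rate under a row-standardised weights matrix. The self-contained sketch below builds a toy neighbour weights matrix in place of the real MSOA/IZ geography and estimates one conditional transition matrix per neighbourhood class; in practice a library such as PySAL offers equivalent spatial Markov functionality.

```python
import numpy as np

K = 5

def quintile_states(values_by_year):
    """Quintile state (0..4) of each value, with thresholds recomputed per year."""
    states = np.empty(values_by_year.shape, dtype=int)
    for t in range(values_by_year.shape[1]):
        q = np.quantile(values_by_year[:, t], [0.2, 0.4, 0.6, 0.8])
        states[:, t] = np.searchsorted(q, values_by_year[:, t], side="left")
    return states

def spatial_markov(rates, W):
    """Estimate one K x K transition matrix per neighbourhood class, where the
    class is the quintile of the spatial lag W @ rates (a spatial Markov chain)."""
    own = quintile_states(rates)
    context = quintile_states(W @ rates)          # neighbours' weighted average ownership
    counts = np.zeros((K, K, K))
    for t in range(rates.shape[1] - 1):
        for a in range(rates.shape[0]):
            counts[context[a, t], own[a, t], own[a, t + 1]] += 1
    with np.errstate(invalid="ignore"):
        P = counts / counts.sum(axis=2, keepdims=True)
    return np.nan_to_num(P)

# Toy example: 400 areas, 5 censuses, each area's "neighbours" taken as the
# three areas on either side in index order (a stand-in for contiguity).
rng = np.random.default_rng(1)
n = 400
rates = 0.5 + np.cumsum(rng.random((n, 5)) * 0.1, axis=1)
W = np.zeros((n, n))
idx = np.arange(n)
for k in (1, 2, 3):
    W[idx, (idx + k) % n] = 1.0
    W[idx, (idx - k) % n] = 1.0
W /= W.sum(axis=1, keepdims=True)                 # row-standardise the weights
P_spatial = spatial_markov(rates, W)
print(np.round(P_spatial[0], 2))  # transitions for areas whose neighbours sit in the lowest quintile
print(np.round(P_spatial[4], 2))  # ... and in the highest quintile
```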
This article explores the stability of local vehicle ownership rates in Great Britain using the technique of spatial Markov chain analysis. Non-spatial Markov chain processes describe the transition of neighbourhoods through levels of ownership with no regard to their neighbourhood context. In reality however, how a neighbourhood transitions to different levels of ownership could be influenced by its neighbourhood context. A spatial Markov chain accounts for this context by estimating transition properties that are conditioned on the surrounding neighbourhood. These spatial Markov chain properties are estimated using a long run census time series from 1971 to 2011 of household vehicle ownership rates in Great Britain. These processes show that there is different behaviour in how neighbourhoods transition between levels of ownership depending on the context of their surrounding neighbours. The general finding is that the spatial Markov process will lead to a greater homogeneity in levels of ownership in each locality, with neighbourhoods surrounded by relatively low ownership neighbourhoods taking longer than a non-spatial Markov process would suggest to transition to higher levels, whilst neighbourhoods of high ownership surrounded by high ownership neighbourhoods take longer to transition to lower levels. This work corroborates Tobler's first law of geography “Everything is related to everything else, but near things are more related than distant things” but also provides practical guidance. Firstly, in modelling ownership, spatial effects need to be tested and when present, accounted for in the model formulation. Secondly, in a policy context, the surrounding neighbourhood situation is important, with neighbourhoods having a tendency towards homogeneity of ownership levels. This allows for the effective planning of transport provision for local services. Thirdly, vehicle ownership is often used as a proxy for the social and aspirational nature of an area and these results suggest that these properties will persist for a prolonged period, possibly perpetuating and exacerbating differentials in society.
54
Can integration of PV within UK electricity network be improved? A GIS based assessment of storage
Deployment of grid-connected photovoltaic systems in the United Kingdom has grown rapidly in recent years.Cumulative PV installed capacity is now over 10 GWp and PV contribution towards the UK's electricity generation has quickly grown from 0.1% in 2011 to 2.2% in 2015.Typically installed within the network much closer to the point of electricity consumption than conventional generation, PV deployment contributes to giving the UK energy system a more decentralised structure.Decentralised energy systems provide new benefits: better locational matching of supply and demand, reduced losses and the possibility of avoiding network upgrade work if local generation can meet some of the demand peak.However PV is intermittent and generally considered a non-dispatchable electricity source as it generates electricity when there is available irradiation, not when it is most needed by the electricity system.This means that whilst PV can generate electricity close to where there is demand, there is limited control over when it is generated.This may lead to significant changes in how power flows into the network.Several contributions within the literature have highlighted how PV capacity might have an impact on power flow and network operational limits both at local and system scale.High PV penetration could lead to voltage rise, caused by a high local photovoltaic power feed-in, and overloading issues at low voltage distribution network level.At a higher system level it could also lead to upstream reverse power flow, where generation is greater than local demand causing power to flow back to higher voltage levels of the network and leading to challenges in terms of conventional power plant re-despatching and network congestion management.Yet, increasing levels of PV deployment in the UK raise concerns over the amount of PV that can be hosted within the network, and more broadly how the value of PV to the electricity system can be maximised by better matching electricity supply and demand over time and spatially.Several technical solutions can be considered to mitigate impacts both at local and system level.These include controlling voltage rise through reactive and active power control, curtailment, as well as network reinforcement at both the regional and international level aimed at increasing balancing capabilities, flexibility, stability and security of supply.Reliable alternatives are strategies to better match local PV generation and electricity demand throughout the day.In demand terms, this is achieved through demand-side management, where electricity usage is shifted to times when generation is high.Generation can instead be shifted using storage technologies, which store electricity from the time of production to the time when it is required.Storage technologies are increasingly deemed to be an effective solution to expand PV hosting capacity and minimise network impacts as well as increasing UK electricity system flexibility and security.Indeed, a global momentum around storage technologies highlights an increasing consensus over their potential to provide vital grid services needed for the integration of PV and other intermittent renewables into the electricity systems.Storage systems can not only help in shifting load and balancing intermittency, but also provide other ancillary services to the network such as frequency response, voltage support and reserve capacity.As such they can be implemented at a distributed level, as well as standalone within the transmission and distribution 
networks.Along with the increased recognition at the UK institutional and energy markets level of the potential important role of storage to support low-carbon energy transition and in the context of the solar power sector, the academic literature have begun to study its technical advantages as well as implementation conditions and implications for the UK electricity network.Major UK techno-economic analyses on grid balancing challenges and the role of storage within the UK electricity system have looked at both standalone and distributed storage concluding that distributed storage offers higher value to the electricity system as it could reduce the need for distribution network reinforcement.They also highlight how the magnitude of such value would depend on the characteristics of local generation and demand, despite it cannot be assessed within the given modelling framework.Indeed, a whole system modelling approach is used, where the distribution network is modelled on the basis of representative UK networks and data are not disaggregated at local level, in particular electricity demand data.These characteristics limit the model ability to assess and measure both local grid impacts of embedded PV generation and the potential contribution of distributed storage to mitigate them.Yet, little work has to date specifically looked at how storage could optimise embedded PV generation output and minimise local impacts within the UK power system.This paper aims at addressing this gap in the literature by looking at how combining PV systems with distributed storage could improve local generation and demand balances and the integration of PV within the UK distribution network.In particular, it looks at PVS potential contribution from two perspectives:Assessing how PVS could help in reducing impacts on power flows due to local supply and demand imbalances driven by PV deployment within the UK domestic and non-domestic PV market segments.Assessing the potential of PVS of improving PV capacity credit, i.e. the contribution of installed PV to meeting electricity demand within the UK electricity system.Few studies have looked specifically at the potential of distributed storage to mitigate embedded PV generation impacts on the UK distribution network.For example, Pholboon et al. explore the use of storage in a specific small energy community in Nottingham with grid-connected PV systems showing that it can shave peak power demand and increase self-consumption, thus increasing grid stability and reducing distribution system losses.Wang et al. 
model storage associated with residential PV within a representative UK LV network case study showing its ability to relieve network congestion.However, such analyses have to date been limited to specific case studies or small sections of the network, focused on the UK domestic PV market segment only and based on simulated demand load profiles.This paper instead looks beyond specific case studies, encompassing wider geographical data and the non-domestic PV market segment, and is based on primary PV deployment and electricity demand data.As the potential benefits of PVS are expected to be strongly influenced by the local characteristics of PV generation and electricity demand, the analysis is based on the United Kingdom Photovoltaic Deployment framework, a Geographical Information System framework which maps current UK PV deployment and electricity demand disaggregated to a sensitive spatial resolution and by market segment.While GIS approaches have been used to map the generation potential of renewable energy sources such as PV or wind and to optimise solar project siting, the UKPVD framework extends their use to the assessment of the local balance of PV generation and electricity demand, analysing impacts on power flows between LV and higher levels of the UK electricity network.Moreover, the UKPVD framework is based on actual PV deployment figures and primary electricity demand data.Thus this study provides a realistic picture of where and how storage implementation could provide benefits in terms of increased PV hosting capacity in relation to the amount of local PV generation and electricity consumption across UK geographical areas.The paper is structured as follows: Section 2 discusses the UKPVD GIS framework, the methodology and the data used.Section 3 presents the results of the analysis on how storage can contribute to improving PV hosting capacity within the UK electricity system.Section 4 concludes and provides some policy recommendations.The UKPVD is a geographical information system based framework mapping current UK PV deployment and electricity consumption across PV market segments, i.e. domestic, non-domestic and ground mounted.It is based on the most detailed spatial disaggregation available, in order to capture as accurately as possible local variations in PV generation and electricity demand and the consequent impact on high voltage/low voltage power flow.This paper focuses on a specific distribution region, the South-West England licence area which serves around 1.5 million demand customers.This area has the highest level of PV deployment within the UK and as such is expected to experience greater impacts from PV generation than other network areas.The area is disaggregated into 1888 Lower Layer Super Output Areas which have been used as geographical units.LSOAs are spatial areas containing on average 600 households, designed by the Office for National Statistics to characterise the socio-economic characteristics of the UK.As population is relatively constant per LSOA, they vary considerably in size and shape.LSOA basemaps are acquired from the ONS Geoportal.For each LSOA complementary data-sets are developed for PV deployment and electricity demand.Fig. 
1 shows the spatial and statistical distribution of PV deployment across LSOAs in the SWE distribution region, calculating PV penetration for each LSOA.The map on the right shows significant variation in the amount of PV deployed per LSOA, with some local clustering characterized by higher PV penetration.Such variation is also evident in the histogram on the left, which shows the distribution of LSOAs across different levels of PV penetration: a majority of LSOAs have relatively low penetration, several clusters sit between 10% and 20% PV penetration and PV installation is greater in rural areas.The analysis is based on two case studies representing a summer and a winter scenario, chosen to depict two worst case scenarios in terms of local balance between PV generation and local electricity demand: a summer time case when PV generation is at its peak and daytime demand is typically low; and a winter case characterized by low PV generation and high peak demand.During the summer, high daytime PV generation and low daytime demand could lead to local electricity supply and demand imbalances with resulting impacts on power flows and network operational limits.PV generation might cause back-feeding, presenting challenges for the LV distribution network operation.PVS could reduce these effects by allowing surplus PV generation to be absorbed during the day and utilised locally later in the evening to meet demand.It can thus help in smoothing daily power flow, in minimizing the occurrence of power feed-in and in reducing the need for power flow management and network upgrade.Moreover, at the higher electricity system level, reducing upstream reverse power flow would allow better management of UK electricity market balancing, better utilisation of other generation sources and reduced reliance on peaking plants to balance the system.During winter, instead, PV generation is generally lower than local demand.In this context PVS has the potential to improve PV capacity credit by storing PV generation from the daytime and discharging during the hours of peak winter demand in the evening.It thus reduces the peak electricity imported into the LV distribution network from the high voltage network, deferring or reducing reinforcement in transmission and distribution, as wires and transformers are typically sized and upgraded to meet the winter peak.This is particularly relevant in a context of expected increase in UK electricity load, due to natural growth combined with expected progressive electrification of heat and transport.As summarized in Table 1, local electricity demand, PV generation and storage operation are considered and calculated from the UKPVD dataset for each case study.The next sections present and discuss the assumptions and data-sources used for electricity demand, PV generation and storage operation.A data set for total electricity demand and time of use has been constructed within the UKPVD framework for each market segment, i.e. 
domestic and non-half-hourly metered non-domestic customers, both assumed to be connected to LV distribution network.It is based on available data on: 1.annual electricity consumption aggregated per LSOA administered by DECC and 2.half hourly profile shapes for domestic and non –domestic customers developed by Elexon.Such data have been used to derive half hourly electricity demand profiles for domestic and non-domestic market segments and for each LSOA of the distribution region considered.For the purpose of the case study analysis, in order to depict the two worst case scenarios, it was needed to identify within the dataset a summer day with lowest demand and a winter day with highest demand.However, the demand profiles differ across the market segments considered: consumption is higher in the non-domestic sector during weekdays, whereas in domestic sector it is higher at weekends.The combination of the two consumption profiles has an impact on the net electricity consumption behaviour at LSOA level.Indeed, the maximum and minimum demand days are expected to vary with the proportion of domestic and non-domestic electricity consumption within each LSOA.Fig. 2 presents the proportion of domestic electricity consumption as a percentage of total LSOA consumption, to investigate how this proportion varies across the LSOAs in the distribution region considered.The figure shows that in the majority of the LSOAs domestic consumption is significantly larger than non-domestic.The figure also reveals that it is most common for LSOAs to have around 80% domestic electricity consumption within them.Therefore an LSOA with 80% of the total electricity consumption originating from the domestic sector is taken as representative within the target distribution region.In order to identify when daily maximum and minimum demand occur within it, its daily electricity consumption is plotted over a year, disaggregated by domestic and non-domestic customers.Firstly the plot shows the expected seasonal trend of higher electricity consumption during winter.At shorter time-scales, significant patterns in daily demand can also be observed with demand typically peaking mid-week and reducing at weekends.This result highlights that even 20% of non-domestic energy consumption is enough to shift minimum demand to the weekend.Based on this evidence a summer Sunday has been chosen as the minimum demand day and a winter weekday as the maximum one).Different PV generation data have been used for the summer and winter case studies.For the summer case study PV generation data are derived by combining data on PV deployment in domestic and non-domestic segments with PV generation profiles, i.e. 
how much electricity a given PV system is expected to generate and how generation is distributed across the day.Sources for PV deployment data are the central feed-in tariff register administered by Ofgem as well as the renewable energy planning database administered by the Department of Energy and Climate Change.Such PV deployment data have been combined with a PV generation profile derived from the Customer Led Network Revolution project, resulting from the aggregation of monitored performance of 100 domestic PV systems with different tilts and orientation.The average July generation profile has been used here.In addition, to account for the summer worst case scenario empirical evidence presented by the Low Voltage Network Templates project on maximum coincident generation over a year of monitored PV systems located in the same spatial area has been used: the recorded maximum coincident generation was 81% of the aggregated PV capacity.Such figure has been here assumed in calculating PV generation per LSOA.For example an LSOA with a total of 100 kWp of PV installed would have a PV generation profile that peaked at 81 kW.The aim of the winter case study is to explore the potential of PVS to improve the contribution of PV systems to meeting electricity demand within UK electricity system, i.e. to increase PV Firm Capacity Credit.FCC is here defined as the minimum power that can be dispatched daily at peak demand.The FCC which PVS could provide is strongly dependent upon the variation in daily PV generation.Therefore data on monitored PV system operation over a year has been used to investigate how PV generation varies across an entire winter, as shown in Fig. 3.It is noted that, due to data availability, generation data comes from the monitored performance of a solar farm in South-West of England.While data from monitored domestic rooftop systems would ideally be used, what is here relevant to capture is the longer run trends in daily variability which is equally captured by solar farms performance.Optimum operation for energy storage is here defined in order to meet the objectives of each case study, i.e. absorbing peak PV generation in the summer months and providing firm-capacity to meet peak demand during winter.Storage is assumed to be deployed at a ratio of 1 kWh per kWp of PV installed, with assumed round-trip efficiency of 80%.Storage penetration varies under the different scenarios outlined in the following section.In the summer storage is assumed to absorb as much of the peak PV generation as possible during the day and to release electricity later in the evening.This operation has the net effect of smoothing the power flow through the day and helps in minimizing impact of reverse power flow when it occurs.The shapes presented in Fig. 4 compare PV generation profile without and with storage, i.e. replacing a PV only profile with PVS.The level of peak shaving is optimised to maximise peak shaving without surpassing storage maximum state-of-charge.The absorbed electricity is then discharged as the sun begins to set, starting at around 6 p.m.The discharge is assumed to continue until the storage is at 20% SOC, i.e. 
discharge stops after 80%, reflecting the assumed RTE of 80%.For each of the two case studies a range of deployment scenarios combining PV and storage are investigated beginning with a baseline level of electricity demand and PV generation and extending to hypothetical increased levels of PV ad storage deployment.Four different scenarios are considered:a base case, based on current levels of PV deployment and without implementation of storage;,a 50% storage scenario in which it is assumed that one in two PV systems currently deployed in the target distribution region have 1 kWh of storage installed per kWp of PV;,a 100% storage scenario where all PV systems are assumed to have storage attached to them.To explore the potential of storage to allow higher penetration of PV within the UK electricity network in the future, two scenarios with higher levels of PV deployment are assumed:in the 200% PV scenario PV capacity within each LSOA is doubled across the entire distribution region, whilst maintaining the current proportion of domestic versus non –domestic deployment.the 200% storage scenario assumes a deployment of a 1 kW of storage per kWp of PV deployed as under 200% PV scenario.The analysis characterises how storage changes the local balance of electricity supply and demand during the summer and winter worst case scenarios discussed above.This approach allows to quantify at a geographically disaggregated level how the introduction of storage can help in smoothing power flow and reducing peak PV export during the summer, on one hand, and in improving PV contribution to winter peak demand on the other.To interpret the results of the summer case study, i.e. the potential of storage of reducing impact on power flows, it is relevant to present results of a recent work which has investigated the impact of PV generation on power flow by comparing PV generation and electricity demand throughout the day within each LSOA of the South-West England, the same region here targeted.The study is also based on the UKPVD framework and the same datasets for electricity demand and PV generation used in this paper.It analyses how much PV generation is contributing to local demand, by calculating the maximum proportion of demand load met by peak PV generation for each LSOA.The study assumes that all the electricity is consumed within the LV network and, therefore, that such LV load met by PV equates to a reduction in the power flow from high voltage to LV network.In other words, in this context the lower is the percentage of LV load met by PV the lower is the impact on HV/LV power flow.Moreover, the study shows how different amounts of PV deployed and electricity consumption lead to different impacts on power flow across LSOAs.In particular, it shows how in most of the LSOAs current PV generation meets less than 20% of the daytime load, meaning a very limited impact.PV generation is higher than demand only in a very small number of LSOAs and these areas are expected to experience power feed-in and reverse power flow.According to this previous analysis, results for the summer case study are presented for two different LSOAs, in particular:an ‘average’ LSOA, where PV generation meets 20% of the LV load;,a ‘maximum’ LSOA, where PV meets more than 100% of LV load and reverse power flow is experienced.Summer case study results are shown for the minimum demand day chosen.Fig. 7 presents results for an ‘average’ LSOA.The figure presents the aggregate load demand and the residual load demand, i.e. 
demand net of PV generation.Fig. 7 depicts the base case scenario, based on current level of PV deployment, and the introduction of 50% and 100% storage scenarios.Under the base case scenario PV generation has an impact on power flow, with a significant reduction around midday, followed by a ramping of required power as the sun begins to go down.The addition of storage to 50% of the installed PV systems has the effect of smoothing the overall power flow, by absorbing PV generation around midday and discharging later in the evening.Increasing storage deployment to 100% further smooths the power flow.When PV deployment is doubled) these effects are more pronounced: PV generation has a stronger impact on power flow, with daytime demand dropping below night time levels.The introduction of storage helps considerably in smoothing the power flow.Fig. 8 shows results for a ‘maximum’ LSOA, where PV generation has a stronger impact on residual load demand, leading to RPF.In the base case the reported RPF is of around 350 kW, which is reduced under 50% and 100% storage scenario by respectively ~140 kW and ~100 kW.When PV penetration is higher the impact on daytime demand is stronger and the introduction of storage helps in reducing RPF by ~500 kW.Analysis up to now has focused on single LSOAs, but it is interesting to look at how representative they are of the total distribution region considered.Fig. 9 presents the distribution across LSOAs of the percentage of daytime load met by PV.The dark green line depicts the base case situation, i.e. based on current PV deployment and demand.As also noted above, for around 1/3 of the LSOAs PV meets less than 10% of load and for a progressively smaller number of LSOAs higher amounts of load are met by PV.The addition of storage has two effects."The overall smoothing effect resulting from the storage operation, observed here as a reduction in daytime load met by PV, pushes LSOA's distribution towards zero.The RPF reduction effect of storage instead reduces the number of LSOAs experiencing it.Under 100% storage scenario the smoothing effect is more pronounced, implying a greater number of LSOAs with low percentage of demand met by PV, i.e. low power flow impact.The number of LSOA experiencing RPF is halved, highlighting how storage could be successfully deployed to mitigate RPF in the target distribution region.When PV deployment is doubled impact on power flow are more pronounced, as the number of LSOAs where high levels of LV load are met by PV increases dramatically.Furthermore, RPF occurs in 121 LSOAs, 30 time more than the base case scenario.This is significant as it shows that if PV deployment trends continue as they have historically, the occurrence of RPF is due to increase considerably.Installing storage on all of the PV systems however has a substantial effect.Firstly it smooths power flow, as shown by the decline in LV load met by PV across LSOAs.Secondly it reduces the number of LSOAs where RPF occurs from 121 to 33.This highlights the benefits of installing storage in areas characterized by increasing levels of PV penetration.Considering that the aim of the winter case study is the potential of PVS to improve PV generation contribution to evening peak demand, results are here presented for the maximum demand day chosen and for an LSOA with maximum PV installed relative to load.As in the summer case study, Fig. 
11 presents the aggregated daily load demand. In the absence of storage, PV generation does not contribute to evening peak demand. The addition of storage instead enables a 5% contribution of PV generation to the evening peak demand of the given LSOA, and this contribution is greater under the 100% storage deployment scenario. Results are similar under the 200% PV scenario: in the absence of storage PV generation does not contribute to evening peak demand, but its introduction allows a contribution of about 20%, greater than in the base case PV scenario. Thus, higher deployment of PV, if coupled with storage, can substantially increase PV Firm Capacity Credit. Fig. 13 expands the analysis across all LSOAs of the distribution region. In the absence of storage, PV contribution to evening peak demand is below 0.1% in all LSOAs. Under the 50% storage scenario PV contribution increases, with evening peak demand reducing across LSOAs. Increasing storage to 100% leads to further reductions in peak demand across LSOAs: about 50% of the LSOAs show a peak reduction of 0.1–1%, with the remaining 50% showing a reduction of >1%. The largest impact recorded is a 10% reduction, i.e. a 10% contribution of PV generation to the LSOA winter evening peak demand. Finally, under the 200% PV deployment scenario PV alone again provides no contribution to the evening peak. As in the base case scenario, the introduction of storage improves the situation. However, in this case the PV generation contribution is larger: about 75% of LSOAs show a peak reduction above 1%, with a maximum recorded reduction of 20%. These results show how storage could allow newly deployed PV capacity to increasingly contribute to meeting UK peak evening demand and hence benefit the wider UK electricity system. This study provides evidence of the increasing importance of storage technologies within the UK electricity system, which is quickly progressing toward higher deployment levels of renewable and intermittent generation, including distributed PV. It indicates where and how distributed storage could provide benefits in terms of increased PV hosting capacity, in relation to the actual amount of local PV generation and electricity consumption across different geographical areas. In this context the evidence provided is twofold. Firstly, the potential of distributed storage to reduce power flow impacts is assessed across South West England. The net effect is a smoother power flow from higher voltage levels, due to better matched local supply and demand, reducing the need for local power flow management and network upgrade. Results show that, under current levels of PV deployment, in the majority of the areas analysed PV generation does not exceed local demand and amounts to a relatively small fraction of the total peak demand. In other words, there are currently few areas in South West England where distributed storage might be needed. However, results change for higher levels of PV deployment, where the number of areas in which distributed storage could help in minimizing power feed-in and smoothing power flow increases. Secondly, the analysis indicates how storage can help in increasing PV firm capacity credit, i.e.
PV generation contribution to evening daily peak electricity demand.This could provide benefits to the network operator, reducing or deferring reinforcement in transmission and distribution network, allowing a better utilisation of other generation sources and reducing reliance on peaking plants to balance the system.In other words, the analysis shows that the implementation of storage can not only optimise solar output by offering a firmer power delivery into the network at a local level, but can also increase PV generation contribution at wider system level.As above, results show that, under current levels of PV deployment, contribution of PV generation to peak demand achieved with the introduction of storage is overall relatively small across the area considered, but it would increase in presence of larger PV penetration.These results are particularly relevant in a context of expected increase in UK electricity load) and the need of the UK electricity network to progressively upgrade to increase its flexibility to allow the transition toward a secure, low carbon energy system.This paper adds to the academic and modelling efforts surrounding PV deployment integration into the UK electricity grid, by providing a novel locally disaggregated framework for the analysis of embedded generation integration into the distribution grid.By being disaggregated to sensitive spatial resolution and based on actual PV deployment figures and primary electricity demand data, it provides a useful tool to consider which areas would most likely incur in power flow imbalances and potentially benefit from storage deployment along with progressive deployment of PV across UK territory.Therefore it could be used as a basis to consider and evaluate potential strategies for better integration of PV within UK distribution network.For example, it can be used to determine how PV could optimally be deployed across the network to provide the best match between local generation and demand.It can also be used to identify areas where expected high PV penetration would be combined with peak demand close to equipment limits and which have potentially the most to gain from deploying storage, which would both help to integrate PV into the network smoothly and also reduce the need for upgrading equipment.It is recognised that the framework and the analysis presented have some limitations and further work is envisaged to improve them.Both the spatial and temporal resolution can be improved, despite heavily relying on the availability of data.Integrating finer temporal details across an entire year would allow to extend the analysis beyond the worst case scenario, e.g. assessing how frequently it actually occurs.Finer spatial analysis within a given LSOA could allow to capture more localised power flows variations, e.g. 
impacts of clusters of PV deployment or the potential of balancing across different end users demand profiles.Moreover, the UKPVD framework will be extended to include distribution network asset data in order to more inclusively assess PV generation local network impacts disaggregated to a spatially sensitive scale, which would also allow to evaluate the potential of storage not just in terms of power flow management and peak shaving but also to provide other ancillary services such as voltage control or local power outages management.Finally an economic layer of analysis will be added in order to estimate cost implications of local network upgrade caused by PV deployment, and build up scenarios to assess the economics of storage deployment versus local network upgrade or the relative cost effectiveness of distributed versus standalone storage.These assessments will be across different UK areas and accounting for a set of potential storage benefits, including peak shaving and other ancillary services.For example, local network upgrades needed to integrate a given level of additional PV capacity and their relative costs are likely to be different in rural areas when compared to urban ones, thus also changing the economic case for storage implementation in the different areas.This study, and the framework developed, could potentially be used by network operators and regulators to have a realistic picture of where and how embedded PV generation might impact local network, and how and where storage implementation could provide benefits in terms of increased PV hosting capacity.This would allow to fine tune grid network management, PV and storage deployment across the country by accounting for local variations in PV generation and electricity demand balance.For example it could be used by DNOs to identify potential critical areas, where e.g. 
storage installation could constitute a more timely intervention to accommodate increasing levels of PV deployment while planning longer term network upgrade.While storage is increasingly acknowledged by UK government and regulators as an essential part of the future UK energy mix and storage installations have been slowly increasing, private and public investments in all possible storage applications are still limited.This in particular in the distributed storage segment, characterized to date only by few pilot programmes.Significant regulatory barriers need to be removed.This includes the lack of storage specific UK regulatory framework which would allow creating a level playing field for storage to participate in the provision of electricity market services.Along with possible regulatory framework interventions, policy makers have also control over several policy tools.For example, demand pull policies could be implemented, including direct financial support to storage implementation which would speed up market uptake and allow progress of storage technologies along the innovation chain, while also improving their economics.Supply push policies providing continuous funding to storage technologies research, development and demonstration project are also needed to guarantee progressive technological development and scaling up.The framework here developed, in particular when integrated with the economic layer of analysis, could also be used to provide evidence to support policy makers in potential regulatory and policy decisions in the field.For example while identifying best location and deployment level of storage, it can also be used to help the decision maker in understanding the appropriateness and cost effectiveness of different storage options, compare them with other flexibility solutions as well as assess the overall cost of storage implementation and, potentially, the relative cost of policy support across the UK.
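To make the summer case study calculations above concrete, the following sketch reproduces the core steps at LSOA level: scaling a normalised half-hourly profile shape by annual consumption, deriving PV output from installed capacity with the 81% maximum coincident generation factor, and applying the peak-shaving storage dispatch (1 kWh per kWp, 80% round-trip efficiency, discharge from around 6 p.m. down to a 20% state of charge). This is a minimal sketch under those stated assumptions rather than the authors' implementation; the toy profiles, the bisection used to set the shaving level and all function names are illustrative.

```python
import numpy as np

STEP_H = 0.5                               # half-hourly resolution of the demand data
HOURS = np.arange(0, 24, STEP_H)

def lsoa_demand(annual_kwh, profile_shape):
    """Half-hourly demand (kW) for one day: a normalised profile shape that sums
    to 1 over the day, scaled by the LSOA's average daily consumption."""
    daily_kwh = annual_kwh / 365.0
    return np.asarray(profile_shape) * daily_kwh / STEP_H   # kWh per slot -> kW

def lsoa_pv(installed_kwp, gen_shape, coincidence=0.81):
    """PV output (kW): a generation shape with unit peak, scaled by installed
    capacity and the 81% maximum coincident generation factor
    (so 100 kWp of PV in an LSOA peaks at 81 kW)."""
    return installed_kwp * coincidence * np.asarray(gen_shape)

def shaving_threshold(pv_kw, usable_kwh):
    """Lowest PV level whose clipped energy still fits in storage, approximating
    the paper's peak shaving optimised against the maximum state of charge."""
    lo, hi = 0.0, float(pv_kw.max())
    for _ in range(50):                    # simple bisection
        mid = 0.5 * (lo + hi)
        clipped = np.clip(pv_kw - mid, 0.0, None).sum() * STEP_H
        lo, hi = (mid, hi) if clipped > usable_kwh else (lo, mid)
    return hi

def dispatch_storage(pv_kw, installed_kwp, rte=0.8, soc_min=0.2, evening=18.0):
    """Absorb the midday PV peak and release the stored energy from ~6 p.m.,
    assuming 1 kWh of storage per kWp of PV, 80% round-trip efficiency and
    discharge down to a 20% state-of-charge floor."""
    cap_kwh = 1.0 * installed_kwp
    usable_kwh = (1.0 - soc_min) * cap_kwh
    charge = np.clip(pv_kw - shaving_threshold(pv_kw, usable_kwh), 0.0, None)
    stored_kwh = charge.sum() * STEP_H * rte        # energy available after losses
    evening_slots = HOURS >= evening
    discharge = np.where(evening_slots,
                         stored_kwh / (evening_slots.sum() * STEP_H), 0.0)
    return pv_kw - charge + discharge               # smoothed PV-plus-storage profile

def residual_and_rpf(demand_kw, pvs_kw):
    """Residual load seen at the HV/LV interface, the magnitude of any reverse
    power flow, and the share of load met at the time of peak PV-plus-storage."""
    residual = demand_kw - pvs_kw
    rpf_kw = max(0.0, -float(residual.min()))
    share = float(pvs_kw.max() / demand_kw[int(pvs_kw.argmax())])
    return residual, rpf_kw, share

# Toy 'average' LSOA: flat demand shape and a bell-shaped summer PV day.
gen_shape = np.exp(-0.5 * ((HOURS - 12.5) / 2.5) ** 2)
demand = lsoa_demand(annual_kwh=4_000_000, profile_shape=np.full(48, 1 / 48))
pv = lsoa_pv(installed_kwp=100, gen_shape=gen_shape)
pvs = dispatch_storage(pv, installed_kwp=100)
residual, rpf, share = residual_and_rpf(demand, pvs)
print(f"peak PV share of load ~{share:.0%}, reverse power flow {rpf:.1f} kW")
```

With these toy inputs, peak PV output meets on the order of 15–20% of the flat demand and no reverse power flow occurs, which is broadly in line with the behaviour described for an 'average' LSOA; increasing the installed capacity in the example would eventually drive the residual negative and produce the reverse power flow discussed for a 'maximum' LSOA.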
This paper analyses the potential of distributed storage to mitigate the impacts of embedded PV generation on the distribution grid by improving the local balance between generation and demand, thereby enabling higher levels of PV penetration. In particular it looks at the potential of storage to: 1. reduce impacts on power flows due to local supply and demand imbalances driven by PV deployment within the UK domestic and non-domestic markets; 2. improve PV capacity credit, i.e. the contribution of installed PV to meeting electricity demand within the UK electricity system. Results highlight how, under current levels of deployment, PV generation does not create major problems for the local distribution network, but that storage might play a relevant role for higher levels of PV deployment across the UK. The paper contributes to the academic debate in the field by providing a novel locally disaggregated framework for the analysis of embedded PV generation integration into the distribution grid. It also provides a useful tool for network operators, regulators and policy makers.
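The winter metric summarised above, the contribution of PV-plus-storage to the evening peak used as a proxy for firm capacity credit, reduces to a short calculation. The sketch below is illustrative only: the 17:00–20:00 evening window and the function name are assumptions, not taken from the paper.

```python
import numpy as np

def evening_peak_contribution(demand_kw, net_kw, hours, window=(17.0, 20.0)):
    """Percentage reduction of the winter evening peak once PV-plus-storage is
    netted off demand. demand_kw is the aggregate LSOA demand, net_kw the same
    profile net of PV generation and storage discharge, and hours the matching
    time axis. The 17:00-20:00 window is an assumed location of the evening peak."""
    mask = (hours >= window[0]) & (hours <= window[1])
    peak_before = float(np.max(demand_kw[mask]))
    peak_after = float(np.max(net_kw[mask]))
    return 100.0 * (peak_before - peak_after) / peak_before

# For the LSOA of Fig. 11, a return value of ~5 would correspond to the reported 5%
# contribution under the 50% storage scenario, and ~20 under 200% PV with storage.
```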
55
mTOR-S6K1 pathway mediates cytoophidium assembly
CTP not only serves as the building blocks for nucleic acid synthesis, but also contributes to the synthesis of membrane phospholipids and protein sialylation.Low intracellular concentration makes CTP one of the rate-limiting molecules for nucleic acid biosynthesis and other CTP-dependent events.Therefore, understanding the precise control of CTP production is crucial for cell metabolism and many growth-related processes.CTP can be generated through either the de novo synthesis pathway or the salvage pathway in mammalian cells.CTP synthase is the rate-limiting enzyme that catalyzes the conversion of UTP to CTP using glutamate or ammonia as the nitrogen source.It has been demonstrated in a number of studies that CTPS can be assembled into filamentous structures, termed cytoophidia, in several different organisms, including fruit fly, bacteria, yeast and mammalian cells.Recent studies have established a link between cytoophidium and CTPS enzymatic activity.In Drosophila, inhibition of the proto-oncogene Cbl disrupts cytoophidium formation, and the protein level of the oncogene c-Myc is correlated with cytoophidium abundance and size.Moreover, CTPS activity was found to be elevated in various cancers such as hepatoma and lymphoma.Recently, we also observed the presence of CTPS cytoophidia in a variety of human cancer tissues.These findings suggest that the formation of cytoophidia is an evolutionarily conserved property of CTPS.In mammals, the mechanistic target of rapamycin is the key serine/threonine protein kinase, which can interact with several proteins to form two distinct molecular complexes, called mTOR complex 1 and mTOR complex 2.mTORC1 controls cell growth and metabolism by regulating protein synthesis, lipid and glucose metabolism, and protein turnover.In contrast, mTORC2 regulates cell proliferation and survival primarily through phosphorylating Akt and several members of the AGC family of proteins.Deregulation of the mTOR signaling pathway is associated with a number of human diseases, including cancer, type 2 diabetes, obesity, and neurodegeneration.Recent studies have established a direct link between mTOR pathway and nucleotide metabolism.In this study, to get a better understanding of the regulation of cytoophidium, we used a human cancer cell line and Drosophila as model systems to investigate the regulation of cytoophidium assembly by mTOR.We show that inhibiting mTOR pathway results in cytoophidium disassembly without affecting CTPS protein expression.In addition, the mTOR pathway controls CTPS cytoophidium assembly mainly via the mTORC1/S6K1 signal axis.Thus, this study links mTOR-S6K1 pathway to the polymerization of the pyrimidine metabolic enzyme CTPS.To investigate whether the mTOR pathway regulates CTPS cytoophidium formation, we screened various cell lines.We observed that CTPS cytoophidia were present in ∼40% SW480 cells under normal culture conditions.However, it is hard to detect cytoophidia in other colorectal cancer cell lines, including LoVo, RKO, DLD1, HCT116 and a normal human colon mucosal epithelial cell line NCM460.Therefore, we used the SW480 cell line as a model for investigating the correlation between the CTPS cytoophidium and mTOR pathway activity.We first treated SW480 cells with the mTOR inhibitors rapamycin or everolimus, and then labeled CTPS with anti-CTPS antibody.Immunofluorescence analysis showed that CTPS cytoophidia were present in 34.6% of control cells, while the percentage of cells with CTPS cytoophidia was reduced to 17% and 15.8% upon 
rapamycin or everolimus treatment, respectively.Inhibition of mTOR pathway was confirmed by the decreased level of phosphorylation at T389 of S6K1, a marker of active mTOR signaling.Further analysis showed that rapamycin and everolimus inhibit CTPS cytoophidium formation in a time- and dose-dependent manner.Previous studies have shown that Myc and Cbl regulate cytoophidia formation in Drosophila.Here we investigate if mTOR mediates cytoophidium assembly through the reduction of c-Myc or Cbl.Our data showed that the mRNA levels of c-Myc and Cbl were not changed when cells were treated with rapamycin.Moreover, the protein levels of c-Myc were not changed upon rapamycin treatment either, suggesting that mTOR does not regulate cytoophidium formation via c-Myc or Cbl.To confirm the correlation between mTOR pathway and cytoophidium assembly, we constructed a stable cell line expressing shRNA targeting mTOR and investigated the impact of mTOR knockdown on cytoophidium formation.Immunofluorescence results showed that the percentage of cells with CTPS cytoophidia dramatically decreased in cells expressing mTOR shRNA in comparison with the cells expressing control shRNA.mTOR knockdown efficiency was confirmed by the decreased protein level of mTOR.A similar result was observed in an mTOR siRNA experiment.Compared with control siRNA, transfection of mTOR siRNA decreased the expression of mTOR protein, which was accompanied by a reduced proportion of cells presenting the CTPS cytoophidia.The expression level of CTPS has been recognized as a critical factor for cytoophidium assembly.We next determined whether mTOR pathway inhibition reduces cytoophidium assembly through decreasing CTPS protein expression.Our data showed that neither rapamycin nor everolimus treatment affected CTPS protein expression.Inhibition of mTOR pathway was confirmed by the decreased level of phosphorylation at T389 of S6K1.In addition, knockdown of mTOR either by siRNA or by shRNA did not decrease CTPS protein expression.mTOR can be incorporated into both mTORC1 and mTORC2, and is essential for them to exert their biological functions.Rapamycin binds to FK506-binding protein 12 and inhibits mTORC1 activity directly.Although the rapamycin-FKBP12 complex does not directly bind to and inhibit mTORC2, long-time rapamycin treatment attenuates mTORC2 signaling, likely because the rapamycin-bound mTOR cannot be incorporated into a new mTORC2 complex.Therefore, we next determined which complex plays a dominant role in controlling CTPS cytoophidium assembly.For this purpose, we constructed two other stable cell lines expressing shRNA targeting a specific component of mTORC1 or a specific component of mTORC2.Immunofluorescence data showed that knockdown of Rictor did not change the proportion of cells with cytoophidia.In contrast, the percentage of cells presenting cytoophidia was reduced from 34.1% to 12.7% in Raptor knockdown cells as compared with control cells, and the degree of reduction is comparable to cells expressing mTOR shRNA.The knockdown efficiency was confirmed by Western blotting assay.For further confirmation of this phenomenon, we conducted a siRNA experiment.We found no difference in the percentage of cells with cytoophidia when cells were transfected with Rictor siRNA as compared with control siRNA.However, the transfection of Raptor siRNA significantly decreased the proportion of CTPS cytoophidia-positive cells from 32% to 20%, which is similar to the transfection of mTOR siRNA.The knockdown efficiency of the indicated 
genes was verified by Western blotting.Taken together, these results show that mTORC1 plays a dominant role in controlling CTPS cytoophidium assembly.Recent studies showed that mTORC1 could promote purine and pyrimidine synthesis through the ATF4/MTHFD2 and S6K1 pathway, respectively.To further understand the mechanisms by which mTORC1 regulates CTPS cytoophidium formation, we analyzed the effects of ATF4, MTHFD2 or S6K1 knockdown on CTPS cytoophidium formation.In comparison with cells transfected with control siRNA, no significant difference in the percentage of cells with cytoophidia was observed in cells transfected with ATF4 or MTHFD2 siRNA.Yet, transfection of S6K1 siRNA dramatically decreased the proportion of cells containing cytoophidia from 36.1% to 19.8%.Western blotting was used to verify the knockdown efficiency of the indicated genes.The role of S6K1 in cytoophidium assembly was further confirmed by lentiviral shRNA targeting S6K1.Immunofluorescence results showed that the percentage of cells expressing S6K1 shRNA-1 or shRNA-2 which contained CTPS cytoophidia dropped significantly from 41.1% to 6% and 15%, respectively, in comparison with cells expressing control shRNA.Cells stably expressing S6K1 shRNA-1 or shRNA-2 showed significantly reduced S6K1 protein expression.Next, we sought to examine whether overexpression of a constitutively active S6K1 could reverse the inhibitory effect of mTOR knockdown on CTPS cytoophidium assembly.Therefore, we stably overexpressed HA-CA-S6K1 in the cells expressing mTOR shRNA, and then analyzed cytoophidium assembly by immunofluorescence staining.As expected, knockdown of mTOR reduced the percentage of cells containing cytoophidia from 38% to 15%, while it rose to 32% in the cells stably expressing CA-S6K1.Meanwhile, the expression of HA-CA-S6K1 alone increased the frequency of cells with cytoophidia from 38% to 46%.The expression of HA-CA-S6K1 was verified by Western blotting assay.We next determined if S6K1 could interact with CTPS.Our co-immunoprecipitation data showed a clear interaction between HA-CA-S6K1 and CTPS.Thus, these data suggest that the mTOR pathway controls CTPS cytoophidium assembly mainly through S6K1 kinase and S6K1 may directly phosphorylate CTPS and regulate its filamentation.We further investigated the correlation between mTOR pathway and CTPS cytoophidium assembly in vivo.Two independent UAS driven shRNA were used to knock down the expression of mTOR in follicle cell epithelium of the Drosophila egg chambers.Compared with the neighboring cells, the cells expressing mTOR shRNA showed reduced nuclear size.The nuclear size in mTOR knockdown cells is less than 50% of that in neighboring cells, which is in agreement with the well-known function of mTOR in cell size control.Meanwhile, the expression of mTOR shRNAs resulted in a decrease of the cytoophidium length in GFP-positive clones as compared to the normal cytoophidium formation observed in their neighboring cells.Statistical analysis showed that the length of cytoophidia is less than 50% of the length of cytoophidia in their neighboring cells, suggesting that mTOR is required for cytoophidium assembly in vivo.A significant finding presented here is the connection between the mTOR pathway and CTPS cytoophidium assembly.mTOR has emerged as an important regulator of nucleotide metabolism and is implicated in multiple human cancer types.Mutations in mTOR itself are observed in various cancer subtypes.mTOR also serves as a downstream effector for many frequently mutated 
pro-oncogenic pathways, such as the Ras/Raf/MAPK pathway, resulting in hyperactivation of the mTOR pathway in numerous human cancers. However, single-agent therapies using mTORC1 inhibitors, including rapamycin and everolimus, have shown only limited anti-cancer activity, mainly because inhibition of mTORC1 generally has cytostatic but not cytotoxic effects in cancer cells. Elevated CTP levels and increased CTPS enzyme activity have been reported in many types of cancer such as hepatomas, leukemia and colorectal cancer. Knockdown of CTPS reduced tumorigenesis in a Drosophila tumor model, indicating that CTPS plays a functional role in tumor metabolism. In fact, CTPS has been an attractive anti-cancer target for decades. However, treatment with CTPS inhibitors such as acivicin and 6-Diazo-5-oxo-L-norleucine often provokes unacceptable side effects, such as neurotoxicity, nausea and vomiting, which has hindered their further application. A recent study also reported that inactivation of CTPS caused an imbalance of dNTP pools and increased mutagenesis in Saccharomyces cerevisiae. The assembly of CTPS into cytoophidia has been suggested as a way of modulating its enzymatic activity. Polymerization of CTPS inhibits its catalytic activity in S. cerevisiae and Escherichia coli. However, an in vitro study showed that filamentation of CTPS increases its enzymatic activity. We recently reported an increased abundance of CTPS cytoophidia in various human cancers including colon, prostate and liver cancers. A larger nucleotide pool is required to support fast cancer cell growth. The potential advantage of cytoophidium formation is to increase enzyme activity rapidly, provided that polymerization is faster than transcription. Inhibition of the rate-limiting enzyme in guanylate nucleotide synthesis, inosine monophosphate dehydrogenase, selectively kills mTORC1-activated cancer cells, implying that targeting nucleotide metabolism is promising for treating tumors with elevated mTOR signaling. Therefore, it will be interesting to determine in future studies whether inhibition of CTPS filamentation could suppress the growth of mTOR-hyperactive cancer cells. If so, identification of small molecules that disrupt CTPS polymerization may be a promising strategy for combating mTOR-driven cancers. The mechanisms by which the mTOR pathway controls CTPS filamentation are likely both direct and indirect. In this study, we found that this regulation does not appear to act through reducing CTPS protein expression, which has been recognized as a critical factor for cytoophidium assembly. We provided evidence that the regulation of cytoophidium formation by the mTOR pathway is carried out mainly by S6K1 kinase. When S6K1 was knocked down by siRNA, there was an approximately 50% reduction in the number of cells containing cytoophidia, with a further reduction in cells stably expressing S6K1 shRNA. Importantly, exogenous expression of a constitutively active S6K1 mutant rescued mTOR knockdown-induced cytoophidium disassembly. Both transcriptional and post-transcriptional mechanisms, especially phosphorylation, can regulate CTPS enzymatic activity. Indeed, filamentous CTPS can be recognized by a phospho-specific antibody against CTPS phosphorylated on serine 36, whose mutation causes a decrease in CTPS catalytic activity. These findings raise the possibility that phosphorylation could regulate CTPS activity by influencing cytoophidium assembly. Interestingly, a previous phosphoproteomics
study identified several phosphorylation sites at the C-terminus of CTPS in mTOR pathway-activated mouse embryonic fibroblasts. We recently reported that deletion of the conserved N-terminus of Drosophila CTPS, a target of multiple post-translational modifications including phosphorylation, is sufficient to interfere with cytoophidium assembly. Therefore, it will be interesting to determine in the future whether phosphorylation has a direct effect on CTPS filamentation. mTOR plays a central role in regulating cell size, cell cycle progression and cell proliferation in Drosophila and many other species. A previous study reported that reduction of CTPS could result in a decrease in nuclear size in Drosophila follicle cells. Our recent investigation in Drosophila also showed that CTPS is required for Myc-dependent cell size control. It is worth noting that several nucleotide-metabolizing enzymes are phosphorylated or upregulated in response to mTOR activation in mammalian cells, leading to increased intracellular pools of pyrimidines and purines for DNA and RNA synthesis. It is reasonable to believe that CTPS is involved in the regulation of nucleotide metabolism by mTOR, as CTP is essential for the biosynthesis of DNA and RNA. Although mounting evidence suggests that mTOR regulates nucleotide metabolism in cultured cells and tumor models, the relevance of this relationship in normal animal development has not been well defined. In this study, we observed a connection between mTOR expression and the length of CTPS cytoophidia in Drosophila oogenesis. Together, using the colorectal cancer cell line SW480 and Drosophila as model systems, we show that the mTOR pathway regulates CTPS cytoophidium assembly. We have found that pharmacological inhibition of the mTOR pathway or knockdown of mTOR protein expression significantly reduces cytoophidium formation without affecting CTPS protein expression. In addition, the mTOR pathway controls CTPS cytoophidium assembly mainly via the mTORC1/S6K1 signal axis. Collectively, our results show a connection between the mTOR pathway and CTPS cytoophidium assembly. Antibodies for CTPS, Raptor, Rictor, S6K1, ATF4 and MTHFD2 were purchased from ProteinTech. Antibodies for mTOR and Phospho-S6K1 were purchased from Cell Signaling Technology. Antibodies for c-Myc and β-Actin were purchased from Abcam. Antibodies for HA and normal mouse IgG were from Santa Cruz Biotechnology. Antibody for α-tubulin was purchased from Sigma-Aldrich. Antibodies for Drosophila CTPS were purchased from Santa Cruz. Antibody for Hu-li tao shao was purchased from the Developmental Studies Hybridoma Bank. Rapamycin and everolimus were from Selleck Chemicals. Two-tailed unpaired Student's t-test was used for comparisons between two groups, and ordinary one-way ANOVA with Tukey's multiple comparison post-test was used to compare variables among three or more groups. The quantification of the percentage of cells containing cytoophidia was from at least three independent experiments, and more than 200 cells were counted for each quantification. P ≤ 0.05 was considered statistically significant. All analyses in human cells were performed using GraphPad Prism version 6.00. For Drosophila data, image processing and analysis were conducted using Leica Application Suite Advanced Fluorescence Lite and ImageJ. For each group, over 60 Drosophila follicle cells were quantified. Nuclear sizes and cytoophidium lengths are expressed as the ratio of the average nuclear size or cytoophidium length in GFP-marked clones to that in neighbouring cells. 293T,
SW480 and NCM460 cells were cultured in Dulbecco's modified Eagle's medium, whereas LoVo, RKO, DLD-1 and HCT116 cells were cultured in Roswell Park Memorial Institute Medium 1640 supplemented with 10% fetal bovine serum and antibiotics, in a humidified atmosphere containing 5% CO2 at 37 °C. "Cell transfections were carried out by using Lipofectamine 2000 or R0531 according to the manufacturer's instructions.All stocks were maintained on standard Drosophila medium at 25 °C.w1118 was used as a wild-type control in all our experiments.All RNAi stocks were from the TRiP collection.Desalted oligonucleotides were cloned into pPLK/GFP + Puro purchased from the Public Protein/Plasmid Library with the BamHI/EcoRI sites at the 3′ end of the human H1 promoter.The target sequences for mTOR, Raptor and Rictor are 5′-CCGCATTGTCTCTATCAAGTT-3′, 5′-CGAGTCCTCTTTCACTACAAT-3′ and 5′-CCGCAGTTACTGGTACATGAA-3′, respectively.The target sequences for S6K1 are 5′-AGCACAGCAAATCCTCAGACA-3′ and 5′- CCCATGATCTCCAAACGGCCA -3′.Plasmids were propagated in and purified from top 10 bacterial cells and co-transfected together with psPAX2 and pMD2.G into HEK 293T cells.Virus-containing supernatants were collected at 48 h after transfection, and then filtered with 0.45 μm PES filters.Cells were infected with appropriate lentiviruses in the presence of 8 μg/mL polybrene for 48 h.The GFP-positive cells were purified by flow cytometry and then cultured in normal medium containing 0.5 μg/mL puromycin for 1 week.The resulting puromycin-resistant cells were used for further analysis.Small interfering RNA duplexes against mTOR, Raptor, Rictor, S6K1, ATF4 and MTHFD2 were purchased from Ribobio.Three siRNA duplexes were used for one target gene to achieve greater knockdown efficiency and lower off-target effects."A 2 μL or 5 μL aliquot of 20 μM siRNA per well was transfected into cells seeded in 24-well or 6-well plates, respectively, with Lipofectamine 2000 according to the manufacturer's protocol.For mammalian, cells were cultured on glass slides and fixed with 4% paraformaldehyde in PBS for 10 min, and then permeabilized with 0.1% Triton X-100 for 10 min at room temperature.After washed with PBS, samples were blocked with 5 mg/mL bovine serum albumin in PBS for 1 h, followed by incubation with anti-CTPS antibodies overnight at 4 °C.After the primary antibody reaction, samples were washed and incubated with FITC-labeled secondary antibodies for 1 h. 
Finally, samples were washed and mounted with medium containing 4′,6-diamidino-2-phenylindole, which was used to visualize nuclei.The images were taken under a confocal laser scanning microscope."For Drosophila, tissues were dissected into Grace's Insect Medium, and then fixed in 4% paraformaldehyde for 10 min.After that, tissues were washed with PBT, followed by overnight incubation with primary antibodies at room temperature.After primary antibody reaction, tissues were washed with PBT, and then incubated at room temperature overnight in secondary antibodies.Nuclei were labeled by Hoechst 33342.All samples were imaged using a Leica SP5II confocal microscope.Cell lysates were prepared with NP-40 lysis buffer), and equal amounts of lysates were electrophoresed on a 10% SDS-PAGE gel.PVDF membranes were used for protein transfer.The membranes were then blocked with 5% nonfat milk in TBST, and 0.1% Tween 20) for 1 h, followed by incubation with appropriate primary antibodies at 4 °C overnight.After primary antibody reaction, the membranes were washed with TBST three times and then incubated with HRP-labeled secondary antibody at room temperature for 1 h.After washed again with TBST for three times, the signals of secondary antibodies were detected by an enhanced chemiluminescence system.Total RNAs were extracted by Trizol.The first-strand cDNA synthesis was conducted with RevertAid First-Strand cDNA synthesis kits.qRT-PCR reactions were performed using SYBR Green dye and the Applied Biosystems 7500 Fast Real-Time PCR System.Primers used for c-Myc, Cbl and β-actin.The resulting values were normalized to β-actin expression.For Co-IP assay, SW480 cells stably expressing HA-CA-S6K1 were cultured in 10 cm dishes for 48 h, and then cell lysates were prepared with Co-IP lysis buffer, NaCl 100 mM, EDTA 2.5 mM, NP-40 0.5%, DTT 1 mmol/L and proteasome inhibitors).Cell lysates were incubated with the appropriate antibody for 1h, and subsequently incubated with protein A-Sepharose beads overnight at 4 °C.The protein-antibody complexes recovered on beads were subjected to Western blotting using appropriate antibodies after separation by SDS-PAGE.
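The statistical treatment described in the methods above (two-tailed unpaired Student's t-test for two groups; ordinary one-way ANOVA with Tukey's multiple comparison post-test for three or more groups; P ≤ 0.05) was performed in GraphPad Prism. The sketch below shows equivalent tests in Python using SciPy and statsmodels; the percentages of cytoophidium-positive cells used here are illustrative placeholders, not the study's measurements.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Illustrative percentages of cytoophidium-positive cells (>200 cells counted per
# replicate, three independent experiments per condition); not the published values.
control    = np.array([34.6, 36.0, 33.1])
rapamycin  = np.array([17.0, 18.2, 16.1])
everolimus = np.array([15.8, 14.9, 16.5])

# Two groups: two-tailed unpaired Student's t-test (P <= 0.05 considered significant).
t_stat, p_two_groups = stats.ttest_ind(control, rapamycin)

# Three or more groups: ordinary one-way ANOVA followed by Tukey's multiple
# comparison post-test, mirroring the GraphPad Prism workflow described above.
f_stat, p_anova = stats.f_oneway(control, rapamycin, everolimus)
values = np.concatenate([control, rapamycin, everolimus])
groups = ["control"] * 3 + ["rapamycin"] * 3 + ["everolimus"] * 3
tukey = pairwise_tukeyhsd(values, groups, alpha=0.05)

print(f"t-test P = {p_two_groups:.4f}, ANOVA P = {p_anova:.4f}")
print(tukey.summary())
```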
CTP synthase (CTPS), the rate-limiting enzyme in de novo CTP biosynthesis, has been demonstrated to assemble into evolutionarily conserved filamentous structures, termed cytoophidia, in Drosophila, bacteria, yeast and mammalian cells. However, the regulation and function of the cytoophidium remain elusive. Here, we provide evidence that the mechanistic target of rapamycin (mTOR) pathway controls cytoophidium assembly in mammalian and Drosophila cells. In mammalian cells, we find that inhibition of mTOR pathway attenuates cytoophidium formation. Moreover, CTPS cytoophidium assembly appears to be dependent on the mTOR complex 1 (mTORC1) mainly. In addition, knockdown of the mTORC1 downstream target S6K1 can inhibit cytoophidium formation, while overexpression of the constitutively active S6K1 reverses mTOR knockdown-induced cytoophidium disassembly. Finally, reducing mTOR protein expression results in a decrease of the length of cytoophidium in Drosophila follicle cells. Therefore, our study connects CTPS cytoophidium formation with the mTOR signaling pathway.
56
The Structure of a Conserved Domain of TamB Reveals a Hydrophobic β Taco Fold
In Gram-negative bacteria, the outer membrane serves as a highly selective permeability barrier, protecting bacterial cells from a hostile external environment, while allowing import of the nutrients required for survival and growth.In addition, the OM forms the interface between the bacteria and its external environment.As such, it plays a pivotal role in the adherence of bacteria to surfaces, as well as in attack and defense.To perform this diverse set of functions, the OM contains a multitude of integral membrane proteins.The transport of these proteins from their site of synthesis in the cytoplasm, and their correct and efficient insertion into the OM, poses a significant challenge.Gram-negative bacteria possess a specialized nano-machine termed the β barrel assembly machinery charged with this task.In addition, these bacteria possesses the translocation and assembly module, a nano-machine which is important in the proper assembly of a subset of OM proteins.In the Gram-negative bacterium Escherichia coli, the BAM complex contains five components centered around BamA, an integral OM protein of the Omp85 family.The TAM is composed of two subunits, TamA an Omp85 family protein evolutionarily related to BamA and the enigmatic inner membrane-anchored protein TamB.In E. coli and many other Gram-negative bacteria, the presence of BamA is essential for the growth and survival of the cell.The TAM on the other hand is dispensable for growth of E. coli under lab conditions; however, in a mouse model of infection, TAM mutants from various pathogens exhibit attenuated virulence.In E. coli, TamA and TamB have been shown to associate and, as TamB is embedded in the inner membrane via a signal anchor, it must span the periplasm to interact with TamA.In keeping with this, analysis of recombinant TamB by atomic force microscopy and dynamic light scattering shows it to be highly prolate, with a length of 150–200 Å.Interaction between TamA and TamB occurs via the conserved C-terminal DUF490 domain of TamB and POTRA1 of TamA and is required for the proper functioning of the TAM in vitro.In vivo, the presence of both TamA and TamB is required for the correct assembly of a number of OM proteins.In keeping with the role of the TAM in infection, these proteins are predominantly virulence factors, with prominent roles in bacterial adhesion and biofilm formation.Intriguingly, recent reports have shown that TamB homologs exist even in bacteria that lack TamA.In Borrelia burgdorferi, the causative agent of Lyme disease, TamB has been shown to interact with BamA and appears to be essential for viability.While further investigation is required, these data point toward a more general role for TamB homologs in OM protein biogenesis.TamB is a large protein by bacterial standards, consisting in E. coli of 1,259 amino acids, which are predicted to be composed of predominantly β strand structure.To date, no high-resolution structural information on TamB is available and, as no homologs have been structurally characterized, very little information about its structure can be inferred.In this work, we report the crystal structure of TamB963-1138 from E. 
coli, a region spanning half of the conserved DUF490 domain.This structure reveals that TamB963-1138 forms a previously undescribed fold, consisting of a concave β sheet with a highly hydrophobic interior, which we refer to as a β taco.We show that TamB963-1138 is stabilized by detergent molecules, which likely reside in the hydrophobic cavity of the β taco.Furthermore, sequence analysis of TamB suggests that this structure is shared by the majority of the molecule.Given the role of TamB in the transport and assembly of integral membrane proteins we postulate this hydrophobic cavity may serve as a chaperone and conduit for the hydrophobic β strands of target proteins.This proposed mechanism of TamB has striking similarities to the lipopolysaccharide transport system Lpt in which a membrane spanning β jelly roll with a hydrophobic groove is predicted to act as a conduit for LPS.To gain insight into the structure of the DUF490 domain of TamB, we attempted to crystalize the full-length domain, as well as a number of truncation constructs.One of these constructs, consisting of residues 963–1,138 of TamB produced diffraction quality crystals and data was collected and anisotropically processed to 1.86–2.2 Å.As no homologs of TamB have been structurally characterized, selenomethionine-labeled protein was prepared, crystalized, and the structure was solved using single-wavelength anomalous dispersion.Substructure solution proved difficult because only weak anomalous signal was present in the data.Despite this, a heavy atom substructure was determined consisting of one high-occupancy site, as well as two low-occupancy sites in close proximity.Initial SAD phases lacked contrast, making hand determination impossible.However, density modification greatly improved contrast, allowing main-chain tracing.This initial model was then used to phase the higher-resolution native data by molecular replacement, and the structure was built and refined.The crystal structure of TamB963-1138 revealed an elongated taco-shaped molecule consisting entirely of β sheet and random coil.This β taco structure is formed by two molecules of TamB963-1138, which interact via their N-terminal β strand to form a continuous 16-stranded curved β structure.The two molecules of TamB963-1138 in this structure consist of eight β strands related by non-crystallographic symmetry.The first of these strands runs parallel to the second, with the subsequent strands adopting an anti-parallel structure.Between the first and second β strands 29 residues lack electron density due to disorder.This disordered section leads to ambiguity regarding which molecule the first TamB963-1138 β strand originates from.Either this first β strand is connected by the disordered loop to the parallel strand of one monomer creating a continuous eight-stranded sheet, or this loop connects β strand 1 to β strand 2 of the opposing molecule, leading to a β zipper intercalation of the two molecules.Analysis of purified TamB963-1138 in solution by size-exclusion chromatography coupled to multi-angle laser light scatter gave a molecular mass of 38 kDa for TamB963-1138.This is twice the 19 kDa mass of an individual TamB963-1138 molecule, showing that the crystallography dimer is also the solution state of TamB963-1138.Proline residues 987 and 1,071 at the center of β strands 1 and 4 and glycine 1,035 at the center of β strand 2 create a discontinuity which kinks of the β sheet, facilitating the curvature of the β taco.The two molecules of TamB963-1138 are structurally 
analogous, with a Cα root-mean-square deviation of 0.71 Å. The differences between the molecules are accounted for by flexible loops connecting the β strands; specifically, a large difference in conformation in the loop connecting β strands 6 and 7. As TamB963-1138 represents only a fragment of the larger TamB, the head-to-head dimer observed in the crystal structure is unlikely to be physiological. However, the oligomeric state of TamB in vivo has yet to be definitively determined, so the relevance of this dimer is unknown. The region of TamB N-terminal to TamB963-1138 is predicted to consist of β structure, and so the interaction between the N-terminal strands of the two monomers may act as a surrogate for the β strands of full-length TamB. The most striking feature of the TamB963-1138 crystal structure is that the interior surface of its β taco is populated entirely by aliphatic and aromatic residues, making this interior cavity highly hydrophobic. During purification of TamB963-1138 it was found that the detergent lauryldimethylamine N-oxide (LDAO) was required for stabilization of the domain. Purification of TamB963-1138 in the absence of LDAO led the protein to precipitate and resulted in a poor yield of purified protein. TamB963-1138 could be purified in the presence of LDAO and, once purified, the protein could be maintained in the absence of the detergent. However, while analytical SEC suggests that TamB963-1138 still exists as a dimer under these conditions, circular dichroism revealed it to be unstructured, lacking the characteristic minima of β-structured proteins. Electron density possibly attributable to the aliphatic chains of stabilizing LDAO molecules is evident inside the TamB963-1138 cavity. This density, however, is insufficiently resolved to permit accurate modeling of the LDAO head groups, and as a result it was not possible to unambiguously attribute it to the detergent. As such, LDAO was not included in the final model submitted to the PDB. Given the periplasm-spanning topology of TamB, as well as the amphipathic character of the substrate proteins assembled by the TAM, the hydrophobic β taco of the TamB963-1138 structure is suggestive of a role for TamB in chaperoning membrane proteins across the periplasm to TamA in the OM. The open hydrophobic cleft of TamB963-1138 could shield the hydrophobic face of the β strand of an integral membrane protein, while leaving the hydrophilic face exposed to the aqueous environment. In support of this hypothesis, the interior of the TamB963-1138 β taco is of a width and depth sufficient to accommodate a single extended β strand. To test this hypothesis, we introduced the charged amino acids glutamate or arginine into full-length TamB in place of Leu1049 and Ile1102, respectively. Both of these amino acids reside in the TamB963-1138 hydrophobic β taco. We then tested the ability of these mutant versions of TamB to complement a ΔtamB E.
coli strain, by observing its function in an established pulse-chase assay, where TAM function is the rate-limiting step in the assembly the fimbrial usher protein FimD.In this assay, proteinase K shaving of the bacterial cell surface is used to detect properly folded, radiolabeled FimD assembled in the OM.Exogenously added proteinase K cleaves FimD at an extracellular loop, generating a C-terminal and N-terminal fragment.However, in the absence of the TAM, a 45 kDa “B fragment” is generated representing a central portion of FimD in a non-native conformation.Interestingly, placement of an Arg at position 1,102 significantly impaired the assembly of FimD, leading to the accumulation of the 45 kDa B fragment, indicating that the Ile1102Arg mutant can only partly complement a tamB null-phenotype.Other mutations in the groove had less impact: the ability of the Leu1049Glu mutant to assemble FimD was indistinguishable from wild-type, BN-PAGE analysis of crude membrane extracts revealed that both mutant versions of TamB were capable of interacting with TamA to form the TAM, indicating that the defect in TamBIle1102Arg is not due to a gross defect in TamB production or structure.Why TamBIle1102Arg was defective in our assay, but TamBLeu1049Glu remained functional, is unknown.However, while the Leu1049Glu mutation would certainly change the local charge of the β taco, it does not project into the cavity to the extent that bulky arginine at 1,102 does.Future work involving more thorough mutagenesis studies of TamB would be useful in answering these questions.To create the hydrophobic β taco structure found in TamB963-1138, the amino acid sequence of the β strands consist of alternating hydrophobic and hydrophilic amino acids.The sidechains projecting from a face of a β sheet are on alternate sides of the strands, so that the patterning observed in β taco of TamB creates one hydrophobic face and one hydrophilic face that would face the periplasmic environment.This sequence pattern is reminiscent of β barrel membrane proteins but in that case the hydrophobic side of the β sheet is embedded in the lipid bilayer.Sequence analysis of the TamB family reveals this alternating pattern of conserved hydrophobic and hydrophilic residues occurs not only in the TamB963-1138, but is widely distributed throughout the majority of TamB.Extrapolating from the structure of TamB963-1138, this pattern suggests that the extended TamB molecule consists of long sections of hydrophobic channel.This proposed structure for TamB has a striking similarity to the well-characterized LPS transport system of Gram-negative bacteria.Three proteins from this system, LptC, LptA, and LptD, contain or consist of a β jelly roll with an interior hydrophobic groove.These proteins are predicted to interact to form a hydrophobic conduit for the aliphatic chains of LPS across the periplasm, from the inner to OMs.In an interesting parallel to TamB963-1138, the β jelly domain of LptD, the OM component of this system, was crystallized with two detergent molecules in its hydrophobic groove.TamB homologs have been shown to be widely conserved in bacterial diderms, where they are involved in OM biogenesis in distantly related genera, from Escherichia to Borrelia to Deinococcus.The distribution of TamB-like proteins is, however, not limited to the bacterial kingdom, with proteins containing the conserved DUF490 domain having also been identified in plants.In a recent study screening rice mutants for defects in starch accumulation, the protein SSG4 
was identified.SSG4 is a large protein consisting of predominantly β structure and a TamB-like C-terminal DUF490 domain.SSG4 is localized to the amyloplast, the plastid responsible for starch synthesis in plants.This organelle was derived by evolution from an ancient symbiotic Cyanobacterium.Mutation of Gly1924Ser in the DUF490 domain of SSG4 leads to enlarged starch granules and seed chalkiness.The authors suggest that this glycine is crucial to function and that it is conserved in TamB proteins from Proteobacteria.While plastids and Cyanobacteria share an evolutionary history, their protein-transport pathways are not homologous: proteins are imported into plastids from the cytoplasm, and there is no evidence of a vestigial protein secretion pathway from the internal compartments of the plastid out to its OM.Therefore, if SSG4 also plays a role in membrane protein biogenesis in the plastid it must be distinct from that of TamB.Sequence alignment between TamB and SSG4 shows that the conserved glycine falls within the TamB963-1138 crystal structure corresponding to Gly1073.Gly1073 is located in β strand 4, adjacent to the kink in the β sheet caused by Pro1071.To test the significance of glycine at this position for the function of TamB, we subjected it to mutagenesis.However, substitution of either serine or glutamate for Gly1073 did not affect the function of the TAM in the assembly of FimD into the OM of E. coli.While this finding does not rule out the importance of Gly1073 in the function of TamB, it shows that substitution of this residue does not result in a gross defect in the function of this protein.To determine if TamB and SSG4 do indeed share a related function in these distantly related organisms, further investigation will be required.Further information and requests for resources and reagents should be directed to and will be fulfilled by the Lead Contact, Rhys Grinter.Expression of proteins used for crystallographic studies and analytical size exclusion chromatography was performed in E. coli BL21.Cells were grown at 37°C in Terrific broth.When optical density at 600 nm reached 0.8, protein expression was induced with the addition of 0.5 mM IPTG and cells were incubated overnight at 25°C before harvest.For membrane isolation, BN-PAGE and pulse chase analyses, E. coli BL21 Star™ and derivative strains were used.For plasmid storage, E. coli DH5α was used.These strains were routinely grown in lysogeny broth, at 37°C and 200 strokes per minute.For strain storage, saturated overnight culture was diluted 1:1 in 40 % v/v glycerol, snap frozen in liquid nitrogen and kept at -80°C.Where appropriate, the following antibiotics were used for selection: 34 μg.mL-1 chloramphenicol, 30 μg.mL-1 kanamycin, and/or 100 μg.mL-1 ampicillin.If solid media was required, 15 g.L-1 agar was added to the growth medium.Native TamB963-1138 was expressed and purified as described by.Briefly, the gene fragment encoding the DUF490 domain residues 963-1138 from TamB from E. coli K12 was ligated into pET-21a via NdeI and XhoI restriction sites producing a C-terminally His6 tagged product.This construct was transformed into E. 
coli BL21 cells, which were grown in LB to an OD of 0.6 before induction with 0.5 mM IPTG. Cells were then grown for 15 hours at 25°C and harvested by centrifugation. Cells were resuspended in 20 mM Tris–HCl, 10 mM imidazole, 0.5 M NaCl, 5% glycerol, 0.05% LDAO pH 7.5, then lysed via sonication, and the supernatant was clarified by centrifugation. TamB963-1138 was purified from this clarified supernatant by a 2-step purification of nickel affinity and size exclusion chromatography. Clarified cell lysate was applied to a 5 ml Ni-agarose column and the column was washed with at least 10 column volumes of 20 mM Tris–HCl, 10 mM imidazole, 0.5 M NaCl, 5% glycerol, 0.05% LDAO pH 7.5. Protein was then eluted from the column with a 0-100% gradient of 20 mM Tris–HCl, 500 mM imidazole, 0.5 M NaCl, 5% glycerol, 0.05% LDAO pH 7.5 over 10 column volumes. Fractions containing DUF490963-1138 were then applied to a 26/200 Superdex S200 column equilibrated in 20 mM Tris-HCl, 200 mM NaCl, 0.05% LDAO. DUF490963-1138 eluted as multimeric species on size exclusion; however, a single peak most likely corresponding to a monomer or dimer was pooled and concentrated to 8-15 mg.ml-1 prior to sparse matrix screening for crystallization conditions. For selenomethionine labelling, the TamB963-1138 expression construct described above was transformed into the methionine auxotrophic strain E. coli B834. Cells were grown at 37°C in M9 minimal media to an OD600 of 0.4 before induction with 0.5 mM IPTG. Cells were then grown for 15 hours at 25°C before harvesting, and protein was purified as described above. 1 mM DTT was included in all buffers to prevent oxidation of the selenium. Crystallisation was performed as previously described. Protein for crystallisation was in a buffer containing 50 mM Tris–HCl, 200 mM NaCl, 0.05% LDAO pH 7.5. Crystals were grown with a reservoir solution containing 0.1 M HEPES, 15% PEG 400, 0.2 M CaCl2 pH 7.0. Crystals were transferred to cryoprotectant consisting of reservoir solution with 25% PEG 400 and flash cooled in liquid nitrogen. Data were collected at 100 K at Diamond Light Source, UK. Membranes comprising 150 μg.μL-1 protein were thawed on ice and subjected to centrifugation. Membranes were resuspended in 36 μL blue native lysis buffer. Samples were incubated on ice for no more than 30 min, and then subjected to centrifugation. The supernatant was transferred to 9 μL 5× blue native sample buffer. With a 40 % T, 2.6 % C acrylamide/bis-acrylamide solution, a 4 % acrylamide and a 14 % acrylamide mixture were used to cast a 4-14 % blue native gradient gel with an SG50 gradient maker as per the manufacturer's instructions. Samples and size markers were loaded onto 4-14 % blue native gradient gels and analysed by blue native PAGE as follows. Anode buffer and dark blue cathode buffer were added to the lower and upper tanks, respectively, and gels were subjected to electrophoresis until the dye front had migrated two-thirds of the gel. The buffer in the upper tank was then replaced with a slightly blue cathode buffer, and electrophoresis was continued until the dark blue cathode buffer within the gel had been completely replaced by the slightly blue cathode buffer. Samples in the gel were denatured as follows. Blue native denaturing buffer was heated to 65°C and poured over the 4-14 % blue native gradient gel. The gel was then incubated for 20 min, and after briefly rinsing the gel in water, it was transferred to CAPS western transfer buffer and incubated for 10 min. Denatured protein was transferred to 0.45 μm PVDF membranes using CAPS western transfer
buffer.Residual coomassie was removed from the PVDF membrane, before rinsing briefly in TBS-T buffer.Membranes were incubated in blocking buffer for 30-60 min or overnight, before incubation in rabbit anti-TamA antibodies for 1 hour.Membranes were washed three times in TBS-T for 5-10 min each, before incubation in goat anti-rabbit antibodies for 30 min.Membranes were then washed as before, followed by incubation with Amersham ECL Prime Western Blotting Detection Reagent as per manufacturer’s instructions.Chemiluminescent membranes were then exposed to super RX-N film in an Amersham Hypercassette™ for up to 10 min, and developed using the SRX-101A medical film processor as per manufacturer’s instructions.Saturated overnight cultures were diluted 1:100 into fresh LB, supplemented with chloramphenicol and ampicillin, and incubated until mid-log phase.The culture was subjected to centrifugation, washed in M9-S media, and after another round of centrifugation, was resuspended in M9-S media.After 30 min incubation, cells were normalised to an optical density at 600 nm of 0.6 and diluted 1:1 in 40 % v/v glycerol.The samples were then snap-frozen in liquid nitrogen and stored at -80°C.Each batch of cells was considered to be one set of technical replicates.Aliquots were thawed on ice, subjected to centrifugation and resuspended in 650 μL M9-S media.Rifampicin was added to inhibit transcription for 60 min before 0.2 mM of pre-warmed IPTG was added to induce pKS02-based fimD expression for 5 min.Cells were then ‘pulsed’ with 22 μCi.mL-1 of pre-warmed EXPRE35S35S -Protein Labelling Mix for 45 s, then immediately transferred to ice.Samples were then subjected to centrifugation and resuspended in 650 μL M9+S media.The ‘chase’ component was considered to have begun immediately on resuspension of M9+S media and was performed for 32 minutes.For analysis by protease shaving, at each chase time point, 50 μL aliquots were incubated on ice for 10 min with or without 50 μg/mL proteinase K. 
Trichloroacetic acid was then added and protein precipitates were collected by centrifugation. The precipitate was washed with acetone, subjected to centrifugation as before, and the pellet was air-dried. The sample was resuspended in 50 μL SDS sample buffer and boiled for 3-5 min. Samples were loaded onto a 12% SDS acrylamide gel and analysed by SDS-PAGE. Proteins were transferred to a 0.45 μm nitrocellulose membrane and the membrane was air dried. Radiation was captured for 12-18 hours using a storage phosphor screen and analysed using the Typhoon Trio. The absolute molecular mass of TamB963-1138 was determined by SEC-MALS. 100-μl protein samples were loaded onto a Superdex 200 10/300 GL size-exclusion chromatography column in 20 mM Tris, 200 mM NaCl, 0.05 % LDAO at 0.6 ml/min with a Shimadzu Nexera SR. The column output was fed into a DAWN HELEOS II MALS detector followed by an Optilab T-rEX differential refractometer. Light scattering and differential refractive index data were collected and analyzed with ASTRA 6 software. Molecular masses and estimated errors were calculated across individual eluted peaks by extrapolation from Zimm plots with a dn/dc value of 0.1850 ml/g. SEC-MALS data are presented with light scattering and refractive index change plotted alongside fitted molecular masses. Circular dichroism measurements were obtained for TamB963-1138 at 1 mg/ml in 20 mM Tris, 200 mM NaCl, in the presence and absence of 0.03 % LDAO, at 24°C using a Jasco J-810 spectropolarimeter. Based on the Matthews coefficient for the DUF490963-1138 crystals, two molecules were predicted to be present in the crystal asymmetric unit, with a solvent content of 50 %. One molecule per ASU was also a possibility, with a solvent content of 76 %. Each DUF490963-1138 molecule has 2 methionine residues, giving 4 as the most likely number of selenium atoms present. To locate heavy atom sites, diffraction data from the selenomethionine-labelled DUF490963-1138 crystals were collected at the selenium edge and processed to 2.7 Å. Anomalous signal was detected up to 7.4 Å using Xtriage from the Phenix package; this was weaker than expected given the methionine to amino acid residue ratio. ShelxC was employed for data preparation, followed by ShelxD to locate selenium sites. The best substructure solutions were obtained with 3 selenium sites with occupancies of 0.87, 0.47 and 0.31, rather than the 4 sites expected for 2 molecules per ASU. These sites were then provided, along with the DUF490 anomalous dataset, to Autosol from the Phenix package for phasing and density modification. Contrast of the initial experimentally phased maps was poor, making it difficult to determine the correct hand of the screw axis. However, density modification greatly improved map contrast, with clear density present for molecules consisting of an elongated U-shaped β sheet in the solution from the correct hand, with the space group P3221. This experimentally phased map was then used to construct a provisional model. This structure was then used as a molecular replacement model for the higher resolution native data. DUF490963-1138 was then iteratively built and refined using COOT and Phenix refine to give the final structure, with Rwork and Rfree of 20.8% and 25.1%, respectively. Structural analysis and figure construction were performed using the PyMOL and QtMG structural graphics packages. Secondary structure prediction for TamB was performed using the JPred4 webserver.
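As an illustration of the Matthews coefficient reasoning used above to choose between one and two molecules per ASU, the sketch below applies the standard relation V_M = V_cell / (Z × n × MW) and solvent content ≈ 1 − 1.230/V_M. The unit-cell volume, ASU count and molecular mass are placeholder assumptions, not the actual crystal parameters.

```python
# Minimal sketch of the Matthews coefficient calculation used to estimate how
# many molecules occupy the asymmetric unit (ASU). The unit-cell volume and
# ASUs-per-cell below are hypothetical placeholders; the protein mass is an
# approximation for a His6-tagged ~176-residue fragment.

def solvent_content(cell_volume_A3: float, asu_per_cell: int,
                    mw_da: float, mols_per_asu: int) -> float:
    """Return fractional solvent content via the Matthews relation
    V_solv = 1 - 1.230 / V_M, with V_M in A^3/Da."""
    v_m = cell_volume_A3 / (asu_per_cell * mols_per_asu * mw_da)
    return 1.0 - 1.230 / v_m

if __name__ == "__main__":
    CELL_VOLUME = 5.0e5    # A^3, hypothetical
    ASU_PER_CELL = 6       # e.g. for a P3(2)21-type cell; hypothetical here
    MW = 20000.0           # Da, approximate

    for n in (1, 2):
        print(f"{n} molecule(s)/ASU: solvent ~{solvent_content(CELL_VOLUME, ASU_PER_CELL, MW, n):.0%}")
```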
Amino acid sequences for TamB homologues were identified using an HMMER search against the rp15 database, with TamB from E. coli as the query sequence and an e-value cut-off of 1e-30. Sequences identified were triaged for those within +/- 500 amino acids of the length of TamB from E. coli and aligned using ClustalX. In order to introduce single amino acid mutations into the TamB963-1138 region of tamB in pTamB, the whole-plasmid PCR mutagenesis method was utilised. A reaction was assembled in 50 μl H2O containing: 2.5 U PfuTurbo polymerase, 5 μl 10 x Pfu reaction buffer, 125 ng each of forward and reverse primers, 50 ng pTamB DNA and 1 μl 10 mM dNTP mix. The following forward primers were utilised for each mutation, with the reverse complement of the listed sequence used for the reverse primer: Leu 1049 to Glu, Gly 1073 to Ser, Gly 1073 to Glu, Ile 1102 to Arg. The reaction mixture was subjected to the following thermocycling regime: 1 x 95°C for 30 seconds, 18 x. 1 μl of DpnI was then added to the reaction, which was incubated at 37°C for 1 hour. The reaction mixture was then transformed into E. coli DH5α and plated onto LB agar containing 30 μg/ml chloramphenicol. Plasmid DNA was extracted from resultant colonies and sequenced to confirm that the desired mutation, and no other mutations, were present. Chemically competent E. coli DH5α were prepared as follows. Saturated overnight cultures were diluted 1:50 into fresh 30 mL LB, supplemented with appropriate antibiotics, and incubated until mid-log phase. The culture was chilled on ice for 30 min, then subjected to centrifugation and resuspended in 4.5 mL ice-cold 0.1 M CaCl2. The suspension was chilled on ice for a further 30 min, centrifuged as before and resuspended in 150 μL 0.1 M CaCl2. Following a 2-hour incubation on ice, 75 μL aliquots were snap frozen and stored at -80°C. Cells were thawed on ice and incubated with 20-50 ng plasmid DNA for 40 min on ice. Cells were heat shocked at 42°C for 45 s, then incubated on ice for 2 min before 250 μL LB media was added and cells were allowed to recover for 1 hour. Samples were then spread-plated onto LB agar containing appropriate antibiotics, and following a 24-hour incubation at 37°C, transformants were selected for subsequent analyses. For electroporation, saturated overnight cultures were diluted 1:50 into fresh 30 mL LB, supplemented with appropriate antibiotics, and incubated until mid-log phase. The culture was subjected to four rounds of centrifugation, followed by resuspension in increasingly smaller volumes of 10% v/v glycerol: 12 mL, 6 mL, 3 mL, 0.3 mL. Cells were briefly incubated on ice with 20-50 ng plasmid DNA and then transferred to a chilled electroporation cuvette. Samples were electroporated and immediately transferred to 250 μL LB and allowed to recover for 1 hour. Transformants were then selected for on solid media, supplemented with appropriate antibiotics, after a 24-hour incubation at 37°C. For membrane isolation, saturated overnight cultures were diluted 1:100 into fresh 50 mL LB, supplemented with appropriate antibiotics, and incubated until the optical density at 600 nm was between 0.8 and 1.2. The culture was subjected to centrifugation and then resuspended in 10 mL sonication buffer. Samples were lysed by sonication and the lysate was subjected to centrifugation to remove unbroken cells. The supernatant was then subjected to centrifugation, and the membrane pellet was resuspended in 1 mL SEM buffer. Membranes were snap frozen in liquid nitrogen and stored at -80°C. Statistical methods were not utilised in the analysis of the significance of data in this study. The coordinates and structure factors for the crystal structure of TamB963-1138 have been deposited in the PDB under accession number 5VTG.
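Since the reverse mutagenesis primers described above are simply the reverse complements of the forward primers, a short helper covers their generation. This is an illustrative sketch only; the primer shown is hypothetical and is not one of the actual mutagenesis primers.

```python
# Generate the reverse mutagenesis primer as the reverse complement of a
# forward primer, as described for the whole-plasmid PCR mutagenesis above.

COMPLEMENT = str.maketrans("ACGTacgt", "TGCAtgca")

def reverse_complement(primer: str) -> str:
    """Return the reverse complement of a DNA primer sequence."""
    return primer.translate(COMPLEMENT)[::-1]

if __name__ == "__main__":
    forward = "GATCTGGAACGTATCGCTAAAGC"  # hypothetical forward primer
    print("forward:", forward)
    print("reverse:", reverse_complement(forward))
```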
R.G., C.J.S., I.J., D.W., and K.M. conceived and designed the experiments; R.G., C.J.S., G.V., and I.J. performed the experiments; R.G., C.J.S., G.V., I.J., D.W., and K.M. analyzed the data; R.G., T.L., and D.W. contributed reagents/materials/analysis tools; R.G., C.J.S., T.L., I.J., and D.W. wrote the paper.
The translocation and assembly module (TAM) plays a role in the transport and insertion of proteins into the bacterial outer membrane. TamB, a component of this system spans the periplasmic space to engage with its partner protein TamA. Despite efforts to characterize the TAM, the structure and mechanism of action of TamB remained enigmatic. Here we present the crystal structure of TamB amino acids 963–1,138. This region represents half of the conserved DUF490 domain, the defining feature of TamB. TamB963-1138 consists of a concave, taco-shaped β sheet with a hydrophobic interior. This β taco structure is of dimensions capable of accommodating and shielding the hydrophobic side of an amphipathic β strand, potentially allowing TamB to chaperone nascent membrane proteins from the aqueous environment. In addition, sequence analysis suggests that the structure of TamB963-1138 is shared by a large portion of TamB. This architecture could allow TamB to act as a conduit for membrane proteins. In this work Josts et al. provide structural insight into the bacterial β barrel assembly protein, TamB. This structure suggests that TamB performs its function via a deep hydrophobic groove, capable of accommodating hydrophobic β strands.
57
Whole genome sequencing uses for foodborne contamination and compliance: Discovery of an emerging contamination event in an ice cream facility using whole genome sequencing
Whole genome sequencing has been used to provide detailed characterization of foodborne pathogens. Genomes comprising diverse groups of pathogenic genera and species, including Listeria monocytogenes, E. coli, Salmonella, Campylobacter and Vibrio, have provided insight into the genetic make-up and relatedness of these pathogens. Numerous government agencies, industry, and academia have developed applications of WGS approaches in food safety, such as outbreak detection and characterization, source tracking, and determining the root cause of a contamination event. In this particular case study, the FDA GenomeTrakr database, PulseNet and the NCBI Pathogen Detection Portal were used to cluster and characterize low levels of sporadic illnesses caused by L. monocytogenes that had been documented in the United States. By fusing WGS with the GenomeTrakr database, a relationship emerged early between sequences from environmental swabs and those derived from clinical L. monocytogenes samples. This observation supported a potential relationship between illnesses and non-food contact environmental samples obtained as part of FDA inspection activities at this food manufacturing facility. This was an important lead, as traditional cluster detection methodologies based on PFGE did not identify these cases as an outbreak cluster, so they had not previously been investigated for a common vehicle prior to the linkage referred to here with the FDA samples. As WGS testing and additional follow-up questioning ensued, public health officials concluded that the illnesses were, in fact, related to exposure to the suspect ice cream eaten by ill consumers. The authorities reported this information to the food company, and the firm issued a voluntary recall notice for all of their ice cream products. This was followed by suspension of the facility's registration. In this case, strong genomic evidence mobilized the Office of Compliance early, documenting an example where application of WGS and the resultant sequence evidence derived from it provided clear and actionable data to support further investigation. WGS has been applied to the traceback of foodborne pathogens where insufficient genetic resolution bogged down existing subtyping tools. The primary application of using WGS for pathogen surveillance was to look for close matches between clinical and environmental isolates, although any WGS linkage can support an investigation and direct additional inquiry. In addition to finding similar "matches" in the database, another advantage of WGS data as the premier subtyping tool for foodborne pathogen traceability was the ability to reconstruct the evolutionary history within a cluster of clonal isolates, enabling the identification of a recent common ancestor or source of a contamination event. These phylogenetic cluster analyses are highly accurate and can be made readily available, often making the genomic linkage the first lead in an outbreak or compliance investigation, whether this links clinical isolates, environmental isolates or both. In an ideal pathogen surveillance network, these genetic linkages need to be analyzed, investigated and acted on rapidly. Additionally, WGS data diversity in the database should represent the real-world, global, microbial diversity and should include sequences from all countries currently gathering pathogen genomes. Such a database should be manifold in structure and comprise genome sequences from food, environmental and clinical sources.
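The genomic linkages described above ultimately reduce to pairwise SNP distances between isolates. The sketch below is a minimal illustration of that calculation on aligned core-genome sequences; the toy alignment is hypothetical, and real pipelines (such as the CFSAN SNP pipeline discussed later) work from mapped reads with filtering of high-density SNP regions and other quality controls not shown here.

```python
# Minimal sketch: pairwise SNP distances between isolates from an aligned
# set of core-genome sequences (toy data; real pipelines work from mapped
# reads with extensive filtering).
from itertools import combinations

def snp_distance(a: str, b: str) -> int:
    """Count positions at which two equal-length aligned sequences differ,
    ignoring ambiguous or gap characters."""
    valid = set("ACGT")
    return sum(1 for x, y in zip(a, b) if x in valid and y in valid and x != y)

isolates = {  # hypothetical aligned sequences
    "clinical_2013": "ACGTACGTACGTACGA",
    "clinical_2018": "ACGTACGTACGTACGA",
    "env_swab_2017": "ACGTACGAACGTACGA",
}

for (n1, s1), (n2, s2) in combinations(isolates.items(), 2):
    print(f"{n1} vs {n2}: {snp_distance(s1, s2)} SNPs")
```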
A contamination event discovered in the United States may involve foods that were traded on the other side of the world, as documented by a recent case study involving trade with Australia. A dearth of real-time WGS data sharing remains in some parts of the world, and despite several conspicuous reasons for this lack of open data availability, the concern remains that some contamination events likely will not be fully elucidated, thus allowing globally traded contaminated product to linger in the food supply chain for extended periods. Restated, the sharing of all WGS data in real-time globally enables timely access for analysis of the data by all relevant stakeholders engaged in food safety surveillance and mitigation. Moreover, the continued pooling of data in an open, common, curated source allows for refinement of geographical mapping of foodborne pathogen species on a global scale. Unfortunately, many countries still do not actively share genomic data or metadata, an act that clearly stifles growth of global open-source WGS databases and potentially limits insight into the sources of foodborne contamination and outbreaks on an international scale. Both the Food and Agriculture Organization and the World Health Organization are developing guidance documents on the value and barriers of WGS technologies for many countries, and the intrinsic value of sharing these data. As an example, the GenomeTrakr database is made publicly available in real-time in support of these lofty goals. The database is housed within the National Center for Biotechnology Information and its Pathogen Detection web tools, which provide WGS genomic linkages and phylogenetic trees daily, funded by NCBI. Additionally, the Centers for Disease Control and Prevention-led PulseNet network also uploads all SRA genomic data to NCBI, as does the United States Department of Agriculture's Food Safety and Inspection Service. Together these data comprise a one-health oriented database where clinical, veterinary, food and environmental WGS samples are combined to discover novel linkages among foodborne pathogens from various niches. Each of these governmental agencies releases minimal metadata for the isolates, including who, what, when and where. For example, NCBI biosample SAMN07702403 for strain FDA00012171 lists this isolate as an environmental swab from FL, with a collection date of 2017-08-30, sequenced from FDA lab 0973150-044-001. The federal laboratories also keep confidential data that is not made publicly available, such as firm names, specific locations of food manufacturers, and commercial details of ingredient sources for each specific food product. Confidential data often is shared among professional public health agencies in order to solve outbreaks. The Listeria monocytogenes ice cream case study presented here documents the value of intervention, even when only a few isolates match genomically and/or are observed over a long period of time. It also highlights the critical need to conduct WGS testing on all clinical and environmentally derived isolates of Listeria from infected and ill patients in real-time, as these data have the potential to provide critical evidence in support of compliance investigation and regulatory response. Minimal methods are provided for this case study to document how FDA surveillance identifies several ways that WGS improves actionable outcomes for public health and compliance in cases involving Listeria monocytogenes contamination. More detailed methods have been published for several Listeria cases. In late August 2017 FDA conducted environmental sampling
inside an ice cream facility as part of an FDA sampling assignment designed to gather baseline environmental surveillance data and other inspection information on ice cream facilities. When investigators collect an environmental sample, this generally consists of one hundred subsamples across zones 1 and 2. At each facility roughly 200 environmental swabs are collected and assessed for the presence of foodborne pathogens. If positive isolates are observed, then often a second inspection is conducted, where roughly 300 to 400 additional environmental swabs are collected and assessed for the presence of foodborne pathogens. The timing and extent of additional inspections depends on the circumstances observed. Libraries of genomic DNA were prepared with the Nextera XT DNA Library Preparation Kit and subsequently sequenced on a MiSeq according to the manufacturer's instructions. Quality control and assessment of the genomic data followed FDA-validated and published methods. Paired-end reads were assembled using SPAdes software v 3.X with default settings, invoking the --careful and --cov-cutoff auto options. Annotations of assemblies were processed using NCBI's Prokaryotic Genome Annotation Pipeline. Fastq files obtained from the MiSeq run were used as input to the CFSAN SNP pipeline v 0.8. The CFSAN SNP pipeline was used to generate a SNP matrix, with high-density SNP regions filtered. GARLI v 2.01 was used to reconstruct the maximum likelihood phylogenetic tree. We searched for the best tree with 10 ML replicates and conducted bootstrap analysis with 1000 replicates. The Python program SumTrees was used to generate the ML phylogenetic tree with bootstrap values. The phylogenetic relationships among the strains, in comparison with others isolated worldwide, and the SNP cluster nomenclature were checked at the NCBI Pathogen Detection URL. Identification of acquired antimicrobial resistance genes and plasmids within the outbreak isolates was performed using ResFinder v3.0 and PlasmidFinder v1.3, respectively. As an aside, we believe that general food microbiologists will greatly benefit from a thorough understanding of precisely how these powerful new genomic tools are being used. Part of learning a new field is also learning the language. Detailed glossaries are readily available online and provide definitions for molecular biology, genomics and phylogenetics terminology; see https://www.ncbi.nlm.nih.gov/projects/genome/glossary.shtml and https://ucmp.berkeley.edu/glossary/gloss1phylo.html as examples. Listeria monocytogenes derived from five environmental subsamples from FDA's 2017 sample 0973150, and 2018 samples FDA00013579 and FDA00013578, are genetically related based on SNP distance and a monophyletic relationship in the resultant phylogenetic tree. Three subsamples from FDA's 2017 sample 0973150 and 2018 sample FDA00013577 formed a second distinct cluster and were determined to be the same strain as three CDC-sequenced clinical isolates and one clinical isolate from 2018, based on SNP distance and a monophyletic relationship with perfect bootstrap support. The NCBI Pathogen Detection data analysis pipeline clusters isolates that are between 0 and 50 SNP differences apart. These strains are distinct and only represented by these 7 isolates out of ~21,000 L. monocytogenes isolates in the NCBI Pathogen database at the time of analysis. These strains had not been isolated previously from food or environmental samples from any other firm for which WGS data is available.
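To illustrate the 0–50 SNP clustering rule mentioned above, the sketch below groups isolates by single-linkage clustering on a pairwise SNP distance matrix. The distances are hypothetical, and this is a simplification of the NCBI Pathogen Detection pipeline, which operates on far larger datasets with additional quality filters.

```python
# Minimal sketch: single-linkage clustering of isolates whose pairwise SNP
# distance is within a threshold (50 SNPs here, mirroring the NCBI rule).
# Distances below are hypothetical.

THRESHOLD = 50

distances = {
    ("env_2017_a", "env_2018_b"): 4,
    ("env_2017_a", "clinical_2013"): 12,
    ("env_2018_b", "clinical_2013"): 14,
    ("env_2017_a", "unrelated_isolate"): 480,
    ("env_2018_b", "unrelated_isolate"): 475,
    ("clinical_2013", "unrelated_isolate"): 470,
}

def single_linkage(dist: dict, threshold: int) -> list:
    """Merge isolates into clusters whenever any pair is <= threshold apart."""
    clusters = [{name} for name in {n for pair in dist for n in pair}]
    for (a, b), d in dist.items():
        if d <= threshold:
            ca = next(c for c in clusters if a in c)
            cb = next(c for c in clusters if b in c)
            if ca is not cb:
                ca |= cb
                clusters.remove(cb)
    return clusters

print(single_linkage(distances, THRESHOLD))
```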
Two strains of L. monocytogenes isolated from environmental samples collected by FDA investigators during an ice cream facility inspection in 2018 were determined to be genetically related to strains isolated from the 2017 inspection of the same facility, indicating the presence of multiple resident L. monocytogenes strains. Other strains were determined to be genetically related to clinical isolates from 2013 and 2018, indicating that this strain of L. monocytogenes can cause illness and has potentially been resident since at least 2013. The WGS findings strongly predicted that an epidemiological link would be found, and this was confirmed after exposure findings were completed that pointed to these contaminated ice cream products as the causative agent for the three clinical illnesses reported in 2013 and 2018. FDA, CDC and the Florida Department of Health discussed these findings with the firm, and the firm issued a voluntary recall notice for all ice cream products; this was then followed by FDA suspending the firm's registration. For a detailed description of FDA safety alerts and advisories see. The phylogenetic results also are available at the NCBI Pathogen Detection web site by searching for the NCBI clusters PDS000004821.11 and PDS000025188.7. This case study shows that low levels of sporadic L. monocytogenes contamination induced illnesses that were linked by WGS to predict a common source and root cause. With application of WGS, a genomic linkage to a food manufacturing location was quickly and convincingly established, allowing FDA's compliance officials to act rapidly. This is not the first instance of the use of WGS to identify contamination events caused by very low levels of sporadic contamination persisting in a food facility, nor is it the first reporting of L. monocytogenes in an ice cream product. PulseNet has recently replaced PFGE with WGS as the primary molecular surveillance tool for Listeria, and other foodborne pathogens such as enterohemorrhagic E.
coli and Salmonella are soon to follow.It is notable that WGS evidence has improved most contamination investigations as some PFGE matches may not trigger cluster investigation since related cases spread out over several months or years and often do not rise above the background illness.With the adoption of WGS analysis for all clinical, food and environmental isolates, the potential for more refined and accurate cluster detection now exists.This approach, which integrates WGS into investigatory workflow will only further strengthen compliance efforts as more data are shared in real-time and globally.Several bottlenecks to the investigatory process persist as part of current compliance and outbreak investigational workflow.Such checkpoints in the process include facility inspections, epidemiological food exposure determinations with statistical confidence, and record review and auditing all of which are slower and more interactive processes requiring investigators on the ground, patient questionnaires, and/or follow up additional inspections and laboratory analyses.WGS-based phylogenetic clustering provides a new and valuable step in this workflow because of the rapidity and accuracy of the resultant genetic linkages.Additional methods have been proposed to speed up investigations such as predicting zoonotic source attribution using machine learning from WGS data.WGS provided detailed information and an increased degree of certainty in identifying the source of this sporadic contamination cluster.WGS sequencing mitigated the associated compliance investigation substantially."Indeed, its specific role augmented FDA's decision-making processes in a series of observations including supporting the thesis that these illnesses were a cluster and linked to specific L. monocytogenes isolates found in an ice cream facility – likely source of this contamination event.It is noteworthy that traditional epidemiology, after exposure data was collected and analyzed, further supported the relationship between the illness isolates and isolates from the processing facility.Arguably, this particular event is a poignant example of where WGS has aided in solving a food safety concern that may not have been solved previously.The continued adoption of WGS applications makes it far more likely that contamination sources associated with sporadic foodborne illnesses will now be able to be identified.FDA and their regulatory offices now include genomic evidence when disclosing compliance investigation details to firms.With this additional information, companies can determine whether a voluntary recall notice is appropriate and can also act to improve preventative controls that may have broken down in production processes.Voluntary recall is an important method to speed up the removal of contaminated product and quickly reduce the public health burden during a contamination event.The GenomeTrakr and PulseNet databases further leverage WGS technology in an open-access platform which enables more rapid identification of the root sources of foodborne illness and, in turn, allows for faster public health response and reduced numbers of illnesses.WGS has been fully deployed across FDA field laboratories, and this detailed genomic approach is now being applied to all FDA-derived isolates of Salmonella, E. 
coli, Campylobacter and Listeria in real-time as they are collected from food and environmental sources. To this end, FDA WGS analysts welcome legitimate scientific criticism to improve their analytical methods, but to be convincing, such criticism should include transparent side-by-side analyses and results that compare methods clustering the same genomic dataset. As an example, Nadon et al. recently reported several weaknesses of a SNP-based analytic approach, yet scientific evidence supporting these claims remains scant. On the contrary, FDA's GenomeTrakr workflow ascribes several beneficial attributes to its own SNP-based validated pipeline, including the ability to provide for: 1) stable strain nomenclature; 2) international standardization; 3) curation; and 4) scalability. Public health professionals understand that the speed at which one can identify the cause of a contamination event has health consequences. This work identifies ways that WGS improves actionable outcomes for regulatory and public health entities. One way is to sequence more foodborne pathogens from food, environmental and clinical sources and to share them in real-time in publicly available databases. Global comparisons ensure that contamination due to international travel and trade is identified rapidly. Another way is when isolates are available from food facility inspections and the WGS data link these isolates to clinical isolates by a few nucleotide differences; in such cases, FDA's compliance officials may be activated early to launch regulatory actions. The ice cream Listeria case reported here, catalyzed by WGS, was an example of an early-warning match observed by FDA compliance investigators despite the sporadic nature of the illness and the length of time spanned. The continued success and support of WGS technology in food safety has made it indispensable for recognizing other contamination events. Indeed, it is now common practice to surveil for early matches between foods, environmental samples and clinical cases. In conjunction with this practice, FDA currently inspects roughly 800 facilities per year, with 200 environmental swabs collected at each inspection, for a total of 160,000 samples. From this effort, roughly one in four firms produce positive foodborne pathogen isolates, all of which are sequenced and uploaded into the GenomeTrakr database and NCBI Pathogen Detection, where they are shared publicly, including with PulseNet and USDA FSIS, as well as global public health laboratories. Depending on the phylogenetic clusters observed, FDA may act with further inspections, communications and advisories to coordinate public health response. The cumulative number of compliance actions/cases supported by WGS since 2014, in conjunction with the Office of Analytics and Outreach, is 370 cases, with 29 new cases in FY18 Q4, ending in October. Over the past four years the FDA Office of Compliance and Office of Regulatory Affairs have collected and sequenced 11,672 isolates gathered from contaminated facilities during routine surveillance, as well as outbreak and post-outbreak response. As all of these isolates have known geographic locations and provenance, our biostatisticians have been able to calculate several probabilities, including: P(same facility | distance ≤ d): what is the probability that 2 isolates were collected from the same food facility if their genetic distance is no more than d SNPs?; P(distance ≤ d | same facility): what is the probability that the genetic distance of 2 isolates is no more than d SNPs if both of them were collected from the same food facility?; and P(distance ≤ d | different facilities): what is the probability that the genetic distance of 2 isolates is no more than d SNPs if they were collected from different food facilities?
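These three quantities are related by Bayes' theorem, so the facility-match probability can be computed from the two conditional distance distributions and the prior probability that a random pair of isolates shares a facility. The sketch below is a toy illustration with made-up pair counts, not the published FDA analysis; see the Wang et al. manuscript cited below for the actual estimates and bias adjustments.

```python
# Toy Bayes calculation of P(same facility | SNP distance <= d) from
# hypothetical counts of isolate pairs. The real analysis (Wang et al.)
# uses far larger datasets and applies bias adjustments not shown here.

def p_same_facility_given_close(
        n_same_close: int, n_same_total: int,
        n_diff_close: int, n_diff_total: int) -> float:
    """P(same facility | distance <= d) via Bayes' theorem from pair counts."""
    p_close_given_same = n_same_close / n_same_total
    p_close_given_diff = n_diff_close / n_diff_total
    p_same = n_same_total / (n_same_total + n_diff_total)   # prior
    p_diff = 1.0 - p_same
    numerator = p_close_given_same * p_same
    denominator = numerator + p_close_given_diff * p_diff
    return numerator / denominator

# Hypothetical pair counts: 900 of 1,000 same-facility pairs fall within d SNPs,
# versus 50 of 99,000 different-facility pairs.
print(round(p_same_facility_given_close(900, 1000, 50, 99000), 3))
```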
The details of this analytical work are published and include two data sets, one for Salmonella and one for Listeria, comprising isolates sequenced from our freezer collections as well as recent samplings, 04/04/1999 to 07/24/2017. In the Wang et al. manuscript, the authors clearly define the parameters in the probability equations, and clearly state how their hypothesis testing is structured. For results after bias adjustment, the facility-match probability at a SNP distance of 0 was 0.91 for Salmonella and 1.00 for Listeria. The facility-match probability decreases with increasing SNP distance on a continuum, with the probability distributions varying between the two species. The take-home message is that when SNP distances are low there is a high probability that the isolates came from the same facility. These foodborne pathogens are evolving uniquely in the facilities in which they reside, allowing FDA to leverage this bacterial characteristic for source tracking and root cause analysis. Closely related isolates generally originate from the same facility. Facilities with multiple positive findings often had multiple strains. This ice cream case study documents the use of WGS as a tool for food safety and compliance; such case reports are rare. Food facilities are faced with infection control problems similar to those that hospitals and hospital networks are also dealing with, and for which WGS could provide solutions, an approach termed "precision epidemiology". The detailed metadata associated with isolates, and the clusters into which they group, also support efforts to improve risk assessment and risk management. This analysis provides further evidence that the high resolution and greater certainty of linkage from WGS, by better informing source attribution and root cause analyses, greatly enables government regulators to aid in decisions regarding removing contaminants from the food supply. The added value of a distributed network of desktop sequencers uploading WGS data from all contaminants available from the inspections of food facilities was documented, as was the need for the regular inspection of high-risk commodities and the associated facilities that produce them. This study documented the need for a more rapid process that later included more automation and integration, which is now part of the current methods supported by the GenomeTrakr and NCBI Pathogen Detection tools. As speed and action are needed to reduce the impact and public health consequences of any contamination event, the value of network support for both data collection and bioinformatic analysis was obvious. The NCBI Pathogen Detection web site currently provides automated phylogenetic analysis from validated data-analysis pipelines to regularly cluster all incoming data. This is further confirmed by FDA's validated CFSAN SNP pipeline. The NCBI systems are designed to identify all clusters of 50 SNPs or less. Careful screening of all new genomes for phylogenetic clusters 0–20 SNPs away is also regularly conducted, as is the ability to look for any cluster that includes at least one food or environmental isolate together with at least one clinical isolate. Because of the compliance work documented in this study, FDA bioinformaticists now watch for any new clinical or veterinary isolates that match FDA isolate genomes, to determine whether additional investigation is
warranted.Another strength of making the genomic data public available is that it allows food production and testing companies the ability to download all GenomeTrakr data and validated software locally so that they can identify potential issues and minimize the impact to their brand without a need to upload their data to the public database.Therefore, companies can take advantage directly of the data for the good of public health and their customers.Several companies and third-party providers have already recreated the GenomeTrakr and NCBI Pathogen Detection data and tools thus permitting them to apply these important data directly toward augmenting their own preventive controls programs.FDA also provides the data publicly to further foster innovation and encourage more proactive stances among food industry partners.Several diagnostic companies have used these genomic data already to design better tests for enteric foodborne pathogens.The economic savings gained from the adoption of WGS methods has been published and suggests that Canada, as one example, could be saving from $5,000,000 to $90,000,000 USD annually in controlling and responding to Salmonella-derived food safety concerns alone.If one extrapolates the Canadian study to the United States costs for the burden of foodborne illnesses then the savings annually would be estimated at billions of US dollars across the food safety sector.A more detailed economic estimate must be assessed to determine the exact figure, but the idea is clear that adoption of WGS both nationally and internationally could provide significant savings to public health, the burden of foodborne illness, and the costs associated therewith.In this case study and review, we have shown how regulatory authorities and public health officials in government can leverage the power of WGS to identify, characterize and remove contaminants that emerge in the farm to fork continuum.These Data sharing and compliance work flows involving WGS are mechanisms which are highly coordinated both within the agency as well as across the sectors of food, environmental and clinical public health including the CDC, FDA, NCBI, USDA-FSIS and state departments of health and agriculture laboratories.The FDA GenomeTrakr database and network are specific for food safety, while the NCBI Pathogen Detection site also is available for other human pathogen groups including nosocomial and veterinary pathogens, so these mechanisms can be utilized both by other countries and for additional pathogenic bacterial species.The potency, degeneracy, and agnostic nature of WGS data allows for academicians, industry scientists, and or government officials to document and leverage common and specific applications for this powerful tool to improve food safety.This then can be adopted across the globe and allow for rapid identification, linkage, and prevention for both compliance actions and outbreak detection of foodborne contamination events as they emerge globally in the food and feed supply.It is noteworthy that the crosscutting application of WGS data inherently provides a mechanism to expand global one health objectives for full molecular epidemiological integration and linkage across the different disparate public health sectors including those comprising clinical, foods, and environmental pathogens.When WGS is tied to global real-time data-sharing timely discovery of foodborne pathogens can be tracked at a global level.These growing efforts suggest that it is time for global comprehensive international 
discussions, perhaps through the World Health Organization and/or the Food and Agriculture Organization, for coordinated efforts among governments to wholly exploit the full potential of this powerful new tool for improved food safety and public health. No conflicts of interest, financial or otherwise, are declared by the authors. Trade names mentioned in the manuscript do not constitute an endorsement. Work was funded by internal research funding from the US Food and Drug Administration. This project was also supported in part by an appointment to the Research Participation Program at CFSAN, FDA, administered by the Oak Ridge Institute for Science and Education through an interagency agreement between the U.S. Department of Energy and FDA.
We review how FDA surveillance identifies several ways that whole genome sequencing (WGS)improves actionable outcomes for public health and compliance in a case involving Listeria monocytogenes contamination in an ice cream facility. In late August 2017 FDA conducted environmental sampling inside an ice cream facility. These isolates were sequenced and deposited into the GenomeTrakr databases. In September 2018 the Centers for Disease Control and Prevention contacted the Florida Department of Health after finding that the pathogen analyses of three clinical cases of listeriosis (two in 2013, one in 2018)were highly related to the aforementioned L. monocytogenes isolates collected from the ice cream facility. in 2017. FDA returned to the ice cream facility in late September 2018 and conducted further environmental sampling and again recovered L. monocytogenes from environmental subsamples that were genetically related to the clinical cases. A voluntary recall was issued to include all ice cream manufactured from August 2017 to October 2018. Subsequently, FDA suspended this food facility's registration. WGS results for L. monocytogenes found in the facility and from clinical samples clustered together by 0–31 single nucleotide polymorphisms (SNPs). The FDA worked together with the Centers for Disease Control and Prevention, as well as the Florida Department of Health, and the Florida Department of Agriculture and Consumer Services to recall all ice cream products produced by this facility. Our data suggests that when available isolates from food facility inspections are subject to whole genome sequencing and the subsequent sequence data point to linkages between these strains and recent clinical isolates (i.e., <20 nucleotide differences), compliance officials should take regulatory actions early to prevent further potential illness. The utility of WGS for applications related to enforcement of FDA compliance programs in the context of foodborne pathogens is reviewed.
58
Lipid derivatives activate GPR119 and trigger GLP-1 secretion in primary murine L-cells
Glucagon-like peptide-1 has multiple anti-diabetic effects, most notably enhancing insulin secretion, suppressing glucagon release and slowing gastric emptying. Current incretin-based therapies focus on preventing the breakdown of GLP-1 by dipeptidyl peptidase-IV or administering GLP-1 mimetics. The benefits of increasing endogenous GLP-1 secretion are currently under evaluation, supported by evidence that gastric bypass surgery improves glucose tolerance, at least in part through increased GLP-1 secretion. GPR119 is one of a number of candidate G-protein coupled receptors currently under investigation as a potential target for elevating GLP-1 and insulin release. GLP-1 is secreted from enteroendocrine L-cells in the intestinal epithelium, which express a variety of receptors and transporters capable of detecting ingested nutrients, including carbohydrates, lipids and proteins. GPR119 is a Gαs-coupled receptor, linked to the elevation of intracellular cAMP concentrations. Physiological GPR119 ligands include oleoylethanolamide, produced locally within tissues, and 2-oleoyl glycerol generated by luminal triacylglycerol digestion. OEA, as well as small molecule GPR119 agonists, increases GLP-1 and insulin release in rodent models. Indeed, GPR119 agonists were developed for human studies and taken into clinical trials in patients with type 2 diabetes, but were not found to improve metabolic control. The reasons for the poor translatability remain uncertain, and the physiological roles and therapeutic potential of GPR119 are still under investigation. The aim of this study was to investigate the physiological role of GPR119, and the signaling events triggered by GPR119 agonists in native murine L-cells. Using a fluorescent reporter providing a readout of cAMP concentrations in living native L-cells, we show that OEA, 2-OG, and a specific GPR119 agonist elevated cytoplasmic cAMP concentrations and enhanced GLP-1 secretion in primary cultured L-cells. We further present a new conditional knockout mouse model lacking GPR119 in proglucagon-expressing cell populations, including L-cells and alpha-cells. Oral oil tolerance tests in wild type and KO mice revealed that lipid-triggered plasma GLP-1 excursions are highly dependent on activation of GPR119 in L-cells. The flox Gpr119 mouse was created using the embryonic stem cell method by AstraZeneca Transgenics and Comparative Genetics, Mölndal, Sweden. Genotyping for Gpr119fl was performed using the primers: Forward, TGCAGAGAGGGAGCAAATATCAGG; Reverse, TCTTGTTGTAACAAGCCTTCCAGG. Conditional Gpr119 knockout mice were created by crossing homozygous Gpr119fl with heterozygous GLUCre12 mice, which express Cre recombinase under proglucagon promoter control. The mice were selectively bred to produce females homozygous, or males hemizygous, for Gpr119fl. All mice were on a C57BL/6 background. Details of the generation of Glu-Epac21 mice are described elsewhere. Briefly, this is a transgenic strain in which the cAMP FRET sensor, Epac2-camps, is expressed under mouse proglucagon promoter control, using the same starting BAC and technique as used previously to generate GLU-Venus mice. The L-cell specificity of Epac2-camps expression was confirmed by immunofluorescence staining of fixed intestinal tissue slices. Mice were kept in individually ventilated cages according to UK Home Office regulations and the 'Principles of laboratory animal care'. All procedures were approved by a local ethical review committee.
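For the genotyping primers listed above, a quick sanity check of primer length and approximate melting temperature can be scripted. The Wallace-rule estimate below (Tm ≈ 2·(A+T) + 4·(G+C)) is a rough approximation used purely for illustration and is not part of the published genotyping protocol.

```python
# Rough primer check for the Gpr119fl genotyping primers listed above.
# The Wallace rule is a crude Tm estimate used here purely for illustration.

PRIMERS = {
    "forward": "TGCAGAGAGGGAGCAAATATCAGG",
    "reverse": "TCTTGTTGTAACAAGCCTTCCAGG",
}

def wallace_tm(seq: str) -> int:
    """Approximate melting temperature: 2*(A+T) + 4*(G+C) degrees C."""
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2 * at + 4 * gc

for name, seq in PRIMERS.items():
    gc_pct = 100 * (seq.count("G") + seq.count("C")) // len(seq)
    print(f"{name}: {len(seq)} nt, ~{wallace_tm(seq)} degC, GC {gc_pct}%")
```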
Mice aged six weeks to six months were killed by cervical dislocation. Intestines were collected into ice-cold Leibovitz's L-15 medium and primary intestinal culture was performed as previously described. Duodenum/jejunum was taken as a 10 cm length distal to the pylorus; 10 cm of ileum was taken proximal to the ileocecal junction, and colon included all tissue distal to the caecum. Minced tissue was digested with 0.4 mg/ml collagenase XI in Dulbecco's Modified Eagle Medium containing 4.5 g/l glucose. Crypts were pelleted at 100 g for 3 min before resuspension in DMEM containing 10% fetal bovine serum, 2 mmol/l L-glutamine, 100 units/ml penicillin, and 0.1 mg/mL streptomycin. 10 μmol/l of the Rho-associated, coiled-coil containing protein kinase inhibitor Y27632 was added to small intestinal cultures. Cells were plated onto 24-well plates or glass-bottom dishes coated with a 1:100 dilution of Matrigel. Each 24-well culture plate contained crypt suspensions from a single mouse. Cultures were incubated at 37 °C and 5% CO2. Secretion studies were carried out 20–24 h post-plating, as described previously. Total GLP-1 concentrations were analyzed in test solutions and cell lysates by immunoassay. Hormone levels in the test solution and cell lysate were summed to give the total well content. GLP-1 secretion was expressed as a percentage of this total. Mixed male and female adult mice were used for the gavage study, and the groups did not differ significantly in body weight. GLP-1 levels were similar in the male and female mice, so data were combined. Mice were fasted overnight. An intragastric gavage of a 1:1 mix of olive:corn oils was administered. Control wild type mice received a gavage of phosphate buffered saline. 25 min later, mice were anaesthetized with isoflurane, and terminal blood samples were taken at 30 min by cardiac puncture. Plasma was separated immediately and frozen. Total GLP-1 in the plasma was measured by immunoassay. Single-cell measurements of cAMP levels were made using the Förster resonance energy transfer-based sensor Epac2-camps, using tissues from Glu-Epac21 mice maintained in mixed primary culture for 20-78 h. The use of Epac2-camps for monitoring cAMP concentrations in GLP-1 expressing cell lines has been described previously. Maximum time-averaged CFP/YFP ratios, representing intracellular cAMP levels, were determined at baseline and following test reagent application. Saline buffer was supplemented with 0.1% bovine serum albumin. Solutions for secretion studies included 10 mmol/l glucose and DMSO at a final concentration of 0.1%. Unless stated, all reagents were purchased from Sigma. AR231453 was synthesized by AstraZeneca. Data were analyzed using Microsoft Excel and GraphPad Prism v5.0, using Student's t-tests, ANOVA and post-hoc Bonferroni tests, as indicated in the figure legends. The contribution of GPR119 to GLP-1 secretion in vivo was investigated by the administration of a lipid gastric gavage to Gpr119-KO and WT mice. In WT mice, oil gavage triggered an approximate 3-fold elevation of plasma total GLP-1 concentrations at 30 min, compared with control mice gavaged with saline. GLP-1 after oil gavage was significantly lower in KO animals compared to WT controls, indicating that GPR119 in L-cells plays an important role in mediating the GLP-1 secretory response to ingested triglyceride. Colon cultures from Cre-negative/Gpr119fl and Cre-positive/Gpr119wt mice were treated with 10 μM forskolin plus 100 μmol/l 3-isobutyl-1-methylxanthine to raise cAMP, the small molecule GPR119 agonist AR231453, 200 μmol/l 2-oleoylglycerol, or 10 μmol/l oleoylethanolamide. No difference in
secretion was seen between these genotypes, indicating that neither the Cre-allele nor the Gpr119fl allele alone altered GLP-1 release.The same ligands were then applied to cultures from Gpr119-KO mice and Cre-negative/Gpr119fl mice.Secretion was measured separately from the colon, ileum, and duodenum/jejunum.AR231453 significantly increased GLP-1 release 4.6-fold from the colon and 2.9-fold from the ileum of WT mice; OEA significantly enhanced GLP-1 release by 3.9-fold in the colon and 2.1-fold in the duodenum/jejunum; 2-OG only increased secretion significantly in the colon.Secretory responses to all three GPR119 ligands were significantly impaired in colonic cultures from Gpr119-KO mice.In ileal cultures, the response to AR231453 was reduced in Gpr119-KO tissue, whereas in duodenal/jejunal cultures, the enhanced secretion triggered by OEA was not impaired by Gpr119-KO.cAMP concentrations in primary L-cells were imaged in primary cultures from mice expressing a FRET-based cAMP sensor in proglucagon-expressing cells.In colonic cultures, 2-OG, OEA and 100 nmol/l AR231453 triggered elevations of L-cell cAMP.Particularly in the upper intestine, we observed that not all cells responded to test agents, and cells were allocated as responders if they showed a change of the FRET signal of >2% above baseline.In the duodenum 50% of L-cells exhibited cAMP responses to AR231453, compared with 45% of L-cells in the ileum and 71% in the colon.The mean amplitude of the cAMP response to AR231453 was not significantly different across intestinal tissues.Following the de-orphanization of GPR119, small molecules targeting this receptor were developed as potential new treatments for diabetes that would increase secretion from intestinal L-cells .Although subsequent trials have not yet demonstrated that metabolic improvements can be brought about by the use of GPR119 agonists in humans with type 2 diabetes , there is still a high level of academic and commercial interest in GPR119 as a potential drug target .Our results show that L-cell GPR119 is a critical component of the sensing mechanism responsible for GLP-1 responses to ingested lipid, and that L-cells in the distal intestine respond to GPR119 agonists with elevated cAMP and GLP-1 secretion.We show here that GPR119 ligands increase GLP-1 release from primary cultured ileal and colonic L-cells in a GPR119-dependent manner.Of the three GPR119 agonists tested, OEA and AR231453 were more effective than 2-OG.The magnitude of the secretory response triggered by the different GPR119 ligands increased progressively from the upper small intestine to the colon.Indeed, L-cell knockout of Gpr119 largely abolished responses to OEA, 2-OG and AR231453 in the colon.In the ileum, where the secretory response was smaller, only OEA and AR231453 raised secretion in WT tissues above that found in the Gpr119-KO, and in the duodenum/jejunum, none of the ligands had a greater effect in WT than KO cultures.While our results suggest that the small response to OEA in the duodenum/jejunum of WT tissue is independent of GPR119, we cannot exclude the possibility that the proportion of L-cells undergoing Cre-dependent GPR119 excision differed between tissues and that more residual L-cells expressed GPR119 in the upper intestine.Arguing against this idea, however, AR231453 had little effect on GLP-1 secretion in the WT duodenum/jejunum, and OEA has been reported to activate other pathways such as PPARα that might influence GLP-1 secretion even in the absence of Gpr119 .The GLU-Epac 
transgenic mouse enabled us to monitor cAMP responses to GPR119 ligands in individual primary cultured L-cells.Not all L-cells were found to be responsive to AR231453, suggesting there may be a subpopulation of L-cells that do not express functional GPR119.There was a tendency for smaller and less frequent cAMP responses to AR231453 in the small intestine compared with the colon, although this did not reach statistical significance.These results do, however, mirror the gradient of GLP-1 secretory responses in cultures from the different regions.In line with these findings, we also reported previously that Gpr119 expression appeared higher in colonic than small intestinal L-cells by qRT-PCR .Mice with targeted deletion of Gpr119 in L-cells exhibited a marked reduction of plasma GLP-1 levels after gastric oil gavage.This suggests that GPR119-dependent detection of luminally-generated 2-monoacylglycerols or locally-released OEA plays a major role in the post-prandial GLP-1 secretory response to orally ingested triglycerides.While long chain free fatty acids are also released during the luminal digestion of corn and olive oils, and are sensed by GPR119-independent pathways, likely involving GPR40 and GPR120 , our findings suggest that these pathways play a relatively minor role compared with GPR119 in mediating the GLP-1 secretory response to oral lipids.While our data support the development of GPR119 agonists to enhance GLP-1 secretion, the role of different intestinal regions in post-prandial physiology and as drug targets deserves further attention.
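The responder criterion used in this study (a change in the CFP/YFP FRET signal of >2% above baseline) and the resulting responder proportions lend themselves to a simple scripted classification. The sketch below is illustrative only, using hypothetical ratio values rather than the actual imaging data.

```python
# Classify L-cells as responders if the maximum time-averaged CFP/YFP ratio
# after agonist application exceeds baseline by more than 2%, as in the
# criterion described above. The values below are hypothetical.

RESPONSE_THRESHOLD = 0.02  # >2% above baseline

def is_responder(baseline_ratio: float, stimulated_ratio: float) -> bool:
    """Return True if the FRET ratio rose by more than 2% over baseline."""
    return (stimulated_ratio - baseline_ratio) / baseline_ratio > RESPONSE_THRESHOLD

cells = {  # hypothetical baseline and post-AR231453 CFP/YFP ratios
    "colon_cell_1": (1.00, 1.09),
    "colon_cell_2": (0.98, 1.00),
    "ileum_cell_1": (1.02, 1.03),
}

responders = {name: is_responder(b, s) for name, (b, s) in cells.items()}
fraction = sum(responders.values()) / len(responders)
print(responders)
print(f"responder fraction: {fraction:.0%}")
```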
Aims/hypothesis Glucagon-like peptide-1 (GLP-1) is an incretin hormone derived from proglucagon, which is released from intestinal L-cells and increases insulin secretion in a glucose dependent manner. GPR119 is a lipid derivative receptor present in L-cells, believed to play a role in the detection of dietary fat. This study aimed to characterize the responses of primary murine L-cells to GPR119 agonism and assess the importance of GPR119 for the detection of ingested lipid. Methods GLP-1 secretion was measured from murine primary cell cultures stimulated with a panel of GPR119 ligands. Plasma GLP-1 levels were measured in mice lacking GPR119 in proglucagon-expressing cells and controls after lipid gavage. Intracellular cAMP responses to GPR119 agonists were measured in single primary L-cells using transgenic mice expressing a cAMP FRET sensor driven by the proglucagon promoter. Results L-cell specific knockout of GPR119 dramatically decreased plasma GLP-1 levels after a lipid gavage. GPR119 ligands triggered GLP-1 secretion in a GPR119 dependent manner in primary epithelial cultures from the colon, but were less effective in the upper small intestine. GPR119 agonists elevated cAMP in ∼70% of colonic L-cells and 50% of small intestinal L-cells. Conclusions/interpretation GPR119 ligands strongly enhanced GLP-1 release from colonic cultures, reflecting the high proportion of colonic L-cells that exhibited cAMP responses to GPR119 agonists. Less GPR119-dependence could be demonstrated in the upper small intestine. In vivo, GPR119 in L-cells plays a key role in oral lipid-triggered GLP-1 secretion.
59
t-Distributed Stochastic Neighbor Embedding (t-SNE): A tool for eco-physiological transcriptomic analysis
Key to assessing the health of an ecosystem is to know the physiological states of its inhabitants.Physiology governs progressions through life cycles, adaptiveness to environment including ability to cope with stressors, and reproductive success.But the “physiological state” of an organism is not easily determined because it comprises a multitude of interacting chemical and physical processes governed by gene expression and gene regulatory networks operating simultaneously in multiple organs and tissues of the body.In addressing the problem at the gene-expression level, much of ecology, including biological oceanography, currently depends on identifying the condition of pre-selected biological processes using single-gene biomarkers for expression profiling of transcriptional physiological state.For example, gene expression studies on key zooplankton species such as Calanus finmarchicus, have typically focused on relative expression of target genes as biomarkers of physiological responsiveness.More recently a broader approach has been taken: two de novo reference transcriptomes have been assembled for this species using high-throughput Illumina sequencing of short RNA fragments.These references have been used to investigate global differences in gene expression as a function both of development and experimental treatment.In these studies, data analysis was focused on comparing pairs of samples, and downstream analysis examined differentially expressed genes between treatments.In some studies, the data were then used to identify a small number of genes to serve as biomarkers, thus reenlisting a targeted-gene approach.Because RNA-Seq produces millions of short sequences from RNA fragments that generate gene expression profiles for an entire small planktonic organism, these high dimensionality datasets can be difficult to interpret and compare across samples.The field of cell biology has gone beyond the search for biomarkers within transcriptomes, succeeding in using most of the information available in transcriptome-wide profiles analyzed “agnostically” without a priori assumptions about how samples might be categorized into groups for statistical comparisons.Such approaches produce a more complete, unbiased and nuanced determination of physiological state and/or cell type.In such approaches, profiles of the “N” genes an organism expresses are represented mathematically by an N-dimensional state-vector."Each axis in N-space corresponds to one of the genes, with its coordinate value corresponding to the gene's expression level.How similar in transcriptomic profile two organisms are may then be determined by assessing how “close” their two transcriptional state-vectors are in N-space.A collection of organisms with similar physiological states generates a cluster of state-vectors in N-space.The challenge then becomes how to recognize and interpret these clusters of N-vectors.Here we show the application and robustness of a technique termed “t-distributed Stochastic Neighbor Embedding,” or “t-SNE”.This state-of-the-art technique is being used increasingly for dimensionality-reduction of large datasets.It has not, to our knowledge, been applied to high-dimensional biological data in oceanography or marine biology.We demonstrate its use for guiding the investigation and interpretation of transcriptional data by showing its effectiveness when applied to previously-published well-analyzed field-collections and experiments on the calanoid copepods Calanus finmarchicus and Neocalanus 
flemingeri.Dimensionality reduction is a necessary step in the extraction of the most significant features from a complex set of expression profiles from different samples, treatments or origins that involve thousands of simultaneously-sampled genes.This consists of mapping the high-dimensional state-vectors onto a low-dimensional space without losing critical information on the relatedness of the component samples.Some standard methods for this in use in biological oceanography, among other fields, include principal component analysis and multidimensional scaling.Another related tool is hierarchical clustering, which is used for grouping samples according to similarity.Like other dimensionality-reduction methods, t-SNE generates a 2-dimensional visualization of sample interrelations that allows close similarities between samples to be identified by the relative location of mapped points.The methods all strive to retain in their mapping the proximity of similar samples while placing dissimilar samples at greater distances."However, because of t-SNE's nonlinearity and its ability to control the trade-off between local and global relationships among points, it usually produces more visually-compelling clusters when compared with the other methods.t-SNE can be readily applied to transcriptomic as well as other large high-dimensionality datasets.At the outset, however, we reiterate a caution frequently voiced in the literature, that as powerful as the technique is, it is important not to over-interpret the plots it generates.We will address some of these issues below.In this paper, we present no new research data, but rather we re-analyze previously published peer-reviewed results to demonstrate and validate the application of the t-SNE algorithm to these data.We test and evaluate t-SNE over a range of parameters, using transcriptomic data from multiple sources, different sequencing depths, and with the application of filters.We propose guidelines for optimizing the method to a particular dataset obtained from field samples or lab experiments on individuals or small numbers of zooplankton, and we demonstrate how the algorithm can lead to novel and robust interpretations of the exemplar datasets.RNA-Seq data available through Sequence Read Archive at the National Center for Biotechnology Information were used in the t-SNE analysis.These datasets are for two species of high-latitude copepods: Calanus finmarchicus and Neocalanus flemingeri, and detailed descriptions of the data and analyses have been published previously.The Calanus finmarchicus dataset included sequencing data from different developmental stages and for individuals incubated for two and five days under different experimental conditions.The data from the developmental stages were from either wild-caught from the Gulf of Maine or reared in the laboratory from eggs produced by wild-caught females.All C. finmarchicus samples contained multiple individuals ranging from three to 500.The Neocalanus flemingeri dataset consisted of pre-adults that were wild-caught over a 6-day period at six stations along the Seward Line and in Prince William Sound, Alaska.RNA-Seq was performed on individual copepods that had been collected using a vertical tow from 100 m to the surface with a CalVET net and preserved in RNAlater RNA Stabilization Reagent within two hours of collection.The robustness of t-SNE performance under different levels of sample transcriptional heterogeneity was tested by comparing runs on N. 
flemingeri samples, made up of single individuals, with runs on C. finmarchicus samples, each comprising multiple individuals. Fig. 1 diagrams the t-SNE workflow that we applied to the datasets. The Illumina RNA-Seq quality-filtered reads were mapped using Bowtie2 against a species-specific reference transcriptome. The C. finmarchicus reference contained 96,090 transcripts and was derived from a de novo assembly from the 2011 data set. The N. flemingeri reference contained 51,743 transcripts and was assembled by Trinity from a single individual. The resulting expression profiles were normalized across samples for transcript length and sequencing depth, being computed as the number of reads per kilobase of transcript length per million mapped reads (RPKM). After adding a pseudocount of 1 to the RPKM value for each transcript, the multi-dimensional dataset of relative expression for thousands of genes in each sample was log-transformed to bring the data closer to a normal distribution. The entire dataset, or a subset filtered for specific Gene Ontology (GO) terms for functional studies using annotated references, was then processed by the R package Rtsne. The results were plotted and, in some cases, subjected to further cluster analysis using DBSCAN. An example of the R code used is given in Supplementary methods S2; a minimal sketch of the same workflow is also provided at the end of this article. Dimensionality reduction methods aim to represent a high-dimensional data set X = {x1, x2, …, xN}, here consisting of the relative expression of several thousands of transcripts, by a set Y = {y1, y2, …, yN} of vectors yi in two or three dimensions that preserves much of the structure of the original data set and can be displayed as a scatterplot. A nonlinear dimensionality-reduction method, t-SNE seeks to minimize the divergence between the probability distribution, P, of pairwise similarities of the data in the high-dimensional space and the probability distribution, Q, of pairwise similarities of the corresponding low-dimensional points. The similarity between two high-dimensional data points, xi and xj, is based on their Euclidean distance. Joint probabilities pij that measure the pairwise similarity between the high-dimensional points are defined by symmetrizing the conditional probability that xi would have xj as its neighbor, if neighbors were determined in proportion to their probability density under a Gaussian distribution centered at xi. The variance of the Gaussian, σi², is set such that the perplexity of the conditional probability distribution, Perp(Pi) = 2^H(Pi), where H(Pi) is the Shannon entropy of the distribution Pi, equals a parameter value set by the user. Hence, the higher the density of points surrounding xi, the smaller the value of σi²; that is, the value of the perplexity parameter sets the number of effective neighbors of xi. Perplexity adjusts the trade-off between local and global inclusiveness, with low values favoring local geometries and high values more global ones. The pairwise similarities between corresponding low-dimensional points are computed using a normalized Student's t-distribution with 1 degree of freedom, which has heavier tails than the Gaussian. The heavy tails of the t-distribution allow moderate distances between points in the high-dimensional space to be modeled by much larger distances in the low-dimensional space, which in turn allows small distances to be better represented. The low-dimensional points are determined by finding points which minimize the Kullback-Leibler (KL) divergence between the joint probability distributions P and Q. This cost function emphasizes modeling high values of pij by high values of qij, that is, similar
objects in the high dimensional space by nearby points in the low dimensional space.Minimization is done using a gradient descent method with an adaptive learning rate scheme and a random initial solution Y = drawn from a normal distribution.The number of gradient descent iterations can be set and it must be set sufficiently high to stabilize the pattern of the resulting low dimensional points.We used a variant of t-SNE that utilizes a Barnes-Hut algorithm to approximate the t-SNE gradient and is implemented in R.The Barnes-Hut variant of t-SNE substantially speeds up the algorithm and allows t-SNE to be applied to much larger datasets that would be computationally intractable with the original t-SNE algorithm.A consequence of using this algorithm is the requirement that the perplexity parameter must be less than or equal to N/3, where N is the number of samples in the original dataset.In addition, the algorithm begins by running PCA to reduce the dimensions of the original data to 30.The resulting 30-dimensional representation is then reduced to two dimensions by t-SNE.This speeds up the computation with little change in the resulting 2-dimensional dataset Y.The t-SNE algorithm can be run without the initial PCA step.A consequence of the stochasticity of the t-SNE algorithm is that different runs give different results.These different realizations vary primarily in the orientation of the scatterplot within the plane and in the numeric values of the coordinate axes, but not in the grouping of the points within the scatterplots.Thus we follow the common practice of omitting labels from the coordinate axes.A standard technique for handling the stochasticity of t-SNE is to run the algorithm with the same parameter values multiple times and select the solution that gives the smallest KL divergence.However, we found that the variability of the KL divergence by iteration changed from run to run, so the run that gave the smallest KL divergence could be the run with the greatest variability.Hence, instead we selected a t-SNE run that captures the consensus of multiple runs."While a t-SNE plot provides a good visualization of the arrangement of the data points based on similarity, it isn't always clear how to divide these points into subsets or “clusters” such that the points within a cluster are more closely related to each other than those assigned to other clusters.We approached this problem by using a density-based clustering algorithm, DBSCAN.The idea underlying this approach is that for point z to belong to a cluster, a neighborhood of z of a given radius, ε, must contain at least a minimum number of points, that is, the density in the neighborhood has to exceed some threshold."The DBSCAN algorithm has the following distinctive properties: it doesn't require that the number of clusters be specified in advance; it allows clusters to have arbitrary and elongated shapes; it doesn't force all points to belong to a cluster and it requires only minimal knowledge of the input data to determine the parameter values.Two parameters need to be set for the algorithm: 1) Eps, the radius of the neighborhood of an interior point of a cluster, and 2) MinPts, the minimum number of points that must be contained in an Eps-neighborhood of an interior point.Together these parameters define the density threshold for a cluster.DBSCAN provides auxiliary subroutines that help in determining these input parameters.To choose a value of the parameter Epsfor a given MinPts = k, the distances from a point to its k nearest 
neighbors are computed for all N points, the resulting k*N distances are then ranked from smallest to largest, and plotted as a function of that rank, to produce a k-nearest neighbor distance plot.The Eps parameter can then be chosen to equal the distance that corresponds to the last “knee” in the plot.This choice for the Eps parameter works well when the data form dense clusters of points in a background of sparsely scattered “noise” points.Ester et al. indicate that for 2-dimensional data, improvement by setting k > 4 is minimal, while increasing computational cost.They suggest letting MinPts = 4 and using the distances of the 4 nearest neighbors of a point to determine Eps.Note, however, that since numerical distances in t-SNE coordinates are meaningless, it is the relative value of Eps that is of interest.We computed the Dunn index to compare the quality of the assignment of points to clusters for different values of Eps.The Dunn index is defined as the ratio of the minimal distance between points of different clusters to the maximum distance between points within a cluster.Hence, increasing the separation between different clusters or decreasing the diameters of the clusters increase the Dunn index.An optimal value of Eps is one that results in the partition of points into clusters with the largest Dunn index.We used the R package clusterCrit to compute the Dunn index.Initial testing involved varying perplexity values and the number of iterations in the Rtsne program in order to select the result that best represented the consensus.This qualitative approach was justified since t-SNE is a visualization tool, not one intended for in-depth quantitative analysis.When used with this in mind, it is a very powerful adjunct for guiding more rigorous down-stream investigations.The effect of changing these two parameters on a single set of samples is shown in Supplementary Figs. S1 and S2.The optimal perplexity setting – one that optimizes the clustering of data points – depends on the number of samples included in the analysis.Rtsne will not accept perplexity above 1/3 of the number of samples.Within broad limits, it changes primarily the density of the clusters but not their integrity.The parameter controlling the number of iterations must be set sufficiently high that the arrangement of points in the display is adequately stable with changes in that number.For the datasets presented here, we found that patterns were well stabilized by 50,000 iterations.However, the optimal number of iterations may differ depending on the nature of the data.There is also an interaction between perplexity and the number of iterations, so exploring a range for both parameters for an untested dataset is advisable."Finally, to allow reproducibility of runs, the seed for R's random number generator was always set to 42.Changing this seed, while keeping other parameters fixed generates plots that are similar in general layout, but differ in detail owing to the stochastic nature of the t-SNE algorithm.The effect of such stochastic differences between re-runs is shown in Supplementary Fig. 
S3.Unless otherwise stated, other parameters were set to the default values in Rtsne.The t-SNE algorithm, as well as other clustering algorithms can be applied agnostically, without regard to either sample or transcript identity.How sample points assort into clusters, if at all, allows preliminary determination by visual inspection of the multiplicity, sizes and variability of distinct transcriptional profiles potentially present in a high-dimensional dataset.However, its use can be extended to subsets of samples or transcripts, selected by criteria of particular interest, to determine how such “filtering” affects the clustering.We present two such applications.First, we show the effects of identifying samples within a plot according to source and then separating out a subset of interest and reapplying the algorithm, in order to eliminate the influence of samples not relevant to the specific analysis.Second, we demonstrate how transcriptional differences among samples can be related to function by restricting the algorithm to annotated transcripts included in a “Gene Ontology” term of interest.For the latter, we constructed GO-term filtered input files by finding all of the descendent terms from a higher-level term of interest, then extracting the RPKM values for all transcripts annotated with these terms.We provide exemplar scripts for such a process in Supplementary methods S3.Fig. 2 presents examples of how the t-SNE algorithm maps transcriptional states of samples treated as an agnostic set.Fig. 2A shows the plot for the 46 Calanus finmarchicus samples while Fig. 2B shows the plot for the 18 field samples of Neocalanus flemingeri.The clustering of sample points for both species is prima facie evidence that subsets of samples share similar transcriptional profiles and that these differ from the profiles characterizing the other clusters.In the examples shown, clusters determined visually number at least 6, for the Calanus set and three for Neocalanus.These unbiased initial plots identify multiple distinct transcriptional profiles, proxies for physiological state, represented in the two sets of samples, without indicating what specific gene-expression differences underlie these states.Clustering was robust, occurring over a 7-fold range in the number of mapped reads per sample, and equally apparent in samples consisting of single as well as multiple individuals.Since clustering is used as evidence of a shared physiological transcriptional state, it becomes important to objectively determine what constitutes a cluster.The density-based clustering algorithm DBSCAN described in Section 3.4 provides a formal method for this.Since the data included replicates consisting of three samples, we set the DBSCAN parameter for the minimum number of points in a cluster, MinPts, equal to 3.Even had there been more replicates, the results of Ester et al. would suggest not using a value above 4.Using MinPts = 3, we then generated the sorted 3-nearest neighbor distance plot for the C. finmarchicus data set as an example.The slope of the curve increased abruptly for a distance greater than 6; a knee in the resulting curve indicates a good initial choice for the Eps parameter.The clusters resulting from four possible choices for Eps 5, 7, 10 and 12 are shown in Fig. 
3B.As Eps is reduced, more clusters result, ranging from 4 to 6 in the panels shown.Both Eps = 5 and Eps = 7 resulted in 6 clusters, although for the smaller value of Eps four of the points were determined to be too isolated and were designated as noise points.The values of the Dunn index for the four Eps values were 0.30, 0.59, 0.37 and 0.65, respectively.Since a larger Dunn index value indicates a better separation of the data into clusters, this suggests that identifying either 4 or 6 distinct clusters in the t-SNE plot is an improvement over 5 clusters.A similar approach was used in the cluster analysis reported below.The evidence of groups of samples having similar transcriptional profiles within a diverse collection, and from each of two different species, when processed agnostically by the t-SNE algorithm, leads to the question of what underlying factors characterize the clusters.One feature is immediately apparent in our exemplar datasets when the samples are identified according to source, as shown in Figs. 4A and B for the C. finmarchicus and N. flemingeri samples, respectively.Here color and symbol-coding according to source has been added to the plots of Fig. 2A and B. Replicate and biologically similar samples map mostly to the same cluster, which is an indication that individuals from one source tend to share transcriptional physiological states.For the C. finmarchicus samples of Fig. 4A, Cluster I contained all but one of the adults from Day 2 of an experimental treatment; Cluster II contained the control adults and those from a later time point in the experiment; Cluster III contained the embryos; and Cluster V the early nauplii; different developmental stages, late nauplii and early copepodids accounted for all but one of the samples in Cluster IV, and late-stage copepodids Cluster VI.The “noise” group identified with Eps = 5 turns out to be a composite of one late-stage nauplius and three early-stage copepodids.Only a few samples from a given source are split between clusters.The field samples of N. flemingeri are less diverse than the samples from C. finmarchicus, deriving exclusively from one developmental stage.Despite their stage homogeneity, the sample points separated into three clusters.This was substantiated by application of the DBSCAN clustering algorithm followed by computing the Dunn index to determine the value of the Eps parameter that gave the optimal grouping of points, as indicated by the grey circles in Fig. 4B. Almost all samples from a given station mapped into the same cluster.The two stations in Prince William Sound generated Cluster I; two stations from the shelf region of the Gulf of Alaska combined to produce Cluster II; and two stations from farther offshore together formed Cluster III.This implies a minimum of three distinct transcriptional profiles characterizing the individuals of the three regions.The one exception was an individual from the GAK1 station, which was more similar to the PWS individuals than to the others from the nearshore shelf region.Comparisons with other dimensionality-reduction methods are shown in Supplementary Figs. S4, S5 and S6.Neither PCA nor MDS clustered the samples as well as t-SNE.Restricting the number of samples prior to t-SNE can produce more refined plots.As an example, the most prominent pattern in the clustering of the combined C finmarchicus samples in Fig. 
4A is, perhaps not surprisingly, the developmental stage of the copepod."An animal's physiology changes as it develops from embryo to adult, and this is reflected in the source-specific vector clusters.By excluding the experimental treatments from the original dataset, differences among the developmental stages stand out better, as shown in Fig. 4C. Cluster I disappears, but the remaining ones, as recognized by DBSCAN, remain.The occurrence of successive developmental stages in roughly neighboring positions in the plot is particularly noteworthy, as is the similarity between adult females and embryos.The effect of varying the input parameters on this dataset is shown in Supplementary Figs. S1–S3.The second clustering seen in the original t-SNE output for C. finmarchicus related to the experimental animals fed on a diet of the toxic dinoflagellate Alexandrium fundyense.In Fig. 4A, in addition to Cluster I, certain regions of both Clusters II and IV include a mixture of experimental-treatment points and controls.A rerun of t-SNE on just the experimental samples and their controls gives somewhat clearer segregation within each cluster, as shown in Fig. 4D.As in the original plot, the naupliar samples form a separate cluster, with controls segregated in one portion of the cluster and toxic-alga treated in the other.The most pronounced clustering for the adult females, as in the original plot, is the separation of the 48-hour treatment from both the controls and the 5-day treatment.The latter treatment clustered with the controls, but the cluster had internal structure with little overlap between controls and treatment.A single “outlier” in the 48-hour treatment also clustered with the control/5-day group.The analysis just described, while showing a strong correlation of transcriptomic profile with sample source provides no information on which particular genes are responsible for the similarities.The transcriptional state vector represented by a given point is derived from all expressed genes in the transcriptome.To extract more functional insight, the data supplied to the t-SNE algorithm can be pre-filtered to contain expression data restricted to pre-selected groups of genes.Both the appearance of new clusters and the disappearance of original ones can be informative.Such filters can be constructed in multiple ways.We will illustrate one that uses Gene Ontology terms.If separation into two or more clusters is observed when only genes from a selected GO term are included, it flags the corresponding function as a candidate for contributing to the transcriptional difference between samples in those clusters.Four cases are shown in Figs. 5 and 6 when different GO-term filters are applied to the datasets from the C. finmarchicus experiment on Alexandrium-exposure and the same filters to the field samples of N. flemingeri.In each of the C. finmarchicus/Alexandrium panels, there are, not surprisingly, at least two clusters, one for the adults and one for nauplii as in the developmental stage collection.The split into clusters by the experimental treatment that occurred in the unfiltered plot of the C. finmarchicus adults was retained under three of the filters: “response to stress” , “lipid metabolic process” and “protein metabolic process” .This is evidence that genes that annotated into these biological processes were involved in the response to the toxic alga for C. finmarchicus.A similar retention of the spatially-distinct clustering occurred for N. 
flemingeri under the response to stress filter.On the other hand, the filter using the GO term “detoxification” eliminated the split in both the adult C. finmarchicus experimental samples and that between the two shelf areas in the N. flemingeri samples.This last condensation was also seen for the lipid metabolic process and the protein metabolic process filters applied to N. flemingeri samples, thus deemphasizing the likely contributions of genes in those functional categories to the separation between inner and outer shelf stations.We have argued that the separation of sample points into several distinct clusters by the t-SNE algorithm is good evidence both of transcriptional similarity within clusters and differences among clusters.In what follows, we validate this by comparing the t-SNE clustering results with the differential gene expression analyses reported in the original publications.The strongest “signal” in the t-SNE analysis for C. finmarchicus was the separation of the developmental stages into stage-specific clusters, which implies distinct transcriptional profiles for the different stages.The cause undoubtedly lies in the fact that the developmental trajectory of any organism involves complex temporal sequences of gene expression patterns.The strength and inter-annual consistency of the clustering would be expected since the stage-to-stage developmental progression is independent of many environmental factors.This is consistent with mapping results in Lenz et al., who found that large numbers of reference transcripts failed to be expressed in any one specific developmental stage.Furthermore, a targeted analysis of the genes involved in lipid synthesis, showed stage-specific expression of genes such as diacylglycerol o-acetyl transferase 1.Such discrepancies in gene expression would contribute strongly to the separation of points by the t-SNE algorithm.The t-SNE plots had a tendency to place successive developmental stages in neighboring clusters, suggesting a measure of shared expression profiles.Several nauplii and early copepodids were even clustered together, despite having different body forms.A difference in cluster association of early nauplii between 2011 and 2012 may be related to differences in stage bias between the two years.Expression differences between these samples from the two years were also found in a target gene expression analysis focused on transcripts encoding enzymes in the amine biosynthetic pathways.The t-SNE result further validates these findings.More surprisingly, the embryo and the adult female clusters were consistently closer to each other than to the other developmental stages, suggesting a transcriptional similarity between them.This proximity was not as readily anticipated from previous and more limited gene expression studies.However, in retrospect, Lenz et al. noted that while the distribution of GO terms for the silent genes was for the most part equally represented among the different developmental stages, adults and embryos shared a greater representation in three GO terms than did the other stages.We illustrated the application of the t-SNE tool to experimental treatments that alter transcriptional profiles using data from Roncalli et al. 
on adult female Calanus finmarchicus fed on a diet of the toxic dinoflagellate Alexandrium fundyense.The experimental samples clustered separately from the controls for the acute phase and rejoined the control cluster after 5-days of treatment.Thus, the most prominent difference in gene expression occurred in the short term, with a return over time to transcriptional states closer to the control group.A single “outlier” in the 48-hour treatment rather close to the 5-day samples is of some interest, and was missed in the original analysis.Roncalli et al. found that a relatively large fraction of genes expressed by the adult females were differentially expressed with respect to controls in the 48-hour treatment.This is consistent with the separate clustering in the t-SNE plots.That fraction was smaller for both the 5-day treatment of the adult females and the 48-hour treatment of late nauplii, which would help explain the lack of separate clustering from controls in those groups.The broad t-SNE clustering thus is consistent with the in-depth functional analysis of Roncalli et al.Gene expression differences included a large number of differentially-expressed genes in the 48-hour samples that were associated with the cellular stress response."After 5-days' exposure, the response was characterized by a lower number of differentially-expressed genes, which is consistent with a return to cellular homeostasis.These results also explain the persistence of t-SNE clusters under a stress-response filter."Furthermore, lack of clustering when the “detoxification” filter is applied is consistent with Roncalli et al.'s conclusion that regulation of detoxification genes was not a significant response to the toxic diet.Transcriptomic technology has the potential to revolutionize biological oceanography provided that data can be analyzed and interpreted.For example, RNA-Seq of field-collected phytoplankton led to new hypotheses on niche separation between two diatom species, which were then tested experimentally.Transcriptional profiling of N. flemingeri pre-adults demonstrated how regional heterogeneity in resource availability is affecting the physiology of a copepod during preparation for diapause.The t-SNE plots from six field stations in the Gulf of Alaska and a bordering embayment illustrate the application of this technology in N. flemingeri.Three clusters comprising two stations each suggest regional differences in transcriptional physiology within the same developmental stage of a genetically mixed population of N. flemingeri.It is significant that at this global level the samples fell into only three profile categories despite originating from six stations spread over 300 km of distance.The large and consistent separation between PWS and the two offshore GAK stations described by Roncalli et al. is underscored in all t-SNE plots.The t-SNE plots using multiple filters also highlight the complexity of the gene expression patterns, as the relationship of the inshore GAK stations changes depending on GO term filter.Individuals from Prince William Sound showed up-regulation of transcripts involved in lipid biosynthesis, while in the Seward-line individuals up-regulation was found for genes involved in lipid catabolism and protein degradation.The occurrence of separate PWS and Seward-line t-SNE clusters using the “lipid metabolic process” GO filter was consistent with this.In addition, Roncalli et al. 
found differential expression of genes involved in response to stress, glutathione metabolism and protein metabolism.Application of the detoxification filter, a process involved in response to stress, in the t-SNE analysis separated individuals from PWS and the shelf regions into two clusters consistent with results obtained by functional analysis.Overall, the comparison demonstrates the application of t-SNE algorithm as a powerful tool for discrimination of transcriptional differences.The filtering approach offers possibilities for initial functional insights.The development of “designer” filters specific for known transcriptional response-patterns might be a fruitful direction for future refinements of this approach, especially as more data on ecophysiological responsiveness become available.The t-SNE approach, with or without the application of functional filters, provides a rapid assessment of transcriptional similarity among samples.Clusters become a basis for identifying “experimental groups” that can then be further analyzed to characterize transcriptional patterns and identify environmental correlates that contribute to observed differences.Thus t-SNE is an analysis tool that can focus the effort involved in down-stream analysis of differentially expressed genes and their function within a broader ecological context.So far, we have used published results as confirmatory evidence for the validity of t-SNE clustering in identifying similar gene-expression profiles.However, the t-SNE patterns also identified several anomalies that contribute new insights to the published data.An initial assessment of the data using the t-SNE tool might have led to additional or modified downstream analyses.In the Alexandrium experiment, the 48-hr time point showed differences in transcriptional response between the low-dose and high-dose treatments.However, the t-SNE shows two LD replicates clustering with the three HD replicates, while the third LD sample clustered with the controls/5-day samples.It suggests that the odd replicate was either farther along in its response to the toxic dinoflagellate, or was delayed in responding to it.The analysis by Roncalli et al., which categorized each station into a separate “group”, used a Generalized Linear Model to identify patterns of increased nutritional stress between Prince William Sound and out along the Seward Line.The t-SNE analysis in combination with DBSCAN underscores how application of this tool could have strengthened the statistical analysis by reducing the number of “groups” to three, while increasing biological replication.In contrast, PCA and MDS did not generate clear clusters, in particular with respect to the inner shelf stations, which were characterized by greater individual variability in gene expression.As we have shown, the t-SNE algorithm has proven to be quite effective in quickly identifying clusters of similar transcriptomic profiles in samples both from lab experiments and field collections.However, as noted above, there are limitations to the proper application of the tool.A list of limitations and cautionary notes have been summarized in a web publication by M. Wattenberg and F. 
Viégas.Several of these are relevant to the current application:While the distance between points in the 2D plots represents a measure of the distance between the points in N-space, there are trade-offs that require caution in interpretation.Thus the compactness of a cluster is not significant because the algorithm performs a density-equalization operation.Originally dense clusters of points tend to be expanded and dispersed clusters contracted by the algorithm.The distances between clusters gives a sense of global geometry, but it requires optimizing the setting of the “perplexity” parameter to develop.Three clusters that are unevenly spaced in the original data may become more evenly spaced by the algorithm.Structuring in the N-dimensional original data space can become distorted thereby.While the clusters are robust, the arrangement of clusters on the 2D plane is not necessarily informative and can switch around dramatically in different runs and with different parameters.Running the algorithm under multiple conditions of the controlling parameters can reduce ambiguity.Owing to its stochastic implementation, it yields somewhat different results each time it is run.Examples of this are given in the supplementary material.Although the clusters developed are usually consistent, this is not assured.Also, many iterations are required before the algorithm “settles” on a consistent stable pattern.Thus multiple runs are advisable, as well as checking pattern stability with different iteration lengths.Wattenberg & Viégas,point out that a “pinched” shape in the pattern of points may indicate an insufficient number of iterations.The “perplexity” parameter, as mentioned in Materials and Methods, governs the trade-off between local and global relationships among the vectors.For example, the replicates of a particular sample should be closely associated and might thus represent the minimum perplexity that needs testing.A perplexity setting greater than the number of points tends to yield a single cluster, which is in a broad sense correct, but hardly helpful.A very low perplexity places each point in its own cluster, which in a sense is correct also, but not useful.Beware of clustering artifacts from this source.The “right” perplexity depends on the desired level of global vs. 
local resolution.It is best resolved through multiple tests.An added complication arises in datasets containing clusters with large differences in size among them.Optimal perplexity may differ among such clusters.Metagenomic, metagenetic and metatranscriptomic approaches have been applied to bacteria, phytoplankton and zooplankton communities.Transcriptomics of individual species and communities is being developed as a tool to investigate physiological responses to experimental manipulations within an ecological context.Thus, research programs on plankton populations that are focused on periodic assessments of diversity and abundance can be expected to incorporate routine high-throughput sequencing of DNA and RNA of either target species or whole communities, which will require reduction of high-dimensional data.The t-SNE algorithm is a powerful tool for the initial parsing of big datasets prior to downstream bioinformatic and functional analyses.DKH, AMC and PHL conceived the study; MCC, DKH and AMC implemented and tested the t-SNE and DBSCAN applications, MCC, DKH, AMC, VR and PHL analyzed the data and evaluated the results and conclusions, DKH, AMC and PHL wrote the manuscript.All authors reviewed and approved the final manuscript.Data analyzed here were downloaded from the National Center of Biotechnology Information under Bioprojects PRNJA236528, PRNJA328961, PRNJA312028, PRNJA356331, PRNJA496596.Accession numbers to the RNA-Seq short sequence read data are listed in Supplementary Table S1.The de novo assemblies used as reference transcriptomes were downloaded from NCBI: Calanus finmarchicus and Neocalanus flemingeri.For the C. finmarchicus reference, the de novo assembly was reduced by including only a single isoform for each “comp” as described in Lenz et al.
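To make the workflow described above easier to reproduce, the following minimal R sketch strings together its main steps: log-transformation of an RPKM matrix, Barnes-Hut t-SNE via the Rtsne package, selection of Eps from a sorted k-nearest-neighbour distance plot, DBSCAN clustering and computation of the Dunn index. The package names (Rtsne, dbscan, clusterCrit) follow those cited in the text, but the input object rpkm, the perplexity cap, the iteration count and the Eps value are illustrative assumptions rather than the exact settings used in the original analyses (the authors' own scripts are in Supplementary methods S2 and S3).

# Minimal sketch of the t-SNE + DBSCAN workflow described above.
# Assumes 'rpkm' is a numeric matrix of RPKM values with one row per
# sample and one column per transcript (transcript IDs as column names).
library(Rtsne)        # Barnes-Hut t-SNE
library(dbscan)       # density-based clustering and kNN distance plot
library(clusterCrit)  # Dunn index

# 1. Add a pseudocount and log-transform the expression matrix.
expr <- log2(rpkm + 1)

# 2. Run t-SNE. Perplexity must not exceed one third of the number of
#    samples for the Barnes-Hut implementation; the seed is fixed for
#    reproducibility, as in the analyses reported here.
set.seed(42)
n_samples <- nrow(expr)
tsne_fit <- Rtsne(expr,
                  dims             = 2,
                  perplexity       = min(10, floor((n_samples - 1) / 3)),
                  max_iter         = 50000,
                  pca              = TRUE,   # initial PCA reduction step
                  check_duplicates = FALSE)
Y <- tsne_fit$Y                              # 2-D embedding, one row per sample

# 3. Choose Eps from the sorted k-nearest-neighbour distance plot
#    (look for the last "knee"), then run DBSCAN with MinPts = 4.
kNNdistplot(Y, k = 4)
eps_value <- 7                               # illustrative value read off the plot
clusters  <- dbscan(Y, eps = eps_value, minPts = 4)$cluster

# 4. Compare candidate Eps values with the Dunn index (higher is better);
#    DBSCAN "noise" points (label 0) are excluded from the computation.
keep <- clusters > 0
dunn <- intCriteria(Y[keep, , drop = FALSE],
                    as.integer(clusters[keep]), "Dunn")
print(dunn)

# 5. Plot the embedding coloured by cluster membership.
plot(Y, col = clusters + 1, pch = 19,
     xlab = "", ylab = "", main = "t-SNE embedding")

As discussed in Materials and Methods, the perplexity, the number of iterations and the random seed should be varied and the resulting layouts compared before any biological interpretation is attempted, since the algorithm is stochastic and the cluster arrangement, though usually stable, is not guaranteed to be identical between runs.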
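The GO-term filtering used before re-running t-SNE (as in Figs. 5 and 6) can be sketched in a similar way. The snippet below is an illustrative outline only: it assumes a two-column annotation table (go_annot) mapping transcript IDs to GO terms, and a character vector (go_of_interest) holding the parent GO term of interest together with all of its descendant terms, which in practice can be collected with a Bioconductor annotation package; neither object name comes from the original scripts in Supplementary methods S3.

# Minimal sketch of GO-term filtering prior to t-SNE.
# Assumes:
#   rpkm           - RPKM matrix, samples in rows, transcript IDs as column names
#   go_annot       - data.frame with columns 'transcript' and 'go_term', one row
#                    per transcript/GO-term pair (from the annotated reference)
#   go_of_interest - character vector with a parent GO term plus its descendants

# 1. Find the transcripts annotated with any of the selected GO terms.
hits <- unique(go_annot$transcript[go_annot$go_term %in% go_of_interest])

# 2. Restrict the expression matrix to those transcripts.
rpkm_filtered <- rpkm[, colnames(rpkm) %in% hits, drop = FALSE]

# 3. Transform and embed the filtered matrix exactly as for the full data set.
expr_filtered <- log2(rpkm_filtered + 1)
set.seed(42)
tsne_go <- Rtsne::Rtsne(expr_filtered,
                        perplexity       = min(10, floor((nrow(expr_filtered) - 1) / 3)),
                        max_iter         = 50000,
                        check_duplicates = FALSE)
plot(tsne_go$Y, pch = 19, xlab = "", ylab = "",
     main = "t-SNE restricted to one GO term")

Whether the clusters seen in the unfiltered plot persist or collapse under such a filter is what flags the corresponding biological process as a candidate contributor to the transcriptional differences between samples.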
High-throughput RNA sequencing (RNA-Seq) has transformed the ecophysiological assessment of individual plankton species and communities. However, the technology generates complex data consisting of millions of short-read sequences that can be difficult to analyze and interpret. New bioinformatics workflows are needed to guide experimentation, environmental sampling, and to develop and test hypotheses. One complexity-reducing tool that has been used successfully in other fields is “t-distributed Stochastic Neighbor Embedding” (t-SNE). Its application to transcriptomic data from marine pelagic and benthic systems has yet to be explored. The present study demonstrates an application for evaluating RNA-Seq data using previously published, conventionally analyzed studies on the copepods Calanus finmarchicus and Neocalanus flemingeri. In one application, gene expression profiles were compared among different developmental stages. In another, they were compared among experimental conditions. In a third, they were compared among environmental samples from different locations. The profile categories identified by t-SNE were validated by reference to published results using differential gene expression and Gene Ontology (GO) analyses. The analyses demonstrate how individual samples can be evaluated for differences in global gene expression, as well as differences in expression related to specific biological processes, such as lipid metabolism and responses to stress. As RNA-Seq data from plankton species and communities become more common, t-SNE analysis should provide a powerful tool for determining trends and classifying samples into groups with similar transcriptional physiology, independent of collection site or time.
60
Renewable transitions and the net energy from oil liquids: A scenarios study
The necessity of a global transition to renewable sources of energy production, or Renewable Transition (RT), now has a prominent place on the political agenda, as illustrated by the recent commitment of the G7 countries and the EU to a future sustainable and secure energy supply. But even before this commitment, recent years have seen a very active debate in energy policy research about the need for, and the feasibility of, the RT. In climate forums there is currently general agreement that, in order to avoid the most damaging effects of climate change and keep the global temperature within manageable limits, the RT must not be delayed any longer. In other words, although some scenarios consider the possibility of extracting enough fossil fuels to sustain economic growth and maintain the system much as it is today, the environmental and climatic impacts of continually increasing GHG emissions would have disastrous consequences in the future. On the other hand, the feasibility and pace of the RT have also been the subject of intense debate regarding the resources that would need to be deployed to achieve a 100% renewable global energy system. The debate has focused mainly on the amount of energy that could be produced from renewable sources, and on whether renewable energy can fulfill, by itself, present and future world energy demands given the inherent variability of renewable energy sources. The global renewable potential estimated by different studies ranges from a few terawatts to more than 250 TW, depending on the methodology used for the calculation. A second crucial question in this debate concerns the requirements in terms of available materials and fossil fuels. Previous literature has concluded that, in general, and except for some critical elements, the availability of the raw materials required for the implementation of the RT would not be a limiting constraint. However, it has been found that the full implementation of any RT would require a significant increase in raw material production, which could be a challenge for the mining industry if other economic or industrial sectors demanded additional material production. Altogether, the transition to a renewable energy production mix is not a matter of simple substitution, but the result of huge investments of capital, materials and energy. Following this vein, a subject that still needs to be studied in detail is how much energy would be available for fully implementing the RT during a period in which all or most fossil fuels will be phased out. Such a study has to take into account the rate of decline of the net energy available from current fossil fuels. To do so, two factors need to be further investigated and understood. The first concerns the amount of net energy that industrialized society would be able to extract from fossil fuels if only the geological constraints are taken into account; that is, how much fossil fuel net energy will we have within our reach? As illustrated below, the non-renewable nature of fossil fuels results in a continuous reduction of the amount of net energy available for discretionary uses. The second factor to be addressed, in connection with the reduction of the geophysically available fossil fuel net energy, is the pace of development of renewable resources that is required to balance such a decrease in fossil fuel energy production. In this study we do not consider all the fossil primary energy sources, but focus only on oil liquids because: i) oil is a key fuel for global transportation; and ii) this is the resource whose availability has been studied the most, and whose future production evolution, despite the well-known debate, has reached the largest consensus among the research community. At present the debate is no longer centered on whether curves with a maximum and a posterior depletion should be used to represent and forecast the behavior of oil liquids production, but on the assumptions behind the quantification of the Ultimately Recoverable Resources and on the specific strategies used to fit and forecast the production. The emissions associated with the geological depletion of conventional oil have been revised by Berg and Boland using recent updates of the remaining reserves estimations. The results indicate that, even if the resulting GHG concentration levels would be lower than previous IPCC forecasts, they would still surpass the critical threshold of 450 ppm when those reserves are used. The introduction of other fossil fuels as potential oil substitutes implies an increase in environmental concerns, as non-conventional oils and synthetic coal-to-liquids fuels could raise upstream greenhouse emissions significantly. Coal is more GHG-polluting, and its intensive development in the future as a substitute for oil would also have deep environmental implications. However, a careful analysis of its availability and of its climate and ecological impacts is required; in addition, coal production is subject to depletion as well, and forecasts indicate that reserves would run out fast. Natural gas has also been suggested as a valid resource to support future energy needs. On the other hand, natural gas presents environmental problems similar to those of oil liquids, mainly related to GHG emissions, and its depletion is not in a distant future either. A report by IRENA identifies five local pollutants and one global emission generated by fossil fuel and traditional bioenergy uses, which have effects on health and on agriculture. The costs derived from the health effects are not borne by the producer but by the public sector, and the effects on agriculture cannot be detected by the producers, but they cause a lower global crop productivity that can be quantified. In both cases the costs are externalized outside the production process. The total external costs estimated by IRENA for the base year 2010 amount to between 4.8% and 16.8% of global GDP. The wide range is a result of significant uncertainties in the costs associated with air pollution, as well as of the assumption about carbon prices. The same report concludes that a doubling of the present renewable share would reduce externalities by USD 1.2-4.2 trillion per year in comparison with current policies. However, markets are unable to translate these costs into correct pricing signals and, therefore, a transition away from oil would require active support from governments. Due to such sustainability issues, the assumption analyzed here is that, if there is a decline in the net energy coming from oil, and as a consequence global transport and many other economic sectors become compromised, electricity coming from Renewable Energy Sources should sustain the global net energy supply. The approach followed in this paper is first to estimate the time evolution of the net energy provided by the oil liquids, combining the production forecasts of the International Energy Agency (IEA) with projections of the Energy Return On energy Invested (EROI) of oil liquids. This choice is complemented with two models of Hubbert curves based on different estimations of the
Ultimately Recoverable Resource (URR). Three models for the tendency of the EROI decline during the coming decades have also been used. The net energy, En, is the difference between the energy produced, Ep, and the energy invested to obtain it, Ei: En = Ep − Ei. Expressing the net energy as a function of the EROI, ε = Ep/Ei, we obtain En = Ep(1 − 1/ε). So, for a given constant amount of energy produced, the net energy tends to zero as the EROI falls towards one. Unless otherwise stated, these expressions must be evaluated by accounting energy balances over a period of time not shorter than the full life-cycle of the system. Also note that the energy investment has to include all the energy costs of manufacturing the materials prior to the deployment of the system, the energy costs of the deployment itself, the energy costs associated with the system's operation and maintenance, and eventually the costs associated with the decommissioning of the system. This conventional view of the EROI is hence static and is adequate for dealing with systems which are stationary. However, this view can pose a serious problem when the EROI of a given energy source is not constant. For instance, two studies found that the EROI for oil production in the US is related to the level of production and to the level of effort over time. Thus, the EROI does not decline steadily in the short term, but both studies find an overall negative trend over decades. In this work we take, for the sake of simplicity, a long-term view of the EROI decline and assume, as a simplification, the steady decline already used in previous work. Due to the non-linear relationship between net energy and EROI, a falling EROI could pass unnoticed until its value gets close to 1, where further reductions would have significant impacts on the available net energy. The starting data to estimate the evolution in terms of gross energy come from the annual World Energy Outlook (WEO) reports issued by the International Energy Agency, which estimate the evolution of the global production of the different liquid hydrocarbons. The data for 2013 and the predicted oil production for 2013-2040 come from Table 3.6 of the more recent of the two WEO editions considered. These values have been combined with the data from the earlier edition for the years prior to 2013; in both cases the figures correspond to the IEA reference scenario. This scenario is taken as the reference for future oil primary production. As the categories of produced volumes of all liquid hydrocarbons do not completely correspond between the two editions, some homogenization has been required. One of the two editions includes a category called "Processing Gains" (PG), which is absent from the other. This corresponds to increases in the volume of liquid hydrocarbons after being processed in refineries. The refined fuels may have more energy than the input hydrocarbons because of the upgrading taking place in the refinery, which combines the oil with an additional input of natural gas. Notice, however, that according to the Second Law of Thermodynamics the sum of the energies of the oil and the employed gas is always greater than the energy of the refined products. We have thus decided to completely disregard this category, to avoid double accounting if oil and gas are considered separately, because in fact it does not represent a real increase in the energy of the input liquid hydrocarbons. In addition, one of the editions contains a category, "Enhanced Oil Recovery" (EOR), which was not present in the other
.We have considered the criterion, maybe too restrictive, that EOR is being employed exclusively in fields that are already in production, and so we accumulate this entry with that of “Fields already in production”.This simplification is debatable, not only because some fields experience EOR since the beginning of their production, but also because some fields starting to produce at present would probably be submitted to EOR within the 25-year time span of the IEA forecast.Fig. 1a and b shows clearly that the IEA accounting records a peak of conventional crude oil production some time around 2005.Moreover, the EIA data anticipates a second peak by 2015 if the fields yet to be developed ramp up their production, and a slight decline since 2015 onwards; conventional crude oil production would go from 70 Mb/d in 2005 to 65 Mb/d in 2040.The main drive of this decay is the decline of the conventional crude oil production from already existing fields, which is about 3% per annum during the whole period 2015–2040.With the inclusion of other sources of liquid hydrocarbons, the IEA estimates that, by 2040, the global production of liquid hydrocarbons would be, in volume, 100.7 Mb/d according to its reference scenario.It is worth noticing that the 100 Mb/d mark was also given for the end of the projected period of the central scenario; although in the case of the end of the projected period was 2035.The estimated production for 2035 in Ref. is very close, 99.8 Mb/d.This coincidence in the expected value of the total volumetric production in both WEOs, at about 100 Mb/d by 2035, is even more striking taking into account the different shapes of the curves derived from Refs. and .In Table 1 we summarize the differences between both scenarios.Table 1 shows that the figures published in the IEA scenarios do not correspond to free forecasts but to forecasts with pre-defined production targets.For instance, the large difference during the first years of the forecast in the category “Fields to be Developed” in Ref. is later compensated by a much slower increase by 2035, year at which scenario outpaces that category of by 7 Mb/d.This very large increase in the early years of with respect to those of cannot be explained by the two-year difference in the onset of the category.This large deviation at the beginning of the forecast period cannot be explained either by the assignment of EOR to the “Existing fields” category in Ref. ; because other attribution of EOR should increase “FTD” in Ref. 
even more with respect to 2012.Besides, the positive deviation in “Existing Fields” and the negative one in “FTD” by 2035 cannot be justified by a possible excessive attribution of EOR to “Existing fields”.It is also worth noticing that has a considerable stronger reliance on the evolution of LTO fields but there is not PG category in 2012.Even if we disregard the variations on the other categories, the observed differences in “Existing Fields”, “FTD”, “LTO” and “PG” are quite significant, but strikingly the difference between and totals are very similar, as shown in the last column of Table 1.It is hard to believe that during the two-year period going from 2012 to 2014 the production conditions have really changed so much but, surprisingly, lead to the same round figure of 100 Mb/d total produced volume in 2035.It is thus evident that a production goal has been fixed externally and that the categories are worked according to some loose constraints to attain the production goal.This implies that the prediction values by the IEA should be taken with a bit of caution, as they may be too optimistic about the future of the production of the different categories of liquid hydrocarbons.Indeed, these data do not introduce constraints based on geology or thermodynamics, but consider the energy necessary for a forecasted level of economic activity.The evolution in the production of liquid hydrocarbons shown in Fig. 1a and b refers to the volume of the oil production.To estimate the resulting gross energy, we need to translate the produced volumes into units of energy, because not all of the liquid fractions have the same amount of energy per unit volume.By doing so, we will obtain an estimate of the gross energy of the produced liquid hydrocarbons.All the conversion factors we will estimate for oil liquids to gross energy here are supposed constant along time, which follow our criteria of giving an optimistic calculation for the IEA forecast.To estimate the energy content of Natural Gas Liquids we assume that the fractions of ethane, propane and butane in the world NGL production are the same that the one observed in the US NGL production, i.e. 
41% ethane, 30% propane, 13% natural gasoline, 9% isobutane and 7% butane. The energy contents of these fractions are approximately 18.36 GJ/m3, 25.53 GJ/m3 and 28.62 GJ/m3. The enthalpy of combustion of propane gas, accounting for some losses in the products (for example, when the hot gases, including water vapour, exit a chimney), is −2043.455 kJ/mol, which is equivalent to 46.36 MJ/kg. The lower heating values of ethane, butane, isobutane and natural gasoline are 47.8, 45.75, 45.61 and 41.2 MJ/kg, respectively. The density of liquid propane at 25 °C is 493 kg/m3. Propane expands by about 1.5% per 10 °F (5.6 °C) of temperature increase; thus, liquid propane has a density of approximately 504 kg/m3 at 15.6 °C and 494 kg/m3 at 25 °C. We assume the following densities for ethane, propane, butane, isobutane and natural gasoline: 570, 494, 599, 599 and 711 kg/m3. Strictly speaking, ethane should not be considered an energy resource, since it is almost completely used to produce plastics, anti-freeze liquids and detergents. However, the IEA statistics regularly include this fraction of the NGLs in the concept "all liquids", and so we do in this work. Therefore, the energy projections discussed in this work can be considered optimistic estimations. Light Tight Oil (LTO) denotes light crude oil that has been trapped by a non-permeable, non-porous rock such as shale, and its energy content should not be much lower than that of conventional crude oil. The same energy content is assumed for synthetic oil coming from oil sands as for crude oil. We will assume the same energy content also for other non-conventional oils. According to the literature, the upgrading of bitumen to syncrude is made in two steps. A first partial upgrade produces a pipeline-quality crude of 20–25° API. The second upgrade produces a final product, which is similar to conventional oil. An alternative to the first step is to dilute the bitumen with natural gas liquids to produce "dilbit" of approximately 21.5° API. By assuming a final product of 32° API in all the cases, the oil density of upgraded bitumen is 865 kg/m3. The upgrading of kerogen is different from that of bitumen, but the final product is an oil of approximately 30° API, or 876 kg/m3. We assume that the mean density of upgraded unconventional oils is the average of the last two densities, i.e. 870.5 kg/m3. According to US Oil, the apparent density of a standard oil barrel of 159 L and 1 BOE of energy content is 845.5 kg/m3. Thus, the energy content of one barrel of unconventional oil is practically equal to that of a standard oil barrel. With these assumptions, the aggregate gross energy contribution of all liquids and their components as they evolve along time can be obtained, and they are represented in Fig. 2. We can observe how the total gross energy of oil liquids grows to 95.3 Mboe/day in 2040, with a decay of crude oil compensated by the categories "Fields to be Found" and FTD. We made an alternative estimation of the all-liquids evolution with the help of a Hubbert fit of historical gross energy production that uses an estimation of the Ultimately Recoverable Resource (URR). The historical data of production have been taken from The Shift Project data portal, which takes them from Ref. and from the US EIA historical statistics.
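Returning to the NGL figures above, a volumetric energy content for the NGL mix can be assembled with simple arithmetic: each fraction's lower heating value (MJ/kg) is multiplied by its assumed liquid density (kg/m3) and weighted by the quoted US production shares. The Python sketch below illustrates only the conversion logic, not the authors' exact calculation; treating the quoted percentages as volume fractions, and using 46.36 MJ/kg for propane, are assumptions made here for illustration.

```python
# Hedged sketch: volumetric energy content of an NGL mix from the figures
# quoted in the text. Treating the quoted shares as volume fractions is an
# assumption made here for illustration only.

# share of NGL, lower heating value (MJ/kg), liquid density (kg/m3)
ngl = {
    "ethane":           (0.41, 47.80, 570.0),
    "propane":          (0.30, 46.36, 494.0),
    "natural gasoline": (0.13, 41.20, 711.0),
    "isobutane":        (0.09, 45.61, 599.0),
    "butane":           (0.07, 45.75, 599.0),
}

weighted_gj_per_m3 = 0.0
for name, (share, lhv_mj_kg, rho_kg_m3) in ngl.items():
    # energy per cubic metre of this liquid fraction: MJ/kg * kg/m3 -> GJ/m3
    gj_per_m3 = lhv_mj_kg * rho_kg_m3 / 1000.0
    weighted_gj_per_m3 += share * gj_per_m3
    print(f"{name:17s}: {gj_per_m3:5.1f} GJ/m3")

print(f"share-weighted NGL mix: {weighted_gj_per_m3:.1f} GJ/m3")

# For comparison, one standard barrel of oil equivalent holds about 6.118 GJ
# in a volume of 0.159 m3, i.e. roughly 38.5 GJ/m3, so the NGL mix carries
# noticeably less energy per unit volume than crude oil.
BOE_GJ, BARREL_M3 = 6.118, 0.159
print(f"crude oil reference   : {BOE_GJ / BARREL_M3:.1f} GJ/m3")
```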
The fit is a sum of two logistic (Hubbert) production cycles, where P is the annual production of oil liquids, u is its ultimately recoverable resource, f is the fraction of u that belongs to the first logistic function, g1,2 are the growth rate parameters of the two logistic functions, and t1,2 are the years of peak production of the two logistics. A good estimate of u is important in reducing the number of free parameters in the fit to only g and tp. The total petroleum resource estimated to be recoverable from a given area is the ultimately recoverable resource for that area. At any point in time, the URR is equivalent to the sum of cumulative production, remaining reserves, and the estimated recoverable resources from undiscovered deposits, normally called "yet-to-find". In our fit, the URR is just the area under the curve, and it strongly constrains the curve shape, thus decreasing the uncertainty of g and tp. After obtaining these two parameters the resulting function is used to forecast future production rates. The URR for oil liquids has been estimated to be 3 × 10^12 boe, about 400–420 Gtoe. Taking the largest value, 17580 EJ is obtained for the parameter u. A nonlinear best fit (minimizing the root mean square error) with R2 = 0.999 is obtained for the following parameters: f = 0.07, g1 = 0.049, t1 = 2021, g2 = 0.155, t2 = 1977. To compensate for a possible pessimistic bias, an alternative URR of 4 × 10^12 boe has also been used in the fitting. This figure is close to the upper value used in the literature for the URR of conventional oil, which has been considered to be larger than the actual URR with 95% probability. Fig. 3a shows the two fits obtained until 2050 and Fig. 3b compares the total energy projection obtained with IEA data with the ones obtained from the pessimistic and the optimistic Hubbert fits. The projections of the IEA and the pessimistic fit have a similar evolution, within an error bar of 5–10 EJ/yr, from 2010 to 2021. After 2021 the IEA projection continues its growth and markedly separates from our pessimistic fit, which declines. This projection would be consistent with a URR larger than 3 Tboe; for instance, a URR of 4 Tboe would be amply capable of feeding a sustained increase of oil production until 2040. However, when the FTF are removed from the IEA fractions, the IEA projection so modified becomes closer to the pessimistic fit, with an error of 5–10 EJ/yr until 2035. After that year, the two curves separate due to their different decay rates, which are linear for the IEA and exponential for the fit. The black continuous line in the figure corresponds to the historical data from the TSP portal used for the Hubbert fits. From the gross energy estimation of Fig. 2, we can estimate the net energy using the definition of EROI. As a first approximation, we have assumed that the EROI is different for each type of hydrocarbon but constant in time, so we have obtained the estimates of net energy production for the 5 types shown in Fig. 2.
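The two-cycle Hubbert fit described above can be sketched in Python. Since the exact functional form used by the authors is not reproduced in the text, the snippet below assumes the standard two-cycle Hubbert form (a sum of two logistic-derivative curves whose combined area equals the URR u) and fits it with scipy; the "historical" series is a hypothetical placeholder, so the fitted numbers will not match those quoted above.

```python
# Hedged sketch of a two-cycle Hubbert fit of the kind described above.
# The functional form and the placeholder data are assumptions made here.
import numpy as np
from scipy.optimize import curve_fit

U_EJ = 17580.0  # URR fixed in advance (EJ), as in the text

def hubbert_two_cycle(t, f, g1, t1, g2, t2, u=U_EJ):
    """Annual production (EJ/yr): two logistic cycles sharing the URR u."""
    def cycle(share, g, tp):
        x = np.exp(-g * (t - tp))
        return share * u * g * x / (1.0 + x) ** 2
    return cycle(f, g1, t1) + cycle(1.0 - f, g2, t2)

# Hypothetical "historical" production series used only to exercise the fit.
years = np.arange(1950, 2016)
fake_history = hubbert_two_cycle(years, 0.1, 0.05, 2020, 0.15, 1978)
fake_history *= 1.0 + 0.02 * np.sin(years / 3.0)  # add some structure

# Because u is fixed, only the shape parameters remain free in the fit.
p0 = (0.1, 0.05, 2015, 0.15, 1980)
popt, _ = curve_fit(hubbert_two_cycle, years, fake_history, p0=p0, maxfev=20000)
f, g1, t1, g2, t2 = popt
print(f"fitted: f={f:.2f}, g1={g1:.3f}, t1={t1:.0f}, g2={g2:.3f}, t2={t2:.0f}")

# The fitted curve can then be extrapolated to forecast future production.
forecast_2040 = hubbert_two_cycle(np.array([2040.0]), *popt)[0]
print(f"forecast gross energy in 2040: {forecast_2040:.0f} EJ/yr")
```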
We have assumed EROI = 20 for existing fields of conventional crude oil. For fields to be developed, we have assumed an EROI of 10, half the value for existing fields, which is a compromise between the EROI of non-conventional oil and the EROI of existing fields. We have estimated the EROI for natural gas liquids taking into account an EROI of 10 reported for natural gas and the energy needed to separate the NGL fractions; the result is EROI = 7.7. Finally, for non-conventional oil we assume that 80% of it is "mined" and 20% is "in situ" exploitation, with EROIs of 2.9 and 5, respectively. The combined EROI of non-conventional oil is estimated by averaging the gross energy inputs for each unit of produced energy, which implies that the combined EROI is the inverse of the weighted average of the inverses of both types of EROI, namely: ε_un = (0.8/2.9 + 0.2/5)^−1 ≅ 3.2. Light Tight Oil is energetically expensive to extract, but has no need of upgrading; thus, we assume for it the same EROI, 5, as for tar sands without upgrading. Fields to be found are assumed to be at great depth and/or offshore locations and to have the same EROI. The EROI used for each liquid is summarized in Table 2. The net energy for every oil liquid fraction is displayed in aggregate form in Fig. 4. We can see that, since 2015, the aggregate net energy available from oil liquids is almost constant, reaching a value slightly higher than 80 million barrels of oil equivalent per day by 2040. For the time-varying EROI models, the parameter ε2013 is the initial value of the EROI at the reference year 2013. In what follows we will assume δ = 0.25 year−1 and τ = 43 year. Those values correspond to the scenarios of intermediate exponential variation and gradual linear variation, respectively, in the referenced work. It should be noticed that the minimum value that ε is allowed to take is 1. For EROI values below 1 the hydrocarbon would no longer be a source but a drain of energy, and hence we assume that its production would be discontinued. Fig. 5a shows the net energy obtained when the L (linear decay) model is used. Net energy is the same as in Fig. 4 until 2015 and after that year it decreases as shown in Fig. 5a. Fig. 5b represents the net energy obtained when the E (exponential decay) model is used. As shown in Fig. 5a–b, both models predict a peak of net energy at approximately 2015, with an uncertainty of a few years because the time resolution of the data is 5 years. The decline is quite sharp in the case of the linear function. In addition, for this model, all the sources, except conventional crude oil from existing fields and fields to be developed, are completely depleted by 2030. In the exponential decay model the decline estimated after 2015 is smoother and takes the net energy from 80 to 70 Mboe/d by the end of the period. A different model for the future evolution of the oil EROI, proposed in the literature, consists of a quadratic decline of the EROI as a function of the fraction of the Ultimately Recoverable Resource that has not been extracted (Rf). The remaining fraction of total oil can be calculated for each year from the area of the corresponding Hubbert curve displayed in Fig. 2-a. Then the EROI for the different oil components is modeled in the following way: εj(t) = kj Rf(t)^p for t > 2013, where kj = εj,2013/Rf,2013^p, for component j of oil and t the year. This expression predicts that the oil EROI takes its observed value in 2013 and tends asymptotically to zero in the long term.
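To make the net energy bookkeeping explicit, the sketch below applies the relation net = gross × (1 − 1/EROI) to a single hypothetical category under the three EROI time evolutions discussed above. The linear and exponential functional forms, and the way δ, τ and the exponent p enter them, are assumptions made here for illustration (the text quotes the parameter values but the explicit equations are not reproduced); the gross production and remaining-URR inputs are placeholders.

```python
# Hedged sketch: net energy = gross * (1 - 1/EROI) under three assumed EROI
# time evolutions (L: linear, E: exponential, P: power law in the remaining
# fraction of the URR). The functional forms are illustrative assumptions;
# only the parameter values (delta, tau, p, the 2013 EROIs) come from the text.
import math

EROI_2013 = {"existing": 20.0, "FTD": 10.0, "NGL": 7.7, "non-conv": 3.2, "LTO": 5.0}
DELTA, TAU, P_EXP = 0.25, 43.0, 3.3   # yr^-1, yr, dimensionless

def eroi_L(e0, year):                      # assumed linear decay, floored at 1
    return max(1.0, e0 - DELTA * (year - 2013))

def eroi_E(e0, year):                      # assumed exponential decay, floored at 1
    return max(1.0, e0 * math.exp(-(year - 2013) / TAU))

def eroi_P(e0, year, rf, rf_2013=0.55):    # assumed power law; rf_2013 is a placeholder
    k = e0 / rf_2013 ** P_EXP              # anchors the observed 2013 value
    return max(1.0, k * rf ** P_EXP)

def net_energy(gross, eroi):
    """Energy left for society after paying the energy cost of extraction."""
    return gross * (1.0 - 1.0 / eroi)

# Placeholder inputs for one category in one year (not the paper's data).
gross_2040, rf_2040 = 30.0, 0.40           # Mboe/d and remaining URR fraction
e0 = EROI_2013["existing"]
for name, eroi in [("L", eroi_L(e0, 2040)),
                   ("E", eroi_E(e0, 2040)),
                   ("P", eroi_P(e0, 2040, rf_2040))]:
    print(f"model {name}: EROI={eroi:5.2f}  net={net_energy(gross_2040, eroi):5.2f} Mboe/d")
```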
The fit to historical data (see Supplementary Material) obtained p = 3.3, a value providing an appropriate fit to the observed decay of the EROI between 1900 and 2010, and it is the value used in this work. The estimation of the net energy obtained with this potential (power-law) model is displayed in Fig. 5c. It can be appreciated that the forecast for this model is more optimistic than the two previous ones, with a slow decay of the net energy of all liquids, including a small growth of net energy around 2020–25, and a final value of around 75 Mboe/d. In the previous sections, all the models considered show a net energy decay for liquid hydrocarbons during the next 25 years. In this section we analyze the pace of development of RES needed to compensate this net energy decay. Two limiting rates of RES deployment will be studied here, namely the maximum and the minimum rates for the period going from 2015 to 2040. In the context of this work, we just want to identify any possible constraints that the decline in available net energy from liquid hydrocarbons would impose on the RES deployment. Regarding the maximum rate, we want to know if a full replacement of all energy sources by RES is possible during the period. Regarding the minimum rate of RES deployment, we want to assess the necessary growth rate in RES development, under the different scenarios, to be able to keep pace with the world energy demand once the liquid hydrocarbon decline is accounted for. Implementation of an RT satisfying all the energy needs of humanity would involve using a fraction of the non-renewable energy production to build and maintain the energy production and transmission infrastructure, to extract and process materials, to restructure the heavy industrial sector and to develop unconventional industrial and agricultural activities. To evaluate the order of magnitude of the total net energy required for the RT we estimated the energy required for some of the main changes in infrastructure necessary for such an RT. We have considered the 2015 economic cost, multiplied by the energy intensity of world industry. We have considered the energetic costs of the interconnection between distant areas; of domestic heating and cooling, for which we consider a mean domestic heated and cooled surface of 20 square meters per person in the world and study two main areas, tropical and temperate zones, taking 45% of the world population living in tropical areas and 55% in temperate areas for the period 2015–2040, with a total estimated cost for this sector of 1.17 Gboe; and of the mining and processing of copper and iron, for which we will need to produce a total of 5398 Mt of iron and steel, with an energetic cost of 22.7 GJ/t for mining and producing steel and 23.7 GJ/t for iron mining and production, which, considering the ovens used by the steel industry, gives a total of 238 EJ for iron and 10.9 EJ for copper, i.e. a total amount of 40.7 Gboe. These costs together give a total of 160.5 Gboe for such necessary changes, which should be taken into account in the evaluation of the energetic costs of RES development. This amount of required energy, 160.5 Gboe, is quite impressive: if the transition were to take 25 years, this would imply an average energy flux of about 17.6 Mboe/day. Just to have a reference, such an energy expense compared to the net energy annually provided by liquid hydrocarbons would represent 22% in 2015 and up to 44% (in the L model) by 2040; compared to the total amount of primary energy currently consumed in the world it would represent a bit more than 7%. However, this expense should not be accounted in the same way as the cost of implementing the new renewable systems, because what it implies is a shift in the uses assigned to energy.
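The unit conversions behind these infrastructure figures are simple and can be reproduced as below; the conversion factor of about 6.118 GJ per barrel of oil equivalent is an assumption adopted here for the arithmetic, not a value quoted in the text.

```python
# Hedged sketch of the unit conversions used for the infrastructure estimate.
# The 6.118 GJ per barrel of oil equivalent is an assumed conversion factor.
GJ_PER_BOE = 6.118
DAYS_PER_YEAR = 365.25

def ej_to_gboe(ej):
    """Convert energy in exajoules to billions of barrels of oil equivalent."""
    return ej * 1e18 / (GJ_PER_BOE * 1e9) / 1e9

iron_copper_ej = 238.0 + 10.9            # EJ, figures quoted in the text
print(f"iron + copper: {ej_to_gboe(iron_copper_ej):.1f} Gboe")   # ~40.7 Gboe

total_gboe, years = 160.5, 25.0          # total infrastructure estimate from the text
flux_mboe_day = total_gboe * 1e3 / (years * DAYS_PER_YEAR)
print(f"average flux over {years:.0f} years: {flux_mboe_day:.1f} Mboe/day")  # ~17.6
```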
Indeed, some of the required changes will imply an increase of energy consumption with respect to the present consumption patterns, and therefore an actual cost, but at the same time some new activities will imply a decreased consumption of energy and materials with respect to the activities they replace. Thus, evaluating the net cost of the transition in terms of the required infrastructures implies carrying out a very detailed study of the many affected sectors, which exceeds by far the frame of the present work. We analyze in this subsection the transition scenario of using gas and coal to compensate the oil net energy depletion. We only consider the effect that the oil depletion will have on global transport. Along this line, we analyze two main aspects: the substitution of oil by coal and gas in transportation, and the use of biofuels to compensate such depletion. If coal and gas energy were used by the transport sector to support the RT, filling the decay in oil net energy, then we can consider two cases: 1) energy coming from coal and gas, or only from gas, and 2) energy coming from renewable electricity. For these estimations we considered the potential model for the EROI decay. We consider that the EROI decay for gas will be the same as for oil liquids and that for coal it will be constant. First case: oil decay compensated by using coal and gas, or only gas, to produce electricity with which to power the transport. We will assume that the vehicle fleet is already 100% electricity-powered. The efficiencies would be about 0.42 for electricity production and transmission and 0.67 for the plug-to-wheel efficiency of a battery vehicle. This gives an efficiency of 0.281 for the process from primary energy production to wheel. In contrast, the current well-to-wheel efficiency of a gasoline vehicle would be 0.92 × 0.16 = 0.147. Here, we have considered that refining self-consumption and transport of fuels use 8% of the primary oil and that the tank-to-wheel efficiency of an internal combustion vehicle is about 0.16. If we divide this efficiency by the previous one the result is a factor of 0.52. Thus, if we consider the substitution of the oil depletion by coal and gas, then the sum of the two fuels must increase by at least 0.52 times the oil decline. If, for environmental reasons, coal is avoided and we substitute the oil depletion only with gas, then the gas production must increase by at least 0.52 times the oil decline. Second case: oil decay compensated by using renewable electricity and battery cars. We will assume an efficiency of 0.93 for the electricity transmission of a future grid connecting renewable stations and consumption points. Using the plug-to-wheel efficiency given above for battery vehicles, the production-to-wheel efficiency obtained is 0.623. The ratio of the well-to-wheel efficiency of an internal combustion vehicle to the previous figure is 0.236. Thus, if only renewable electricity and vehicles with batteries are used, then the increase of renewables must be at least 0.236 times the oil decline. Another option, as commented previously, is to use biofuels to compensate the oil liquids decline in transportation, especially in the marine and air sectors, since batteries have severe limitations for long-distance and high-power transport. A mean fixed cost of transportation of bio-oil by truck is 5.7 $(2010)/m3. The energy consumed by air and marine transport in 2005 was 285 and 330 GWy/y, respectively.
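The two substitution factors derived above follow from chaining the quoted efficiencies; a minimal sketch of that arithmetic is given below, using only the efficiency values stated in the text.

```python
# Hedged sketch of the efficiency chains behind the 0.52 and 0.236 factors.
# All individual efficiencies are the values quoted in the text.

# Internal combustion reference: well -> tank -> wheel.
refining_and_transport = 0.92      # 8% of primary oil used for refining/transport
tank_to_wheel_ice = 0.16
well_to_wheel_ice = refining_and_transport * tank_to_wheel_ice      # ~0.147

# Case 1: thermal electricity (coal and/or gas) feeding battery vehicles.
power_plant_and_grid = 0.42
plug_to_wheel_ev = 0.67
thermal_to_wheel = power_plant_and_grid * plug_to_wheel_ev          # ~0.281

# Case 2: renewable electricity feeding battery vehicles.
renewable_grid = 0.93
renewable_to_wheel = renewable_grid * plug_to_wheel_ev              # ~0.623

# Primary energy needed to replace one unit of declining oil in transport.
factor_thermal = well_to_wheel_ice / thermal_to_wheel               # ~0.52
factor_renewable = well_to_wheel_ice / renewable_to_wheel           # ~0.236
print(f"coal/gas substitution factor : {factor_thermal:.2f}")
print(f"renewable substitution factor: {factor_renewable:.3f}")
```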
The scale factor for energy consumption between 2005 and 2040 would be 1.31. Assuming that this scale factor is appropriate for both air and marine transport, these sectors would demand 373 and 433 GWy/y, respectively, in 2040. We assume that both sectors would use mainly liquid fuels. Assuming that 50% of the total energy demand of these sectors must already be supplied by biofuels in 2040, 403 GWy/y of biofuels should be produced and transported that year. A typical value of the lower heating value of Liquefied Natural Gas (LNG) apt to be transported by truck and ship is 20300 MJ/m3. Assuming that a small fraction of the LNG will be transported by ship, and adding the 2010–2017 inflation, the final cost would be 4.4 × 10^9 USD or, using the energy intensity of world industry, about 3 Mboe of embodied energy. This figure is three orders of magnitude lower than the energy cost of renewing the air conditioning worldwide, four orders of magnitude lower than that of renewing the vehicle fleet, and five orders of magnitude lower than that of building the new electric infrastructure. Thus, the deployment of a gas transport grid would be a relatively minor problem in the transition. According to the U.S. Energy Information Administration, the global annual total primary energy consumption from 2008 to 2012 was 512, 506, 536, 549 and 552 EJ, which corresponds to an average power of 16.2, 16.0, 17.0, 17.4 and 17.5 TW. To calculate the energy cost of the renewable transition, we assume a goal of having 11 TW of average power by 2040, which implies that, by that year, we should produce the equivalent of 155 Mboe per day with renewable energy sources. These 11 TW are assumed to be enough to replace the 16 TW produced by non-renewable sources, if one implicitly assumes a gain in efficiency when using electricity as the energy carrier, or equivalently some conservation policies to save energy. We assume an EROI of 20 for renewable sources, constant in time, both for wind and concentrated solar power. Hence, to produce 155 Mboe per day we would have to invest one twentieth of the energy produced in its construction and maintenance. We assume this investment will be mainly provided by crude oil during that period, at a constant pace. We start with a production of renewable energy that is today 13% of all primary energy, and we assume that 100% is reached in 2040. For the sake of simplicity, to assess the maximum rate of deployment, we assume an implementation of RES which grows linearly with time. Here, we consider the evolution of the net energy from all oil liquids in three scenarios, each one in agreement with one of our three models; a part of the available oil energy will be directed to the renewable deployment. In all cases the implementation of the RT will be feasible in terms of available crude oil net energy. According to Table 4, the maximum required percentage of produced crude oil net energy occurs in the L model by 2040, attaining almost 20%; in contrast, with the P model just 10% is required by that year. The intermediate scenario requires 2.85 Mboe per day, which is 11% of the total 2040 net energy of crude oil. The second question addressed here deals with the minimum rate of RES deployment required to fulfill the global net energy demand at any moment, especially taking into account that the decline in net energy may impose large annual rates of change. We start with the amount of RES available in 2015, which we consider the same as in the previous subsection: 13.9% of all primary energy, i.e. 31.3 Mboe/d or 19420 TWh.
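The equivalence between the 11 TW target discussed above and 155 Mboe per day, and the corresponding EROI-20 reinvestment, reduce to a unit conversion; the snippet below reproduces that arithmetic with an assumed factor of about 6.118 GJ per barrel of oil equivalent. The reinvestment figure it prints corresponds to the fully deployed 2040 system and is not directly comparable to the per-scenario values of Table 4, which account for the gradual build-up.

```python
# Hedged sketch: converting a continuous power target into barrels of oil
# equivalent per day, and the self-investment implied by an EROI of 20.
GJ_PER_BOE = 6.118          # assumed conversion factor (GJ per boe)
SECONDS_PER_DAY = 86400.0

def tw_to_mboe_per_day(tw):
    """Average power in TW expressed as Mboe of energy delivered per day."""
    joules_per_day = tw * 1e12 * SECONDS_PER_DAY
    return joules_per_day / (GJ_PER_BOE * 1e9) / 1e6

target_tw = 11.0
production = tw_to_mboe_per_day(target_tw)        # ~155 Mboe/day
reinvestment = production / 20.0                  # EROI = 20 for the RES fleet
print(f"{target_tw} TW  ~ {production:.0f} Mboe/day")
print(f"energy reinvested in the fully built RES system: {reinvestment:.1f} Mboe/day")
```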
Then, we will calculate the necessary rate of deployment to compensate the fall in net energy from liquid hydrocarbons. To simplify the discussion, only the L and P models will be used, which represent the two extreme (optimistic and pessimistic) cases. Two different future net energy supply scenarios will be considered. In the first one, we will assume that RES net energy production compensates, in the long-term tendency, the net energy decay of liquid hydrocarbons: the sum of RES and liquid hydrocarbon net energy is constant. We acknowledge that this is an oversimplification, as previous studies have shown that there are short-term oscillations in the EROI time evolution that always lead to decays of the total net energy. However, our objective is to focus on the long-term tendency of the EROI and to give some optimistic limits considering this hypothesis. We will call this scenario of constant sum of the net energy of RES and oil liquids the constant scenario. In the second one, we will assume that RES deployment not only compensates the long-term net energy decay of liquid hydrocarbons but also provides an additional 3% yearly increase of net energy production over the level given by liquid hydrocarbons from 2015 to 2040. We will call this the growth scenario. This 3% growth can be understood as a 2% net energy growth necessary for economic growth plus a 1% due to the expected increase of the global population, according to the UN forecasts. Notice that the growth scenario is in fact a quite optimistic one, as the rate of growth is lower than the historical rates of growth in oil production. For each 5-year period, an annualized rate of RES growth will be calculated as μi = [(ri−1 + Δpi)/ri−1]^(1/5) − 1, where Δpi is the decay plus required growth in liquid hydrocarbon net energy for the i-th 5-year period and ri stands for the amount of RES at period i; then, we can estimate the RES for the following period, once the annualized rate of the present period is known, by means of the formula ri = ri−1(1 + μi)^5 = ri−1 + Δpi. The results of the constant and growth scenarios are shown in Fig. 6a and b and in Tables 5 and 6. The results for models L and P are given, expressing for each 5-year period the required rate of increase of RES energy to fulfill that mark. Fig. 6a shows the results for the constant scenario, while Fig. 6b refers to the growth scenario.
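The recurrence for the annualized RES growth rate can be written compactly as below; the 5-year decrements Δp_i used here are hypothetical placeholders, not the values behind Tables 5 and 6.

```python
# Hedged sketch of the annualized-rate bookkeeping described above:
# r_i = r_{i-1} + dp_i and mu_i = ((r_{i-1} + dp_i) / r_{i-1})**(1/5) - 1.
# The dp_i values below are hypothetical placeholders.

def annualized_rate(prev_res_twh, dp_twh, years=5):
    """Constant annual growth rate that closes the 5-year gap dp_twh."""
    return ((prev_res_twh + dp_twh) / prev_res_twh) ** (1.0 / years) - 1.0

res = 19420.0                       # TWh of RES in 2015, as quoted in the text
hypothetical_dp = [2500.0, 3500.0, 4000.0, 4500.0, 5000.0]   # TWh per 5-year period

for period, dp in enumerate(hypothetical_dp, start=1):
    mu = annualized_rate(res, dp)
    res = res + dp                  # RES level at the end of the period
    print(f"period {period}: required growth {mu * 100:4.1f}%/yr -> RES = {res:8.0f} TWh")
```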
As model L forecasts a faster decay, it consequently implies a higher RES increase to compensate it. In the case of the constant scenario, model L indicates that by 2040 RES should have increased by at least 105% from the present value, passing from 19420 TWh in 2015 to a minimum of 39814 TWh in 2040, while the minimum required rise would be just 1.4% in the case of model P, only 22460 TWh by 2040. The largest annualized rate of growth is observed in model L for the period 2025–2030, when the annual growth should be at least 5.6% per year. The situation is more stringent in the case of the growth scenario, where the RES deployment must not only compensate the net energy decay of the liquid hydrocarbons but also produce an additional contribution to reach the proposed 3% increase. The minimum annual rates of change are above 4% per year for all considered periods in both models L and P, and for one period in model L this required minimum is 8% in each of the five years of the period. Notice, however, that each 5-year period by which the transition to RES is postponed implies a substantial increase in the required minimum average annual rate. So, for the full period 2015–2040 the average annual rate in the growth scenario is 5.6% in model L and 3.8% in model P; but if the transition is delayed to 2020 the required mean annual rates of increase in RES deployment will be 8.3% and 6.9%. If the transition is postponed 10 years from the present, the required mean annual rates will be 11.1% and 9.3%. In this article we have used the IEA forecasts of the global production of different liquid hydrocarbons, together with three projections of the Energy Return On energy Investment, to provide estimates of the amount of net energy that will be available from oil for discretionary uses in the next decades. Taking into account the gross energy equivalences, the total gross energy will grow up to 95 Mboe per day by 2040. This quantity amounts to an adjustment of the WEO forecast by around 5 Mboe per day less for the total volume. When these IEA estimates are compared with two fits of a Hubbert function there is a reduction with respect to the forecast in terms of the total world production of liquid hydrocarbon net energy, placing the projected growth of the global total gross energy supply under serious stress. To overcome these problems the implementation of an RT requires other energy investments that should be considered as a part of the development of renewables: transport electrification, industry adaptation from fossil to electric technologies, changes and extension of the electric grid from the geographical renewable production centers to the end-users and consumers, energy storage, small-scale energy producers, and industrial large-scale RES production. Here we have made a rough evaluation of some of such issues, which amounts to around 160.5 Gboe, and that has to be considered as a first estimation for future evaluations and discussions of the total energy requirements for an RT. Moreover, it cannot be added directly to our estimations of the RES energy requirements for the RT, as commented above, because the changes in infrastructures must be evaluated per sector and with a more precise calculation of how the required total energy will affect a new energy system based mainly on RES. A more detailed analysis should be done to assess such issues, mainly calculating the net energy decay from all the fossil sources. Such calculations are out of the scope of the present work. Focusing only on the oil
liquids net energy decay we have considered maximum and minimum rates of deployment of RES for each model of EROI decline and, according to a scenario of total replacement of non-renewable sources by 100% RES in 25 years from now.The goal of this exercise provides an assessment of how much energy from all oil liquids should be addressed assuming that oil liquids are not replaced, but used as the main energy source to implement the RT.These estimations are necessary if the energy supply for transportation should be kept globally.We want to stress that our analysis assumes that such energy coming from oil and devoted to the transport is essential to be replaced and the more sustainable way to do it is by RES.It is worth to note that the practical implementation of such substitution is a more complex mater whose study is out of the scope of this work.These rough estimations give a maximum of 20% of the oil liquids total net energy by 2040 to be invested in RT in the worst case.These results evidence that the RT is feasible in terms of available net energy, even in the most pessimistic scenario for maximum development rates.However, such amount of required energy implies huge economic and societal efforts.On the other hand, the minimum development rates for RT are calculated assuming that RES must compensate the decline in net energy from all oil liquids under two hypothetical scenarios and for the two extreme EROI decay models; the goal of this second exercise has been to assess the minimum rate of deployment of RES in order to avoid problems with the net energy supply at global scale.In the first case we show that the RT is achievable but, depending on the EROI decay finally taking place, may require in some periods to invest at least 10% and up to 20% of all oil net energy available per year, which may interfere with other uses of net energy by society.Those percentages would be considerably higher if the decay of conventional oil production is faster than what IEA expects, which is not implausible given the questionable IEA data processing shown in section 2.Regarding this second analysis, it shows that if we want to avoid problems with global net energy supply, the RT cannot wait for quite long, especially if a moderate increase in net energy supply is to be expected: if the equivalent of a growth of 3% in net energy from liquid hydrocarbons must be attained, new RES should be deployed at a rate of at least 4% annual disregarding the EROI model, and even reaching 8% during one of the periods in Linear model.Those rather important rates of RES deployment, which are the required minimum to avoid problems with the global energy supply, get substantially increased as the RT is postponed, and they can go above 10% annual in just a decade from now.Our analysis of both, constant and 3% required growth in the global net energy can conduct to a short run reduction of the EROI within a long run stabilization or growth .Other point that arises in such RT is that oil currently almost does not affect the electricity production and, as RES produce electricity, there are some minor impacts from Oil to the electricity generation.Here we argue that Oil liquids net energy decay will have strong impacts into the transportation system , since this sector consumes approximately 28% of total secondary energy, 92% of it in form of oil derivatives .The current global economy requires an extensive and low price transportation system to keep the industrial and raw materials flows to allow and deploy the RES.This 
freight transportation relies basically in Oil.All this net energy demand will have to be supplied with renewable energy in a future post-oil economy.According to García-Olivares a future post-carbon society could sustain a vehicles fleet similar to that we have currently if an intelligent restructuring of the transport were made.To manage the limited reserves of platinum-palladium, fuel cell motors should be reserved mainly for boats, ambulance, police and fire trucks, and by 10% of the present number of farm tractors.Other tractors should be powered by batteries or by connection to the electric grid, and similarly for other commercial trucks.This solution could be more easily implemented if land transport were based on electric trains for freight and passengers between cities and villages.Open field work in farming, mining and construction sometimes requires high power tractors that should also be supplied by fuel cell vehicles, while other generic farming work could be done using many smaller electric tractors which would recharge their batteries in the grid.Thus, full connection of farms to the electric grid would become necessary in the future economy.For similar reasons, any project involving open field construction will have to plan for the building of a connection to the grid.This reorganization of open-field work is a major challenge but does not necessarily create an insurmountable problem if there is political will to foster an energy transition.Then, the decay in net energy of Oil liquids has necessarily to be compensated with other energy sources, our work shows how this compensation is not only necessary for the RT to a post-carbon economy but also achievable if determinate and urgent policies are implemented.The policy implications of this RT will be discussed in the next section.This work shows that the transition to a Renewable Energy Supply system has to be defined taking into account the EROI of available primary energy sources.The figures presented in this work should be taken as optimistic/conservative estimates about the needs for a future RT; actual required rates of deployment and energy needs can be significantly greater that those presented here if other factors are properly accounted.In this work we have just analyzed the situation regarding oil net energy, but the analysis should be extended to the rest of non-renewable sources: coal, natural gas and uranium, even though we expect their importance will be relatively lower in the particular energy sectors in which oil has high impact.We considered and initial estimation of the impacts of coal, gas and biofuels for transport sector but further analysis are required at this point.The work necessary for such detailed analysis is far beyond the scope of this paper.The hypothesis that this work manage is try to keep the current energy production level just replacing a non-renewable energy source by other which is more sustainable in terms of CO2 emissions.Thus the main idea is to analyze if the fossil fuel based economy could be partially supported in the future by renewable energy sources.The deployment of RES will have a double effect: will fill the gap of the net energy coming from oil liquids and the need to keep/increase the energy production for a healthy economy and also help to reduce GHG emissions.But indeed conservation will be a crucial instrument in any viable RE transition.As an example, the Energy savings 2020 report of the European Union shows that 38% of energy could be saved in 2030 for the 
residential sector if a “high policy intensity” scenario of saving measures were implemented.For all final sectors this saving could be 29% of secondary energy relative to the base case .Taking our estimates on the net energy future decay into account, we have shown how current and future net energy availability in terms of oil liquids can allow a renewable transition.The required minimum rates of RES development to fulfill the RT, are feasible considering that during the last 5-years the global mean RES development has been around the 5%, with 2012 having 8% growth.These rates of RES development are also compatibles with the IEA forecasting of 5% RES mean growth for the next 5 years .At this point, a question arises about how this necessary RT should be supported, particularly taking into account the development rates required to compensate the net energy decay of oil liquids.Such rates require a continuous investment support to keep the pace of RT.Many economists support some form of carbon fee, such as a revenue neutral carbon fee and dividend plan, as a market-based solution to the problem.The effort that has gone into promoting carbon fee plans is laudable, and carbon pricing is clearly one of the public policy levers that governments and regulators will need to use in the future.But a carbon tax alone will not probably solve the problem: it will not move quickly enough to catalyze very specific, necessary changes, and a carbon fee and dividend system would have to be implemented on a global level, something that is hardly going to happen.Another possibility to obtain the necessary investment capital to support the RT is to develop financial instruments.Thus, instead on relying only in the public financing of the necessary changes and infrastructures, an additional support could come from private initiatives.For instance, a financial instrument currently in use is the Yield Cos, which allows private investors to participate in renewable energy without many of the risks associated with it.However, they have risks related to the payoff time of the investment within the regulatory framework that can influence it.Another related issue arises from the oscillations of the electricity price, which can affect the consumers costs, paying more for the electricity of their solar panels than they would for grid electricity.Regardless on whether the investment is coming from private of public sectors what is clear from the numbers we are managing here is that a decisive effort from the policy side supporting the RT must not be avoided anymore.Particularly, in Europe, European Commission launched main plans or strategies to promote low-carbon socio-economy."One of them is the Strategic Energy Technology Plan , which aims to accelerate the development of low-carbon technologies and promotes research and innovation efforts to support EU's transformation in a low-carbon energy system.However, such plans act as a general framework for more concrete policy actions.The work developed here aims to give a set of values estimated under optimistic assumptions, to stimulate a debate, and also to warn about the future global net energy availability and the urgency of more concrete and determined policies at all administrative levels to support and to enhance the implementation of RES.Finally, we cannot dismiss the side effect of the RT development produced by the stress and constraints on critical raw materials supply used in the implementation of RES as pointed out in several reports and discussed in the 
literature.As showed by recent analysis a negative feedback can be produced by the need of usual raw materials in wind power technologies and solar cell metals.Most of them are classified into rare earth elements that appear as byproducts of more common materials.The rate of RES implementation would imply more efforts to obtain raw materials and in turn to produce an excess of co-products having two negative impacts.First the oversupply of eventually unnecessary host products may lower its prices and then discouraging life recycling of those metals, a situation that presently has trapped the emerging economies strongly dependent on commodities but also the mining companies.A second negative feedback is the environmental impact associated to the extraction and processing of huge quantities of raw material, being translated in the best case into an increase of GHG emissions exceeding the benefits of the RT savings and in the worse cases into an increase of highly poisoning heavy metals and radioactive elements.This situation can only be surmounted by promoting research efforts to develop green technologies less dependent on such materials and by policies strongly encouraging material recycling.
We use the concept of Energy Return On energy Invested (EROI) to calculate the amount of the available net energy that can be reasonably expected from World oil liquids during the next decades (till 2040). Our results indicate a decline in the available oil liquids net energy from 2015 to 2040. Such net energy evaluation is used as a starting point to discuss the feasibility of a Renewable Transition (RT). To evaluate the maximum rate of Renewable Energy Sources (RES) development for the RT, we assume that, by 2040, the RES will achieve a power of 11 TW (10^12 W). In this case, by 2040, between 10 and 20% of net energy from liquid hydrocarbons will be required. Taking into account the oil liquids net energy decay, we calculate the minimum annual rate of RES deployment to compensate it in different scenarios. Our study shows that if we aim at keeping an increase of 3% of net energy per annum, an 8% annual rate of RES deployment is required. Such results point out the urgent necessity of a determined policy at different levels (regional, national and international) favoring the RT implementation in the next decades.
61
Antioxidant and free radical scavenging activities of taxoquinone, a diterpenoid isolated from Metasequoia glyptostroboides
Overproduction of free radicals and reactive oxygen species has been confirmed in a human body due to the perturbation of various metabolic reactions.Free radicals and ROS are generated through normal reactions within the body during respiration in aerobic organisms which can exert diverse functions like signaling roles and provide defense against infections.However, many degenerative human diseases including cancer, cardio- and cerebrovascular diseases have been recognized being a possible consequence of free radical damage to lipids, proteins and nucleic acids.Natural antioxidants protect the living system from oxidative stress and other chronic diseases, therefore they can play an important role in health care system.The food industry has long been concerned with issues such as rancidity and oxidative spoilage of foodstuffs.The auto-oxidation of lipids during storage and processing resulting in the formation of various free radicals, is the major reaction responsible for the deterioration in food quality affecting the color, flavor, texture and nutritive value of the foods.Hence, antioxidants are often added to foods to prevent the radical chain reactions of oxidation by inhibiting the initiation and propagation steps leading to the termination of the reaction and a delay in the oxidation process.Synthetic antioxidants such as butylated hydroxytoluene, butylated hydroxyanisole and tert-butylhydroxyquinone effectively inhibit the formation of free radicals and lipid oxidation.However, these frequently used synthetic antioxidants are restricted by legislative rules due to their being toxic and carcinogenic by nature.Therefore, there has been a considerable interest in food practices and a growing trend in consumer preferences for using natural antioxidants over synthetic ones in order to eliminate synthetic antioxidants in food applications, giving more emphasis to explore natural sources of antioxidants.This has led to develop a huge working interest on natural antioxidants by both food scientists and health professionals.Nowadays, there has been a convergence of interest among researchers to find out the role of natural antioxidants in the diet and their impact on human health.Metasequoia glyptostroboides Miki ex Hu is a deciduous coniferous tree of the redwood family, Cupressaceae.This species of the genus Metasequoia has been propagated and distributed in many parts of Eastern Asia and North America, as well as in Europe.Previously we reported various biological properties of various essential oils derived from M. glyptostroboides such as antibacterial, antioxidant/antibacterial, antidermatophytic and antifungal activities.In addition, the antibacterial activities of terpenoid compounds isolated from M. glyptostroboides have also been reported against foodborne pathogenic bacteria.The biological efficacy of M. glyptostroboides has been reported previously in vitro and in vivo both, however, no research has been reported on the antioxidant and free radical scavenging efficacy of taxoquinone from M. glyptostroboides.Hence, in our continuous efforts to investigate the efficacy of biologically active secondary metabolites, in this study, we assayed the antioxidant, and free radical scavenging efficacy of taxoquinone, an abietane type diterpenoid isolated from M. 
glyptostroboides. The chemicals and reagents used in this study, such as 1,1-diphenyl-2-picryl hydrazyl, sodium nitroprusside, Griess reagent, trichloroacetic acid, nitro blue tetrazolium, ferric chloride, potassium ferricyanide and gallic acid, as well as the standard antioxidant compounds ascorbic acid, butylated hydroxyanisole and α-tocopherol, were purchased from Sigma-Aldrich and were of analytical grade. Spectrophotometric measurements were done using a 96-well microplate reader. The cones of M. glyptostroboides were collected locally from Pohang city, Republic of Korea, in November and December 2008, and identified by morphological features and the database present in the library at the Department of Biotechnology, Daegu University, Korea. A voucher specimen was deposited in the herbarium of the College of Engineering, Department of Biotechnology, Daegu University, Korea. Dried cones of M. glyptostroboides were milled into powder and then extracted with ethyl acetate at room temperature for 12 days. The extract was evaporated under reduced pressure using a rotary evaporator. The dried ethyl acetate extract was subjected to column chromatography over silica gel and was eluted with a hexane-ethyl acetate-methanol solvent system to give 20 fractions. Of the fractions obtained, fraction 12 was further purified by preparative TLC over silica gel GF254 using hexane-ethyl acetate as a mobile phase to give one compound which, on the basis of spectral data analysis, was characterized as taxoquinone, as shown in Fig. 1. The ferric ion reducing power of taxoquinone was determined by the method described previously with minor modifications. Aliquots of different concentrations of taxoquinone were mixed with 50 μL phosphate buffer and 50 μL potassium ferricyanide, followed by incubation at 50 °C for 20 min in the dark. After incubation, 50 μL of TCA was added to terminate the reaction and the mixture was subjected to centrifugation at 3000 rpm for 10 min. For the final reaction mixture, the supernatant was mixed with 50 μL distilled water and 10 μL FeCl3 solution. The reaction mixture was incubated for 10 min at room temperature and the absorbance was measured at 700 nm against an appropriate methanolic blank solution. A higher absorbance of the reaction mixture indicated greater reducing power ability. All tests were run in triplicate. Ascorbic acid and α-tocopherol were used as positive controls. All data are expressed as the mean ± SD of three independent replicates. Analysis of variance using one-way ANOVA followed by Duncan's test was performed to test the significance of differences between treatment means at the 5% level of significance using the SAS software (version 9.1).
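For the scavenging assays, the percentage inhibition is conventionally computed from the absorbance of the sample relative to a control; the short sketch below shows that calculation together with the mean ± SD over triplicates. It illustrates only the commonly used formula and the summary statistics; the absorbance values are hypothetical, and the actual analysis in the study (one-way ANOVA with Duncan's test) was performed with SAS.

```python
# Hedged sketch: conventional percentage-inhibition calculation and mean +/- SD
# over triplicates. Absorbance values are hypothetical; the study's statistics
# were run in SAS and are not reproduced here.
from statistics import mean, stdev

def percent_inhibition(a_control, a_sample):
    """Common radical-scavenging formula: (1 - A_sample/A_control) * 100."""
    return (1.0 - a_sample / a_control) * 100.0

a_control = 0.820                              # hypothetical control absorbance
triplicate_absorbances = [0.155, 0.162, 0.149] # hypothetical sample readings

inhibitions = [percent_inhibition(a_control, a) for a in triplicate_absorbances]
print(f"scavenging: {mean(inhibitions):.2f} +/- {stdev(inhibitions):.2f} %")
```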
The ethyl acetate cone extract of M. glyptostroboides after column chromatography over silica gel yielded a pure compound which was obtained as orange needles with a specific melting point. The 1H NMR spectrum of the compound showed a hydroxyl methine signal at δ 4.77, an oxygenated proton at δ 3.80, an aliphatic methine at δ 3.14, and proton signals for methylene and five terminal methyl groups. Further analysis of the COSY data established the connectivity through H-7a, 7b–OH, and H-6b, and through H-1b, H-2b, and H-1a, 2a, 3a, 3b. In addition, two methyl signals at δ 1.20 and 1.19 coupling with a methine signal at δ 3.14 in the 1H NMR data, as well as 20 carbon signals including two carbonyl groups at δ 189.6 and 183.7 in the 13C NMR data, strongly suggested that this compound should be an abietane diterpenoid. On the basis of the interpretation of the HMQC and HMBC data, this compound was proposed to be taxoquinone. By comparison of the multiplicity of H-7 and the chemical shifts in both the 1H and 13C NMR data, the structure of this compound was determined to be taxoquinone. The DPPH radical scavenging assay is generally used as a basic screening method for testing the anti-radical activity of a large variety of compounds. This assay is based on the measurement of the ability of antioxidants to scavenge the stable radical DPPH, and is widely used for evaluating antioxidant activities in a relatively short time compared to other methods. In this assay, the color of the DPPH radical changes from violet to yellow upon reduction, which is demonstrated by the decrease in absorbance at 517 nm. The DPPH free radical is reduced to the corresponding hydrazine when it reacts with hydrogen donors. Being an easy and accurate method, it has been recommended for measuring the antioxidant activity of samples of different origins. Free radical scavenging is one of the recognized mechanisms by which antioxidants inhibit lipid peroxidation. Fig. 2 shows the percentage DPPH radical scavenging capacity of taxoquinone in comparison with ascorbic acid and α-tocopherol as reference compounds. A concentration-dependent response relationship was found in the DPPH radical scavenging capacity, and the activity increased with increasing sample concentration. Taxoquinone showed a maximum of 81.29% inhibition of DPPH radicals at 150 μg/mL, while ascorbic acid and α-tocopherol showed about 81.69% and 84.09% inhibitory effect, respectively, at 150 μg/mL. Significant scavenging of DPPH radicals was evident at all the tested concentrations of taxoquinone and the reference compounds. The DPPH radical scavenging activity of diterpenoid compounds has been confirmed previously. Similarly, terpenoid compounds isolated from Toona ciliata were found to exhibit remarkable free radical scavenging activities. As reported previously, the antioxidant and radical scavenging activities of terpenoids could be mediated by the hydroxyl groups, including o-dihydroxy groups, in their structures. NO is a free radical with a single unpaired electron. NO is formed from l-arginine by NO synthase. It can also be formed from the reaction of peroxyl radicals with NO, from polluted air, and from smoking. NO itself is not a very reactive free radical; however, overproduction of NO is involved in ischemia reperfusion and in neurodegenerative and chronic inflammatory diseases such as rheumatoid arthritis. Nitrogen dioxide adds to double bonds and extracts labile hydrogen atoms, initiating lipid peroxidation and the production of free radicals. Marcocci et al.
reported that NO scavengers compete with oxygen, resulting in a lower production of NO. The metabolite ONOO- is extremely reactive, directly inducing toxic reactions, including SH-group oxidation, protein tyrosine nitration, lipid peroxidation, and DNA modifications.As shown in the Fig. 3, both taxoquinone and positive controls showed significant NO radical scavenging activity in a concentration-dependent manner.In this assay, taxoquinone caused a concentration-dependent inhibition of NO, and the highest inhibitory effect was observed at the concentration of 100 μg/mL.On the other hand, ascorbic acid and α-tocopherol as positive controls had about 74.62% and 78.61% of inhibitory effect on scavenging of NO radical, respectively.Nitric oxide radical scavenging activity of terpenoid compounds has been confirmed previously.Nitric oxide is considered as a bio-regulatory molecule which participates in several physiological processes, including neural signal transmission, immune response, vasodilatation, and blood pressure.In the present study, nitrite scavenging activities of the tested terpenoid compound taxoquinone confirmed its potential for using in drug formulation strategies.Superoxide anion, which is a reduced form of molecular oxygen, is an initial free radical formed from mitochondrial electron transport systems.Superoxide anions serve as precursors to active free radicals that have the potential to react with biological macromolecules and, thereby, induce tissue damage.Superoxide has also been observed as directly initiating lipid peroxidation and plays an important role in the formation of other ROS such as, hydroxyl radicals, which induce oxidative damage in lipids, proteins, and DNA.The effect of the taxoquinone on superoxide radical was determined by the PMS-NADH superoxide generating system and the results are shown in Fig. 4.All the tested samples significantly scavenged the superoxide radicals in a concentration-dependent manner.It has been reported that antioxidant properties of some phenolic compounds are effective mainly via scavenging of superoxide anion radicals.In this assay, addition of taxoquinone, and standard compounds at the concentration of 250 μg/mL showed 73.20%, 73.00% and 74.45% superoxide radical scavenging effect, respectively.Previously the diterpenoid compounds have been found to possess significant superoxide radical scavenging activity.Similarly, Topçu et al. isolated three terpenoids from Salvia macrochlamys, which showed significant amount of superoxide anion radical scavenging activities.Hydroxyl radicals are extremely reactive free radicals formed in biological systems and have been implicated as a highly damaging species in free radical pathology, capable of damaging almost every molecule found in living cells.These radicals can be formed from a superoxide anion and hydrogen peroxide in the presence of copper or iron ions.Hydroxyl radicals are very strongly reactive oxygen species, and there is no specific enzyme to defend against them in human.Therefore, it is important to discover natural compounds with good scavenging capacity against these reactive oxygen species.It is well established that the hydroxyl radical scavenging capacity of samples is directly related to its antioxidant activity.The scavenging effect of taxoquinone against hydroxyl radicals was investigated using the Fenton reaction.As presented in Fig. 
5, the percent inhibition of hydroxyl radical scavenging by taxoquinone, ascorbic acid and BHA was found to be 86.27%, 73.79% and 70.02%, respectively. The results showed significant antioxidant activity in a concentration-dependent manner. The ability of taxoquinone to quench hydroxyl radicals seems to be directly related to the prevention of the propagation of lipid peroxidation; because taxoquinone seems to be a good scavenger of active oxygen species, it will thus reduce the rate of the chain reaction. Hagerman et al. have also explained that high molecular weight and the proximity of many aromatic rings and hydroxyl groups are more important for the free radical scavenging activity of phenolics than are specific functional groups. The hydroxyl radical scavenging efficacy of diterpenoid compounds has been confirmed previously. The hydroxyl radical possesses a strong capacity to join nucleotides in DNA and cause strand breakage, thus contributing to carcinogenic and mutagenic effects as well as cytotoxicity. Moreover, highly reactive hydroxyl radicals can cause oxidative damage to DNA, lipids and proteins. Interestingly, similar to the results reported in this study, terpenoids isolated from Ganoderma lucidum also exhibited significant hydroxyl radical scavenging activities. The reducing property of test compounds classifies them as electron donors, which can reduce the oxidized intermediates of lipid peroxidation processes and, therefore, can act as primary and secondary antioxidants. In the present study, the assay of reducing power was based on the reduction of the Fe3+/ferricyanide complex to Fe2+ in the presence of reductants in the tested samples. The Fe2+ was then monitored by measuring the formation of Perl's Prussian blue at 700 nm. The increased reducing ability observed may be due to the formation of reductants which could react with free radicals to stabilize and terminate radical chain reactions, converting them to more stable products. As demonstrated in Fig. 6, the conversion of Fe3+ to Fe2+ in the presence of taxoquinone and the reference compounds could be measured as their reductive ability. At the concentration of 25 μg/mL, the absorbance values of taxoquinone, ascorbic acid and α-tocopherol were measured to be 1.13, 1.21, and 1.14, respectively. The results showed a concentration-dependent, significant increase in the reductive ability of the test samples. These results demonstrated that taxoquinone had a marked ferric ion reducing ability and electron-donating properties for neutralizing free radicals by forming stable products. Some terpenoid compounds have been found to be potent reductants. Bakshi et al. also reported the ferric ion reducing capacity of terpenes isolated from the red mushroom G.
lucidum. Therefore, it is confirmed that the antioxidant activities of tested samples are directly correlated with their reducing power abilities. On the basis of the results obtained in the present study, it can be concluded that taxoquinone exhibited potent antioxidant and free radical scavenging activities. Moreover, the hydrogen donating ability of taxoquinone has been proven through the assessment of its reducing power and radical scavenging activities. These results confirm the efficacy of taxoquinone as a significant source of natural antioxidant activity, which might be helpful in preventing the progress of various oxidative stress-induced diseases. Hence, it is suggested that taxoquinone can be used as an easily accessible source of natural antioxidants and as a possible food supplement against oxidative deterioration. However, its in vivo safety needs to be investigated thoroughly prior to practical application. The authors declare that there is no conflict of interest regarding the publication of this paper.
Nowadays, there is an upsurge of interest on finding effective phyto-constituents as new sources of natural antioxidants for using in food and pharmaceutical preparations. However, this study was carried out to investigate the antioxidant and free radical scavenging efficacy of a biologically active diterpenoid compound taxoquinone isolated from Metasequoia glyptostroboides in various antioxidant models. An abietane type diterpenoid taxoquinone, isolated from ethyl acetate cone extract of Metasequoia glyptostroboides, was analyzed for its antioxidant efficacy as reducing power ability and its ability to scavenge free radicals such as 1,1-diphenyl-2-picryl hydrazyl (DPPH), nitric oxide, superoxide and hydroxyl radicals. As a result, taxoquinone showed significant and concentration-dependent antioxidant and free radical scavenging activities of DPPH, nitric oxide, superoxide and hydroxyl free radicals by 78.83%, 72.42%, 72.99% and 85.04%, as compared to standard compounds ascorbic acid (81.69%, 74.62%, 73% and 73.79%) and α-tocopherol/butylated hydroxyanisole (84.09%, 78.61%, 74.45% and 70.02%), respectively. These findings justify the biological and traditional uses of M. glyptostroboides or its secondary metabolite taxoquinone as confirmed by its promising antioxidant and free radical scavenging activities.
62
Epifaunal communities across marine landscapes of the deep Chukchi Borderland (Pacific Arctic)
Deep-sea regions occupy roughly half of the Arctic Ocean area, yet the understanding of Arctic deep-sea biodiversity still remains extremely limited.Recent scientific programs such as the International Polar Year 2007–2009, Census of Marine Life, and studies at the HAUSGARTEN, a biological long-term deep-sea observatory located in Fram Strait, have helped increase current knowledge of the Arctic deep-sea biodiversity.Results have shown benthic communities within the Arctic Ocean basins exhibit higher biodiversity – including numerous species new to science described in the last decades – than previously expected.A recent count estimated 1125 invertebrate taxa inhabit the central Arctic Ocean deeper than 500 m.The most abundant taxa reported were nematodes among the meiofauna, crustaceans, polychaetes, and bivalves among the macrofauna, and sponges, cnidarians, and echinoderms among the epifaunal megafauna.Despite these efforts much of the Arctic deep-sea region remains poorly known and virtually unsampled.Epifaunal organisms, those animals living attached to or on the sediment surface, are currently among the least studied in the Arctic deep sea, which is partly related to the difficulty of deploying trawls or photographic gear at great depths in often ice-covered waters.It is known from studies of Arctic shelf systems that epifaunal organisms contribute considerably to total benthic biomass in Arctic ecosystems and play key roles in trophic interactions, bioturbation, and remineralization.In addition, epifaunal sensitivity to natural disturbance and human impacts necessitate basic knowledge of their biodiversity patterns given the ongoing sea-ice loss and the potential for enhanced deep-sea fisheries, shipping, and petroleum/mineral exploitation in Arctic deep-sea areas.As benthic communities integrate the effects of physical, chemical and biological factors, they can be used as indicators of ecosystem status.Globally, epifaunal community structure differs among different regions of the deep-sea floor which is now recognized as a system of great complexity with diverse habitats at different spatial scales.These differences arise from environmental factors such as sediment characteristics, sea floor morphology, current flow regimes, chemical conditions, depth, and food availability.Deep-sea sediments consist mainly of silt and clay, while ridges and plateaus can have a higher sand fraction.Hard substrate and other forms of increased complexity of sea floor morphology occasionally occur in the deep sea and may enhance benthic biodiversity and biomass compared to abyssal plain deep-sea environments.Seafloor morphology also affects direction and strength of bottom currents transporting food particles, eggs and larvae, even though currents are usually very slow in Arctic deep-sea systems.Food availability is often the major depth-related factor driving benthic community structure and limiting biomass.Arctic deep-sea ecosystems are usually described as oligotrophic with highly seasonal food supply to the benthos resulting in decreasing benthic biomass with depth.Despite the low food availability in the deep sea benthic biodiversity can be comparatively high which has previously been discussed in light of comparatively stable, ‘stress-free’ environmental conditions or alternatively as a result of patchy food deposition enriching habitat heterogeneity.This study investigates epifaunal benthic communities in the poorly studied Chukchi Borderland area which is currently impacted by dramatic 
decreases in sea-ice thickness, warming Atlantic water below the surface and halocline waters, and increased Pacific water inflow through Bering Strait in surface waters.Consequently, biological change may be expected in the future.Changes in biological communities, however, would be impossible to detect given that only few benthic studies have been conducted here with only one of the epifaunal studies being quantitative.Thus, there is a need to characterize the biodiversity in the complex landscape of that area that hold the potential for high biodiversity and heterogeneity, and to gain insights into mechanisms structuring benthic habitats in the Arctic deep sea.The CBL is bathymetrically complex in that it comprises comparatively shallow ridges, plateaus, and much deeper, isolated basins.In addition, the Chukchi Plateau is characterized by the presence of pockmarks, i.e., rounded or elliptical depressions of <1 to >100 m in diameter and depth that were formed as a result of explosion of gas or fluids from decomposing organic material or leaking gas reservoirs in underlying sediment layers.Though little is known about patterns of life within pockmarks worldwide and especially in the Arctic, endemic chemosynthetic communities and/or high density of epibenthic fauna have been documented for both active and inactive seeps.Finally, the CBL region also is hydrographically complex in that Pacific surface and deeper Atlantic water meet here.This layering of these water masses results in cosmopolitan Arctic boreal and Atlantic boreal species dominating over Pacific affinities in the macro-infauna of the Canada Basin and CBL.However, biogeographic patterns and transitions in epifaunal communities of the CBL have not yet been mapped.The aim of the present study is to investigate epifaunal communities in the heterogeneous CBL area using bottom trawl samples and remotely operated vehicle imagery.The specific objectives are: to compare epifaunal community structure, taxonomic diversity, and distribution patterns across ridge, plateau and basin locations of the CBL; to identify environmental parameters that may influence epifaunal community characteristics; and to evaluate biogeographic affinities of epifauna in the study area.Given the large difference in depth between basin and plateau/ridges, and the presence of pockmarks on the Chukchi Plateau, we specifically tested the hypothesis that epifaunal community structure differs among plateau, basin and ridge locations, with the highest diversity and density at pockmarks plateau locations and the lowest at deep basin locations.Considering the hydrological complexity of the study area with increasing importance of Atlantic origin water with increasing depth, the second hypothesis tested was that the CBL epifaunal species inventory represents a gradient of declining Pacific-affinity proportion with increasing depth.The study was conducted in the CBL, north of Alaska between 74 and 78°N and 158 - 165°W during the Hidden Ocean III expedition on the USCGC Healy in July–August 2016.The CBL occupies a roughly rectangular area of 600 × 700 km, ranging in depth between ∼300 and 3000 m, and extends into the Amerasian Basin north of the Chukchi Sea.Dynamic geological formation processes have led to the present-day CBL that consists of north-trending topographic highs, including the Northwind Ridge and Chukchi Plateau that surround the isolated Northwind Basin, which also contains several small ridges.On the Chukchi Plateau, pockmarks were first discovered in 2003 
with later surveys revealing many more, typically of 300–400 m in diameter and 30–50 m deep.It was suggested that these pockmarks were formed under the effect of pulsed fluid flows with the last modification occurring about 30–15,000 years ago.Waters of Arctic, Atlantic and Pacific origin interact over the complex CBL bottom topography.The Pacific-origin water comprises the Polar Mixed Layer and upper halocline originating from Bering Sea and Alaska Coastal water entering the Chukchi Sea via the shallow and narrow Bering Strait.The influence of Pacific water decreases with depth and increasing latitude, being virtually absent north of the 2000 m isobath at ∼79°N.The lower halocline is of Atlantic origin and enters the area from the Eurasian Basin via Fram Strait and the Barents Sea.Underneath it, the Atlantic layer water consists of Fram Strait Branch water and Barents Sea Branch water.The Arctic Ocean deep-water layer originates from the Greenland Sea; it enters from the Fram Strait and spreads across the Eurasian Basin, finally reaching the Canada Basin.Sampling stations were chosen to represent the three main habitat types: ridges, basins, and plateau with pockmarks.Ridge stations located on the Northwind Ridge and on one of the ridges in the Northwind Abyssal Plain, ranged in depth from 486 to 1059 m. Three plateau stations were sampled relatively close to each other, and their depth ranged from 508 to 873 m. For stations 9 and 10, it was possible to investigate epifaunal communities on the plateau surrounding a pockmark and within the pockmark.Station 8 was in a large groove that was linked to a pockmark.The basin stations were located within the Northwind Basin and isolated from station 7 by a ridge.The depth of the basin stations ranged from 1882 to 2610 m.Epifaunal communities were investigated with two main tools, the ROV Global Explorer and a plumb-staff beam trawl.The ROV was used to perform a photographic survey of the seafloor at each of ten stations, with two dives each at stations 1 and 9 for a total of 12 ROV dives.Analyses were performed on 24-megapixel images collected with a downward-looking DSSI DPC-8800 digital camera.The ROV was equipped with DSSI Ocean Light Underwater LED lights.Forward looking 10x and 3.8x zoom 4K video cameras were used to guide the photographic surveys, control distance from the sea floor, and collect taxonomic vouchers with the suction sampler and manipulator arm.We kept the ROV to a linear transect as much as possible but deviations from straight lines occurred at some stations due to variable drift speeds and bottom currents, irregular topography and occasional inspection and collection of taxa of interest, which might have led to a slight biases in estimation of taxonomic abundance and diversity.Still images were taken every 5–8 s, depending on drift speed.Four digital laser pointers, one located at each corner of a fixed distance of a 10 - cm square, were used to estimate the photographed area at four stations, after which they stopped functioning.The average bottom time of the ROV dives was 3:29 h, and the average distance between start and end point during bottom time was about 3800 m.Trawl samples were collected at six stations with one haul per station to compare abundance estimates to those from ROV images and to verify taxonomic identification inferred from ROV images.The 3.05 m modified plumb-staff beam trawl was equipped with rubber rollers on the footrope, a 7 mm mesh net with 4 mm in the cod end, and had an effective mouth opening 
of 2.26 × 1.20 m. Trawling at all stations was performed for a target duration of about 30 min at a target speed of 1.5 knots speed over ground.Actual trawl duration and speed varied due to challenges of trawling under the local environmental conditions, resulting in actual distance swept ranged from 713 to 2280 m. Trawl bottom time was estimated from a time depth recorder affixed to the net.The TDR also showed whether the trawl stayed at seafloor.Trawl hauls were rinsed of sediments over a 2 mm mesh on deck.Organisms were sorted and identified to the lowest possible taxonomic level.All organisms were counted and weighed by taxon to 1 g accuracy.Vouchers of taxa that were difficult to identify on board were preserved in 10% formalin or 190 proof ethanol.The vouchers were later sent to expert taxonomists for further detailed identification.Taxon names were verified using WoRMS.At each station, a range of environmental variables was collected with a SBE9/11 + CTD and an Ocean Instruments BX 650 0.25 m2 box corer.Water temperature, salinity and fluorescence of the water were measured with the CTD package as close to the bottom as possible, on average around 20 m from the bottom.Sediment samples were collected from box core samples.The upper surface of the sediments was subsampled and frozen at −20 °C for later determination of grain size composition, organic carbon content, and concentration of sediment chlorophyll a and phaeopigments.Only cores with intact surface layers were used for sediment analyses.Sediment grain size was analyzed on a Beckman Coulter Particle Size Analyzer LS 13320 at the Geology Laboratory of UiT The Arctic University of Norway in Tromsø.The samples were pre-treated with HCl and H2O2 to remove calcium carbonate and organic material, respectively.Each sample was analyzed three times and mean grain-size values were calculated.Sediment organic carbon was determined on a Costech ESC 4010 elemental analyzer at the stable isotope facility at the University of Alaska Fairbanks, USA.Concentration of sediment chlorophyll a and phaeopigments was measured at the University of Alaska Fairbanks.Pigments were extracted with 5 ml of 100% acetone for 24 h in the dark at −20 °C.A Turner Designs TD-700 fluorometer was used to measure pigment concentration.The fluorescence of the sample was read before and after acidification with HCl for determination of phaeopigments.A subset of the useable images of the sea floor was chosen from each station for the image analysis.Images that were overlapping, blurred, had suspended sediment, were poorly illuminated, or that were far off the seafloor were classified as unusable.In total, 940 images were manually analyzed for faunal densities and proportional organism abundances.Faunal densities were determined at the four stations where laser pointers were still functioning so that total image area could be determined.The mean area per image varied from 0.2 to 0.8 m2.For remaining stations proportional organism abundances were determined.Typically, 70–100 pictures per station were analyzed.Image processing and analyses were performed with ImageJ.All putative taxa present in the study area were used to create a taxonomic image library.Taxa were identified to the lowest possible level based on a combination of the ROV imagery, the voucher collection, and additional identifications by experts.Where identification was difficult, taxa were named morphotypes).The image library allowed for standardization of taxonomic identification and nomenclature, in 
particular in case of morphotypes.All taxa and morphotypes present on the images were counted per image.In addition, lebensspuren, burrows, colour of the sediments, and presence of stones were noted.Stones larger than a few cm were counted, and their approximate size and associated fauna were recorded.The average number of stones per picture was calculated and included in the statistical analyses.Invertebrates and fishes collected in the trawl hauls were assigned to one of the following biogeographic groups: Arctic – occurring only in the Arctic, Arcto-boreal-Pacific – found in Arctic and boreal Pacific waters, Arcto-boreal-Atlantic – found in Arctic and boreal Atlantic waters, Arcto-boreal – found in Arctic and in both Atlantic and Pacific boreal waters, and other – occurring also outside of boreal and Arctic zones.Biogeographic affinities were assigned based on the best current distribution information available in the published literature, internet sources, and expert knowledge by collaborating taxonomists.The list of taxa analyzed for biogeographic affinities included 44 taxa identified to species level; all taxa not identified to species level were excluded from this analysis.Percent of species from different biogeographic regions was presented based on densities and number of taxa by station.Data collected by the ROV were not used in this analysis due to considerably fewer taxa identified to species and given weighting by density was not possible for all ROV stations.Factorial analyses of variance were used to compare ROV-based number of taxa and Simpson index values among three habitat types."For trawl stations, comparison of number of taxa, Simpson index, density and biomass for ridge and plateau stations was conducted with the Student's t-test.This test was also used to compare Simpson index values of ridge and plateau stations between trawl and ROV samples.Prior to analyses, data were tested for normality and for homogeneity of variances.The analyses were performed in the statistical computing software R.The epifaunal community composition was analyzed by means of multivariate statistics including hierarchical cluster analysis using the PRIMER v 6.0 software package.Density data collected with the trawl were used for the analyses.Proportional abundance data were used for all ROV stations since density could not be determined for all stations.Square-root data transformation, which down-weighs the influence of dominant taxa, was applied prior to calculating similarities.The abundance data were grouped a priori as ridge, basin, and plateau with pockmark.A similarity matrix was calculated based on the Bray-Curtis coefficient.A similarity profile test was used to explore statistical significance of difference among cluster branches.The magnitude of differences among ridge, plateau and basin categories and the significance of potential differences were tested with the analysis of similarities.Statistical significance of the ANOSIM global R statistic was assessed by a permutation test.When ANOSIM detected a significant grouping, a SIMPER analysis was carried out to establish taxa contributing most to the dissimilarities between epifaunal communities.The potential influence of environmental factors on epifaunal community structure was tested with canonical correspondence analysis using the package ‘vegan’ in the statistical computing software R.In the CCA ordination biplot, the environmental variables are presented as arrows that are roughly oriented in the direction of maximum variation 
in value of the corresponding variable. Water depth, bottom temperature, sediment grain size, number of stones per picture, sediment pigments, and sediment organic carbon were included in the analysis, and correlated with the square-root-transformed proportional abundance of the taxa. Environmental variables included in the model were obtained with a forward selection procedure. Monte Carlo permutation tests were used to determine the statistical significance of the model and the individual terms. In addition, correlations between univariate epifaunal characteristics from trawl surveys, ROV surveys and physical-chemical characteristics of water and sediments were evaluated using parametric Pearson's correlation analysis and non-parametric Spearman's rank correlation analysis. Maps presented in the paper were generated using ArcMap 10.5 software. A total of 2721 individuals were recorded across all stations from ROV images, of which 1584 individuals were classified into eight phyla and 1137 individuals were classified into 10 morphotypes of uncertain phyla. At the four ROV stations where laser pointers were present, densities showed a clear increasing trend with decreasing depth, from 2273 ind/1000 m2 at the basin station 7 at 2610 m depth to 14,346 ind/1000 m2 at the plateau station 8 at 557 m depth. Relative composition of the number of individuals per phylum obtained from all ROV stations showed that different phyla dominated across the study area. Annelida were most numerous at two ridge stations as well as at three basin stations, where they comprised 18–66% of the total abundance. Cnidaria were numerous at all stations, but dominated at the plateau stations 8 and 10 and basin station 11, where they comprised 37–51% of the total abundance. Epifaunal communities at ridge station 6 and plateau station 9 were dominated by morphotype 10, possibly Atolla polyps, which was also a co-dominant community member at plateau station 10. A total of 2505 individuals were registered in six trawl samples and represented nine phyla: Echinodermata, Arthropoda, Porifera, Cnidaria, Mollusca, Chordata, Annelida, Sipuncula and Nemertea. Total density and biomass were variable across stations, with no significant difference in biomass and densities between ridge and plateau stations indicated by the Student's t-test. Densities calculated from the trawl samples were 6–7 times lower than those calculated from the ROV images. The highest and lowest total densities were found at the two plateau stations and varied from 342 ind/1000 m2 at station 10 to 2029 ind/1000 m2 at station 9. The biomass ranged from 173 g/1000 m2 at basin station 13 to 906 g/1000 m2 at the shallow plateau station 9. Results of the relative composition of phyla from trawls suggested certain taxa were missed by the trawl compared to ROV samples taken at the same stations. For example, the relative abundance of Annelida and Cnidaria was generally much lower in trawls than in ROV images. Also, in contrast to the ROV samples, morphotype 10 was encountered only once in trawl catches, at station 9. By far the dominant phylum at the plateau and ridge stations, in terms of relative abundance and biomass, was Echinodermata, with 72–80% of relative abundance and 51–86% of relative biomass, followed by Arthropoda and Cnidaria. The only basin station sampled by trawl (station 13) was markedly different from the rest of the stations in that it was dominated by Porifera, with 62% and 87% of relative abundance and biomass, respectively, and Mollusca, with 23% of relative abundance. In total,
152 taxa and morphotypes were identified from the trawl and ROV samples, with at least 34 taxa common to both sampling tools. From the ten ROV stations combined, 78 taxa including morphotypes were registered, mostly within Echinodermata, Cnidaria, Arthropoda, and Chordata. In general, the total number of taxa was significantly higher at the plateau and ridge stations than at the basin stations. The number of taxa ranged from 41 and 40 taxa at ridge station 1 and plateau station 10, respectively, down to 9 taxa at basin station 7. The relatively low number of taxa at station 8 might be a result of fewer images available for the analysis at this station. The high number of images analyzed at station 9 (two dives combined), however, did not appear to affect diversity estimates at this station. Echinoderms together with Arthropoda and Cnidaria were most diverse at the plateau and ridge stations. The number of taxa per phylum was relatively evenly distributed at the basin stations. Eighty-six taxa were recorded from the six trawl stations. Patterns of diversity for taxa per phylum were relatively similar to those observed from ROV images. The most diverse phyla in the trawl samples were Arthropoda, Echinodermata, Chordata, and Mollusca. The total number of taxa varied from 21 at basin station 13 to 30 taxa at ridge station 2, with no significant difference between ridge and plateau stations. The majority of taxa across all trawl stations were Arthropoda and Echinodermata. Particularly low diversity was found in the trawl catch at ridge station 3 based on Simpson's diversity index, though there were no statistically significant differences among stations for either trawl or ROV samples. Simpson's diversity index was slightly lower in trawl catches than in ROV images at all stations where both gears were employed, but these differences were not statistically significant either. Several species found in the CBL represented geographic and depth range extensions compared to literature values. The bivalve Yoldiella intermedia extended its depth range from a previous maximum of 1150 m to 2037 m depth at station 13 in our study. Geographic range extensions were registered for four mollusks (Rhinoclama filatovae F. R. Bernard, 1979; Tindaria compressa Dall, 1908; Hyalopecten c.f. frigidus; Bathyarca c.f. imitata) and five sponges (Radiella sol Schmidt, 1870; Grantia phillipsi Lambe, 1900; Scyphidium septentrionale Schulze, 1900; Stylocordyla borealis; Hyalonema apertum simplex Koltun, 1967). Hyalopecten c.f. frigidus and Bathyarca c.f.
imitata might prove to be new species.Results of the hierarchical cluster analysis on relative abundances obtained from ROV images revealed two main clusters of 74% dissimilarity.The first cluster included all basin stations, while the second cluster included both plateau and ridge stations.Similarly, ANOSIM showed a significant difference between basin stations and combined plateau and ridge stations, though no significant difference was found between ridge and plateau stations.SIMPER analysis for the two main clusters determined that the difference between ROV communities was mainly due to morphotype 10, and polychaetes belonging to the Polynoidae and Ampharetidae families.Cluster analysis performed for the trawl samples indicated similar differences as for ROV samples, with the one basin station sampled clustering separately from the ridge and plateau stations.The average dissimilarity between the two main clusters was 93%.Species contributing most to dissimilarities between the two clusters were the sponge Radiella sol, the brittle star Ophiopleura borealis, and the sea star Pontaster tenuispinus.Based on the ROV images, relative abundance of dominant taxa changed across the study area indicating marked difference between ridge/plateau and basin communities.In addition, ridge/plateau communities differed between the eastern and the western side of the study area.The geographically close ridge communities at stations 1 and 2 were dominated by polychaetes of the families Ampharetidae and Sabellidae, which comprised more than 40% of relative abundance at each station.The subsequent most common taxa were the anthozoan Bathyphellia cf margaritacea and ophiuroids, especially Ophiopleura borealis.Ridge station 6 and plateau stations 8, 9 and 10 in the western study region were characterized by high proportions of morphotype 10 particularly at stations 9 and 6.In addition, morphotype 6 was regularly found, and was attached to stones at these stations.At stations 6, 8, and 9 ophiuroids, especially Ophiostriatus striatus were also regularly occurring.Similar to the geographically distant ridge stations 1 and 2, the anthozoan B. cf margaritacea was common at plateau stations 8, 9, and 10 and dominated at station 8.Characteristic for only ridge station 6 were an ascidian Ciona sp. and morphotype 5, resembling “holes” in the seafloor.Characteristic of plateau station 10 were small unidentified pycnogonids as well as the large pycnogonid Colossendeis proboscidea.The latter was not observed in other parts of the study area and was particularly numerous at this station based on the video records.In addition, video recordings from the ROV indicated a considerable increase in number of anemones on the slope towards the center of pockmark and inside of the pockmark at the plateau station 9.Basin stations, for the most part, differed in dominant taxa from ridge/plateau stations, with the exception of B. cf margaritacea, which was common almost everywhere and contributed 17–51% to relative abundance at basin stations.A polychaete of the family Polynoidae was the second most common taxon at all basin stations except station 11.Porifera were recorded at all basin stations with Polymastiidae contributing most to total abundance at stations 11 and 12 and an unknown white sponge at station 12.At stations 11, 12 and 13, the sea cucumber Elpidia sp. occurred regularly as did morphotype 1, resembling a gastropod with an oval, laterally compressed shell.In addition, the shrimps Bythocaris spp. 
were recorded at all basin and ridge/plateau stations, but contributed most at station 13.Proportional abundance of fish taxa never exceeded 2% at any station, with Lycodes spp. being most common and recorded at most stations.Based on the trawl samples, the most abundant species of the plateau/ridge stations were the brittle star Ophiopleura borealis and the sea star Pontaster tenuispinus.O. borealis dominated at the shallower stations 9, 3 and 1, while P. tenuispinus was dominant at the deeper stations 10 and 2.Other abundant taxa at the plateau/ridge stations were other ophiuroids and the shrimp Bythocaris spp.Porifera of the family Polymastiidae contributed 62% to total abundance at the basin station 13, followed by the bivalve Bathyarca c.f. imitata and the sea cucumber Elpidia sp.As with the ROV images, Lycodes spp. were found at all trawl stations, with highest relative abundance at the pockmark station 10, though it never exceeded 3%.Bottom temperatures gradually decreased with increasing water depth from 0.7 °C at station 6 to - 0.3 °C at station 7.Salinity ranged between 34.84 and 34.93 PSU at stations 10 and 7, respectively.Concentration of chlorophyll, phaeopigments and percent carbon content in sediments were similarly low across all stations but station 1, where concentrations of phaeopigments and chlorophyll was higher.The sediments at all stations were almost entirely composed of mud.This was generally in agreement with the images, which showed most of the stations were characterized by fine, usually light-colored sediments, but images also showed interspersed hard substrate.Station 6 was covered by numerous, dark-colored pebbles a few mm size on top of fine-grained sediments.Stones were present at most plateau/ridge stations except station 2.In contrast, there were no stones registered at the basin stations except station 13.The number of stones was highest at station 6 and 9.Lebensspuren of different shapes and sizes were observed at all stations.They were particularly numerous inside the pockmark at station 10 and the isolated basin station 7.In general, the most recognizable traces were those left by gastropods, fish, and an unidentified animal leaving narrow, non-linear tracks more or less concentrated in one spot.At the plateau/ridge stations, many tracks from sea stars or ophiuroids were present.Abundant lebensspuren at basin station 11 were small near-circular holes with a tail, which were also present but less numerous at some other stations.There was no sign of chemosynthetic activity at any of the pockmark plateau stations such as gas bubbling or obvious bacterial deposits.In addition, burrows of unknown origin and patches of sediment of different coloration indicating recent sediment disturbance/movement, were registered at the stations 1, 2, 9 and 11.There was no significant correlation between abundance, biomass or number of taxa obtained and any of the environmental variables at the six trawl stations.For the ten ROV stations, the number of taxa was negatively correlated with depth and positively correlated with the number of stones per picture.The main taxa associated with the stones on the images were morphotype 10, tubeworms in a white calcareous tube, morphotype 6, and various anemones.At the ridge/plateau stations, brittle stars were often observed by arms sticking out from beneath the stones.The CCA biplot showed the position of benthic taxa in relation to the environmental variables at different stations.The environmental variables water depth, percent 
mud, amount of stones per picture, and sediment chlorophyll a showed significant relationships with epifaunal community composition and explained 65% of variance, with the first ordination axis explaining 32% and the second ordination axis explaining 18% of the variance in epifaunal community composition at the sampling stations.Among these, depth and amount of stones were the strongest predictors.Stations and associated taxa separated into two main groups: basin stations on the right side of the plot and ridge/plateau stations on the left.Basin stations were characterized by greater depth and finer sediments.The ridge and plateau stations were spread along the second axis on the ordination plot, mostly reflecting the west-east gradient of stations.Ridge station 6 and plateau stations 8 and 9 grouped together and were characterized by a high number of stones.Ridge stations 1 and 2 and plateau station 10 were associated with high sediment chlorophyll a concentrations and had coarser sediments.Polynoidae and Porifera were closely associated with greater depth.Ampharetidae, Sabellidae and Ophiopleura borealis were associated with high sediment chlorophyll a concentration.Morphotype 10 was positively associated with the amount of stones.The majority of species identified to species level across all trawl stations were of Arcto-boreal-Atlantic affinity.They represented 50–59% of the total number of taxa per station.Species occurring only in the Arctic region were represented with 14–28% of the number of taxa.Arcto – boreal taxa comprised 11–29% of all taxa and were not observed at basin station 13.Pacific-boreal taxa were present only at the two shallower ridge/plateau stations 3 and 9, and with only 6% of total number of species.Taxa occurring with “other” biogeographic affinity were present at the deepest station 13 and at the relatively shallow ridge station 1,.In terms of relative abundance, trawl communities were by far dominated by Arcto-boreal-Atlantic species.They represented >90% of total abundance at the deeper stations and 77–87% at the remaining stations.The contribution of Arcto-boreal species increased with decreasing depth from 2 to 15%.The contribution of Arctic species to total abundance was low at the deeper stations, and did not exceed 9% at the shallower stations.The contribution of Arcto-boreal-Pacific taxa was less than 1%, and contribution of species occurring outside of boreal and Arctic areas did not exceed 2% of total abundance.Our study in one of the least known parts of the Pacific Arctic deep sea revealed marked epifaunal community differences among habitat types, partly supporting the first hypothesis tested.We found lower density, diversity and biomass, as well as different taxon composition in the deep basin compared to the shallower ridge and plateau fauna.However, there was no significant difference between ridge and plateau epifaunal communities, although western and eastern parts of the CBL differed in plateau/ridge community properties.As is typical with deep-sea studies, water depth and availability of hard substrate in the form of stones were the strongest predictors of benthic community structure, along with sediment grain size and indicators of food availability.Results of the study supported our second hypothesis as Arcto-boreal-Atlantic taxa dominated species richness and biomass; the latter increased with increasing water depth while taxa with Pacific affiliations were essentially absent.The total number of taxa/morphotypes found across the CBL in the 
present study was 78 (ROV) and 86 (trawl), with a grand total of 134. This is higher than the previous records from the same area, where 15 and 67 epifaunal taxa were documented. Most of the taxa from the CBL are found throughout the Arctic deep sea and shelf. The most speciose phyla across the study area were Echinodermata, Arthropoda and Cnidaria, which is typical of Arctic deep-water epifaunal communities. In general, knowledge on Arctic deep-sea epifaunal biomass is extremely sparse and mostly restricted to continental slopes. Our study adds to this limited knowledge, with epifaunal biomass measured from trawl catches ranging from 173 to 906 g ww/1000 m2. The values obtained are generally within those registered from the Alaska Beaufort Sea slope, where biomass varied from 37 to 5250 g ww/1000 m2 between 500 and 1000 m, with values mostly below 700 g ww/1000 m2 at stations at 1000 m. Our epifaunal biomass estimates from the CBL tended to be one to two orders of magnitude lower than the highest values recorded on Arctic shelves. At least in part, these differences are related to gear bias, as was obvious from the density estimates. Total epifaunal densities from trawl samples were much lower than those recorded from ROV images. The difference in density estimates obtained by trawl and ROV has previously been reported for some fish species and decapods, again with higher values obtained from imagery. Trawl efficiency in the deep sea can be reduced owing to the difficulty of maintaining consistent bottom contact. In contrast, ROV images may provide more accurate quantitative estimates of community properties, at least for fauna easily seen on images. Densities reported from the single previous quantitative study of the CBL area and adjacent Canada Basin varied from 90 to 5830 ind/1000 m2. The total epifaunal densities reported from images at the HAUSGARTEN observatory in Fram Strait were generally higher and ranged from 120 to 54,800 ind/1000 m2. Like biomass, densities tend to be much higher on Arctic shelves. In general, epifaunal diversity, density and biomass varied across the ridge and plateau stations, but with no significant difference found between these two habitat types. The ridge stations were flat rather than steep, and the pockmarks associated with the plateau stations did not indicate signs of chemosynthetic activity, perhaps making the sites in these habitats quite similar. Based on the ROV images, however, the pattern of numerical taxon dominance differed across the eastern and western groups of ridge and plateau stations. Eastern communities at the Northwind Ridge were dominated in abundance by annelids of the families Ampharetidae and Sabellidae, which have previously been observed in relatively high numbers in the deep sea of the Southern Ocean and in hydrodynamically active areas with strong deep-sea currents, such as Fram Strait. These polychaetes are sessile surface deposit feeders and suspension feeders, though they are also capable of using other feeding strategies, a plasticity that might allow them to be common in the deep sea. In addition, the higher abundance of these polychaetes on the Northwind Ridge might be due to a higher input of organic matter here compared to other stations, which can be efficiently taken up by these families. In the western CBL, ridge and plateau stations were characterized by high numbers of unidentified coronate tubes, a common deep-sea taxon. The medusa phase of the coronates Atolla sp. and Nausithoe sp.
was previously reported from the pelagic realm in this area, where the abundance of Atolla was significantly higher. Polyp stages of coronates are morphologically very similar, but given that Atolla medusae were also recorded in pelagic ROV dives, we suggest that morphotype 10 might be Atolla polyps. These polyps need hard substrate for attachment, and the higher availability of stones on the western side of the study area may explain the dominance of coronate polyps here. Unlike at pockmarks with active gas venting, we did not find characteristic seep organisms that are known to rely on chemosynthetic energy. A single pockmark previously investigated in the CBL also did not indicate active seepage or typical seep-associated biota. Active fluid flux is actually rarely observed in pockmarks, since many of them are relicts formed several thousand years ago. Instead of chemosynthetic biota, increased biological abundance and taxonomic richness were previously observed in inactive pockmarks elsewhere, including the pockmark of the CBL, where the abundance of epifauna, and of holothurians in particular, was high. Such enhanced values have been linked to the morphology of pockmarks altering hydrodynamic conditions, causing turbulent re-suspension of material and enhanced settling of organic matter, resulting in higher food supplies and increased larval settlement. While densities could not be calculated in the present study, dense populations of anemones were observed in the shallower pockmark, possibly indicating increased water movement over the pockmark. The deeper pockmark was instead characterized by a higher number of pycnogonids, again a taxon previously recorded in pockmarks with active seepage of gases, cold seeps, mud volcanoes and underwater pingos, though these studies did not offer explanations of these observations. Deeper locations of the CBL were characterized by lower biomass, density and species richness, as well as different benthic community composition, compared to the shallower ridge and plateau stations. Such changes with depth are in agreement with other studies and knowledge on bathymetric trends in global deep-sea faunal communities. High proportional abundance of mobile, swimming polynoid annelids and the sea cucumber Elpidia sp. was characteristic of the basin stations. The identified subfamily Pelagomacellicephala is characteristic of deeper waters, including the Arctic Basin. High abundance of holothurians is also typical for deep-sea communities both elsewhere and in the Arctic. Both taxa are mobile, a useful trait allowing them to respond quickly to seasonally and spatially changing food input in polar deep seas. Sponges were also prominent at basin stations in terms of densities and biomass, as well as proportional abundance. Indeed, sponges often occur in the abyssal benthos as well as in submarine canyons, on the east Greenland slope, and in the Angola Basin in the SE Atlantic. Sponges found in basins of the CBL were mainly the polymastiids Radiella sol and Polymastia sp.
growing in a mud-dominated environment. Though Polymastiidae have previously been found to colonize hard substrate, some develop root-like structures, cement small particles of sediment to create their own hard substrate, or use small-sized hard substrate. The variable feeding strategies of deep-sea sponges, such as suspension feeding, uptake of dissolved organic matter, and carnivory, might allow them to survive in the oligotrophic conditions of the Arctic deep sea. Among the most abundant taxa of the basin stations was also the common Arctic deep-sea anthozoan Bathyphellia cf margaritacea, which was found almost everywhere in the study area. The species' flexible choice of substrata might be a reason for its wide distribution across the study area. In the CBL, depth was among the main environmental factors significantly affecting epifaunal community structure, species richness, numerically important taxa, density and biomass. This is in accordance with other studies reporting depth zonation in epifaunal community composition, and decreases in density, diversity and biomass with depth. These changes are likely indicative of changes in environmental factors co-varying with depth. Food availability and presence of hard substrate were the factors affecting epifauna across habitats and also contributing most to the difference between eastern and western parts of the CBL. Quality and quantity of food were previously described as the most important factor structuring deep-sea benthic communities. The main source of organic matter for benthic deep-sea organisms is derived from the upper water layers, and benthic availability of food is strongly linked to depth, seasonality and the presence of sea ice in the Arctic. Indicators of organic matter availability within sediments measured in this study suggested very low organic matter content across the study area and low-quality food for benthic organisms. The higher benthic pigment concentrations at Northwind Ridge were probably due to organic matter transported here from the productive Chukchi shelf, which is located closest to the Northwind Ridge station. This transport is mediated through the nutrient-rich Pacific-origin water abundant in the eastern part of the Northwind Ridge in the upper ∼225 m. While pigment concentration is a point-in-time measurement largely depending on the time of sampling, long-term trends in organic carbon supply are often more closely reflected in community characteristics of macrobenthos, such as density, which correlates well with production regimes in upper water masses and vertical carbon flux due to their limited mobility. Indeed, order-of-magnitude higher densities of macrobenthos were observed at the Northwind Ridge station compared to other stations, along with the higher abundance of sessile Ampharetidae and Sabellidae worms in our study. The majority of epifaunal taxa, however, did not respond to the potentially higher food supply with elevated abundance or biomass at that station. This is consistent with previous epifaunal studies on the Chukchi shelf and Greenland Sea slope, where total organic content and pigment concentration were less correlated with epifauna than other environmental factors. Detectability of pelagic-benthic coupling is lower for epifaunal than for sessile/less mobile macrofaunal organisms because the higher mobility of epifaunal organisms allows them to move to food patches, diluting the spatial coupling. Stones are known to support complex epifaunal communities and enhanced faunal diversity compared to the surrounding soft
bottom deep-sea environment.They are often colonized by encrusting and sessile fauna like cnidarians, crinoids, barnacles, sponges.This habitat enhancement was also indicated in our study, where species richness was positively correlated with the number of stones.The most abundant taxon associated with stones in our study was a sessile coronate polyp.Mobile fauna in the vicinity of stones are likely feeding on organic matter produced by the dropstone community.Finally, grain size composition also affected epifaunal community composition in the CBL, which is in accordance with, for example, studies from the Chukchi Sea where the importance of sediment grain size for epibenthic distribution and taxon richness has been shown.Although sediments grain size composition of the CBL actually varied little the highest mud content was observed in basins and the highest sand fraction on the Northwind Ridge.This pattern indicates higher current velocities at Northwind Ridge than in the basin, which was also evident by more abundant lebensspuren in the basins and higher abundance of the suspension feeding sabellid polychaetes and stalked crinoids.The biogeographic pattern of the CBL fauna was characterized by strong dominance of species with Atlantic affinity across the entire study area.The share of species of only Pacific affinity was small at shallow stations, and zero deeper than 850 m. Prevalence of species of Atlantic affinity in deep Arctic areas is consistent with earlier studies from the Beaufort Sea, Arctic Basins and Norwegian and Greenland Seas.This pattern reflects the current geomorphology, bathymetry and oceanography, as well as the evolutionary history of the Arctic Ocean, including changes in geological settings and glaciation events over time.The near absence of Pacific taxa might be explained by a limited connection to the Pacific Ocean.The deep-water connection between the Arctic and Pacific oceans closed 80–100 million years ago.Currently, the Arctic Ocean connects to the Pacific via the shallow Bering Strait, which partly acts as a barrier for dispersal of benthic organisms adapted to deep water.On the contrary, the more saline, denser Atlantic water dominates in the Arctic deep sea with the only deep-water connection to the Arctic via Fram Strait.The water mass distribution and circulation pattern strongly contribute to dominance of Atlantic affinity species over Pacific affinity in the study area.Additional possible underlying reasons for the observed biogeographic pattern are the multiple glaciation events during the Pleistocene, an asymmetry in glacial ice cover, and the consequent reinvasion of fauna in the Atlantic and Pacific parts of the Arctic.The Pacific Arctic shelves were only partly glaciated, providing refugia and allowing fauna to survive and maintain their presence on the shelf.After the glaciations, the Pacific Arctic was reinvaded by the fauna from the shallow refugia and through the shallow Bering Strait, which explains why most of the current Pacific species are stenobathic and therefore almost absent from the CBL.On the Atlantic side of the Arctic, shelves were covered by ice down to the deep ocean.Thus, shallow water fauna on the Atlantic side of the Arctic could not survive glaciation and had to find refugia in deeper unfrozen areas or be extirpated.After the glaciation, species adapted to depth reinvaded from the Atlantic, which is a reason why we find more eurybathic fauna on the Atlantic side of the Arctic.CBL fauna was in fact also eurybathic indicated by 
the fact that around 80% of the fauna used for the biogeographical analysis in our study is shared with Arctic shelves. Our results suggest that taxon richness, biomass and density of epifauna decrease with depth in the CBL, leading to marked differences between basin and plateau/ridge communities. These changes were mainly driven by depth. No statistically significant differences in community metrics were observed between ridge and plateau stations. Regional differences in numerically dominant taxa, however, were recorded between western and eastern ridge/plateau stations, which were attributed to differences in food supply and hard substrate availability. The majority of epifaunal species of the CBL were of Atlantic-boreal affinity, documenting a stronger biogeographic influence of Atlantic than Pacific waters on CBL communities. In addition, the study contributes to the yet incomplete biodiversity inventory of the Arctic deep sea, with at least nine species showing new distribution records and more than 16 taxa added to the previously documented species list from this area. This documentation of the current biodiversity and community structure of Arctic deep-sea fauna and its interaction with the environment is urgently needed given that the Arctic is changing due to climate change. The environmental changes most prominent in the study area include: a decrease in sea ice cover, with the most pronounced changes in the Pacific sector of the Arctic; a decrease in sea ice thickness, much of which occurred in the CBL region; increased inflow of warming Atlantic water into the Pacific Arctic; and the rising volume of fresher and warmer Pacific water inflow reaching the Chukchi shelf. It is anticipated that these changes, along with effects such as acidification, atmospheric changes, and potentially increased human impact, might significantly affect community composition, diversity and functioning of Arctic ecosystems in the future. For example, a shift in benthic species composition, a decrease in diversity of ice-associated taxa, northward faunal range expansion of fishes, a decrease in phytoplankton cell size in the Canada Basin, and increased primary production across much of the Arctic Ocean have been documented. These observations are based on time series and are therefore mostly restricted to shallow areas. The lack of Arctic deep-sea data restricts the evaluation of biological responses to large-scale change in the Arctic environment. In order to provide adequate answers concerning how Arctic deep-sea ecosystems will change, long-term observations are needed. Currently, the HAUSGARTEN observatory in Fram Strait is the only Arctic deep-sea research observatory where biological and physical parameters are being documented. Response to environmental change may vary between the Atlantic and Pacific parts of the Arctic deep sea. Thus, placement of a long-term observatory in the Pacific sector of the Arctic deep sea that includes measurement of environmental and biological parameters at different trophic levels is advisable. We suggest that the CBL is an ideal location for this purpose because of the prominent climate-related alterations in that region; such an observatory could act as a Pacific counterpart to the Atlantic HAUSGARTEN long-term observatory. KI and BB conceived the study idea, KI obtained the funding for field work, and KI and IZ conducted the field work. IZ conducted the image analysis with input from BB, PR and KI; IZ conducted the data analysis and most of the writing, with all authors participating in data interpretation and article preparation.
All authors have approved the final article.
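As a small illustration of how the trawl catches described in the methods translate into the per-area figures reported above, the sketch below standardizes a haul to individuals (or grams wet weight) per 1000 m2 using the effective net width of 2.26 m and the distance swept estimated from the time-depth recorder. The catch values used are hypothetical placeholders; this is not the processing code used by the authors.

```python
def area_swept_m2(distance_m: float, net_width_m: float = 2.26) -> float:
    """Seafloor area sampled by the beam trawl (effective mouth width 2.26 m)."""
    return distance_m * net_width_m


def per_1000_m2(catch: float, distance_m: float, net_width_m: float = 2.26) -> float:
    """Standardize a catch (individuals or g wet weight) to 1000 m2 of seafloor."""
    return catch / area_swept_m2(distance_m, net_width_m) * 1000.0


# Hypothetical haul: 1200 individuals and 2400 g wet weight over 1500 m of bottom contact
print(round(per_1000_m2(1200, 1500)))   # ~354 ind / 1000 m2
print(round(per_1000_m2(2400, 1500)))   # ~708 g ww / 1000 m2
```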
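The multivariate and diversity analyses described in the methods (square-root transformation, Bray-Curtis dissimilarities, hierarchical clustering, Simpson's index) were performed in PRIMER and R. The Python fragment below reproduces only the core calculations on a small, hypothetical station-by-taxon matrix to make the workflow concrete; ANOSIM, SIMPER and CCA are omitted, and the group-average linkage method is an assumption rather than a detail stated in the text.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage

# Hypothetical abundance matrix: rows = stations (ridge, plateau, basin), columns = taxa
abund = np.array([
    [120.0, 30.0,  0.0,  5.0],
    [ 90.0, 45.0,  2.0, 10.0],
    [  3.0,  0.0, 60.0, 25.0],
])

# Square-root transform down-weighs dominant taxa before computing dissimilarities
bc = pdist(np.sqrt(abund), metric="braycurtis")
tree = linkage(bc, method="average")          # group-average (UPGMA) clustering

# Simpson diversity, here in the form 1 - sum(p_i^2), per station
p = abund / abund.sum(axis=1, keepdims=True)
simpson = 1.0 - (p ** 2).sum(axis=1)

print(np.round(squareform(bc), 2))            # pairwise Bray-Curtis dissimilarities
print(np.round(simpson, 2))                   # diversity per station
print(tree)                                   # linkage matrix, ready for a dendrogram
```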
Epifaunal communities from the poorly studied Arctic deep sea of the Chukchi Borderland region were investigated to: (1) determine differences in community structure among ridges, plateau with pockmarks, and much deeper basins as three main habitat types, (2) analyse the environmental factors that might shape these communities, and (3) investigate biogeographic affinities dominating the epifaunal communities. Epifaunal samples were collected in summer 2016 with a beam trawl (6 stations) and ROV (10 stations) from 486 to 2610 m depth. Seventy-eight and eighty-six taxa were registered from ROV images and trawl samples, respectively, with Echinodermata and Arthropoda dominating overall taxon richness. Epifaunal densities were estimated at 2273 to 14,346 ind/1000 m2 based on ROV images but only 342 to 2029 ind/1000 m2 based on trawl samples. Epifaunal biomass based on trawl catches ranged from 173 to 906 g wet weight/1000 m2. There was no significant difference in density, biomass and community composition between plateau and ridge communities, though the western and eastern parts of the study area differed in plateau/ridge community properties. Abundance in the eastern part of the study area was dominated by annelids (Ampharetidae and Sabellidae), and the western part by an unknown cnidarian (likely polyps of Atolla). Trawl samples from both western and eastern regions were dominated by the echinoderms Ophiopleura borealis and Pontaster tenuispinus. Deep basin communities differed from shallower plateau/ridge stations by significantly lower number of taxa and densities based on the images, and by lower biomass based on trawl catches. Polynoid annelids and sponges were characteristic taxa of the basin stations. Water depth and number of stones providing hard substrate significantly influenced epifaunal community structure, with sediment pigments and grain size also being influential. Arcto-boreal-Atlantic species dominated communities in the Chukchi Borderland, presumably mediated by Atlantic water dominance in the deep water layers of the Pacific Arctic. This study adds to the limited knowledge of ecology of the Arctic deep sea and improves existing baseline data that can be used to assess future effects of climate change on the system.
63
Energy-weighted dynamical scattering simulations of electron diffraction modalities in the scanning electron microscope
Electron diffraction techniques in the scanning electron microscope are established and versatile tools for microstructural investigation of crystalline materials.The strong and complex local interactions of electrons with crystalline matter offer a plethora of information about the crystal structure and material properties of a sample that can be recovered from the recorded signal.A review of these is given in ref. .Kikuchi patterns are one representation of the diffracting behaviour of electrons in the form of a variation in the angular distribution of signal electrons.The geometry of these patterns is dictated by the unit cell of the crystal and its orientation.Other features, such as the width of the bands, for instance, are nevertheless influenced by the spatial distribution of electrons in the sample and their energy distribution.We can distinguish a number of different SEM modalities employing the Kikuchi diffraction mechanism.If the recorded electrons are the backscattered ones, then the technique is known as electron backscatter diffraction and the Kikuchi patterns obtained are called electron backscatter patterns.Automated pattern indexing software established this diffraction modality as one of the conventional tools of orientation mapping, phase identification and/or relative lattice strain estimation in crystalline materials .In order to increase the diffraction signal in this mode, the popular approach has been to tilt the sample to about 70° from horizontal towards the detector, which guarantees a maximum backscattered electron yield.However, the high tilt will also spread out the information volume of the electrons within the sample, resulting in limitation of the achievable spatial resolution.Stimulated by the increased attention to nanostructured materials, which promise new and enhanced properties when compared to their larger scale counterparts, the interest in improving the resolution of established characterization techniques has also expanded.The use of forward-scattered electrons through a thin sample as diffraction signal collected from the bottom of the foil has been shown to improve the lateral spatial resolution to below 10 nm ; this technique is commonly known as transmission Kikuchi diffraction or transmission EBSD.The modalities above are sometimes referred to as “channeling out” diffraction techniques to suggest that the diffraction information has been sampled by electrons on their way out of the sample and that the volume from which the signal is collected is located close to the exit surface.The SEM can also be used in “channeling in” mode when electron channeling patterns are acquired .In this case, Kikuchi-like diffraction patterns can also be obtained by varying the incident beam direction with respect to the crystal.Typically, those patterns have a smaller solid angle compared to their EBSD counterparts.Nevertheless, the physical scattering mechanisms that produce EBSPs and ECPs are related through the reciprocity principle .Theoretical models have been developed and successfully applied to retrieve this wealth of information by taking into account the full dynamical behaviour of electron diffraction .Electron diffraction calculations commonly handle inelastic scattering in a phenomenological way through the introduction of a complex optical crystal potential approximation.This assumption implies that inelastically scattered electrons, once they lose even a small amount of energy, will cease to contribute to the diffraction pattern.The predicted 
diffraction patterns based on this simplified model remain meaningful but, understandably, lack quantitative precision. Due to the strong interaction of incident beam electrons at SEM energies with matter, the inelastic cross section is always comparable to the elastic one, and a portion of inelastically scattered electrons will reach the detector and contribute to the imaged pattern. Depending on the types of inelastic channels allowed, these electrons can suffer diffraction after losing a small amount of energy, contributing then to the diffuseness of the Kikuchi patterns. This process is especially relevant for “channelling out” modalities where electrons with energies lower than the incident energy can still contribute to the diffraction pattern. Alternatively, if electrons are scattered at a large angle multiple times such that memory of their original direction is lost, they will also contribute to the background intensity. This is the case for both channeling modalities. We call the latter type of inelastically scattered electrons SE2 in order to differentiate them from SE1 electrons carrying diffraction information. It is therefore essential to explicitly consider inelastic scattering and its effects on the signal-contributing electrons, such as their energy and spatial distributions . This is especially important if finer features of the Kikuchi bands are to be correctly predicted. A full account of the inelastic channels in electron diffraction poses a challenging problem. While general Schrödinger equation solutions for inelastic scattering in perfect crystals have been proposed by Yoshioka and solved for various electron microscopy applications, to our knowledge, readily implementable solutions relevant for SEM electron energies have yet to be proposed. In this work, we assume inelastic scattering events to be stochastic and that Monte Carlo techniques can estimate both the trajectories of electrons that suffered such events as well as their energy distribution. Such models have been proposed and widely used to correctly predict distributions of backscattered electrons . The assumption that the distributions of escape energies and trajectories of electrons carrying diffraction information can be estimated from the last elastic event predicted by MC models has already been successfully applied for EBSPs and ECPs . The electron energy at the last elastic event, prior to leaving the sample, is regarded as the diffraction energy, and the distance to the exit surface from the elastic event is used as the diffraction distance. Dynamical diffraction modelling is then applied for the full MC-predicted electron energy and path distributions. Here, we extend this model to TKD patterns by considering the geometry of a thin film sample where the entry and escape surfaces are different, such that the incoherent events acting as sources of diffracting electrons are scattering in a forward direction. While this approach may not take into account the full extent of inelastic scattering effects on diffracted electrons proposed by the Yoshioka equations, it leads to a model of manageable complexity which is straightforward to implement and whose predictions are easily understood. Most importantly, it represents a step forward in taking into account the full physics of electron diffraction in matter by considering the full distribution of energies of channeling electrons, and produces accurate predictions when compared to experimental patterns, as shown in Section 3.2.
In Section 2 we describe the typical geometries for EBSD, TKD and ECP data acquisition and formulate a general expression for the thickness-integrated back-scattered electron intensity that is applicable to all three diffraction modalities. We describe the particulars of the Monte Carlo trajectory simulations in Section 2.2, along with the resulting differences between the modalities. Master patterns for the three modalities are described and compared in Section 3.1. In Section 3.2 we compare experimental and simulated TKD patterns, and Section 3.3 illustrates how the recently developed dictionary indexing technique can be applied to TKD patterns. We conclude the paper with a brief discussion and summary in Section 4. The Monte Carlo model enables us to predict how any of these system parameters influence the form of the weighting function. For instance, in the next section we discuss the impact of different sample geometries on TKD patterns, while in Section 2.3 the effect of foil thickness is investigated. Then, in Section 2.4 the sample-detector geometry is considered as a useful system parameter that can identify special cases for which the numerical solution of the scattering process can be simplified dramatically via the use of so-called master patterns. The use of Monte Carlo simulations in predicting the energy and spatial distribution of diffracting electrons has been described before for EBSPs and ECPs on bulk samples. These simulations employ Joy and Luo’s modified version of Bethe’s continuous slowing down approximation as an empirical estimate of the combined probabilities of the inelastic scattering processes. The probabilities of elastic scattering events are determined from the Rutherford scattering cross section in the single scattering approximation. Therefore, the loss of energy is uniquely determined by the CSDA, while the angular deflections from the original direction are defined by the elastic scattering events. For further details on this simulation approach we refer to the book by Joy . Additionally, the Monte Carlo model can be used to predict general electron trajectories inside the sample and the system parameters that might affect them. In Fig. 1 we show angular distributions of escaping electrons predicted by the MC model for the TKD modality. The intensities are shown as stereographic projections in the sample’s southern hemisphere for a beam of 20 keV electrons incident on a 200 nm thick Ni foil. By binning the energy values of the electrons escaping from the bottom of the foil into high-loss energy electrons (< 17.5 keV), medium-loss electrons and low-loss energy electrons, we can show the effect of energy filtering and observe the behaviour of different energy electrons. Fig.
1 shows projections for the case when the sample is horizontal and the electron beam normal.Here we can observe, as expected, that higher energy transmitted electrons are much more focused in the middle of the southern hemisphere, which happens to coincide with the direction of the incident beam.With increased energy loss we can observe an increase in trajectory randomization or diffuseness.This can be explained by considering the possible trajectories of electrons inside the sample and their corresponding energy loss.Electrons escaping the sample with energies close to the incident beam will not have deviated far from the incident direction.Relative to this, high loss electrons are more likely to escape at large angles to their incident direction.Very high energy loss electrons appear to have no preferred escape direction and we can expect these electrons to only contribute to image background.In Fig. 1 we investigate the effect of tilting the sample on the angular distribution of exiting electrons from the bottom surface.Similarly, the high energy electrons will not deviate far from their incident trajectories.However, in this case, the incident direction does not correspond to the center of the SP space and we observe that the directional distribution of the low loss electrons clusters 30° below the SP horizon.The trajectories of higher loss electrons start to be randomized in the entire SP space.We can also observe in these images how the radial symmetry of electron scattering is broken by the tilt angle of the sample.Finally, the angular distribution of the highest loss electron distribution will look the same as for the flat sample as their “memory” of the incident direction is lost.It becomes apparent that the sample geometry constitutes an important parameter in the formation of the Kikuchi patterns.Similarly to the EBSD case, where the sample tilt determines the preferred trajectories of electrons of different energies scattering back from the sample , the tilt of the thin film in TKD will directly influence the angular distribution of transmitted electrons suffering diffraction at different energies.In the following section we will carefully review the effect of another system parameter, the sample thickness, on the TKD diffraction patterns.This behaviour is shown in Fig. 2 as kernel density estimate distributions of electron escape energy versus escape distance predicted information for two different Ni thin foil thicknesses, 100 nm and 200 nm respectively, in the TKD geometry.Darker colours show that more electrons are likely to escape the sample with the corresponding parameters.The likelihood intensity across the two images has been normalized to the maximum value in Fig. 
2 such that the intensity across images can be compared. We also show the escape energy and distance region where 90% of electrons are expected to come from, which is indicated by the thick line. Comparing the two distributions in Fig. 2, it is clear that the thickness of the thin sample strongly influences the shape of the distributions. Considering the y-axis, the energy range of the electrons exiting the sample broadens and the energy decreases to significantly lower values as the thickness of the film increases. These observations already indicate that we should expect more diffuse diffraction patterns from thicker samples when compared to thinner ones. In general, the greater the interaction volume of electrons with the sample, the more energy will be lost by electrons before diffraction and the greater the diffuseness of the Kikuchi patterns, as supported by the literature . Considering the x-axis, we observe that the escape depth profile resembles the usual power-law distribution, with the bulk of the electrons carrying diffraction information originating from a few nm below the escape surface. It should be noted that the MC model used in this study does not aim to predict the full depth of diffracting electrons or the interaction volume. Instead, we make the assumption that the mean value of the full diffraction depth distribution can be estimated to be of the same order as the electron mean free path. Due to the power-law distribution rule, we can be confident that the vast majority of escape depths is considered in this model. Consider the sample and detector geometries shown in Fig. 3, where the lighter region on the samples depicts the volume in which electrons suffer scattering events. The top row shows two potential EBSD geometries, one with the sample tilted at the standard 70° angle with respect to the horizontal plane, the other with the sample tilted at 50°. As previously discussed, the sample geometry will determine the manner in which the scattering radial symmetry will be broken. Nevertheless, the region of SP space sampled by the position of the detector will also influence the uniformity of the electron energy and diffraction distance distributions. In the first of these geometries, the electrons that reach the top and bottom of the detector ought to have travelled approximately the same length inside the sample before channelling out; in the second geometry, on the other hand, the electrons that reach the bottom of the detector have traveled a significantly larger distance inside the sample. Finally, for the ECP case also illustrated in Fig. 3, a small sample tilt does not significantly change the distribution of path lengths inside the sample, and most trajectories have about the same path length. For EBSD and TKD simulations and sample orientations that deviate significantly from the standard orientations, one cannot apply the above approximation, since the range of distances traveled inside the sample is quite broad; thus, in these cases one must carry out the integrations of Eq.
for each individual EBSD/TKD pattern, which results in a slow computational tool.TKD master pattern simulations proceed along lines similar to the previously published EBSD and ECP modeling approaches.A uniform grid of points is generated on a spherical surface surrounding a hypothetical spherical crystal located at the center; each sampling point represents one outgoing beam direction k, and the radius of the sphere is the maximum integration depth t0.The sampling scheme employs the modified Lambert projection introduced in in which a uniform grid on a square is mapped onto the sphere by means of an equal-area projection.For each beam direction, and for a given sample thickness, one carries out the integrals of Eq., using the Monte Carlo λ weighting function determined for that sample thickness.In the following section, we show example TKD master patterns and compare them to similar patterns for the EBSD and ECP modalities.The line scans across the TKD patterns for different thickness films are more similar to each other, except for the shift in peaks in the zone axis.It is rather apparent that both the peak positions and the sharpness of the thin film TKD pattern are more similar to the ECP pattern, while the peaks and blurriness of the thick film TKD pattern are closer to those of the EBSPs.We explain this behaviour by considering the energy loss of electrons contributing to the patterns in each case.The Monte Carlo predicted energy loss spectra for all four cases described above are shown in Fig. 5 as fitted Poisson distribution curves.Thin film TKD patterns are produced by electrons with an energy range very close to the ECP case.Similarly, increasing the sample thickness causes the electron exit energy distribution to become wider and shift to lower energies, which corresponds to a broadening and slight blurring of the Kikuchi bands due to the increased Bragg angles; these phenomena are common to EBSPs and thick films TKD patterns.It becomes apparent that the sample thickness can be seen as an energy filtering mechanism in TKD.In terms of the traditional Hough-based indexing approach, one must thus select a butterfly mask of the appropriate width, depending on the sample thickness and incident electron energy.For the dictionary indexing approach, illustrated in Section 3.3, the pattern dictionary must be computed using the appropriate Monte Carlo and master pattern data, to ensure accurate matches between experimental and simulated patterns.The EBSD master pattern is an energy-weighted average of individual master patterns and the integration over the electron energy gives rise to a continuous range of Bragg angles and, thus, a general blurring of the master pattern features compared to the ECP case.This will also be the case for individual diffraction patterns that are extracted from the master patterns via bilinear interpolation, as explained in .The same detector parameters were then used to refine the orientation of the pattern in Fig. 6.The resulting simulated patterns are shown in the right column of Fig. 
6. Note that the only adjustments to the simulated patterns were brightness and contrast changes to maximize the visual agreement between the simulated and experimental patterns. The overall intensity gradient follows directly from the use of the direction-dependent Monte Carlo statistical data, and is in good agreement with the intensity gradients of the experimental patterns. The satisfactory agreement between simulated and experimental patterns indicates that the energy-weighted dynamical scattering model employed in the pattern simulations is sufficient to obtain realistic pattern simulations. The recently developed open-source dictionary indexing approach , an alternative to the commercially available Hough-based pattern indexing algorithms, is based on the ability to compute a library of diffraction patterns for a uniform sampling of orientation space and a given set of geometrical detector parameters. The technique has been applied successfully to EBSPs and ECPs , and in this section we describe the first application of dictionary-based indexing to TKD patterns. A 30 kV TKD data set was acquired from a nano-crystalline Al sample approximately 150 nm thick. The sample was made by DC magnetron sputtering Al on a Ag seed layer deposited on a Si substrate; for details see . Foils for performing TKD were shaped using e-beam lithography, then released by etching the substrate with XeF2. Final removal was done using standard FIB liftout techniques on an FEI Helios dual beam FIB-SEM. The sample was mounted on a 38° pre-tilted holder and the microscope stage was tilted 20°, so that the sample was tilted at 18° toward the EBSD detector, which was tilted at 8° from the vertical orientation. TKD was performed on an FEI Teneo FEG-SEM, equipped with an EDAX/TSL Hikari EBSD camera with 480 × 480 pixels and a pixel size of 70 µm. The small data set consists of 86 × 196 sampling points with a 30 nm step size, resulting in a field of view of 2.58 × 5.88 µm2, and each TKD pattern was binned by a factor of 2 to a size of 240 × 240 pixels. The patterns were first indexed in real time using the EDAX OIM-8 indexing software , resulting in an indexing success rate of 95.6%. A TKD master pattern, shown in Fig. 7, was computed using the approach described in Section 2, and orientation space was uniformly sampled using the cubochoric sampling approach described in , to obtain an orientation set consisting of 333,227 unique orientations inside the cubic Rodrigues fundamental zone; this corresponds to a sampling of orientation space with an average angular step size of 1.4°. The pattern shown in Fig. 7 was used to refine the detector parameters using the approach described in Section 3.2 and, along with the TKD master pattern, to index the experimental patterns; the corresponding simulated pattern is shown in Fig. 7. The experimental and simulated TKD patterns were pre-processed before computation of the dot products; pre-processing, computation of the 333,227 simulated patterns, and indexing of the 16,856 experimental patterns took a total of 35 min on 24 Intel Xeon E5-2670 2.30 GHz CPU threads and an NVidia GeForce GTX 1080 GPU. The resulting orientations and related information were exported to both a binary HDF5 file and a CTF file for further processing. The bulk of the computation time is spent on the simulated patterns; pre-computing those patterns and storing them in a file would significantly speed up the indexing process.
Fig. 7 shows an orientation similarity map. The dictionary indexing approach produces a list of the top N best matches. For each sampling point, the orientation similarity is computed by determining the average number of top matches that this sampling point has in common with its four nearest neighbors; this value is then displayed as a gray-scale image, as shown in Fig. 7. Since sampling points near grain boundaries will have fewer best matches in common with their neighbors, the orientation similarity map provides an easy overview of the microstructure, in which grain interiors have a uniform intensity level and all grain boundaries have lower intensity. The inverse pole figures in Fig. 7 were obtained by the standard commercial OIM-8 indexing package and by the dictionary indexing approach, respectively. The dark regions near the top of the field of view in Fig. 7 correspond to surface contamination from the XeF2 etching step and result in clusters of incorrectly indexed or unindexable points in both indexing approaches; patterns were deemed to be unindexable when either the Image Quality was low or the Pattern Sharpness parameter, as defined in , was low. Overall, the dictionary indexing approach has fewer incorrectly indexed points, in particular near grain boundaries. Inelastic scattering, a phenomenon usually discarded in diffraction simulations, has a direct influence on the energy distribution of diffracting electrons and, consequently, on the imaged Kikuchi patterns. The broader the energy distribution of diffracting electrons, the more diffuse the Kikuchi band edges. Using a Monte Carlo model, we can observe that the length of electron trajectories before diffraction is a determining factor in the broadening of the energy distribution. This factor, in turn, can be controlled in the Transmission Kikuchi Diffraction modality through the thickness of the sample, acting effectively as an energy-filtering mechanism. Another determining factor for the energy distribution is the sample-detector geometry, which influences both TKD and EBSD modalities. We should note that the Monte Carlo model used in this work explicitly describes the lower escape distance values for the signal-carrying electrons. A subset of electrons reaching the detector will, nevertheless, carry a probability of channeling over longer trajectories. Depending on their travel direction inside the crystal, these electrons are expected to give rise to contrast inversion of one or more Kikuchi bands. This will occur when the distance traveled is of the order of, or larger than, the extinction distance for a particular plane. Contrast inversions are thus expected to occur for both EBSD and TKD modalities when the sample is tilted such that long electron trajectories are possible; in addition, the sample should have a crystal structure that gives rise to short extinction distances. For the ECP modality, contrast inversions are not expected to occur unless very large sample tilt angles are used, which is not practical due to the possibility of the sample hitting the back-scatter detector. Similarly, when the TKD detector is mounted horizontally, below the sample, the electron trajectories inside the sample will have a narrow range of escape distances, so that contrast inversions are also not expected to occur. A statistical model more sensitive to the outlier cases of long-distance channeling electrons is therefore necessary if we are to correctly predict band contrast inversion.
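To make the Monte Carlo bookkeeping of Section 2.2 concrete, the following is a minimal, hedged Python sketch of the single-scattering trajectory loop described above: energy is lost continuously between discrete elastic events, and the energy and depth below the exit surface at the last elastic event are recorded as the diffraction energy and diffraction distance. The names (transmit_foil, stopping_power, sample_polar_angle, rotate) are illustrative stand-ins, not the authors' implementation; the stopping power (e.g. the Joy-Luo modified Bethe CSDA) and the elastic cross section (e.g. screened Rutherford) are assumed to be supplied by the user, and a constant mean free path is used for simplicity.

import numpy as np

rng = np.random.default_rng(0)

def transmit_foil(E0_keV, thickness_nm, mean_free_path_nm,
                  stopping_power, sample_polar_angle, n_electrons=100_000):
    """Single-scattering Monte Carlo bookkeeping for a TKD foil.

    stopping_power(E_keV)     -> energy loss rate in keV/nm (user supplied)
    sample_polar_angle(E_keV) -> elastic scattering angle in rad (user supplied)
    Returns a list of (diffraction energy, diffraction distance, exit direction)
    for electrons leaving through the bottom (exit) surface.
    """
    out = []
    for _ in range(n_electrons):
        z, E = 0.0, E0_keV                      # depth below entry surface, energy
        d = np.array([0.0, 0.0, 1.0])           # unit direction, +z into the foil
        # if no elastic event occurs before exit, keep the incident values
        last_E, last_depth_to_exit = E0_keV, thickness_nm
        while 0.0 <= z <= thickness_nm and E > 0.05 * E0_keV:
            step = -mean_free_path_nm * np.log(rng.random())  # path to next elastic event
            E = max(E - stopping_power(E) * step, 1e-3)       # CSDA loss along the step
            z += step * d[2]
            if z > thickness_nm:                              # crossed the exit surface
                out.append((last_E, last_depth_to_exit, d.copy()))
                break
            if z < 0.0:                                       # left through the entry surface
                break
            # elastic event: record the state, then deflect the direction
            last_E, last_depth_to_exit = E, thickness_nm - z
            theta, phi = sample_polar_angle(E), 2.0 * np.pi * rng.random()
            d = rotate(d, theta, phi)
    return out

def rotate(d, theta, phi):
    """Deflect unit vector d by polar angle theta and azimuth phi."""
    a = np.array([1.0, 0.0, 0.0]) if abs(d[2]) > 0.9 else np.array([0.0, 0.0, 1.0])
    u = np.cross(d, a); u /= np.linalg.norm(u)
    v = np.cross(d, u)
    return np.cos(theta) * d + np.sin(theta) * (np.cos(phi) * u + np.sin(phi) * v)

Binning the recorded exit energies and distances of such trajectories is, in essence, how the energy-dependent weighting used in the master pattern integration above would be assembled for a given foil thickness.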
The energy-weighted scattering model is shown to correctly predict Kikuchi band sharpness for the different SEM modalities. When used with the dictionary indexing approach, it was shown to produce indexed TKD patterns with fewer incorrectly indexed points compared to commercial Hough transform based indexing software.
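As a hedged illustration of the dictionary indexing and orientation similarity computations described above, the sketch below (Python/NumPy; all function and array names are hypothetical, not those of the open-source package) matches normalized experimental patterns against a pre-computed dictionary via dot products and builds an orientation similarity map from the top-N matches shared with the four nearest neighbours.

import numpy as np

def normalize(patterns):
    """Flatten, zero-mean and L2-normalize patterns so that a dot product
    acts as a normalized cross-correlation score."""
    flat = patterns.reshape(patterns.shape[0], -1).astype(np.float64)
    flat -= flat.mean(axis=1, keepdims=True)
    flat /= np.linalg.norm(flat, axis=1, keepdims=True)
    return flat

def dictionary_index(experimental, dictionary, top_n=20):
    """Return, for every experimental pattern, the indices of the top_n
    best-matching dictionary orientations (highest dot products)."""
    exp = normalize(experimental)            # (n_exp, n_pix)
    dic = normalize(dictionary)              # (n_dict, n_pix)
    scores = exp @ dic.T                     # (n_exp, n_dict) dot products
    return np.argsort(-scores, axis=1)[:, :top_n]

def orientation_similarity(top_matches, map_shape):
    """Average number of top matches shared with the four nearest
    neighbours, displayed as a gray-scale map (cf. the OSM above)."""
    rows, cols = map_shape
    ids = top_matches.reshape(rows, cols, -1)
    osm = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            shared, nn = 0, 0
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    shared += len(np.intersect1d(ids[r, c], ids[rr, cc]))
                    nn += 1
            osm[r, c] = shared / nn
    return osm

For the data set above, dictionary would hold the 333,227 simulated patterns and experimental the 16,856 measured patterns of 240 × 240 pixels; in practice the dot products are evaluated in blocks on the GPU rather than as one dense matrix product.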
Transmission Kikuchi diffraction (TKD) has been gaining momentum as a high resolution alternative to electron back-scattered diffraction (EBSD), adding to the existing electron diffraction modalities in the scanning electron microscope (SEM). The image simulation of any of these measurement techniques requires an energy dependent diffraction model for which, in turn, knowledge of electron energies and diffraction distances distributions is required. We identify the sample-detector geometry and the effect of inelastic events on the diffracting electron beam as the important factors to be considered when predicting these distributions. However, tractable models taking into account inelastic scattering explicitly are lacking. In this study, we expand the Monte Carlo (MC) energy-weighting dynamical simulations models used for EBSD [1] and ECP [2] to the TKD case. We show that the foil thickness in TKD can be used as a means of energy filtering and compare band sharpness in the different modalities. The current model is shown to correctly predict TKD patterns and, through the dictionary indexing approach, to produce higher quality indexed TKD maps than conventional Hough transform approach, especially close to grain boundaries.
64
Linking naturally and unnaturally spun silks through the forced reeling of Bombyx mori
Silks are fibrous proteins spun by various arthropods for a variety of natural functions .The silk of the mulberry silkworm Bombyx mori is of particular scientific interest, not least because its industrial-scale commercial production makes it an important model material for protein and polymer research .The silks in their natural form are used by the silkworm to construct tough cocoons for protection during metamorphosis .Using minimal specimen preparation, cocoon silk fibres can be unravelled from cocoons by softening them with water.The fibres themselves comprise two fibroin brins covered in a glue-like sericin coating, which can be further processed by removing the sericin and separating the brins in a process called degumming, which affects fibre mechanical properties .These cocoon silk fibres are generally considered to be natural or native fibres, spun in vivo using natural protein dope feedstock.In contrast, “silk” fibres can also be spun artificially from reconstituted silk fibroin protein molecules that have been obtained from native silks after dissolving in strong chaotropic agents .Importantly for industrial uses, RSF can be processed in a number of ways to produce films, fibres, and a variety of 3-D structures, with a wide range of potential applications, principally for use in biomedical research .Understanding the relationship between protein structure and mechanical properties is a vital step towards using silk or silk-derived proteins for specific applications.One of the most useful analytical tools allowing the matching of properties and structure is dynamic mechanical thermal analysis, with detailed work to date helping to elucidate the protein structure of artificially produced fibroin , native silk fibres and artificially spun fibres from natural silk dope .This work has revealed differences in thermal behaviour between the different silk-derived proteins; most importantly, differences in glass transition temperatures that can be linked to the degree of order in the protein, which in turn is a key component of protein structure determining mechanical performance .However, the exact manifestation of “order” in terms of protein secondary and tertiary structure is variable and still under debate for silk fibres ; as such, we use the terminology of “order” and “disorder” in the context of this work as a two-parameter description of energy management in a material where different hydrogen bonding states can directly influence mechanical properties.As detailed conformational structure is not a focus of this work, DMTA has advantages over other spectroscopic techniques, where structural influences on mechanical properties are inferred.DMTA has the ability to directly probe structural transitions that affect the mechanical properties of the material through changes in the loss tangent.The disadvantages of DMTA in turn are that these transitions have to be related back to specific states of matter and local chain conformations or intermolecular bond strengths.However, this relationship has been discussed in detail from both an empirical perspective and also using robust structure–property models for the past 40 years .Furthermore, it is now possible to relate these DMTA spectra to distinct mechanical property profiles obtained from tensile testing, as we shall demonstrate in this study.Native silks and RSFs are often composed of the same peptide motifs , and hence patterns of structural and property differences between the two resulting fibres might be due to both spinning and 
processing condition differences.Not a focus here, several studies have shown how RSF can be post-spun processed in a variety of ways, using chemical or isothermal treatment or applying postdraw, which influences the structures present .Other studies make the important link between the treatment, the mechanical properties and the protein structure .To date, artificially spun silks are unable to match their natural counterparts .This is most likely because the initial spinning and the common post-processing conditions used lead to dissimilar supra-molecular structures .This study presents forced reeled silks as an ideal study material to help inform about the differences between natural cocoon silk and unnaturally spun RSF, providing insights into the relative importance of spinning and post-processing conditions.Fibres drawn by forced reeling originate from natural silk dope and are spun in vivo but, we assume, under somewhat “unnatural” spinning conditions since the silk fibre is pulled directly from the spinnerets .Unlike naturally spun silks, processing conditions can be applied to forced reeled silks during spinning .Studying the relationship between the properties and degree of order of these silks produced under a range of conditions might help to explore the importance of spinning by establishing a full picture of the performance range of such materials.This, in turn, can help us establish a link between naturally, semi-naturally and artificially spun silk fibres based on native dopes and fibres spun from artificial RSFs.Since processing conditions can be manipulated during reeling, forced reeled silks can have an impressive range of properties compared to naturally spun silks .Under experimental reeling conditions, forced reeled silk variability between worms can be controlled by removing behavioural application of load by the silkworm, which can affect the silk’s mechanical properties .Silkworm paralysis removes this behavioural variation , which makes it possible to explore the performance range of the forced reeled silks through manipulation of processing conditions.Reeling speed is the best-studied processing condition influencing mechanical properties through affecting protein-chain order during reeling .A recent paper demonstrates how reeling speeds comparable to the average natural spinning speed can give greater toughness, concluding that reeling speed somehow affects structure and morphology of the fibre .Of course, post-processing conditions during reeling, such as postdraw or wet-reeling, would also be expected to create additional order .Adding tension to the fibre before it is fully set creates stress-induced molecular alignment , allowing hydrogen bonding during drying to “lock” the order into place .Indeed, in spider spinning, such post-processing additions affect the mechanical properties by increasing the order within the fibre ; this is also seen in synthetic polymer spinning .To further increase the degree of molecular order in a finished silk fibre, the post-processing conditions of parameters like temperature, stress and solvation would need to exceed the yield point or glass transition temperature to allow macromolecular mobility .This study aims to use the scope of properties produced by forced reeled silks to infer the relative importance of spinning and post-processing conditions, through comparison to naturally and unnaturally spun silk fibres.Since naturally spun silks have the least variable and most desirable properties, comparison to naturally spun 
silks is our main focus in this study.The mechanical property differences are explored using a combination of thermogravimetric analysis and DMTA temperature scans to infer the degree of order and local states of bonding in the polymer structure.We investigate how processing conditions can influence the envelope of forced reeled silk properties and the degree of order by examining the effects of reeling speed, and post-processing conditions such as degree of postdraw, specimen tension and storage conditions.The resulting insights can be interpreted in the context of all silk-derived materials from natural to synthetic.Final instar B. mori silkworms were reared on a mulberry diet.Worms were stored in laboratory conditions until they started spinning.All worms were immobilized for forced reeling using a synthetically produced, but naturally occurring, paralysis peptide, sourced from Activotec, Cambridge.The peptide is injected into the haemolymph, as described elsewhere .Worms were then suspended from a holder using tape around their body.The paralysis reduces the effect of silkworm behaviour on properties, so increases the reproducibility between different worms .Being invertebrates, all silkworms were handled according to local laboratory risk assessments/institutional ethical guidelines and do not currently fall under regulation by the UK Home Office or EU legislation.The reeling conditions and specimen storage details are given in Table 1.All non-postdrawn silks were reeled in air straight onto a motorized spool at the stated reeling speed .Postdrawn silks were run through a water bath via a Teflon-coated guide before winding through motorized wheels, which applied a controlled postdraw.A water bath is used as water acts as a plasticizer, thus reducing the likelihood of breaking and increases the likelihood of macromolecular movement during applied postdraw.The first wheel was set to the reeling speed of 20 mm s−1.The second wheel was placed at different distances away from the first wheel.The silk was collected onto a motorized spool for collection, travelling at the same speed as the second wheel.Dry storage silks were reeled onto a spool in laboratory conditions and then kept in sealed low humidity conditions until the fibres were mounted and tested in laboratory conditions.Naturally spun silk specimens were unravelled from a cocoon onto a spool.Specimens were collected from within over 500 m of the unravelled silk length, which showed consistent properties .Some tests used degummed naturally spun silk; the degumming process is described in detail elsewhere .The RSF fibres used for the DMTA test were kindly supplied by Wang Qin at Fudan University.The RSF solution was obtained using the same method as Wang et al. and the fibres were obtained through wet-spinning similar to the methods used in Yan et al. 
.The RSF stress–strain curves and DMTA profiles are provided for reference only, to allow the properties of forced reeled silks to be compared within the context of natural and unnaturally spun silks.Aluminium pans and 100 μl crucibles were pre-tared on the thermogravimetric analyser balance prior to adding the specimen.Approximately 0.3 mg of silk was then cut from a spool and carefully transferred into the aluminium pan.The specimen was then heated at a rate of 3 °C min−1 from ambient temperature to 300 °C in nitrogen gas flowing through the thermogravimetric analyser furnace at a rate of 100 cc min−1.Fibres were mounted under tension into 10 mm gauge length cardboard frames for tensile testing.The numbers of specimens tensile tested are given in Table 1.Specimens were pulled apart in laboratory conditions at a controlled strain rate of 40% min−1 until broken; only specimens that broke in the middle were used.The load–extension data were analysed using a Microsoft Excel macro and figures were drawn using Origin Software.Quasi-static tensile tests were also performed using the DMTA in controlled-force mode at room temperature.Dry nitrogen purge was applied for 15–30 min to remove the excess moisture of the specimen after it was loaded onto the clamps.A force-ramp rate of 0.1 N min−1 was applied for all forced reeled B. mori silks, with two or three specimens being tested from each processing treatment.The cross-sectional area is measured by gluing silk specimens to solder with superglue, as described in detail elsewhere .Silk fibres were then transversely sectioned, digested in protease for 24 h and imaged in a scanning electron microscope.Pictures were analysed using ImageJ software.The average area is applied to the specimens used in the tensile tests .Non-parametric statistics were performed using Minitab v.8 software, being the most suitable tests for low specimen numbers.A Moods sign test was used, which does not assume equal variance.All the dynamic mechanical thermal tests were performed on TA Q800 under DMTA multi-frequency strain mode.The parameters kept as constants were: temperature ramp rate at 3 °C min−1; frequency at 1 Hz; and dynamic strain at 0.1%.The selection of these constant parameters was based on the most common polymer testing procedure as well as a compromise between the length of test time and data quality.A preload force equivalent to 50 MPa stress was applied in order to keep the fibre tested in tension throughout the dynamic oscillation.Further details of the background of the technique can be found in Guan et al. 
. Two types of temperature scans were conducted for forced reeled silks: first, a full-range temperature scan from −100 °C to +270 °C, and secondly, cyclical temperature scans, with the first ramp up to +120 °C or +180 °C, abbreviated as 120 °C annealing and 180 °C annealing, respectively. Forced reeled silks are compared to naturally spun silks in the thermogravimetric analyser, which provides insights into the different structures present given their apparently similar spinning conditions. The water content of the forced reeled silks, calculated from the weight loss up to 100 °C, was higher than that of naturally spun and degummed naturally spun silk. Although these differences are small, the repeatability is good: several meters of silk were tested and the storage conditions prior to testing were identical, so any differences can be attributed to the polymer structure, averaged over several meters of silk fibre. Since water is associated with hydrogen bonding to disordered silk , this suggests that forced reeled silks are intrinsically more disordered, with larger amorphous fractions. The earlier onset of thermal decomposition in forced reeled silks also supports their being more amorphous. These observations imply that forced reeled silks are more disordered and have lower values of the glass transition temperature, Tg, which would be due to the higher vibrational energy of the less rigidly bonded chain backbone . The differences between forced reeled and naturally spun silks were further explored in terms of the influence of the applied processing conditions on the forced reeled silk, here reeling speed. Previous results have shown that the mechanical properties of forced reeled silks are toughest at the reeling speed closest to the natural spinning speed . This is confirmed by the data shown in Fig. 2a, where stress–strain curves at three reeling speeds, 6, 15, and 25 mm s−1, are compared with naturally spun cocoon silk. Silks reeled at a speed of 15 mm s−1 displayed average toughness closest to that of naturally spun silks, albeit with slightly higher variability (15 mm s−1 forced reeled: 100.0 ± 20.8 J cm−3). Even more pronounced than in the data shown in Fig. 1, DMTA showed that forced reeled silks had thermomechanical behaviour significantly different from that of naturally spun silks, with high variability and higher loss tangent peaks at lower temperatures. In general, stronger hydrogen bonding in more ordered structures would give higher peak temperatures . Recent work by Guan and collaborators has shown that each loss peak is caused by the glass transition of a specific silk structure, each with different numbers and combinations of hydrogen bonds between the different chemical groups, such as the main-chain amide groups or the side-chain –OH groups of serine peptides . The main non-natural loss peak at 175 °C is assigned to the characteristic glass transition temperature of the highly disordered structure of RSF silk . The structure associated with this loss peak has two hydrogen bonds per peptide segment in a random disordered configuration . Importantly, the strong loss peak at 120 °C has not previously been observed during DMTA analysis of silk fibres. This peak is not due to water, as the TGA profile did not show weight loss at this temperature. Calculations using the Guan model suggest that the number of hydrogen bonds for this peak is reduced to one per segment within the disordered structure. The general conclusion from Figs.
1 and 2 suggests that forced reeled silks are more disordered than naturally spun silks, regardless of the applied processing condition of reeling speed.However, despite this difference, at reeling speeds closer to the average natural spinning speed, the total energy required to break the hydrogen bonded structure was highest, even above that of naturally spun silk.It emerged from our TGA and DMTA experiments that forced reeled silks have intrinsically different mechanical and structural properties compared to naturally spun silks.As previously stated, mechanical properties in fibres can be manipulated post-spinning by applying processing conditions to influence the properties, as illustrated in Fig. 3.Here, a consistent reeling speed of 20 mm s−1 was chosen for comparison between samples.Supporting the data in Fig. 2, naturally spun silks had relatively low sample variability compared with the range of forced reeled examples.As the methods for mounting and testing the fibres are the same for both naturally spun and forced reeled silks, this leads to the conclusion that the forced reeling processing treatment introduces this variability.The general conclusion for forced reeled silks would propose that more/greater postdraw and higher stretch rate lead to the strongest silks; however, these manipulations were accompanied by greater specimen variability in the post-yield stress–strain profile.These data also indicate sensitivity to specimen storage conditions such as silk tension and specimen storage humidity.Differences between forced reeled silks are expected to be minimal due to the paralysis condition .Higher postdrawn specimens displayed higher average failure stresses.This might be due to the higher fibre stiffness caused by higher molecular orientation, which in turn would be enhanced by the increased stretching force and higher stretch rate .Interestingly, the postdraw affected not only the mechanical properties, but also the fibre cross-sectional areas.Reeling from the same worm and applying the same amount of stretch, allowing less time to respond to this stretch, led to a significant decrease in fibre cross-sectional area.Additionally, the postdrawn specimens showed high variability throughout the stress–strain profile, resulting in variability in both stress and strain to failure.This is in contrast to the non-postdrawn specimens in Fig. 3d and e, which followed more consistent stress–strain contours and only varied in their failure points along the contours.Not found in the high postdrawn specimens nor in naturally spun silks, the post-yield plateau in stress seen immediately after yield indicates molecular chain elongation.This is probably associated with plastic flow, as coiled molecules relax through yield until they are stretched sufficiently to sustain the applied load with the equilibrium post-yield modulus .Maintaining tension in the non-postdrawn specimens gave consistently high failure stresses and strains, thereby producing the highest toughnesses seen in any of the fibres drawn.These fibres were the most similar to naturally spun threads, albeit with an increased post-yield plastic flow in the disordered structures resulting in slightly higher energy uptake, i.e. 
the area under the stress–strain curve.Storing the non-postdrawn specimen under dry conditions induced a different kind of variability.Many of these fibres were slightly embrittled by the dry conditions, with associated low failure stress near the yield stress .Water acts as a plasticizer for silk by generating lower temperature relaxation processes that can promote ductility, as can be seen in Figs. 4 and 5 and in the discussion of dynamic mechanical properties .Alternative explanations to the effect of dry-storage include the embrittlement of the sericin binder coating on the fibres, which weakens the whole fibre structure when cracks are initiated in the sericin layer and propagate through the whole fibre .Equally, cracks in the sericin layer may lead to brin separation, as seen in previous studies testing cocoon silks in different humidities, which will further increase variability by affecting core fibroin–water interactions .A disordered RSF fibre is shown for reference in Fig. 3f.These fibres have little or no post-yield modulus due to their structural morphology.In contrast with native silks, they have larger and more widely distributed ordered regions in the amorphous matrix , leading to little contribution of the ordered regions to mechanical performance post-yield.We must ask to what extent such post-processing conditions affect the amount of structural disorder of forced reeled silks.Fig. 4a presents the temperature scans of two non-annealed and non-postdrawn and one non-annealed but postdrawn forced reeled silk, with naturally spun silk for reference.The data of Fig. 2b suggested that all forced reeled silks have complex combinations of partially disordered structures in the fibroin that was introduced by the forced reeling spinning process.Post-treatment could only partially compensate for this increased and variable disorder.Here postdrawing had the strongest effect yet retained strong evidence of multiple types of disordered structure.The highest level of disorder was seen in the non-postdrawn specimens that had been slack-stored before measuring.In contrast, the greatest order seen in forced reeled silks emerged following annealing to 180 °C.Annealing to 180 °C removed small loss peaks associated with water below 100 °C and most of the higher temperature glass transition peaks below about 200 °C, leaving a higher temperature peak comparable to a peak in non-annealed, naturally spun silkworm silk.We suggest that annealing creates stronger bonding as disordered structures are irreversibly stretched under slight mechanical load .Over the temperature range for the annealing scan, this would eventually lead to the formation of the most strongly bonded disordered structure possible in a particular specimen.Importantly, this suggests that thermal treatment under mechanical load would be the most effective way to create ordered structures in a silk.Not surprisingly, this feature is similar to the common practice of annealing synthetic polymer fibres such as PET under load at elevated temperatures in order to increase the orientation and crystal fractions, which in turn increases stiffness and strength .Post-processing conditions therefore influence the properties and structures of forced reeled silks to cover a spectrum of disorder, ranging from the disordered artificially spun RSF to the more ordered naturally spun silks.For non-annealed silk, peaks associated with more water between −60 and +60 °C indicate more disorder and were highest for non-postdrawn silks, lower for 
postdrawn silks and lowest for naturally spun silks; no RSF was tested in annealing mode.Annealing to 120 °C removed the loss peaks under 100 °C by removing the water from the silk specimens.Similar annealing effects have been observed and explained likewise in soy protein .Following 120 °C annealing, the most disordered Bombyx-derived protein was RSF.The large RSF loss peak at 175 °C has been assigned to the highly disordered macromolecular structure, which shows the largest area under the loss peak due to the high contribution of disordered structure .These structures are caused by the formation of solid silk from solutions made by the chaotropic medium of aqueous concentrated lithium bromide solution .The non-postdrawn silk loss peak was the least ordered of our forced reeled silks and was most similar to the RSF, suggesting that the non-crystalline fraction is likely to be highly disordered with a structure analogous to RSF.The postdrawn forced reeled specimen showed an interesting rapid change around 175 °C.This sample loss tangent trace started out like an RSF structure peak but then rapidly changed to resemble a more ordered structure similar to a natural silk.This specimen further supports the assertion that discrete loss peaks in the loss tangent profile of silks are associated with specific hydrogen bonded structures.Furthermore, this result implies that, under certain conditions of forced reeling, hidden functionality can be locked into the fibre and later released under certain annealing conditions.In the specific bonding arrangement of this postdrawn silk, the structure can clearly be seen to “dynamically” reassemble under specific load and temperature conditions.The specific conditions leading to this dynamic reassembly have not been asserted, but are likely to be linked to the stress and humidity history of the silk .Forced reeled silks were variable in many aspects of their structures and properties, varying more than naturally spun silks and in some cases seemed more akin to derived RSF fibres.Previous work has suggested that such variation in mechanical properties might be attributable to topological “defects” such as surface imperfections present in the silks , which may be exacerbated by cross-sectional area measurement and behavioural control of the silk press during non-paralysed reeling .This may be the case for certain post-processing conditions, such as dry storage, which leads to sericin cracking when the fibre is under tension, leading to brin separation or crack propagation .In more general terms, our new data in combination suggest that this variation may be attributed to the amount of different disordered protein structures present, as revealed by DMTA.The exact manifestation of the order and disorder in terms of protein tertiary packing is an area of contestation that requires further study, but this by no means diminishes the power of DMTA to resolve structural differences and the impact these have on the mechanical properties of silk fibres .The precise mechanism by which the differences in structures arise between naturally spun and semi-naturally spun forced reeled silks is currently unknown, but is likely to be a product of both behavioural and physiological control exerted by the silkworm during spinning and processing.Behavioural control in this work was experimentally manipulated by use of a paralytic agent, which is believed to inhibit muscular control around the spinning apparatus, specifically the silk press .This inhibition prevents the animal 
from using its behaviour to fine-tune the ratio of applied force and reeling rate during fibre production.A lack of feedback and control during silk production could propagate rheological flow instabilities in the silk dope in the duct, as well as preventing any post-spinning postdraw of fibres.Furthermore, forced reeling may impact the silkworms’ physiological control, i.e. the exposure of the dope to chemical processing in the duct, known to affect silk’s self-assembly properties .Recent work suggests that specific chemically induced links between terminal groups on the protein chains might allow the flow field in the duct to stretch the macromolecules into an aligned structure that in turn promotes structural order and inter-chain hydrogen bonding .Identifying the biological origin of property variability in our forced reeled silks is far beyond the scope of the current work, but the tools presented here, in cooperation with other structural analysis tools, such as X-ray scattering, Fourier transform infrared and Raman spectroscopy, will allow these origins to be explored in the future.A fibre’s stress–strain profile ultimately determines its application and function.Concerning mechanical properties, we have shown that postdrawn forced reeled silks are able to achieve higher strength, albeit at the cost of extensibility, toughness and consistency.The toughest fibres were the non-postdrawn forced reeled silks under specific storage conditions.Thermal stability analysis using TGA and DMTA loss spectra both showed that naturally spun silks showed the highest degree of order, and that the forced reeled silk properties are controlled by the fraction and type of disorder.This suggests that forced reeling introduces disorder into the fibre that can only be fully addressed with post-reeling processing conditions such as thermal treatment under load.This would permit precise control of the structure and properties of semi-natural forced reeled silk fibres for bespoke applications.We have shown that, with controlled forced reeling of B. mori, one is able to produce fibres with a wide range of mechanical properties, which is beginning to make clear links between natural and artificial silk in terms of spinning and processing conditions.Moreover, our data demonstrate that DMTA analysis on silk fibres is not only useful as a tool to assess the quality of a fibre , but can also quantitatively assess a fibre’s potential for improvement through post-spin modification, regardless of the fibre’s origin or processing history.More specifically, our data elucidate the range of properties available to B. mori silks and silk-based fibres, which in turn has important implications for their industrial application.Two supplementary figures accompany this manuscript.Fig. S1 shows the experimental set-up for the forced reeling silkworms.A silkworm is attached onto a pole using tape around its body between the “thoracic” and “pseudo” feet.The silk is attached onto a rotating cylindrical spool, which is controlled by a motor and moved by hand to collect the silk along the spool.Fig. S2 shows the stress–strain curves before and after 120 °C annealing for naturally spun, postdrawn and non-postdrawn silk).Scatter points give the break points of repeats of the same silk types.
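As a hedged numerical illustration of the toughness values quoted above (energy to break, taken as the area under the stress–strain curve and reported in J cm−3), the following Python sketch converts raw load–extension data to engineering stress and strain and integrates it. The function names and unit handling are ours, not the authors' analysis macro.

import numpy as np

def engineering_stress_strain(load_N, extension_mm, area_um2, gauge_mm=10.0):
    """Convert raw load-extension data to engineering stress (MPa) and
    strain, using the measured cross-sectional area in um^2 and the
    10 mm gauge length used for the tensile tests above."""
    stress_mpa = np.asarray(load_N) / (area_um2 * 1e-12) / 1e6   # N/m^2 -> MPa
    strain = np.asarray(extension_mm) / gauge_mm
    return strain, stress_mpa

def toughness(strain, stress_mpa):
    """Toughness as the area under the stress-strain curve (trapezoidal
    rule).  Stress in MPa integrated over dimensionless strain gives
    MJ m^-3, which is numerically equal to J cm^-3."""
    strain = np.asarray(strain, dtype=float)
    stress = np.asarray(stress_mpa, dtype=float)
    return float(np.sum(0.5 * (stress[1:] + stress[:-1]) * np.diff(strain)))

As a rough check on the units, a fibre sustaining an average of roughly 500 MPa up to a breaking strain of about 0.2 would give a toughness on the order of 100 J cm−3, comparable to the value quoted above for the 15 mm s−1 forced reeled silks.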
The forced reeling of silkworms offers the potential to produce a spectrum of silk filaments, spun from natural silk dope and subjected to carefully controlled applied processing conditions. Here we demonstrate that the envelope of stress-strain properties for forced reeled silks can encompass both naturally spun cocoon silk and unnaturally processed artificial silk filaments. We use dynamic mechanical thermal analysis (DMTA) to quantify the structural properties of these silks. Using this well-established mechanical spectroscopic technique, we show high variation in the mechanical properties and the associated degree of disordered hydrogen-bonded structures in forced reeled silks. Furthermore, we show that this disorder can be manipulated by a range of processing conditions and even ameliorated under certain parameters, such as annealing under heat and mechanical load. We conclude that the powerful combination of forced reeling silk and DMTA has tied together native/natural and synthetic/unnatural extrusion spinning. The presented techniques therefore have the ability to define the potential of Bombyx-derived proteins for use in fibre-based applications and serve as a roadmap to improve fibre quality via post-processing.
65
Perception and self-assessment of digital skills and gaming among youth: A dataset from Spain
The present dataset in this paper was collected in May 2017. The raw data are available in Excel and SPSS format. The main data file spreadsheet accompanying this article contains 1012 rows of data, with the columns containing variables derived from responses to the survey. The survey includes 15 questions followed by different possible answers located in 46 columns in the data view sheet. These 15 questions are categorized into 11 parts: socio-demographic, player/no-player, the frequency of use, the most played games, money spent on games, operational skills, information and navigation skills, social skills, creative skills, mobile skills, and classification data. The survey also accompanies this article, together with a descriptive analysis in Table 1. The presented dataset comprises raw and pre-analyzed statistical data on the internet skills level, internet use habits, internet knowledge, and social perception of the Spanish population between 16 and 35 years old, carried out in May 2017. The Statistical Package for the Social Sciences (SPSS) has been used to encode the collected data. The most significant results of the survey are detailed through the following categories: socio-demographic and socio-economic characteristics of the interviewees, gaming habits, operational skills, information/navigation skills, social skills, creative skills, and mobile skills. Regarding the independent variables, the authors used socio-demographic characteristics, which included age, gender, and marital status, as well as socio-economic factors comprising education, employment, community, and residence. The participants' age was categorized into three age groups, and gender was coded as male and female. Marital status was sorted into single, married, divorced, widowed, and living with a partner. Place of residence is divided into the eighteen autonomous communities of Spain, while the population size of the community was categorized into five levels. Employment was grouped into employed, unemployed, and student, while education level was graded into seven levels. The dependent variables, presented in 9 main sections, comprise Player/no-Player, the frequency of use, the most played games, money spent on games, operational skills, information and navigation skills, social skills, creative skills, and mobile skills. The dataset has been uploaded to Figshare and is available at: https://figshare.com/s/b816bdb62edf960aaf05. The data has been uploaded as SPSS and Excel files, while the survey is in PDF format. Readers can retrieve and reuse publicly available information by visiting the link given above. In order to obtain information, a structured survey was designed. To measure Internet skills, we used the Likert-type format to provide more flexibility for interviewees, following the same scales used by Deursen, Helsper, and Eynon. According to Helsper, van Deursen & Eynon, various scales of self-reports are used to measure internet skills. We adapted the complete survey with some modifications in the response items in order to make them easier to understand in Spanish. According to Helsper, van Deursen & Eynon, it is crucial that respondents understand the questions and answers correctly, so the response items have been changed in order to be understandable for a different language and culture. This modification includes replacing the following items: to.
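As a hedged sketch of how the coded variables and the adapted Likert-type skill items (including the "I don't know" option introduced just below) might be decoded from the raw export, the following Python/pandas snippet uses hypothetical column names and illustrative value labels; the authoritative coding is the one defined in the accompanying SPSS file and survey PDF.

import pandas as pd

# Illustrative value labels only; the exact group boundaries and the Spanish
# wording of the response items are defined in the SPSS file and survey PDF.
AGE_GROUPS = {1: "age group 1", 2: "age group 2", 3: "age group 3"}  # three groups, ages 16-35
GENDER = {1: "male", 2: "female"}
LIKERT = {1: "not at all true of me", 2: "not very true of me",
          3: "neither true nor untrue", 4: "mostly true of me",
          5: "very true of me", 9: "I don't know"}  # added response option

def decode(df: pd.DataFrame) -> pd.DataFrame:
    """Map numeric codes in the raw export to readable labels."""
    out = df.copy()
    out["age_group"] = out["age_group"].map(AGE_GROUPS)
    out["gender"] = out["gender"].map(GENDER)
    for col in [c for c in out.columns if c.startswith("skill_")]:
        out[col] = out[col].map(LIKERT)
    return out

# Example: share of respondents answering "I don't know" per skill item
# df = pd.read_excel("dataset.xlsx")   # hypothetical file name
# dont_know = (df.filter(like="skill_") == 9).mean()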
"Also a new option of “I don't know” has been added to the modified responses.Because “not knowing” about something could have different reasons including “Knowing about the topic but never tried” or “Not knowing anything about the topic”.To collect the data, the Computer-Assisted Telephone Interview technique was used for individuals aged 16–35 years living in Spain through a landline telephone.Due to having different area codes for each community of Spain, using landlines allows categorizing the data considering the autonomous communities.Moreover, it is quite cost-effective and timesaving.Considering that 78% of the households in Spain use landlines, there is a high possibility to reach at least one young person through a landline.1012 interviews were obtained.The sample was segmented by autonomous communities, proportional to the real distribution of the population, asserting the importance of considering the region of residency in the sampling of the Internet use studies.From the census, the sample was constructed based on representative quotas of the Spanish population.A public database was used with the existing landlines, and the interviews were carried out according to the established quotas.The sampling procedure followed a three-stage selection process : primary sampling units, municipalities, were randomly selected ; secondary sampling units, households, were randomly selected by phone number; and individuals within households were randomly selected through a cross-stratification of sex and age and size of municipality which was subdivided into 7 types of communities according to their size.The survey was conducted between May 20 and May 31, 2017.The margin of error for the total sample was +3.10%, for P = Q = 50% and under the assumption of maximum indeterminacy.There was almost an even distribution between men and women in the sample with an average age of the respondents of 22 years old, and an average level of education of secondary education.The proposed conceptualization of a range of Internet skills and social perceptions of media and video games are used to provide accurate measurement.They are as follows:The skills to operate with digital media, including opening a downloaded file and saving a photo found online, was the best-known action by respondents, while the programming language was the least known.The skills to search, select, and evaluate information in digital media and the skills of navigating and orienting oneself to a hypermedia environment.Among them, finding a previously visited website and looking beyond the first three results of a search is the one best known by respondents.While to make a consultation, to check the reliability of the found information, and to make a decision of trusting a website are the least well-known information/navigation actions for them.The skills to employ the information contained in digital media as a means to reach a particular, personal or professional goal.The participants showed high skills in removing people from contact lists and defining with whom to share content.To distinguish the type and the time of information to share and not share online were the two actions less known by them.The skills to create the content of acceptable quality to be published on the Internet.Creative skills are the least known by respondents.Among them, the skill to publish videos or music online was the most known action.While very few of the respondents had knowledge of how to design a website.The different actions involved in the use of 
Mobile skills: the different actions involved in the use of smartphones or tablets. The respondents showed high knowledge of installing applications and taking photos or videos with the smartphone, while tracking the usage costs of mobile applications was the action least known by them. Among the statements about negative effects, “video games provoke addiction” was the statement with which most respondents agreed, followed by “video games cause isolation in players”. On the other hand, less agreement was shown with the statements “they are a waste of time” and “they are violent”. Among the statements about positive effects, “video games stimulate memory and attention” was the statement with which most respondents agreed, followed by “video games help develop good problem solving and strategic thinking skills”. On the other hand, less agreement was shown with “the things that are learned can be applied to daily or professional life”. When talking about video games, the respondents believed that the medium, firstly, provokes addiction, secondly promotes violence, and finally increases the risk of social isolation. The survey was adapted from Helsper, van Deursen & Eynon and applied to the Spanish context. The validity of the scales was confirmed in the aforementioned study using Cronbach's alpha. The analysis of the data was carried out using SPSS. The results showed that the overall Cronbach's alpha values are relatively high, meaning that the dataset is reliable. To test the consistency of the items of each scale, Table 1 provides the correlation between a particular item and the sum of the remaining items, and the value of Cronbach's alpha if the item is deleted. For example, in the case of the operational scale, item “C” has the lowest correlation, and if it is deleted the new alpha becomes 0.691. The statistical measures demonstrated that the creative skills of Spanish people are the most developed, followed by operational skills, information skills, and mobile skills, while social skills are the least developed. DA: substantial contributions to the conception and design of the dataset, and the analysis and interpretation of data for the work; drafting the work or revising it critically for important intellectual content. JSN: substantial contributions to the conception and design of the dataset; drafting the work or revising it critically for important intellectual content. LM: substantial contributions to the analysis and interpretation of data for the work. This study was approved by the Universitat Oberta de Catalunya Board of Ethics. Consent, oral and informed, was obtained via telephone from the participants before they began the survey. The survey involves the use of anonymous information, i.e. the information never had identifiers associated with it. Our dataset uses non-sensitive, completely anonymous questions. The survey and interview procedures involve participants who are not defined as “vulnerable”, and participation will not induce undue psychological stress or anxiety.
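The reliability statistics described above (the overall Cronbach's alpha values and the item-level diagnostics in Table 1) were produced in SPSS, but they can be reproduced directly from the published Excel or SPSS files. Below is a minimal sketch of that calculation, assuming the items of one scale have been loaded as numeric Likert-coded columns of a pandas DataFrame; the file and column names are placeholders, not the names used in the dataset.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a DataFrame whose columns are the items of one scale."""
    items = items.dropna()
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the scale total
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def item_diagnostics(items: pd.DataFrame) -> pd.DataFrame:
    """Corrected item-total correlation and alpha-if-item-deleted, as in Table 1."""
    rows = []
    for col in items.columns:
        rest = items.drop(columns=col)
        rows.append({
            "item": col,
            "item_total_r": items[col].corr(rest.sum(axis=1)),  # item vs. sum of the rest
            "alpha_if_deleted": cronbach_alpha(rest),
        })
    return pd.DataFrame(rows)

# Hypothetical usage with placeholder names:
# df = pd.read_excel("dataset.xlsx")
# operational = df[["op_a", "op_b", "op_c"]]      # Likert-coded operational items
# print(cronbach_alpha(operational))
# print(item_diagnostics(operational))
```

The alpha_if_deleted column corresponds to the "Cronbach's alpha if the item is deleted" values in Table 1, i.e. the kind of figure quoted above (0.691 for the operational scale with item “C” removed).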
The present article offers a dataset of how young Spanish people perceive and evaluate their digital skills, showing the confidence level of their Social skills, Mobile skills, Information/navigation skills, Operational skills, and Creative skills. It also provides data on the use and the typology of video games in which youth are involved. This data demonstrates how young people evaluate their relationship with interactive and digital media, and supports knowledge to understand such interaction in the context of skills and abilities. It also presents socio-demographic and socio-economic characteristics, including gender, age, marital status, education, occupation, and community/residence for the Spanish population between 16 and 35 years old. This data was acquired by interviewing 1012 individuals using computer-assisted telephone interviews (CATI) in May 2017.
66
Calibrating Mars Orbiter Laser Altimeter pulse widths at Mars Science Laboratory candidate landing sites
Accurate estimates of surface roughness allow for quantitative comparisons of surface descriptions, leading to improved understanding of formation processes, improved identification of landing site hazards and calibration of radar returns, and more accurate estimates of the aerodynamic roughness used in terrain–atmosphere interactions within climate modelling. This makes it a useful tool for studying Mars, where quantitative characterisation of terrain can help us unlock the history of surface evolution after drawing comparisons with Earth analogues. Using estimates of aerodynamic roughness, such as that in Marticorena et al., we can further our understanding of the surface conditions under which dust lifting occurs, which can lead to the formation of global dust storms that can grow from local storms within weeks, obscuring almost the entire surface of the planet. Our aim is to study how the pulse-width of laser altimeter backscatter shots from the surface of Mars can be used to estimate surface roughness globally at smaller length-scales than can be derived from along-track topographic profiles alone. Theoretically derived global surface roughness maps have been produced and used since this pulse-width data was first collected; however, a literature search shows that the actual relationship between these pulse-widths and 'ground-truth' has yet to be found. To date, there is no commonly accepted scientific definition of planetary surface roughness, referred to simply as surface roughness, and as a result many definitions exist. Here, it is defined as a measure of the vertical exaggerations across a horizontal plane or profile, at a defined baseline. It is important to understand that surface roughness is variable, and as such changes depending upon the length scale at which it is measured. This length scale is known as the baseline, and can range from centimetres to kilometres. The common methods of measuring planetary surface roughness are outlined in Shepard et al., with the chosen method often dependent on the data type and the field. Kreslavsky et al.
discuss the difficulties in choosing a measure of surface roughness that is both intuitive, allowing a researcher to interpret and compare roughness, and stable, meaning that anomalously high or low elevations or slopes across a plane or profile do not significantly affect the estimated surface roughness value for that plane or profile. The measure used here is the root-mean-square (RMS) height, as defined in Shepard et al., which can be considered unstable. However, experience using ICESat pulse-widths over bare-earth terrains shows this method to perform best, compared to the interquartile range, which is considered to be more stable. HiRISE, High Resolution Stereo Camera (HRSC) and MOLA elevation data were downloaded from the Planetary Data System and collated into site-specific Geographic Information System (GIS) projects. Orthorectified images and DTMs were provided from HiRISE and HRSC; for MOLA, PEDR elevation profiles and gridded data were used. HiRISE data were downloaded from the online repository at the University of Arizona, HRSC data were downloaded from NASA's Planetary Data System, and MOLA data were extracted from the MOLA PEDR and gridded dataset available as part of the Integrated Software for Imagers and Spectrometers (ISIS) 3 core data download. Data coregistration was completed using a "hierarchical coregistration technique", as described in Kim and Muller, with the lowest resolution data, the MOLA elevation data, used as a basemap. This assumes that the MOLA dataset is correctly georeferenced. HRSC DTM elevation values were then compared to the MOLA PEDR and gridded data elevation values to ensure both vertical and horizontal accuracy of the HRSC datasets, with work by Gwinner et al. suggesting that HRSC DTMs are co-registered to MOLA with a root-mean-square error of 25 m. Finally, the HiRISE orthorectified image and DTM data were coregistered by comparing the HRSC nadir image data to the orthorectified images from HiRISE. The HiRISE DTMs were then coregistered to the correctly georeferenced HiRISE orthorectified images, and the HiRISE DTM values were compared to the HRSC DTM elevation values. Correct co-registration of the HiRISE datasets is vital if the correct surface roughness values are to be extracted at MOLA pulse locations. The HiRISE DTMs were mosaicked into one dataset for each site, using the mean elevation where DTMs overlap, unless the elevations differed significantly, in which case the overlap regions were ignored. The DTMs were checked for quality by producing slope maps from the mosaicked data. Doing so highlights mosaicking errors, as well as errors from the DTM production process, such as pits, spikes, patchwork effects, and linear features which are not present in the imagery. Small mosaicking errors were observed in some regions and masked out of the study; however, these errors did not occur near MOLA data and so would not have affected the study, and were too small to be clearly mapped in Fig. 2.
Data from both the PEDR and the Slope-Corrected datasets were then extracted within a region of interest for each site, and mapped with the other datasets and the surface roughness maps. From the PEDR, the received optical pulse-width was used as the pulse-width value, which has been corrected for filter characteristics and threshold settings to give an estimate of the roughness of the surface within the footprint of the pulse. A further investigation, using this dataset, was conducted using the shots that triggered receiver channel 1, considered to be the most reliable dataset, and known here as the Trigger 1 dataset. Surface roughness values were then extracted from each map at the centre of each MOLA pulse location, as given in each of the pulse-width datasets. These pulse-width values were then plotted against the extracted surface roughness values for each baseline separately. The R-squared values of a linear line-of-best-fit were calculated with pulse-width regressed on RMS height, and the best correlating baseline was found by selecting the plot with the highest R-squared value. This was carried out for each pulse-width dataset, surface roughness method, and region separately. No further corrections are made to the pulse-widths within the datasets in this work, which compares the pulse-widths derived within the three datasets outlined in Section 2 to surface roughness derived from the HiRISE DTMs. Thus we are effectively comparing σt to the height variability within the footprint of the pulse, albeit with the Slope-Corrected dataset removing the effect of long-baseline slopes. For this reason, the DTM data are not detrended to remove the effect of background slope. Plots showing the best correlations from each region, using the PEDR, Trigger 1, and Slope-Corrected pulse-widths plotted against the RMS height, are shown in Figs. 3, 4, and 5 respectively. All the results discussed here are significant at the 95% confidence level.
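The extraction-and-regression step just described can be sketched in a few lines. The following is a simplified illustration rather than the processing chain used in the study: the pulse footprint is approximated by a square window of the chosen baseline centred on each pulse location, edge effects are handled crudely, and all function and array names are placeholders.

```python
import numpy as np

def rms_height(dtm, row, col, half_window_px):
    """RMS deviation of DTM elevations about their mean inside a square window
    centred on (row, col), a simplified stand-in for roughness at one baseline."""
    window = dtm[max(row - half_window_px, 0): row + half_window_px + 1,
                 max(col - half_window_px, 0): col + half_window_px + 1]
    window = window[np.isfinite(window)]
    return float(np.sqrt(np.mean((window - window.mean()) ** 2)))

def pulse_width_vs_roughness(dtm, pulse_rows, pulse_cols, pulse_widths_ns,
                             baseline_m, dtm_resolution_m):
    """Regress pulse-width on RMS height; return slope, intercept and R-squared."""
    half_window_px = int(round(0.5 * baseline_m / dtm_resolution_m))
    roughness = np.array([rms_height(dtm, r, c, half_window_px)
                          for r, c in zip(pulse_rows, pulse_cols)])
    pulse_widths_ns = np.asarray(pulse_widths_ns, dtype=float)
    slope, intercept = np.polyfit(roughness, pulse_widths_ns, 1)   # linear best fit
    predicted = slope * roughness + intercept
    ss_res = np.sum((pulse_widths_ns - predicted) ** 2)
    ss_tot = np.sum((pulse_widths_ns - pulse_widths_ns.mean()) ** 2)
    return slope, intercept, 1.0 - ss_res / ss_tot
```

Repeating pulse_width_vs_roughness over a range of baselines and keeping the baseline with the highest R-squared value mirrors the best-correlating-baseline search described above.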
Of all the sites, Eberswalde Crater consistently showed the highest R-squared values for each of the three pulse-width datasets. The Slope-Corrected pulse-width dataset revealed the highest R-squared values, as expected. The slope of the line-of-best-fit for the Slope-Corrected pulse-widths is similar to those observed when the PEDR pulse-widths are used. For Eberswalde Crater, the R-squared values were not improved when using the Trigger 1 pulse-widths, compared to the other pulse-width datasets, as shown in Table 1. The highest R-squared value of 0.6 using the Slope-Corrected pulse-widths suggests that MOLA pulse-widths may not be reliable enough to be used in the selection of landing and roving sites. Gale Crater reveals the next highest R-squared values, when averaged across the three pulse-width datasets. Here, the highest correlations occur at larger baselines than typically observed at Eberswalde Crater, with the highest R-squared value occurring when using the PEDR pulse-widths. It is, however, clear that this dataset contains many erroneous data points, which have been removed in the Slope-Corrected dataset. This is particularly evident in Fig. 3, which shows a string of erroneous points occurring at 51 ns at varying surface roughness values, shown in the box in Fig. 3; these data occur in a single orbit. It happens that this string of poor data sits close to the line-of-best-fit in the PEDR pulse-width plot, and thus improves the R-squared value compared to the Slope-Corrected plot. Gale Crater has the largest number of points removed: of the 1571 points present in the PEDR dataset, only 1271 and 433 points are present in the Trigger 1 and Slope-Corrected datasets respectively. The fact that less than a third of the PEDR data points remain in the Slope-Corrected dataset suggests that, despite presenting the lower R-squared value, the Slope-Corrected dataset is the most reliable. The highest correlation baseline for the Slope-Corrected pulse-width plot is 300 m, twice that found at Eberswalde Crater. Holden Crater presents the largest change in R-squared values when comparing across the three MOLA pulse-width datasets. A very low R-squared value was found in the PEDR pulse-width plot despite there appearing to be a clear relationship of points around the line-of-best-fit in Fig. 3. This result is caused by a group of poor quality data that exists between pulse-widths of 50–150 ns and surface roughness values of 0–5 m. These data are not present in the Trigger 1 or Slope-Corrected pulse-width datasets, and as a result the R-squared value is significantly improved. This finding suggests that the identification of poor data is very important in improving the correlation between surface roughness and MOLA pulse-widths. The slope of the best-fit line is similar to that found at Eberswalde Crater, which shows consistency within the dataset; these sites also reflect similar geological formation processes: impact craters which have then potentially been modified within a fluvial–lacustrine environment. The line-of-best-fit for the results from Gale Crater cannot be directly compared due to the different best correlation baselines. At Mawrth Vallis, all pulse-width datasets showed very low R-squared values. As the R-squared values are so low, the baselines at which the best correlations occur should be ignored. To explore why these low R-squared values only occur here, histograms of surface roughness and pulse-width distributions are shown in Fig. 6. The distribution of surface roughness at 150 m is split into two distinct distributions: Eberswalde Crater and Gale Crater have similar distributions, as do Holden Crater and Mawrth Vallis. Holden Crater and Mawrth Vallis have similar distributions of surface roughness, which suggests that it is not the distribution of surface roughness which is the cause of the poor results observed at Mawrth Vallis. However, these sites do not share the same distribution of pulse-widths. Eberswalde Crater and Gale Crater show similar distributions for both surface roughness and pulse-widths. The distribution for Holden Crater decreases quickly after the peak, but has a long tail, which is expected given the lower frequency of very rough terrain shown in the surface roughness distribution. The distribution of pulse-widths at Mawrth Vallis, on the other hand, initially drops off slowly, but has a shorter tail, suggesting that less very rough terrain is detected. This result suggests that it is the detection of rough features, rather than the distribution of surface roughness, which causes the low R-squared value. To explore this finding further, maps of surface roughness and very rough terrain are shown in Fig. 7.
Fig. 7 shows only the spatial coverage of rough terrain, known as Rough Patches, considered here to have surface roughness values larger than 4 m at the 150 m baseline. The 150 m baseline was chosen as it was the most commonly occurring baseline for two of the three sites which showed some correlation between surface roughness and MOLA pulse-widths, and it allows for direct comparisons between the spatial distribution and the extent of Rough Patches. A surface roughness value of 4 m was chosen as the threshold after reviewing the surface roughness distribution in Fig. 6, as the approximate point where all regions begin their long-tailed distributions. A visual inspection of the Rough Patches shows Eberswalde Crater and Gale Crater to have spatially large Rough Patches, which cover a significant proportion of the terrain. Here, large outcrops of rough terrain are interspersed with smoother terrain, which itself has some small outcrops of rough terrain associated with small impact craters and channel morphology; at Holden Crater the Rough Patches appear to be smaller, but follow a similar pattern. Mawrth Vallis shows a distinctly different pattern, whereby the Rough Patches typically appear to be much smaller. Larger patches of rough terrain are inhomogeneous and contain regions of smoother terrain within their boundaries, producing a spotty effect. Craters present in this terrain appear to be similar to those observed elsewhere, but are associated with some of the roughest features, unlike the other sites where channel morphology and extensive slopes appear to be roughest.
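A Rough Patch map of this kind can be generated directly from a gridded surface roughness product. The snippet below is a hedged sketch of one way to do so, assuming the roughness map is a 2-D array of RMS heights in metres at the 150 m baseline; the connected-component labelling used to measure patch extent is an illustrative choice, not a step taken from the original study.

```python
import numpy as np
from scipy import ndimage

def rough_patches(roughness_map: np.ndarray, threshold_m: float = 4.0):
    """Return a boolean Rough Patch mask and the pixel area of each connected patch."""
    mask = roughness_map > threshold_m           # 4 m threshold at the 150 m baseline
    labels, n_patches = ndimage.label(mask)      # group adjacent rough pixels into patches
    areas = ndimage.sum(mask, labels, index=np.arange(1, n_patches + 1))
    return mask, areas
```

Comparing the resulting patch-area distributions between sites would quantify the visual contrast noted above between the large, continuous patches at Eberswalde and Gale Craters and the small, spotty patches at Mawrth Vallis.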
For the first time, we have employed Geographical Information System technology to perform a detailed inter-comparison between spaceborne laser altimeter pulse-width data and high-resolution DTM data. The results suggest that the Slope-Corrected pulse-width dataset from Neumann et al. provides the best estimates of surface roughness, where surface roughness is measured using the RMS height. This dataset produced the highest observed R-squared values over Eberswalde Crater and Holden Crater, whereas at Gale Crater the highest value is observed using the PEDR pulse-width dataset. Pulse-widths over Mawrth Vallis showed very low R-squared values for all pulse-width datasets and surface roughness baselines. The removal of known poor data from the PEDR dataset in the production of the Slope-Corrected pulse-width dataset is the likely cause of the improved R-squared values over Eberswalde Crater and Holden Crater, where it is most pronounced at Holden Crater. Here, there is a significant collection of poor data at low surface roughness values and high pulse-widths in the PEDR pulse-width dataset, which is not present in the Trigger 1 and Slope-Corrected datasets. These poor data could be results from early in the mission, whereby the received shots saturated the receiver. As a result, the R-squared values over this region using the Trigger 1 and Slope-Corrected datasets are 0.46 and 0.47 respectively, compared to 0.06 using the PEDR data. The appearance of the Holden Crater Trigger 1 and Slope-Corrected pulse-width results also suggests that the generic removal of Trigger 2, 3, and 4 data is not a reliable method of improving observed R-squared values, as doing so removes data at higher pulse-width and roughness values which are considered to be good in the Slope-Corrected dataset. In addition, as the number of points in the Trigger 1 and Slope-Corrected pulse-width datasets is similar over Holden Crater, this shows that the same data are not being removed, hence poor data remain in the Trigger 1 dataset over this region. Using the Trigger 1 pulse-widths at Eberswalde Crater does not change the observed R-squared value compared to the PEDR pulse-widths; instead there is a change in the baseline at which the best correlation occurs. The fact that the number of shots in the Trigger 1 data over this region is smaller than in the Slope-Corrected data shows that using these data removes many shots considered to be of good quality in the Neumann et al. work. Again, the Slope-Corrected dataset is therefore assumed to be the most reliable of the three pulse-width datasets, as it removes data known to be poor, rather than generically removing much good data, as observed at Holden Crater, and has the highest observed R-squared value of the four sites. So why do the results at Mawrth Vallis not follow similar patterns?
The Mawrth Vallis plots in Figs. 3–5 show no correlation between MOLA pulse-widths and surface roughness for each of the three pulse-width datasets used here. It is thought that the nature of Rough Patches within the terrain has an effect on the ability to discern roughness from the MOLA pulse-widths. At the previous three sites the extent of the Rough Patches appears to be large and continuous, therefore there is a higher probability of the MOLA footprints overlapping only rough or only smooth terrain than at Mawrth Vallis, where the smaller extent and the spotted appearance of the Rough Patches mean there is a higher probability that individual footprints will overlap both rough and smooth terrains. The nature of the echo pulses over Mawrth Vallis is therefore expected to be complex, which could lead to incorrect measurement of pulse-widths, given the on-board calculated threshold for the pulse-width start and stop timing systems and the filtering system employed on MOLA, which matches the shots to one of the four channels: smooth, moderate, rough, and clouds. Overall, the instrument has poor sensitivity to surface roughness estimates from HiRISE DTMs where there is an observed correlation. This could be due to the low intensity of reflected light, owing to scattering of light in the atmosphere and from the surface across a pulse footprint as large as those observed here. All this suggests that estimates of surface roughness from single shots cannot be used as an estimate of surface roughness. Instead, a downsampling of data to produce estimates of regional roughness using an average of shots should be used, as in Abshire et al. and Neumann et al. Furthermore, the sensitivity of the instrument may be limited to indicating whether a region is rough, moderate or smooth.
Anderson et al. found their predictions of the MER landing sites to be true using MOLA pulse-width data; however, these sites were smoother than those considered here due to the engineering constraints of the rovers. As a common baseline has not been found here, it remains unknown whether the surface roughness is estimated at 150 or 300 m baselines. The pulse-width dataset considered to be most reliable here, the Slope-Corrected dataset from Neumann et al., produces the best correlations over the Eberswalde Crater and Holden Crater sites at the same 150 m baseline, but is this similarity due to the sites sharing similar morphology? By chance? Or are these examples of sites where the method "works"? This requires further investigation, and as these terrains are not representative of the wide variety of terrains on Mars, the follow-up work explores MOLA pulse-widths over much rougher terrain. It is expected that a wider distribution of roughness could improve the observed R-squared values when MOLA pulse-widths are compared to surface roughness, and may help find a definitive baseline at which MOLA pulse-widths respond to surface roughness globally, rather than for individual terrains. Additionally, it improves the probability of the pulse footprints overlapping only rough or only smooth terrain, removing the problem that could be causing the lack of observed correlation at Mawrth Vallis. The principal conclusion to be drawn from this work is that individual MOLA pulse-width data cannot be used reliably to infer surface characteristics at pulse footprint scales for the selection of landing and roving sites. Instead, the work confirms that pulse-width data should be downsampled to give regional indications of roughness, by averaging over several shots, as observed in Abshire et al. and Neumann et al. The most reliable results were derived from the Slope-Corrected MOLA pulse-width dataset, primarily due to the removal of poor quality data, as well as the improved slope correction techniques applied to this dataset. The observed correlations appear to be dependent on the nature of the rough terrain across the sites. Where the rough terrain is large in extent, there is a correlation between pulse-width and surface roughness, whereas where the rough terrain is spatially small and not uniform, there is no observed correlation. However, the work has been unable to find a common baseline at which the best correlations are observed, with best correlation baselines occurring at 150 to 300 m. With the highest R-squared value being 0.6, observed at Eberswalde Crater, there is large scope for error even at sites where there appears to be a good correlation, and, as this is observed at 150 m baselines, this represents only a minor improvement in the understanding of global surface roughness compared to the along-track elevation profiles produced in Kreslavsky and Head.
Accurate estimates of surface roughness allow quantitative comparisons between planetary terrains. These comparisons enable us to improve our understanding of commonly occurring surface processes, and develop a more complete analysis of candidate landing and roving sites. A (secondary) science goal of the Mars Orbiter Laser Altimeter was to map surface roughness within the laser footprint using the backscatter pulse-widths of individual pulses at finer scales than can be derived from the elevation profiles. On arrival at the surface, these pulses are thought to have diverged to between 70 and 170 m, corresponding to surface roughness estimates at 35 and 70 m baselines respectively; however, the true baseline and relationship remains unknown. This work compares the Mars Orbiter Laser Altimeter pulse-widths to surface roughness estimates at various baselines from high-resolution digital terrain models at the final four candidate landing sites of Mars Science Laboratory. The objective was to determine the true baseline at which surface roughness can be estimated, and the relationship between the surface roughness and the pulse-widths, to improve the reliability of current global surface roughness estimates from pulse-width maps. The results seem to indicate that pulse-widths from individual shots are an unreliable indicator of surface roughness, and instead, the pulse-widths should be downsampled to indicate regional roughness, with the Slope-Corrected pulse-width dataset performing best. Where Rough Patches are spatially large compared to the footprint of the pulse, pulse-widths can be used as an indicator of surface roughness at baselines of 150-300 m; where these patches are spatially small, as observed at Mawrth Vallis, pulse-widths show no correlation to surface roughness. This suggests that a more complex relationship exists, with varying correlations observed, which appear to be dependent on the distribution of roughness across the sites.
67
Time-of-use and time-of-export tariffs for home batteries: Effects on low voltage distribution networks
With the rollout of smart meters in the UK, along with the regulator's desire to mandate half-hourly settlement of all electricity consumers based on their actual half-hourly consumption, there is considerable interest in the development of time-of-use (TOU) tariffs. These roughly align domestic electricity prices with demand, incentivising demand shifting and the use of energy storage systems to reduce electricity demand at peak times. Similar developments are happening at varying rates around the world. In the UK, TOU tariffs have historically existed as Economy 7 and Economy 10 tariffs, whereby consumers see lower off-peak electricity prices for seven or ten hours overnight. These were originally introduced in the late 1970s to ensure consumption of overnight baseload power from coal and nuclear plants. With the decline in coal power, it is possible that fewer Economy 7 and Economy 10 tariffs will be available in the coming years. However, the growth of renewables, particularly variable renewables such as wind and solar, along with increasing penetration of embedded generation and active energy technologies such as electric vehicles and heat pumps, exerts new stresses on the grid. The cost of network reinforcement in the UK is expected to reach up to £36bn by 2050 if we maintain passive approaches to network reinforcement and demand management, but these costs could be reduced significantly by taking advantage of smart demand technologies and appropriately incentivising their activity. These incentives could include new types of TOU tariffs. Economy 7 and 10 tariffs have two price tiers, and are henceforth known as two-tier tariffs. Smart meters make it possible to add further tiers, allowing for tariffs that more closely reflect the full short-run social marginal cost of generating and distributing electricity, thus increasing economic efficiency. Three-tier tariffs already exist in several countries, and typically use a peak price tier to disincentivise use of electricity at peak times. A recent survey in the UK has shown that over a third of bill payers are in favour of switching to a three-tier TOU tariff, indicating a substantial potential market, with electric vehicle owners significantly more willing to switch. Recently, the first three-tier TOU tariff was launched in the UK by Green Energy. This tariff is known as 'TIDE', and at the time of writing its three tiers are an overnight off-peak rate of 6.41 p/kWh between 23:00 and 06:00, an evening peak rate, on weekdays only, of 29.99 p/kWh between 16:00 and 19:00, and a mid-peak rate of 14.02 p/kWh at all other times. Green Energy also offer a discount on the purchase cost of a home battery as an incentive to sign up to the tariff. The price spread in Green Energy's TIDE tariff is particularly high; in Ontario, for example, there is a province-wide residential three-tier TOU tariff set by the Ontario Energy Board, and prices range from 0.065 CAD/kWh off-peak to 0.132 CAD/kWh on-peak. In Ontario, distinct summer and winter tariffs are used to account for the changing load profiles through the year, primarily because of significant variations in heating and cooling demands over the year. In Australia, typical off-peak prices in residential TOU tariffs are around 0.15 AUD/kWh, and typical on-peak prices are around 0.55 AUD/kWh. This is a higher spread than in Ontario, but lower than that set by Green Energy in the UK.
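To give a concrete sense of how such a three-tier structure can be encoded for simulation, the sketch below returns the unit price applying at a given timestamp, using the TIDE figures quoted above. The band boundaries and the weekday-only red band follow that description; the function name and the treatment of the boundaries as half-open intervals are illustrative assumptions.

```python
from datetime import datetime

# Prices in p/kWh from the TIDE tariff as quoted in the text (subject to change).
OFF_PEAK, MID_PEAK, ON_PEAK = 6.41, 14.02, 29.99

def tide_price(t: datetime) -> float:
    """Unit price applying at time t for a TIDE-like three-tier tariff."""
    if t.hour >= 23 or t.hour < 6:                 # overnight green band, 23:00-06:00
        return OFF_PEAK
    if t.weekday() < 5 and 16 <= t.hour < 19:      # weekday red band, 16:00-19:00
        return ON_PEAK
    return MID_PEAK                                # amber band at all other times

# Example: a weekday evening half-hour falls in the red band.
assert tide_price(datetime(2017, 5, 22, 17, 30)) == ON_PEAK
```

A seasonal scheme such as Ontario's could be represented in the same way by switching the three price constants according to the date.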
In Great Britain, differential charging is also used by distribution network operators (DNOs) to cover the cost of operating the distribution networks. As a whole, these charges are known as Distribution Use of System (DUoS) charges, and three-tier tariffs known as red-amber-green (RAG) tariffs are used for non-domestic consumers with half-hourly settlement. A RAG tariff as used for DUoS charging typically has an off-peak green price time band overnight, a peak red price time band in the early evening, and amber price time bands in between. DUoS charges for domestic consumers currently exist as a single rate, and are paid by suppliers acting as 'supercustomers', who pass the charges on to customers by factoring the costs in when developing tariffs. Two-tier electricity tariffs have also been implemented in an effort to reduce reverse flow from solar PV in areas with high penetrations of solar power. In Cornwall, Regen SW, on behalf of the local DNO, Western Power Distribution, recently trialled a two-tier tariff known as the Sunshine Tariff. This offered off-peak electricity from 10:00–16:00 for six months of the year. In that study, it was found that households with automation technology were able to shift 13% of their consumption into the 10:00–16:00 period, compared with 5% for those without automation. Similarly low levels of engagement have been found in other TOU tariff trials, with field trials of TOU tariffs in 1500 German households resulting in average percentage reductions in peak demand of around 6%, and a pilot trial in 300 Cypriot households reducing total consumption in peak hours by no more than 3.5%. In response to these low levels of engagement without automation, it has been recommended that automation and aggregators should be used for demand management. Much research has looked at the possibilities for using energy storage for peak shaving on distribution networks. Recently, Pimm et al. investigated the potential of battery storage for peak shaving, assuming perfect foresight of net demand and perfect coordination of the storage. It was shown that in the UK, 3 kWh of battery storage per household could potentially allow a 100% switch to heat pumps without increasing peak demands at the secondary substation level. It was also shown that the export peak brought about by high levels of solar PV penetration could potentially be reduced to the level it would be if there were no PV by using 5 kWh of battery storage per household. These findings of large potentials for peak shaving using battery storage have been confirmed by Schram et al., who also highlighted the importance of collaboration between households and other stakeholders, such as distribution system operators and retailers, to achieve the peak shaving potential at neighbourhood level. Leadbetter and Swan conducted investigations into the optimal sizing of battery storage systems for residential peak shaving, with results suggesting that typical system sizes should range from 5 kWh/2.6 kW for homes with low electricity usage, up to 22 kWh/5.2 kW for homes with high usage and electric space heating. Peak shaving of between 42% and 49% was reported in five regions of Canada. It was also found that very little cycling is required for peak shaving, and that as such the system's life is limited by the calendar life of the batteries.
Yunusov et al. used smart meter data to assess the impact of battery storage location on performance for peak shaving and phase balancing, focusing on two real low voltage networks. Some of the same authors have also considered real-time optimisation of DNO-owned storage being used for peak shaving, developing storage controllers that take into account demand forecasts and consumer clustering. Zheng et al. developed a control technique for peak shaving with battery energy storage systems using a demand limit. Whenever grid import is greater than the demand limit, the battery is discharged in an effort to bring import down to the demand limit, and whenever grid import is less than the demand limit, the battery is charged in an effort to bring import up to the demand limit. More recently, Babacan et al. developed a convex optimisation approach to storage scheduling, and showed that residential electricity tariffs featuring demand charges and supply charges can reduce peak flows of electricity, reduce power fluctuations in net demand profiles, and increase self-consumption of solar PV. There exists a significant gap in the literature surrounding the effects on the distribution network of energy storage responding to time-of-use tariffs, even though it is likely that distribution networks will need considerable reinforcement to cope with the presence of EVs and heat pumps, just as they have needed reinforcement to cope with high penetrations of solar PV in certain areas. It is important to understand what kind of effect household-level storage might have on distribution networks when responding to time-dependent tariffs, in order to improve network planning and potentially inform future electricity tariffs and charges. Therefore this paper addresses this knowledge gap, comprehensively investigating the effects on the distribution network of home batteries responding to time-dependent tariffs, and asking the question: what levels of peak shaving occur as a result of residential battery storage operating according to time-of-use and time-of-export tariffs? As well as comprehensively investigating the possible effects of time-of-use tariffs on peak demands, we also present and thoroughly investigate a novel approach to reducing export of solar PV, and investigate methods of avoiding rebound peaks caused by time-of-use tariffs in areas with many home batteries or EVs. An existing household energy demand model is used to generate demand data for households, and this data is analysed to investigate the effects of home batteries operating to maximise cost savings in areas with various penetrations of solar PV and heat pumps. The rest of this paper is laid out as follows. Section 2 details the methodology that is used for the analysis. Section 3 presents results from the time-of-use tariff analysis, including the peak shaving that will occur if home batteries respond to various three-tier electricity tariffs. Section 4 details approaches to counteracting the rebound peak caused by storage or EVs responding to time-of-use tariffs. Section 5 presents results from the time-of-export tariff analysis, showing the effects of charging for export of solar PV generation at certain times. Finally, our conclusions are presented in Section 6.
The approach used in this work can be summarised as follows: household-level net demand data in areas with various penetrations of solar PV and heat pumps are generated using a stochastic demand model, then, for many different time-dependent electricity tariffs, the operation of battery storage is determined using a time-stepping approach, assuming that the storage is operated to maximise cost savings. The peak power flows at the low voltage substation level are calculated both with and without storage, assuming that 100 houses are connected to the substation. Since a stochastic demand model is used, this process is repeated many times, and the effects of storage on peak power flows are averaged. Before continuing to explain the methods in more detail, it should be made clear that in this work we disregard the effects of time-dependent electricity tariffs on consumer behaviour, and instead focus on the effects of home batteries operating according to time-dependent tariffs. There are two main reasons for disregarding the effects of such tariffs on consumer behaviour. Firstly, as mentioned in the introduction, several field trials have found low levels of engagement with time-of-use tariffs in terms of consumer behaviour, often concluding that it is important to leverage technology and automation rather than relying on consumer behaviour, so disregarding consumer behaviour in the analysis will have little effect on the results presented here. Secondly, consumer behaviour is difficult to model, and so to fully take it into account in this analysis it would be necessary to obtain high resolution electrical and thermal demand data for many households exposed to a wide range of time-of-use tariffs. To the authors' knowledge, such data do not exist. This paper is focused on peak shaving, whereby energy storage or demand response is used to reduce peak power flows in distribution networks. Peak shaving allows the deferral of distribution network infrastructure reinforcement as loads increase and as embedded generation increases. In order to understand the effect of introducing electricity storage within residential distribution networks, it is necessary to acquire data on the electricity demand profiles of domestic properties. To this end, the CREST Demand Model (CDM), developed at the Centre for Renewable Energy Systems Technology at Loughborough University, has been used. The CDM uses time use survey logs taken by thousands of UK householders as part of the UK Time Use Survey, along with data on the numbers and types of appliances found in UK households, to stochastically synthesise a realistic load profile for a household based upon many parameters, including number of residents, time of year, and whether it is a weekday or weekend day. The resulting demand data is at one minute resolution, and can be aggregated over a number of households. The CDM is an integrated thermal-electrical model, with sub-models for occupancy, irradiance, external temperature, electrical demand, thermal demand, solar PV, and solar thermal collectors. Being an integrated model, many of the different sub-models are interlinked, so for example a change in irradiance will affect four sub-models: solar thermal collector, solar PV, thermal demand, and electrical demand. Several of the sub-models have been separately validated, and the whole model has been validated by comparing its output with independent empirical data. The CDM is an open-source development in Excel VBA, and its authors make clear that it is primarily for application in low voltage network and urban energy analyses, exactly the type of analysis presented in this paper.
The most recent version of the CDM does not have a multiple day feature, so in order to simulate multiple consecutive days, separate days were modelled while maintaining the same household and appliance properties between days. Therefore, within the resulting data there is some discontinuity in demand at midnight; however, as this is not a time when the distribution network is under stress, we do not consider this to be an issue in the context of this work. Average UK household electricity demand profiles, as synthesised by the CDM, are shown against time of day in Fig. 1. Morning and evening peaks are clear, with both being higher in winter than in summer. Also clear is that the evening peak is wider during winter than during summer. These increases are all related to increased lighting and heating demands in winter. The maximum average demand is shown to be 0.84 kW; this is not the same as the average peak demand, which is higher. The maximum average demand is very similar to the 0.91 kW found in smart meter trials conducted within the Customer-Led Network Revolution project run by Northern Powergrid. The shape of the curve, and the time of maximum average demand, are also very similar. It should be noted that the average demand values rise from zero at midnight at the start of the day. This is because the demand model does not have a multiple day feature, as explained above. Since the distribution network is not under stress at midnight, this is not an issue and does not affect the results shown later in the paper. The demand profile of one house over 24 h in mid-winter is shown in Fig. 2. It is clear that the demand profile at a single household level is very spiky, as high power appliances are only occasionally used. The intermittent operation of the compressor in a fridge-freezer is also clear, particularly overnight. As mentioned above, the CREST Demand Model includes a thermal sub-model, generating realistic heat demands for space and hot water heating based upon the synthesised occupancy and irradiance profiles. For an individual household, the heat output profile of a heating system has a characteristic 'spikiness', due to thermostat deadbands and the thermal inertia inherent in buildings. Heat pumps produce heat over longer periods than gas boilers because they do not produce heat at such high temperatures, so they have a less spiky heat output profile. To study the effect of fixed time-dependent electricity tariffs on peak shaving in residential areas with battery storage, we use household net demand data generated using the CDM. It is assumed that the storage is operated simply to maximise monetary savings through the tariff, with peak shaving being consequential. In considering peak shaving of demand, a fixed three-tier time-of-use tariff is used, similar to Green Energy's TIDE tariff. In some analyses presented in this paper the storage is fully charged up overnight in the green band, and in some analyses the storage is only charged using excess solar power. The storage is only discharged in a single discharge window, also known as the red band, around late afternoon / early evening each day, the start and end times of which are varied in the analysis. The start and end times remain fixed from day to day. The storage is discharged as rapidly as possible from the start of the discharge window in an attempt to bring net demand down to zero, as if incentivised by a high electricity price at that time. Since battery degradation is not taken into account in this work, discharging as rapidly as possible in the discharge window maximises the savings from using storage when exposed to such a tariff. Amber bands run between the green and red bands.
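This charge-and-discharge rule lends itself to a simple greedy, time-stepping implementation. The sketch below is an illustrative reconstruction of that logic for the demand-shaving case rather than the authors' code: the battery charges at a fixed rate in the green band, discharges in the red band only as far as needed to bring household import to zero (never exporting), and idles in the amber bands. The efficiency and C-rate defaults mirror figures quoted later in the text, and all names are placeholders.

```python
import numpy as np

def dispatch_tou(net_demand_kw, dt_h, capacity_kwh, in_green_band, in_red_band,
                 charge_c_rate=1/7, discharge_c_rate=1.0,
                 eta_charge=0.922, eta_discharge=0.922, soc0_kwh=None):
    """Greedy battery operation under a fixed three-tier TOU tariff.

    net_demand_kw : household net demand before storage, per time step (import positive)
    dt_h          : time step length in hours (e.g. 1/60 for one-minute data)
    in_green_band, in_red_band : boolean arrays marking the off-peak and peak bands
    Returns the net demand seen by the grid after storage operation.
    """
    soc = capacity_kwh if soc0_kwh is None else soc0_kwh   # start full for winter runs
    out = np.array(net_demand_kw, dtype=float)
    for i, demand in enumerate(out):
        if in_green_band[i] and soc < capacity_kwh:
            # Charge from the grid at the fixed off-peak rate (C/7 by default).
            p = min(charge_c_rate * capacity_kwh,
                    (capacity_kwh - soc) / (eta_charge * dt_h))
            soc += p * eta_charge * dt_h
            out[i] = demand + p
        elif in_red_band[i] and demand > 0 and soc > 0:
            # Discharge only as far as needed to bring import down to zero,
            # never causing the house to export.
            p = min(discharge_c_rate * capacity_kwh, demand,
                    soc * eta_discharge / dt_h)
            soc -= p / eta_discharge * dt_h
            out[i] = demand - p
        # Amber bands: the battery is idle.
    return out
```

Summing the returned profiles over the 100 houses of an aggregation, with and without storage, gives the aggregate peaks compared in the results below.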
In many of the UK's distribution tariffs, the tariff cost in the amber band is so close to that in the green band that storage inefficiencies would make it uneconomical to charge battery storage during the green band and discharge it during the amber band. We assume that this is also the case in the TOU tariffs studied here. Using this approach, battery operating schedules can be generated using a simple time-stepping procedure, and the potential peak shaving from TOU tariffs can be found without considering prices. For each tariff of interest, the effects of this approach on ADMD (after diversity maximum demand) are found, and since the CDM is a stochastic model, 150 different aggregations of 100 households are simulated and the results averaged. The house sizes are randomly taken from a distribution representative of the UK. For the studies tackling reduction of peak solar PV export, a fixed two-tier time-of-export tariff is used, penalising export in the middle of the day. Penalising export might be regarded as an unrealistically drastic measure; however, we are setting out to examine the effects of such a scheme on peak shaving of export in areas with high penetrations of solar PV. It is assumed that the storage can only be charged in a single charge window, also known as the export red band, in the middle of each day. The start and end times of the export red band are fixed for any particular simulation, but we investigate the effect of a range of times. An example two-tier time-of-export tariff is shown in Fig. 3. In this case the storage is charged as rapidly as possible from the start of the export red band using any excess solar power, in an attempt to reduce the level of export to zero. The storage is then discharged as hard as possible outside of the export red band, without causing the household's net demand to become negative, to bring its state of charge down as low as possible before the start of the next day's export red band. Such operation would maximise the monetary savings available from using storage. In this work it is assumed that the storage in each house never acts to reverse the flow of power to/from the house at any instant, because there is currently no incentive to do so in the UK. The results would be different if batteries were also incentivised to cause export from a house or import to a house. To generate datasets for use in this analysis, the demand profiles of aggregations of 100 houses are found using the CREST Demand Model. This is a typical number of houses connected to a secondary substation in the UK, which transforms electricity from medium voltage down to low voltage for distribution to households. The CREST Demand Model is a stochastic model of domestic energy demands; the household sizes, building types, and appliances are assigned randomly based on UK distributions, and every time the model is run a different set of electrical and thermal demands is generated based on various factors including household occupancy, irradiance, and the set of appliances in the house. Therefore the demand profiles are generated for many different aggregations of 100 houses, the effects of storage responding to time-dependent tariffs are found for all of these, and then the average effects are found and presented. In all of the analyses presented here, 150 different aggregations of 100 houses are used, and in each analysis the peak flows to and from the aggregation are averaged over the 150 aggregations. Each of the 150 aggregations is a different set of houses.
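The time-of-export case described above is essentially the mirror image of the demand-shaving rule: charge only from excess PV, and only inside the export red band; discharge outside the band, but never push household import below zero. A hedged sketch of that variant, under the same placeholder naming and efficiency assumptions as before:

```python
import numpy as np

def dispatch_export_band(net_demand_kw, dt_h, capacity_kwh, in_export_red_band,
                         charge_c_rate=1/3, discharge_c_rate=1.0,
                         eta_charge=0.922, eta_discharge=0.922, soc0_kwh=0.0):
    """Greedy operation under a two-tier time-of-export tariff.

    net_demand_kw : household net demand before storage (negative values = PV export)
    in_export_red_band : boolean array marking the midday export red band
    Returns the net demand seen by the grid after storage operation.
    """
    soc = soc0_kwh                                   # start empty for summer runs
    out = np.array(net_demand_kw, dtype=float)
    for i, demand in enumerate(out):
        if in_export_red_band[i] and demand < 0 and soc < capacity_kwh:
            # Absorb excess PV, limited by the charge rate and remaining headroom.
            p = min(charge_c_rate * capacity_kwh, -demand,
                    (capacity_kwh - soc) / (eta_charge * dt_h))
            soc += p * eta_charge * dt_h
            out[i] = demand + p                      # export reduced towards zero
        elif not in_export_red_band[i] and demand > 0 and soc > 0:
            # Empty the battery into household demand before the next red band,
            # without making net demand negative.
            p = min(discharge_c_rate * capacity_kwh, demand,
                    soc * eta_discharge / dt_h)
            soc -= p / eta_discharge * dt_h
            out[i] = demand - p
    return out
```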
To account for the time it takes for energy storage to reach what might be considered steady-state operation, each demand profile consists of one week of net demands. For each household, two separate weeks of net demand data are generated: one week in summer and one week in winter. In each case, five weekdays are followed by two weekend days. When analysing peak demands, only the winter data is used and the storage starts the week full. This maximises the ability of the storage to meet demand peaks, ensuring that peaks are not unnecessarily missed because initial conditions caused the storage to be empty at the times of peak demand on the first day. Similarly, when analysing peak exports, only the summer data is used and the storage starts the week empty, ensuring that export peaks in the first day are not missed because the storage was full. Different seasons are used in each analysis because changing amounts of daylight throughout the year make it a good idea to have different tariffs for different seasons, and peak demands tend to occur in winter while peak exports from solar PV tend to occur in summer. Throughout this paper, charging and discharging efficiencies of 92.2% have been used, giving a round-trip efficiency of 85%, typical for battery storage. It has been assumed that the full storage capacity can be used. In reality, battery storage is typically not used with 100% depth of discharge, in order to increase the battery's life; however, manufacturers typically quote "useable storage capacity" or "effective storage capacity", which is equivalent to what we have used. Degradation is not considered here, though it could be considered in future work in this area. We use a maximum discharging C rate of 1, typical for a Li-ion battery, so that the battery can be completely discharged from full to empty in no less than one hour. The maximum charging C rate varies depending on the analysis. In the first analysis, a maximum charging C rate of 1/7 is used. In all of the following analyses, we use a maximum charging C rate of 1/3, again typical for a Li-ion battery, so that the battery can be completely charged from empty to full in no less than three hours. It is assumed that the storage is able to conduct load following, thus rapidly responding to changes in net demand. In discussions with battery developers it has been found that in some cases it can take a few minutes for battery inverters to prepare to allow discharge of the battery, due to precautions that must be taken to ensure that a grid supply is present in case maintenance or repair work is being carried out on local cables. However, it is known that the time taken for these procedures can be reduced to seconds using always-on inverters. In this section, we determine the effects of TOU tariffs on demand peaks at the secondary substation level, looking at ranges of times for the peak price bands, and paying special attention to existing and recently-trialled TOU tariffs. We begin by investigating the effect of TOU tariffs on the potential contribution of home batteries to reducing peak demands, initially using the first TOU tariff to be introduced in the UK in 2017. A fixed three-tier TOU tariff is implemented, as explained above. We begin by assuming that the storage is always charged gradually within a 7-hour overnight green band, at a rate of C/7. This is the slowest charging rate that can be used while ensuring that a full charge will always occur in the off-peak period of 23:00–06:00 every night.
A peak red band runs at some point in the late afternoon or evening, and the storage is discharged as hard as possible within this band without causing the household's demand to become negative. The start time of the red band, along with its length, are varied in order to understand the effect of the red band parameters on peak demand at the secondary substation level. Amber bands run between the green and red bands and, as explained previously, it is assumed that the storage is neither charged nor discharged in the amber bands. Fig. 5 shows a contour plot of percentage reduction in ADMD against the red band start time and length, for an aggregation of 100 houses with 3 kWh of battery storage per house but no solar PV. The presence of negative values throughout shows that ADMD is in fact slightly increased when batteries operate according to this tariff in areas with no solar PV. By way of example, we can see that Green Energy's TIDE tariff, with its 3-hour red band from 16:00–19:00, might lead to a 1.7% increase in ADMD in areas where households have 3 kWh of battery storage but no solar PV, if the storage were charged at the slowest rate possible to ensure a full charge can occur every night. The increase in ADMD is a result of the overnight charging of the batteries, increasing the late-night demand such that the time of ADMD is actually moved into the late-night period. This effect has been seen in other studies, and is sometimes known as a "rebound peak". The rebound peak is evident from Fig. 6, a plot of aggregated demand profiles with and without storage over the course of 24 h. Evidently, the net demand of the aggregation is considerably reduced in the red band, with the reduction tailing off slightly towards the end of the red band when some of the batteries have become depleted. It can clearly be seen that charging of batteries from the start of the green band at 23:00 has shifted the peak demand to this time. In the analysis presented in Figs. 5 and 6, the storage was always charged at a rate of C/7. As explained above, this is the slowest rate possible while ensuring a full charge can always occur in the off-peak period of 23:00–06:00 every night. The increases in ADMD could be even higher if the batteries were charged at a faster rate than C/7, or if the storage capacity were higher. The latter is particularly relevant when considering areas with large numbers of electric vehicle chargers responding to TOU tariffs. ADMD increases could be reduced or avoided by incentivising battery charging when domestic demands are lower. Other approaches to avoiding a rebound peak are proposed and investigated in Section 4. In terms of home batteries, it is likely that they will mainly be installed in houses with solar PV, at least in the near term, in which case charging using excess solar power may be prioritised. However, it should be noted that in the first three-tier TOU tariff in the UK, the price in the seven-hour overnight off-peak price band was 4.99 p/kWh, lower than the export Feed-in Tariff at the same time of 5.03 p/kWh. If export tariffs are paid based on metered export volumes, then it would have been more economical for batteries in households signed up to that tariff to be charged overnight, rather than to be charged using excess solar power. However, in early 2018, the off-peak price in that tariff was raised above the export Feed-in Tariff. In areas with solar PV and with relatively low export tariffs, it is likely that battery controllers will forecast solar irradiance and household demand, and focus on charging the battery using excess solar power, thus reducing overnight charge.
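The headline quantity in these contour plots is the percentage change in ADMD across the 100-house aggregation. A short sketch of that bookkeeping is given below, assuming per-household net demand arrays before and after storage operation; the per-household normalisation is one common definition of ADMD and cancels in the percentage, and the names are placeholders.

```python
import numpy as np

def admd_kw(household_profiles_kw: np.ndarray) -> float:
    """Peak of the aggregate demand profile divided by the number of households
    (after diversity maximum demand). Rows = households, columns = time steps."""
    aggregate = household_profiles_kw.sum(axis=0)
    return float(aggregate.max()) / household_profiles_kw.shape[0]

def admd_reduction_percent(before_kw: np.ndarray, after_kw: np.ndarray) -> float:
    """Percentage reduction in ADMD due to storage; negative values indicate a
    rebound peak (ADMD increased)."""
    return 100.0 * (admd_kw(before_kw) - admd_kw(after_kw)) / admd_kw(before_kw)

# Averaging admd_reduction_percent over the 150 stochastic aggregations, for each
# red band start time and length, would reproduce one point of a contour plot.
```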
7 shows how the results look if the batteries are only charged using excess solar power, in areas with 3 kW of solar PV per house.The maximum charging rate is set to C/3, typical for a Li-ion home battery .It can be seen that percentage reductions in peak demands at the secondary substation can be over 12% in this case, with the optimal red band running for at least six hours from around 17:00.This timing is consistent with the average household electricity demand profiles shown in Fig. 1.An example of the effects of a five hour red band running from 17:00 to 22:00 is shown in Fig. 8.It can be seen that in residential areas with large numbers of batteries and reasonably large amounts of installed solar PV, it might be worth having two red bands to capture both the morning and evening peaks, particularly when those batteries have moderate or large storage capacities.With charging from solar PV, the peak demand reductions of around 12% are significantly lower than the potential reduction of over 60% that previous work has found to be possible with perfect foresight of local demand and the same level of storage capacity , as can be seen in Fig. 9.With 10 kW h of battery storage per house, again being charged only using excess solar generation from 3 kW of solar PV per house, the percentage reductions in ADMD from fixed TOU tariffs are shown in Fig. 10.Comparing this with Fig. 7, it can be seen that with larger amounts of storage, there is less of a drop-off in ADMD reduction as the red band length is increased.This makes sense, as a large storage capacity is less likely to become depleted during the red band.With 10 kW h of battery storage per house, 16% ADMD reduction could be achieved with a six hour red band starting at 16:30, again assuming that the storage is only charged using solar PV.From these results we can conclude that home batteries operating according to TOU tariffs cause only small reductions in peak demand on LV networks, because LV demand peaks are spread out over time.This is clear from Fig. 11, which shows relative frequency distributions of the times at which peak flows occur.The inter-day variance is clear, and intra-day variance can be clearly seen in Figs. 6 and 8.Fixed TOU tariffs don’t sufficiently anticipate demand peaks at the LV level.It has even been found that if solar PV is not utilised for charging, the overnight charging of home batteries could cause increases in peak demands at the LV level.This has some significance for future home charging of battery electric vehicles, which are typically charged overnight, and whose battery capacities are often considerably higher than the home battery capacities considered here.Significant peak demand reductions are only possible using smarter strategies, such as voltage/current monitoring and forecasting levels of demand and embedded generation; these improved strategies could be used to control storage according to some other type of incentive scheme, such as maximum demand tariffs combined with TOU tariffs for national energy objectives, for example.We now consider the effects of introducing heat pumps into residential areas with battery storage operating according to time-of-use tariffs.In all cases, it is assumed that a 10 kWth air source heat pump with COP of 3 is included in each house, and used to provide space and hot water heating.A 125 L hot water tank is also included.Again, it is assumed that the storage is only charged using excess generation from 3 kW of solar PV on each house.Fig. 
12 shows the peak shaving against red band start time and length, in areas with 3 kWh of battery storage per house, being charged only using excess generation from 3 kW of solar PV per house.Clearly the optimal red band in this case runs for three hours from 17:00; however, this only achieves a 3.5% reduction in peak electricity demand at the secondary substation.This compares with a potential 45% reduction in peak demand with full foresight of net demand patterns and the same level of PV and storage capacity .Fig. 12 shows the effect of larger storage capacities, in this case 10 kWh per household.Again, a larger storage capacity has the effect of increasing the optimal red band length and moving the optimal start time earlier in the evening.In the best case here, LV peak demand reduction is less than 6%; with full foresight of net demand patterns and the same level of PV and storage, peak demand reductions could be over 60%, bringing ADMD down from around 2.05 kW to less than 0.8 kW .It can be concluded that TOU tariffs incentivising operation of home batteries in areas with heat pumps have very little effect on peak demands at the low voltage level and considerably miss out on the potential peak shaving that could be achieved.As shown in Section 3.1, time-of-use tariffs can have the effect of causing an increase in peak demand at the secondary substation level, as a result of all of the storage in the area charging at the same time.This increase in peak demand is known as a rebound peak, and it is most likely to occur in areas with high levels of stationary storage capacity and low levels of installed solar PV capacity.A rebound peak is most likely to occur the night before cloudy days, since storage controllers in households with solar PV will utilise forecasts of local generation, and prioritise overnight charging before cloudy days.The rebound peak effect is also likely in areas with moderate or high numbers of EVs.It is particularly concerning when considering home charging of EVs, with current home chargers in the UK typically being capable of powers between 3.6 kW and 22 kW.Also, since EVs are often away from home during daytime, charging using solar PV is usually not an option, and so overnight charging during off-peak price bands will be prioritised, increasing the simultaneity of demand.There are several possible approaches to reducing or avoiding a rebound peak, including staggered off-peak price bands between households and coordinated control of residential energy storage and EV charging.Coordinated charging of EVs has been investigated by several others , considering approaches to minimise costs and studying the interactions between distribution system operators, charging system providers, and retailers.Remotely-controlled switching of EV chargers was trialled within the My Electric Avenue project.The technology was known as Esprit, and worked by instigating temporary curtailment of recharging on a rolling basis across the local cluster of EVs .It was shown that sufficient curtailing of EV loads took place to allow an additional 10% of customers to connect EVs before voltage problems occur.However, such a system requires consumers to allow an external actor to control their charging system.Similar to staggered off-peak price bands, Hayes et al. 
have investigated the effect of individualised demand-aware price policies, such that the average price received by each end user is non-discriminatory.In that work it was shown that such individualised price policies can avoid rebound peaks, increase the load factor, and reduce network losses.We investigate the effect of staggered off-peak price bands here, by applying a randomised offset to the times of each household’s off-peak price band on each day.The effectiveness of randomised offsets for the off-peak band is shown in Fig. 13, whereby each house is randomly assigned one of a set of offset times (spanning 0–120 minutes) each day.In this way, the off-peak price band always starts at some point between 23:00 and 01:00.Seven-hour charging windows are maintained in all cases, and in this analysis, each house is given 3 kWh of battery storage.It can be seen that by spreading out the times over which charging of storage commences, such randomised offsets can prevent a rebound peak from occurring when the maximum charging rate is set to C/7.However, when the maximum charging rate is set to C/3, a rebound peak still occurs as long as the red band is longer than ∼140 min.From these results it is clear that when considering the effect of staggered off-peak price bands on rebound peaks, it is necessary to look at various factors including household demands and the installed capacity of embedded generation and energy storage.It is also clear that unless there is some incentive to charge domestic energy storage or EVs at lower rates of power, it is quite possible that rebound peaks will still occur even if staggered off-peak price bands are used.Therefore we propose that residential maximum demand tariffs are investigated as a means of explicitly incentivising consumers to reduce the stresses they place on the electricity distribution network.Similarly, maximum export tariffs would incentivise consumers to reduce stress on distribution networks in areas with high levels of rooftop solar PV capacity.The latest generation of smart meters in the UK already has the capability to record maximum import and maximum export .Therefore the effect of capacity charges on peak residential electricity demands will be the focus of a future research paper.As well as using time-dependent tariffs to incentivise demand reduction at certain times, it is also possible to use them to incentivise reduction of export from rooftop solar PV at certain times, thus reducing stress on the grid at times of high solar PV output.This can be achieved with a simple two-tier export tariff, whereby charges are paid if solar power is exported within a time band in the middle of the day.A two-tier TOU tariff was recently trialled in Cornwall in an attempt to increase electricity consumption in the middle of the day; known as the ‘Sunshine Tariff’, this ran from April to September and comprised a low price of 5 p/kWh for electricity consumed between 10:00-16:00, and a much higher price of 18 p/kWh from 16:00 to 10:00 .Unlike the Sunshine Tariff, which was a time-of-use tariff, we are considering time-of-export tariffs, which explicitly penalise export at certain times.To investigate the effect of time-of-export tariffs on reducing peak export of solar PV using electricity storage, we use a similar approach to that used in the previous section.In this case, we set up a two-tier tariff whereby an export red band runs at some point in the middle of the day, and the start and end times of this are varied.The storage is only charged in the export 
red band, and outside of the export red band the storage is discharged as hard as possible to try to bring net demand down to zero – any effects of this operation on increasing demand peaks are disregarded here as we are focusing on the effects on peak export.Again, full details of the methodology are given in Section 2.As previously, the maximum charging C rate is set to 1/3 and the maximum discharging C rate is set to 1.Results of this analysis for houses with 3 kWh of battery storage and 3 kW of solar PV are shown in Fig. 14, while Fig. 15 shows example aggregated net demand profiles with a six-hour export red band of 10:00-16:00.Evidently, the optimal time for the centre of the export red band is in the early afternoon, with longer export red bands providing the greatest peak export reductions.It is clear that the effects of time-of-export tariffs on After Diversity Maximum Export are small, with even the best case giving reductions in ADME at the secondary substation level of only 6%.Previous work has shown that the best possible peak export reduction with the same capacities of storage and solar PV is around 40% .It can be seen in Fig. 15 that little reduction in export is achieved towards the end of the export red band, since many of the batteries have become full.While not shown here, it has also been found that peak export reductions from time-of-export tariffs remain reasonably low even when considering much greater levels of battery storage; as shown in ref. , with perfect foresight, peak exports could be more than halved when storage capacity is greater than 4 kWh per house.So, as with time-of-use tariffs for peak demand reduction, we can conclude that electricity storage being operated according to fixed time-of-export tariffs will have little effect on peak solar PV export.This is because the times of peak demand and peak export are spread over periods of several hours, as is evident from several figures including Figs. 
11 and 15.Time-dependent tariffs do not sufficiently anticipate peak flows at the secondary substation level, and other schemes could provide much greater benefits.Such schemes might take the form of capacity charges proportional to a household’s peak import and export powers; limits on Feed-in Tariff payments for solar PV systems if the system owner does not have a means of limiting peak export to a certain percentage of the installed PV capacity; or a requirement to fit some form of curtailment device on certain high power equipment.If energy storage’s potential for low voltage peak shaving is to be realised, a key outstanding question is how to encourage consumers to adopt and appropriately operate energy storage technologies.Our exploration of fixed time-of-use and time-of-export tariffs as a means of incentivising the operation of battery storage has demonstrated that time-dependent electricity tariffs have little effect on peak flows of electricity at the low voltage level, even in areas with high penetrations of solar PV and heat pumps, and significantly miss out on the potential peak shaving that could be achieved.This is because demand and generation peaks are typically spread out over the course of several hours.Surprisingly, it was found that operating electricity storage according to the first three-tier time-of-use tariff to be introduced in the UK could actually increase peak electricity demands at the low voltage substation, if the storage all begins to charge at the start of the overnight off-peak band when average electricity demands are still moderately high, causing a “rebound peak”.Upon the launch of that tariff, the overnight off-peak electricity price was lower than the export tariff for solar PV, so it would actually have been more economical for storage in houses with solar PV to be charged overnight rather than using excess solar power, thus causing these small increases in peak demands at the low voltage substation.These findings raise questions around the appropriate level of the export Feed-in Tariff for solar PV.It is likely that the issue of increased evening peak demands caused by time-of-use tariffs will become significant as electric vehicles are increasingly adopted.It has been shown that staggering the times of off-peak price bands for the households in an area can help to counteract the rebound peak effect, but this approach is limited in areas with large numbers of home batteries or EVs.In such areas it is also important to provide some explicit incentive to reduce maximum demands, such as a maximum demand tariff.Considering what little positive effect time-of-use and time-of-export tariffs have on low voltage demand and export peaks in residential areas with home battery storage, we believe that other measures of incentivising use of energy storage to provide low voltage peak shaving should be investigated.These measures might include capacity charges proportional to maximum import and export over a certain time period; storage sharing/rental arrangements between householders and aggregators/DNOs; exposure of storage to dynamic electricity prices, e.g. 
through use of premium export Feed-in Tariffs ; only awarding export Feed-in Tariff payments when generation is below a certain percentage of capacity; and mandatory fitting of curtailment devices to high power equipment such as PV systems and electric vehicle chargers.Considering the findings presented in this paper, future work in the C-MADEnS research project will focus on the potential of capacity charges to incentivise low voltage peak shaving when combined with time-of-use tariffs for national peak demand reductions.Given the very rapid response possible with battery storage, it is expected that an intelligent control system responding to an explicit incentive to reduce import and export peaks would be much more effective than time-of-use tariffs in incentivising low voltage peak shaving.
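As a concrete illustration of the dispatch rules analysed in this section, the following minimal sketch simulates an aggregation of households whose batteries follow a fixed three-tier TOU tariff: charging at a constant rate in the overnight green band, idling in the amber band, and discharging in the red band without driving net demand negative. It is not the authors' model; the half-hourly resolution, the toy evening-peaking demand profiles, and names such as `dispatch` and `band` are illustrative assumptions, while the efficiency and C-rate values follow the parameters quoted above.

```python
# Minimal sketch of the rule-based dispatch described above (not the authors'
# code). Assumptions: half-hourly resolution, toy Gaussian-shaped evening
# demand profiles, and the function/variable names used here.
import numpy as np

STEPS = 48        # half-hourly steps in one day (assumed resolution)
DT = 0.5          # hours per step
EFF = 0.922       # one-way charge/discharge efficiency -> ~85% round trip

def band(start_h, length_h):
    """Boolean mask over one day marking a tariff band."""
    hours = (np.arange(STEPS) * DT) % 24
    return (hours - start_h) % 24 < length_h

def dispatch(net_demand, cap_kwh, charge_c, discharge_c, green, red, soc0=1.0):
    """Battery operation for one house over one day; returns metered demand (kW).

    Charge at a fixed rate in the green band, idle in the amber band, and in
    the red band discharge as hard as possible without making demand negative.
    soc0=1.0 starts the day full, mirroring the winter peak-demand assumption.
    """
    soc = soc0 * cap_kwh
    metered = np.empty(len(net_demand))
    for t, d in enumerate(net_demand):
        p = 0.0  # battery power at the meter (+ve charging, -ve discharging)
        if green[t]:
            p = min(charge_c * cap_kwh, (cap_kwh - soc) / (EFF * DT))
        elif red[t] and d > 0:
            p = -min(discharge_c * cap_kwh, d, soc * EFF / DT)
        soc += (p * EFF if p > 0 else p / EFF) * DT  # losses in both directions
        metered[t] = d + p
    return metered

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    hours = np.arange(STEPS) * DT
    # toy evening-peaking winter profiles for 100 houses (illustrative only)
    shape = 0.4 + 1.2 * np.exp(-((hours - 18.5) / 2.0) ** 2)
    houses = shape + rng.normal(0.0, 0.15, (100, STEPS)).clip(-0.2)
    green = band(23, 7)   # 23:00-06:00 off-peak band
    red = band(16, 3)     # e.g. a 16:00-19:00 peak band
    with_batt = np.array([dispatch(h, 3.0, 1 / 7, 1.0, green, red)
                          for h in houses])
    print("ADMD without storage: %.2f kW" % (houses.sum(0).max() / 100))
    print("ADMD with storage:    %.2f kW" % (with_batt.sum(0).max() / 100))
```

Sweeping the red band start time and length over a simulation of this kind would generate contour plots analogous to those discussed above, and with all charging confined to the start of the green band the aggregate peak simply shifts towards 23:00, reproducing the rebound-peak behaviour.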
Time-of-use electricity tariffs are gradually being introduced around the world to expose consumers to the time-dependency of demand, however their effects on peak flows in distribution networks, particularly in areas with domestic energy storage, are little understood. This paper presents investigations into the impact of time-of-use and time-of-export tariffs in residential areas with various penetrations of battery storage, rooftop solar PV, and heat pumps. By simulating battery operation in response to high resolution household-level electrical and thermal demand data, it is found that home batteries operating to maximise cost savings in houses signed up to time-dependent tariffs cause little reduction in import and export peaks at the low voltage level, largely because domestic import and export peaks are spread out over time. When operating to maximise savings from the first three-tier time-of-use tariff introduced in the UK, batteries could even cause increases in peak demand at low voltage substations, if many batteries in the area commence charging at the start of the overnight off-peak price band. Home batteries operating according to time-dependent electricity tariffs significantly miss out on the potential peak shaving that could otherwise be achieved through dedicated peak shaving incentives schemes and smarter storage control strategies.
68
The blink reflex magnitude is continuously adjusted according to both current and predicted stimulus position with respect to the face
The eye blink elicited by electrical stimulation of the median nerve at the wrist hand-blink reflex is a defensive reflex subserved by an entirely subcortical circuit at brainstem level.Human electromyographic recordings from the orbicularis oculi muscles show that the HBR consists of a bilateral response with an onset latency of ∼45 msec.The HBR is functionally similar to the R2 component of the trigemino-facial blink reflex.The magnitude of the HBR increases when the proximity between the stimulated hand and the face is reduced.Such increase has allowed the identification of a portion of space surrounding the face with a protective function, the defensive peripersonal space.Similarly to what has been observed in non-human primates, potentially harmful stimuli occurring within this space elicit stronger defensive responses compared to stimuli located outside of it.The HBR enhancement is consequent to a tonic, cortico-bulbar facilitation of the polysynaptic medullary pathways that relay the somatosensory input to the facial nuclei at pontine level.The strength of this facilitation is determined by a number of cognitive factors, which demonstrates its defensive value; for example, the HBR magnitude increase is finely adjusted depending on the estimated probability that the threatening stimulus will occur, as well as on the presence of defensive objects near the face.These observations highlight the behavioural relevance of such fine top-down modulation of this subcortical reflex.In contrast, the temporal dynamic of this top-down modulation has not been explored.Indeed, in previous experiments the eliciting stimuli were delivered using a long temporal interval after the hand was placed at the target distance from the face.Therefore, the only information about the temporal profile of the cortico-bulbar facilitation underlying the HBR increase is that it is exerted tonically, well before the eliciting stimulus is delivered.Given that individuals navigate in a fast changing environment, one would expect the cortico-bulbar facilitation to adjust within a time frame appropriate to minimise the potential for harm of sudden external events, and as a function of the predicted spatial position of external threats.Here, in two experiments we investigated the temporal characteristics of the cortico-bulbar facilitatory effect, and its adjustment depending on the predicted position of the stimulus.In Experiment 1 we exploited the well-established HBR enhancement observed when the stimulated hand is located inside the DPPS of the face compared to when it is located outside.We tested whether the HBR enhancement is modulated by the length of the time interval between when the hand reached the target position and the subsequent delivery of the eliciting stimulus.In Experiment 2 we exploited the ability of the nervous system to accurately predict limb positions during voluntary movement: participants continuously moved their hand between the ‘Far’ and ‘Near’ positions and the stimulus was automatically delivered either inside or outside the DPPS, when the hand was moving either towards or away from the face.We therefore tested whether the HBR facilitation depends on the direction of the movement of the stimulus with respect to the body.Sixty six healthy participants were screened for this study, to identify HBR responders.All participants gave written, informed consent before taking part in the study.All procedures were approved by the local ethics committee.Electrical stimuli were delivered to the right 
median nerve at the wrist using a bipolar surface electrode attached to a Digitimer constant current stimulator.Stimulus duration was 200 μsec.Stimulus intensity was adjusted, in each participant, to elicit a clear HBR in at least three consecutive trials.The definition of a clear HBR was subjective, and based on the visual inspection of the EMG recording, as in previous HBR experiments.EMG activity was recorded from the orbicularis oculi muscle, bilaterally, using pairs of surface electrodes.The active electrode was located ∼1 cm below the lower eyelid, and the reference electrode ∼1 cm laterally of the outer canthus.Signals were amplified and digitized at a sampling rate of 10 kHz.In Experiment 2, the position of the hand was continuously monitored using a 3D localizer programmed to trigger a stimulus when the hand reached two pre-defined positions, one inside and one outside the DPPS.This device allows localizing the position and orientation of the hand, and consists of an alternating current static magnetic transmitter that emits an electromagnetic dipole field.Tracking sensors were attached to the moving hand and to the forehead, and their positions were located relative to the position of the static transmitter.Participants sat in a comfortable chair with their forearms resting on a pillow laying on a table in front of them.In each participant we first determined whether they were ‘responders’, by increasing the stimulus intensity until a clear HBR was elicited in three consecutive trials, or the participant refused a further increase of stimulus intensity.Participants with a reproducible HBR underwent further testing.The percentage of recruited subjects who were HBR responders was consistent with previous studies.During the experiments, participants were asked to keep their gaze fixed on a cross placed centrally in front of them, at a distance of ∼100 cm, 20 cm below eye level.White noise was played to mask any possible auditory cue about the incoming stimulation.In 17 responders we tested whether the ‘Far’–‘Near’ HBR enhancement was modulated by the length of the time interval between when the hand reached the target position and the subsequent delivery of the eliciting stimulus.Stimuli were delivered with the hand in two positions: either while the forearm was at ∼130° with respect to the arm, a posture resulting in the wrist being at a distance of ∼40–60 cm from the ipsilateral side of the face, or while the forearm was at ∼75° with respect to the arm, and the wrist at ∼4 cm from the ipsilateral side of the face.Stimuli were delivered with a delay of 2, 5, 10, or 30 sec after the hand reached the target position.A total of 80 stimuli were delivered, in two blocks.In each block 5 stimuli were delivered for each position and delay, for a total of 40 stimuli.Stimuli were delivered in the ‘Far’ and ‘Near’ positions in alternating trials.The order of delays was pseudorandomised, with no more than two consecutive stimuli delivered at the same delay.At the beginning of each trial, participants were verbally instructed to place their hand in either the ‘Far’ or the ‘Near’ position, but they were not informed of the delay between when they placed the hand in the target position and stimulus delivery.The interval between two consecutive stimuli was ∼30 sec.In 20 responders we tested whether the cortico-bulbar modulation of the HBR excitability depends on the direction of the movement of the stimulus with respect to the body.Stimuli were delivered with the hand in two positions: either 
while the forearm was at ∼100° with respect to the arm, a posture resulting in the wrist being at a distance of ∼40 cm from the ipsilateral side of the face, or while the forearm was at ∼85° with respect to the arm, and the wrist at a distance of ∼13 cm from the ipsilateral side of the face.Participants were instructed to move their hand between the positions ‘Far’ and ‘Near’.Therefore, the trajectory between the ‘Far’ and ‘Near’ positions included the ‘Semi-far’ and ‘Semi-near’ at which the hand was stimulated.Participants were instructed to move the hand at constant speed, and the frequency of oscillation between the ‘Far’ and ‘Near’ positions was approximately .25 Hz.The position of the hand was continuously sampled using the 3D localizer, which triggered the electrical stimulus when the hand was in one of the two target positions.Participants received 10 stimuli at each stimulation position and movement direction, for a total of 40 stimuli.Stimuli delivered at ‘Semi-far’ and ‘Semi-near’ positions were alternated.Stimuli delivered while the hand was moving ‘Towards’ and ‘Away’ from the face were delivered in pseudorandom order, with no more than two consecutive stimuli delivered while the hand was moving in the same direction.The interval between two consecutive stimuli was always ∼30 sec.EMG data were analysed using Neuroscan 4.5, MATLAB and Letswave 5.EMG signals from each participant were high-pass filtered, full wave rectified, and averaged across ipsilateral and contralateral recording sides.HBR responses were averaged separately for each subject and experimental condition.Statistical analyses were conducted on low-pass filtered waveforms, at each time point of the averaged EMG waveform, for each participant.In Experiment 1, we performed a two-way, repeated-measures ANOVA, with ‘Position’ and ‘Time’ as experimental factors.In Experiment 2, we performed a two-way, repeated-measures ANOVA with ‘Position’ and ‘Movement’ as experimental factors.To investigate the time course of the possible effects of these experimental factors, the ANOVA was performed on each time point of the averaged HBR.Such a point-by-point ANOVA yielded a waveform expressing the significance of the effect of each factor, as well as of their interactions across the time course of the HBR response.When main effects or interactions were significant, Bonferroni-corrected post hoc paired t-tests were performed.A consecutivity threshold of 10 msec was chosen to account for multiple comparisons, as in Sambo, Forster, et al. 
and in Sambo and Iannetti.Statistical significance was set at .05.In Experiment 1 we tested whether the HBR enhancement due to the stimulated hand being located inside the DPPS of the face was modulated by how long the hand was kept in the target position before receiving the successive stimulus.The factor ‘Position’ was a significant source of variance within two time windows: 60–89 and 111–123 msec post-stimulus.This indicates that the HBR magnitude was overall larger when the stimulated hand was inside the DPPS of the face than when it was outside, thus confirming a number of previous observations.The factor ‘Time’ was a significant source of variance within two time windows: 65–81 and 84–98 msec post-stimulus.Post hoc paired t-tests between the four levels of the factor ‘Time’ revealed no significant differences between all pairs of time delays.Crucially, there was no ‘Position’ × ‘Time’ interaction, indicating that the HBR increase in the ‘Near’ position was similar at the four explored time delays.The results of Experiment 1 indicate that the top-down cortical modulation underlying the HBR enhancement is similar at the four explored delays, and therefore can occur as quickly as 2 sec from when the hand is placed in the stimulated position.In Experiment 2 we tested whether the cortico-bulbar modulation of the HBR excitability depends on the predicted position of the stimulus, as well as by the direction of stimulus movement.The factor ‘Position’ was a significant source of variance within the 49–87 msec post-stimulus time window, while the factor ‘Movement’ was not.Crucially, there was a significant ‘Position’ × ‘Movement’ interaction within two time windows: 51–61 and 66–86 msec post-stimulus.We explored this interaction by performing two post-hoc paired t-tests, comparing the HBR responses elicited while the hand was in the ‘Semi-near’ and ‘Semi-far’ positions, for both ‘Towards’ and ‘Away’ movement directions.In the ‘Away’ condition, HBR was significantly greater when the hand was in position ‘Semi-near’ than in position ‘Semi-far’, thus reproducing the previously observed increase of HBR magnitude while the hand is close to the face.In contrast, in the ‘Towards’ condition, the HBR was not different in the ‘Semi-far’ and ‘Semi-near’ positions, because of a larger HBR in the ‘Semi-far’ position.This finding indicates that the excitability of the medullary circuit mediating the HBR is continuously adjusted as a function of the predicted hand position, and this prediction depends on the direction of the movement of the threat with respect to the body.When the hand is moving towards the face, the threat value is increased, resulting in a large HBR even if the actual hand position is ‘Semi-far’.In this study we investigated the temporal characteristics of the cortico-bulbar modulation of the brainstem circuits mediating the HBR, as well as their dependency on the predicted position of the stimulated hand during a voluntary movement.We observed three main findings.First, the top-down cortical modulation of the medullary circuitry subserving the HBR occurs as quickly as 2 sec from when the hand is placed in the stimulated position.Second, it is continuously adjusted as a function of both the current and predicted hand position.Third, it depends on the direction of the movement of the stimulus with respect to the body: the hand movement towards the face results in a large HBR even if the actual hand position is far from the face.This is consistent with the notion that a stimulus 
approaching the body has a higher threat value.These findings indicate that the central nervous system is able to rapidly adjust the excitability of subcortical defensive responses, and thereby exploit the predictions about the spatial location of the threatening stimulus in a purposeful manner.These modulations take into account both the current and predicted position of a potential threat with respect to the body.This neural mechanism ensures appropriate adjustment of defensive responses in a rapidly-changing sensory environment.Experiment 1 showed that the HBR enhancement observed when the stimulated hand is located near the face occurs within two seconds from when the hand has been in position prior to receiving the stimulus.Indeed, there were no differences in the ‘Far’–‘Near’ effect across the four time delays explored.Experiment 2 further characterised the temporal properties of the HBR enhancement: the HBR magnitude was modulated continuously as a function of both the current and the predicted position of the stimulated hand with respect to the face.This, together with the previous evidence that the brainstem medullary interneurons subserving the HBR response are under cortico-bulbar control, indicates that such top-down modulation is continuously and purposefully regulated.It is well-known that the blink reflex can be cognitively modulated at short time scales.For example, Codispoti, Bradley, and Lang observed that the blink reflex elicited by an auditory stimulus is enhanced by the presentation of an unpleasant image preceding the auditory stimulus by as little as 300 msec.Similarly, Ehrlichman, Brown, Zhu, and Warrenburg showed that the blink reflex is increased when the eliciting auditory stimulus is preceded by an unpleasant odour by 400 msec.However, these modulations entailed emotional stimuli which are known to alter the arousal level and generally facilitate motor responses.In contrast, the cortico-bulbar modulation underlying the HBR enhancement reported in the current study is specific for the medullary interneurons receiving somatosensory input from the stimulated hand.Therefore, on the basis of the proprioceptive and visual information about the spatial location of the stimulated hand, the nervous system remaps the respective position of the hand and the face onto the same external reference frame, and thereby infers their distance.This distance estimate is used to adjust the cortical modulation of medullary circuits subserving the HBR.The fact that the excitability of defensive reflexes is continuously adjusted depending on the position of the threats with respect to the body has a clear survival value, as such reflexes are triggered by rapidly changing stimuli in the sensory environment.Indeed, unnecessary facilitation of, for example, blinking has a cost: the probability of the individual being harmed in other ways increases with the strength of blinking.Therefore, rapid enhancement or reduction of the facilitation of the blink reflex allows optimal avoidance of environmental threats.Experiments 1 and 2 showed that the modulation of the HBR circuitry in the medulla occurs within tens of milliseconds.Experiment 
2 yielded an important additional finding: the HBR is modulated according to a model that takes into account both the actual position of the hand with respect to the face and the predicted location of the hand.In Experiment 2 participants were required to voluntarily move the hand either towards or away from the face.The estimated position of the hand in external space during a voluntary movement is based on an internal forward model that reliably predicts the consequences of motor commands.Such a model relies on the motor command itself, as well as on the comparison between the predicted and the actual proprioceptive and visual feedback generated by the movement.Such continuous comparison allows precise estimation of limb position during self-paced movements.Therefore, participants were able to predict accurately the direction of hand movement, and the forthcoming hand position when the stimulus was delivered.A related question is whether the proprioceptive and/or visual information alone would result in similar predictions about hand locations, and, therefore, in similar HBR modulations.Performing the same paradigm as in Experiment 2 while the hand is passively moved by an external source would allow addressing this point.A result similar to that reported here would indicate that sensory feedback alone is sufficient to make predictions about forthcoming hand position.Regarding the respective contribution of proprioceptive and visual feedback, previous experiments have shown that the ‘far–near effect’ is entirely unaffected when the eyes are closed or when the participants cannot see the hand.This suggests that proprioceptive information is sufficient to determine an HBR modulation similar to that observed with eyes open and during voluntary movement.Stimuli delivered in the ‘Semi-far’ and ‘Semi-near’ positions while the hand was moved away from the face elicited HBR responses whose magnitude was larger when the stimulus was closer to the face.This ‘Semi-far’–‘Semi-near’ effect is reminiscent of the typical far–near modulation of the HBR magnitude, and its size was similar to that observed while delivering stimuli at similar distances from the face, but with the hand kept still for several seconds before receiving the stimulus.Crucially, when the hand was moved towards the face, the ‘Semi-far’–‘Semi-near’ effect vanished, because, in this movement direction, the magnitude of the HBR elicited by stimuli delivered while the hand was still away from the face was as large as that of the HBR elicited by stimuli delivered when the hand was closer to the face.In contrast, when the hand was moved away from the face, there was a typical ‘Semi-far’–‘Semi-near’ difference.In other words, there was a clear dissociation between direction of the movement and HBR increase.What could be the mechanism underlying such dissociation?A parsimonious explanation could be that the brain's ability to predict the position of limbs during voluntary movements is different as a function of the direction of movements: movements away from the body would result in inaccurate predictions.However, these predictions are not heavily dependent on movement direction, and even possible differences in prediction accuracy would be unlikely to explain the dramatic difference observed in the two movement directions.Alternatively, and more likely, there might be two interacting mechanisms: the evaluation of the actual hand position, and the prediction of its position during a voluntary movement.In other words, the models that the brain 
uses to decide the strength of the modulation of subcortical reflexes might be asymmetrically tuned: they yield a pre-emptive, stronger defensive response when there is a prediction that the threat will be closer to the body territory to be defended, but also when there is a prediction that the threat will move away from the face.This can be conceptualized as an additional “safety rule” in the model that minimises the likelihood of responding with an HBR of normal magnitude when the threat is still close to the face.Such asymmetric modulation is reminiscent of the observations of Zhao, Irwin, Bloedel, and Bracha, who explored the conditioned anticipatory eye blink responses during hand movements towards or away from the face.They observed that only when the hand was quickly moved towards the face, a movement that eventually resulted in a tap of the forehead, an eye blink was generated before the forehead tap.Although the anticipatory eye blink described by Zhao et al. is an additional, independent eyelid response preceding the blink reflex induced by the actual trigeminal stimulation, the direction-specificity of this phenomenon reflects the nervous system’s ability to make meaningful predictions about environmental threats and elicit an appropriate defensive response.In this sense, their observation is similar to our finding that hand movements towards the face result in an upregulated HBR response even when the hand is still far away from the face.A perhaps surprising observation is that the HBR elicited when the hand was in the ‘Semi-near’ position was similar in the two directions of movement.The lack of a further increase of the ‘Semi-near’ HBR in the towards direction is probably due to a ceiling effect: when the threat content of the environmental situation is estimated to be high because of proximity with the defended area, the nervous system exerts a maximal facilitation on the medullary circuitry subserving the blink response.Indeed, when the HBR is elicited in response to stimuli located in a number of spatial locations, an abrupt rather than a gradual increase of the HBR magnitude is observed with greater proximity of the hand to the face, and, accordingly, such distance-dependent modulation of HBR magnitude can be effectively modelled using a series of step functions.The present results indicate that the cortical modulation of the strength of the blink reflex occurs continuously, and takes into account the predictions about the spatial location of the stimulus in a purposeful manner: when the stimulus moves towards the body, and has therefore a higher threatening value, the blink reflex is anticipatorily upregulated.This real-time, predictive control of the excitability of subcortical reflex circuits ensures optimal behaviour in rapidly-changing sensory environments.The authors declare no conflicts of interest.
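To illustrate the point-by-point statistical approach described in the methods above, the sketch below preprocesses single-trial EMG (high-pass filtering, full-wave rectification, trial averaging) and then applies a per-time-point test with a 10-msec consecutivity threshold. It is not the authors' Neuroscan/MATLAB/Letswave pipeline: the filter cutoffs, array shapes and function names are assumptions, and a paired t-test on the 'Position' factor stands in for the full two-way repeated-measures ANOVA.

```python
# Minimal sketch of a point-by-point EMG analysis with a consecutivity
# threshold (not the authors' pipeline); filter cutoffs, shapes and names
# are assumptions, and a paired t-test replaces the two-way RM-ANOVA.
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.stats import ttest_rel

FS = 10_000  # sampling rate (Hz), as reported above

def preprocess(trials, fs=FS, hp=55.0, lp=500.0):
    """High-pass filter, full-wave rectify, low-pass filter and average
    single-trial EMG of shape (trials, samples)."""
    b_hi, a_hi = butter(4, hp / (fs / 2), btype="high")
    b_lo, a_lo = butter(4, lp / (fs / 2), btype="low")
    rectified = np.abs(filtfilt(b_hi, a_hi, trials, axis=-1))
    return filtfilt(b_lo, a_lo, rectified, axis=-1).mean(axis=0)

def consecutivity_mask(pvals, fs=FS, alpha=0.05, min_ms=10.0):
    """Keep only runs of significant time points lasting at least min_ms."""
    sig = pvals < alpha
    keep = np.zeros_like(sig)
    min_len, start = int(min_ms * fs / 1000), None
    for i, s in enumerate(np.append(sig, False)):
        if s and start is None:
            start = i
        elif not s and start is not None:
            if i - start >= min_len:
                keep[start:i] = True
            start = None
    return keep

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n_sub, n_samp = 17, 2000            # 200-msec epochs at 10 kHz
    # preprocessing demo: 20 raw trials from one subject and condition
    averaged = preprocess(rng.normal(0.0, 5.0, (20, n_samp)))
    # toy per-subject averaged waveforms for the two 'Position' conditions
    near = np.abs(rng.normal(1.0, 0.3, (n_sub, n_samp)))
    far = np.abs(rng.normal(1.0, 0.3, (n_sub, n_samp)))
    near[:, 600:900] += 0.8             # simulated 'Near' enhancement, ~60-90 msec
    _, pvals = ttest_rel(near, far, axis=0)
    sig = consecutivity_mask(pvals)
    print("significant samples surviving the 10-msec threshold:", int(sig.sum()))
```

On real data the same masking step would be applied to the p-value waveform produced by the repeated-measures ANOVA for each factor and interaction, followed by the Bonferroni-corrected post hoc comparisons described above.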
The magnitude of the hand-blink reflex (HBR), a subcortical defensive reflex elicited by the electrical stimulation of the median nerve, is increased when the stimulated hand is close to the face ('far-near effect'). This enhancement occurs through a cortico-bulbar facilitation of the polysynaptic medullary pathways subserving the reflex. Here, in two experiments, we investigated the temporal characteristics of this facilitation, and its adjustment during voluntary movement of the stimulated hand. Given that individuals navigate in a fast changing environment, one would expect the cortico-bulbar modulation of this response to adjust rapidly, and as a function of the predicted spatial position of external threats. We observed two main results. First, the HBR modulation occurs without a temporal delay between when the hand has reached the stimulation position and when the stimulus happens (Experiments 1 and 2). Second, the voluntary movement of the hand interacts with the 'far-near effect': stimuli delivered when the hand is far from the face elicit an enhanced HBR if the hand is being moved towards the face, whereas stimuli delivered when the hand is near the face elicit an enhanced HBR regardless of the direction of the hand movement (Experiment 2). These results indicate that the top-down modulation of this subcortical defensive reflex occurs continuously, and takes into account both the current and the predicted position of potential threats with respect to the body. The continuous control of the excitability of subcortical reflex circuits ensures appropriate adjustment of defensive responses in a rapidly-changing sensory environment.
69
Autologous mesenchymal stem cell application for cartilage defect in recurrent patellar dislocation: A case report
Recurrent patellar dislocation is a repeated dislocation that follows from an initial episode of minor trauma dislocation .Conservative management gives minimal results in preventing re-dislocation, with persistent symptoms of anterior knee pain, instability and activity limitation.Meanwhile, there is no gold standard realignment procedure .This can further cause cartilage lesions in the patella and femoral condyle, and consequently increase the risk of re-dislocation .Mesenchymal stem cells have been widely explored for treating cartilage defects due to their potential for chondrogenic differentiation .We present a novel approach to treating cartilage lesions in recurrent patellar dislocation by combining arthroscopic microfracture and autologous bone marrow-derived MSCs after Fulkerson osteotomy.This work has been reported in line with the SCARE criteria .A 21-year-old male presented with left knee discomfort.Ten years ago, the patient felt discomfort on the medial side of the knee and felt his knee cap slide out laterally.The patient experienced several episodes of instability ranging from a feeling of “giving way” to a prominent lateral sliding-off of his knee cap.Anterior knee pain also occurred during activities such as climbing stairs or exercising.Physical examination revealed slight pain on the anterior side of the patella, but no atrophy or squinting patella.Knee range of motion was normal when the knee cap position was normal, but was limited when it was dislocated.Lateral subluxation of the patella was found when the knee was extended from the 90° flexion position, the patellar apprehension test was positive, and medial patellar elasticity/patellar glide was >2 quadrants.The Q angle, in the 90° flexed knee position, was 10°, which was still normal.The plain radiograph imaging showed no abnormality.Insall-Salvati index was 1.12 .The patient was diagnosed with recurrent patellar dislocation, with suspected cartilage lesion of the left knee.The first surgery was a diagnostic arthroscopy and distal realignment procedure.We found articular cartilage defects on the lateral condyle of the femur with a diameter of 3 cm and on the postero-medial patella with a diameter of 2.5 cm; the depth of both was more than 50% of the cartilage thickness.We determined that the articular defect was Grade 3 according to the International Cartilage Regeneration & Joint Preservation Society (ICRS) .We performed a dissection of the lateral retinaculum using electrocautery, continued by incising the medial side of the tibial tuberosity and detaching the patellar tendon using an oblique osteotomy procedure on the tibial tuberosity, in which the fragment was slid 1 cm antero-medially and fixed with two 3.5 mm partially threaded cancellous screws, followed by percutaneous plication on the medial side of the patella using a non-absorbable suture.Post-operative ROM was 90° flexion without any dislocation and the position of the screws was good.One month after surgery, full ROM and weight-bearing exercises were started, including knee exercises until maximum flexion was reached, along with quadriceps muscle exercises.Eighteen months after that surgery, we performed an iliac crest bone marrow aspiration; arthroscopic microfracture using an awl to a depth of 4 mm at sites located ±3–4 mm from the articular cartilage defects on the posteromedial patella and lateral femoral condyle; and tibial tuberosity screw removal.Approximately 30 mL of bone marrow was aspirated from the posterior iliac crest.Bone marrow aspirate was diluted in 
phosphate-buffered saline and centrifuged at room temperature.The buffy coat was washed and cultivated for 3–4 weeks until reaching the required amount.The cells were harvested and characterized with a flow cytometer.The MSCs, which tested negative for bacteria and fungi, were injected intra-articularly into the left knee.Then, 2 mL of hyaluronic acid (HA) was injected weekly for 3 weeks.Non-weight-bearing exercise was conducted for 6 weeks.Outcomes were assessed using the International Knee Documentation Committee (IKDC) score, visual analog scale (VAS) score and imaging.The baseline IKDC score was 52.9 and the VAS score was 8.Nineteen months after the first surgery, the IKDC score had improved to 93.1, while the VAS score had decreased to 2.Six months after MSCs implantation, evaluation by coronal FSE T2-weighted MRI showed a significant growth of articular cartilage covering most of the defect.Two years after the MSCs implantation, there were no complaints and full ROM was reached.Recurrent patellar dislocation is an uncommon problem, with a recurrence rate of 15%–44% after conservative management , while cartilage lesions following recurrent patellar dislocations are quite common , but there is still no gold standard or consensus on the management .This patient was diagnosed with chondromalacia Grade 3 according to the Outerbridge classification and Grade 3 according to the ICRS .One of the suitable procedures for recurrent patellar dislocation with chondromalacia, especially Grade 3 or 4, is oblique Fulkerson-type osteotomy, with or without release of the lateral retinaculum .This distal realignment procedure could decrease patellofemoral pain by anteriorization of the tibial tuberosity, decreasing articular contact pressure and at the same time medializing the knee extensor mechanism .Therefore, we performed the Fulkerson-type osteotomy with lateral retinacular release, combined with percutaneous medial plication, since the patient was already 21 years of age and the bone was expected to be mature, so that the risk of premature physeal closure in the proximal tibia could be avoided .This technique has demonstrated good results, although it had a risk of tibial stress fracture in the healing process .The lateral retinacular release is an adjuvant after tibial tubercle medialization to re-center the patella .It was reported that isolated lateral retinacular release gives a significantly inferior long-term result compared to medial reefing .The percutaneous medial patellar plication procedure was indicated to build a strong construct by shortening the patellofemoral ligament, in order to prevent lateral sliding of the patella .Treatment of articular cartilage defects remains challenging since cartilage has limited self-healing capacity.Lesions that do not reach the subchondral zone will be unlikely to heal and usually progress to cartilage degeneration .Limited blood supply in the cartilage and low chondrocyte metabolic activity disrupt natural healing, which is supposed to fill the defect by increasing hyaline cartilage synthesis activity or stem cell mobilization from the bone marrow to the site of injury .The proper initial procedure for chondral lesions <4 cm2 was marrow stimulation by mosaicplasty or microfracture; and for a symptomatic lesion >4 cm2 and <12 cm2, autologous chondrocyte implantation beneath a sutured periosteal flap was promising.This procedure could not regenerate cartilage in the long term, due to loss of flap or cell suspensions.A scaffold was then used to act as an anchorage for chondrocyte adherence on cartilage defects and to promote the secretion of chondrocyte extracellular matrix 
.BM-MSC implantation could be an alternative source of chondrocytes.Human BM-MSCs are relatively easy to isolate and to culture under conditions that retain their capability to differentiate into chondrocytes .MSC treatment was reported to be as effective as ACI and even had advantages over ACI in terms of the number of cells obtained, better proliferation capacity and less damage to the donor site .Treating large cartilage defects by using BM-MSCs showed good outcomes, but the transplantation procedure was invasive .Wong et al. conducted a clinical study of the BM-MSCs intra-articular injection in combination with high tibial osteotomy and microfracture for treating cartilage defects in varus knees.They reported that intra-articular MSCs injection improved the outcomes in the patients undergoing HTO and microfracture .Here we also performed a less invasive approach by injecting the autologous BM-MSCs intra-articularly, following the arthroscopic microfracture using an awl to penetrate the subchondral bone plate in the cartilage defects, which led to clot formation.This clot contains progenitor cells, cytokines, growth factors and pluripotent, marrow-derived mesenchymal stem cells, which produce a fibrocartilage repair with varying amounts of type-II collagen content .Cytokines within the fibrin clot attract the injected stem cells to the cartilage lesions.The HA injection in this patient was aimed at suspending the MSCs and supporting their regenerative potency with the chondroinductive and chondroprotective properties of HA.Intra-articular injection of MSCs suspended in HA could be an alternative treatment for large cartilage defects .Supporting the microfracture technique with intra-articular HA injections had a positive effect on repair tissue formation within the chondral defect .The MRI showed that there was a growth of articular cartilage covering most of the defect even though it was not yet perfect.This case report demonstrated that combining Fulkerson osteotomy with the lateral retinacular release and percutaneous medial plication was effective in treating chronic patellar instability.The combination of microfracture and MSCs implantation was safe and could regenerate the articular cartilage in this patient.Andri Lubis is a consultant for Conmed Linvatec and Pfizer Indonesia.No sponsorship was received for this case report.This is a case report; therefore it did not require ethical approval from the ethics committee.However, we obtained permission from the patient to publish his data.Written and signed informed consent was obtained from the patient to publish this case report and accompanying images.Andri Lubis contributed to performing the surgery and MSCs implantation, data collection, and data analysis.Troydimas Panjaitan contributed to data collection and data analysis.Charles Hoo contributed to writing the paper.This is a case report, not a clinical study.The Guarantor is Andri M.T. Lubis, M.D., Ph.D.Not commissioned, externally peer-reviewed.
Introduction: Recurrent patellar dislocation can lead to articular cartilage injury. We report a 21-year old male with left patella instability and articular cartilage defect. Presentation of case: A 21-year-old male presented with left patellar instability and pain. Knee range of motion (ROM) was limited when patella was dislocated (0–20°). The J-sign positive, patellar apprehension test was positive, with medial patella elasticity/patellar glide >2 quadrants. The Q angle, in the 90° flexed knee position was still normal. The plain radiograph imaging showed no abnormality. Insall-Salvati index was 1.12. The patient was diagnosed with recurrent patellar dislocation and cartilage lesion of the left knee, and was treated with combining Fulkerson osteotomy with the lateral retinacular release and percutaneous medial plication, followed by microfracture procedure and MSCs implantation. Discussion: Recurrent patellar dislocation is uncommon problem while cartilage lesions following recurrent patellar dislocations are quite common, but still no consensus on the management. Conclusion: Combination of Fulkerson osteotomy with the lateral retinacular release and percutaneous medial plication was effective in treating chronic patellar instability. The microfracture procedure and MSCs implantation was safe and could improve the cartilage regeneration in patients with articular cartilage defect due to recurrent patellar dislocation.
70
Neural Profile of Callous Traits in Children: A Population-Based Neuroimaging Study
This cross-sectional study was embedded in the Generation R Study, a prospective population-based cohort from Rotterdam, the Netherlands.Study protocols were approved by the local ethics committee, and written informed consent and assent was obtained from all parents and children, respectively.At mean age 10 years in children, mothers completed a questionnaire about callous traits in their children, and children were invited to participate in a neuroimaging assessment.For the current study, participants were included if they had data on callous traits and sMRI scan or DTI scan available.Callous traits were assessed through maternal report when the child was on average 10 years old, using a brief validated questionnaire adapted from the Youth Self-Report and the Inventory for Callous-Unemotional Traits.The questionnaire comprises seven items on mainly interpersonal callous traits, which were scored on a 4-point scale, including “Does not find other people’s feelings important,” and “Is cold and indifferent.,Although this measure does not comprehensively capture the full spectrum of unemotional or psychopathic traits, it has been shown to adequately capture childhood callous traits on a dimensional scale, correlates strongly with other measures of youth psychopathy, and is predictive of adult antisocial traits.Endorsement of the seven items is shown in Supplemental Table S1.Cronbach’s α in the current sample was .73.At age 10 years, co-occurring emotional and behavioral problems were assessed through mother report and child report using the well-validated Child Behavior Checklist and Brief Problem Monitor, respectively; mothers and children also completed the Strengths and Difficulties Questionnaire Prosocial scale.Concurrently, maternal psychopathology was assessed through four subscales of the self-reported Brief Symptom Inventory.Child intelligence was measured at age 6 years with the Snijders-Oomen nonverbal intelligence test.See Supplement for more detailed information.An overview of the imaging procedure, sequences, and quality assessment has been described previously and can be found in the Supplement.Every child was invited to participate in a mock scanning session before the MRI scan to familiarize them with the procedure.If at any point the child was too anxious about the procedure, he or she did not progress to the MRI scan.All images were acquired on a 3T Discovery MR750W scanner using an eight-channel head coil.All analyses were adjusted for the following covariates.Child gender and date of birth were retrieved from birth records.Child ethnicity was defined according to the classification of Statistics Netherlands, i.e., Dutch, other Western, and other non-Western.Maternal educational level was categorized into primary, secondary, and higher educational attainment.Before the main analyses, we validated our measure of callous traits by examining whether correlations with mother-reported and child-reported emotional and behavioral problems, prosocial behavior, and IQ were in line with the previous literature.We then proceeded to examine neural correlates of callous traits, specifically structural brain morphology and white matter microstructure, using separate linear regressions.All sMRI and DTI analyses were adjusted for covariates as described above.A hierarchical stepwise approach was used to limit the number of comparisons.With respect to sMRI measures, total global and subcortical volumetric indices first were assessed in association with callous traits.Analyses pertaining to 
subcortical volumes were corrected for intracranial volume.A false discovery rate correction was applied to these analyses to address multiple testing.If an association with any global measure was observed, subsequent vertexwise analyses were conducted to investigate local differences in cortical morphology associated with callous traits.With respect to DTI, initial analyses were performed with global fractional anisotropy and mean diffusivity in association with callous traits.Next, if an association between global fractional anisotropy or MD and callous traits was observed, 1) subsequent analyses were conducted on individual white matter tracts, and 2) associations with axial diffusivity and radial diffusivity were explored.For these analyses, multiple testing was addressed using an FDR adjustment.In sensitivity analyses, our models were additionally adjusted for co-occurring emotional and behavioral problems, nonverbal IQ, and maternal psychiatric problems, in line with recent recommendations based on developmental studies.In addition, gender differences of observed associations were explored using interaction analyses.Similarly, we investigated whether Child Behavior Checklist conduct problems moderated the association of callous traits with global volumetric and white matter outcomes.We also explored nonlinear relationships by adding quadratic terms.Because of skewness, callous traits sum scores were square root transformed to approach a normal distribution.Standardized coefficients are presented throughout.All analyses were conducted using R statistical software.Missing values on covariates were dealt with using multiple imputations in mice version 2.25; estimates from analyses of 100 imputed datasets were pooled.As expected, callous traits showed high positive correlations with mother-reported conduct problems, followed by oppositional defiant disorder and attention-deficit/hyperactivity disorder symptoms.In contrast, we observed significantly lower correlations for affective, anxiety, and somatic symptoms.Similarly, child-reported externalizing and attention problems correlated more strongly with callous traits than did internalizing problems.Mother-reported and child-reported prosocial behavior were negatively correlated with callous traits.Total brain, cortical gray matter, and white matter volumes all were negatively associated with callous traits.Right amygdala volume was negatively associated with callous traits, which did not survive FDR correction.No associations were found between subcortical volumes and callous traits.Similar results were observed in analyses with additional adjustment for co-occurring psychiatric problems, nonverbal IQ, and maternal psychopathology.In vertexwise analyses, 10 brain regions showed negative correlations between cortical surface area and callous traits, which were localized in the frontal and temporal lobes of both hemispheres.No vertexwise associations were found between cortical thickness and callous traits.Three gyrification clusters in the temporal lobe were negatively associated with callous traits.Additional adjustment for IQ and maternal psychopathology did not considerably alter these observations, but after adjustment for co-occurring psychiatric problems, only the superior frontal gyrus was associated with callous traits.Global MD, but not global fractional anisotropy, was negatively associated with callous traits.Similarly, global AD and RD were negatively associated with callous traits.Several white matter tracts contributed to 
this global association, including the superior longitudinal fasciculus, corticospinal tract, uncinate, and cingulum.These associations all survived FDR correction.Comparable results were observed in analyses with additional adjustment for co-occurring psychiatric problems, nonverbal IQ, and maternal psychopathology.Callous traits were negatively associated with AD of the inferior and superior longitudinal fasciculi and corticospinal tract and with uncinate and cingulum RD.A visualization of the associated white matter tracts is presented in Supplemental Figure S2.Callous traits were significantly higher in boys than in girls.Boys scored higher on almost all callousness items; correlations between behavioral problems and callous traits were similar across genders.Nonverbal IQ negatively correlated with callous traits in boys but not in girls.No interaction was observed for structural volumetric measures.A significant gender-by-brain interaction was observed for the associations of MD with callous traits.Stratified analyses demonstrated that our findings in the full sample were driven by the associations in girls, and these effects were observed in several tracts across the brain.No such associations were found in boys.Conduct problems did not moderate the associations of callous traits with global volumetric and white matter outcomes.Associations with quadratic terms were all nonsignificant.This is the first study to characterize the structural neural profile of callous traits in the general pediatric population.Based on sMRI and DTI data from over 2000 children, we demonstrate that callous traits at age 10 are characterized by widespread macrostructural and microstructural differences across the brain.We highlight three key findings.First, childhood callous traits were associated with reduced global gray matter and decreases in cortical surface area and gyrification across several frontal and temporal areas.These observations are consistent with prior research using high-risk samples.Second, we observed increased global white matter microstructure in children with elevated callous traits, suggesting increased white matter integrity across various white matter tracts.Third, we found that white matter, but not gray matter, associations differed by gender, with associations observed only in girls.Together, the present findings contribute to a more complete understanding of the relationship between brain structure and callous traits and may be used as a guiding framework for future research to uncover causal neurodevelopmental pathways.Findings from the sMRI analyses indicated that callous traits are associated with lower global brain volumes.More specifically, decreased cortical surface area and reduced gyrification were observed in various brain regions, including the temporal gyri and severalfrontal gyri.These regions have previously been associated with behavioral inhibition, social cognition, and emotion regulation, which have been implicated in the development of callousness.Our findings corroborate studies that observed gray matter volume reductions in orbitofrontal, cingulate, and temporal cortices in older youths with callous traits in the clinical range and support other studies that observed reduced cortical surface or gyrification across similar regions.We identified a nominally significant association between callous traits and lower right amygdala volume, which did not survive multiple-testing correction when accounting for other subcortical regions.Whereas aberrant amygdala 
function has been robustly associated with callous-unemotional traits, structural volumetric differences of the amygdala are rarely observed.This inconsistency between structural and functional neuroimaging findings could partly be explained by the use of different significance thresholds in studies taking a region-of-interest versus whole-brain approach.Our findings suggest the involvement of many regions with small effects.By extending these clinical MRI studies, our findings corroborate the notion that callous traits exist along a continuum in the general population, which has also been evidenced in genetic studies.Moreover, associations remained consistent after additional adjustment for co-occurring emotional, behavioral, and attention problems; IQ; and maternal psychopathology.In other words, whereas callous traits were significantly associated with other psychiatric symptoms and IQ—consistent with the extant literature—these comorbid symptoms did not explain our global neuroimaging findings.Co-occurring emotional and behavioral problems did, however, account for a large portion of the explained variance in vertexwise cortical surface area analyses, supporting the presence of at least some shared neural alterations in callous traits and comorbid psychiatric problems.Of interest, unique variance for callous traits was observed in the superior frontal gyrus, which has been linked to callous traits in clinical cohorts.Whereas structural brain connectivity has been examined in the context of externalizing problems more generally, few studies to date have examined the white matter microstructure profile of callous traits.This work has mainly focused on the uncinate fasciculus in older, selected samples and produced mixed results, reporting both lower and higher microstructure in adolescents with elevated callous traits.Two studies employing a whole-brain approach—both of which are based on data from adolescent arrestee cohorts—reported that callous traits were associated with higher white matter integrity in many tracts across the brain, including the corticospinal tract, superior longitudinal fasciculus, and uncinate.These findings are consistent with the higher microstructural integrity in various tracts observed in the current study, e.g., uncinate and cingulum, which connect frontal with temporal/parietal brain regions.This is noteworthy considering the substantial differences in design and sample characteristics between these studies and ours, including the focus on different developmental periods, proportion of boys to girls, and the use of a high-risk versus general population sample.The decreases in MD identified across these studies suggest higher white matter microstructure, possibly indicating accelerated or precocious white matter development in children with elevated callous traits.Importantly, decreased integrity has also been observed within high-risk samples.The reason for such discrepancy is unclear; potential reasons include different sampling strategies, varying levels of exposure to adversities and comorbid psychiatric problems, case-control versus dimensional perspectives, and different definitions of the callousness phenotypes.Our current findings are in contrast with our previous publication where we showed lower white matter microstructure in preadolescent children with elevated levels of delinquent behavior, suggesting that callous traits and other externalizing behaviors are associated with differential neural correlates even though these behaviors are 
correlated.This is consistent with fMRI studies showing, for example, amygdala reactivity to fearful faces to be negatively associated with callous traits and positively associated with conduct problems across multiple independent samples, despite these psychiatric phenotypes’ being positively correlated with one another.Findings from sMRI and DTI have been much less consistent, although differential amygdala volume reductions have been observed for callous-unemotional versus conduct problems.In this study, conduct problems were not found to moderate associations between callous traits and global brain measures.Importantly, in sensitivity analyses, we adjusted for all co-occurring problems, which left our sMRI and DTI findings unchanged even though callous traits were substantially correlated with externalizing behaviors.This, together with our previous observations, suggests specific brain-callousness correlates independent of other types of psychopathology, indicating that there is added value in screening for callous traits in children at elevated risk for antisocial behavior.This is the first study to examine neural correlates of callous traits using both sMRI and DTI.Overall, our findings corroborate 1) previous high-risk sMRI studies reporting associations between callous traits and lower brain volume across frontal and temporal regions and 2) previous high-risk DTI studies indicating higher microstructural integrity of the white matter tracts connecting these areas.As such, our findings support these seemingly discrepant associations and suggest that these are not simply the result of methodological differences between studies.The inverse relationship between the sMRI and DTI findings could potentially indicate decreased cortical functioning and consequently more dysregulated white matter connectivity, or vice versa.Multimodal neuroimaging approaches incorporating fMRI assessments are required to disentangle the origins of these observations.Whereas boys and girls are known to differ considerably in prevalence of callous traits and trajectories of brain development, it is unclear whether there are gender differences in the neural profile of callous traits, as existing studies have primarily focused on male subjects.The equal distribution of boys and girls in our sample offered a unique opportunity to address this gap.We found no gender differences in global volumetric measures.However, we did find that the relationship of global white matter microstructure with callous traits was significant only in girls.Given that white matter has been shown to develop more quickly in girls compared with boys, it is possible that our findings reflect advanced white matter maturation in girls with elevated callous traits and thus potential residual age confounding.In post hoc analyses, we found that age did not moderate the association of global MD, AD, or RD with callous traits in girls.However, potentially, chronological age does not adequately capture differences in neurobiological maturation.Recent smaller studies have observed more pronounced cortical differences for callous traits in adolescent boys versus girls, which is not what we observed here.These findings could potentially signify that callous traits and their associated neural profile reflect differential development in girls compared with boys.Repeated neuroimaging assessments at later ages—in combination with pubertal development measures—will be particularly valuable for clarifying whether these gender differences persist across 
brain development or whether the developmental trajectories are similar for boys and girls, with possibly different onsets.Our study had several strengths, including the use of a large sample of nonselected children from the community and the analysis of both sMRI and DTI data.Our hierarchical analytical approach allowed us to investigate both global and specific brain metrics without substantially increasing the risk of type II error.Stringent sensitivity analyses further enabled us to ascertain that our findings were robust to additional adjustment for co-occurring psychiatric problems, IQ, and maternal psychopathology.Finally, our study was the first to examine neuroanatomical correlates of callous traits in a sample with an equal distribution of boys and girls.Despite these strengths, several limitations should be noted.First, our measure of callous traits did not adequately cover unemotional/affective aspects, which are important features of callous-unemotional and broader psychopathic traits and which have been studied in the wider literature in clinical samples.Future work will need to take this limitation into account by exploring associations across a broader spectrum of traits and, additionally, employ a multi-informant approach to childhood callous traits.Second, our findings were cross-sectional and hence should be interpreted as a neurobiological characterization of callous traits, rather than an underlying biological mechanism.Furthermore, we were unable to assess whether observed brain-behavior associations predicted functional outcomes, both concurrently and longitudinally, such as academic performance.Furthermore, the participants are still too young for examining other relevant functional domains, such as substance use, risk-taking, and contact with law enforcement.In the future, it will be important to draw on longitudinal designs with repeated measures of neuroimaging and callous traits to trace neurodevelopmental trajectories of callous traits and their utility for predicting clinically relevant outcomes in later life.Third, a growing body of literature points to the existence of distinct developmental pathways to youth callous traits, with groups being differentially related to exposure to early adversity in childhood and accompanying anxiety symptoms versus development of similarly severe callous traits through inherited vulnerabilities.Our current population-based cross-sectional design did not allow us to study these differential developmental pathways.Repeated assessments of both neuroimaging and callous traits across childhood are needed, particularly with regard to differential developmental pathways.Nevertheless, we adjusted for behavioral as well as emotional problems in sensitivity analyses, which did not alter our main findings.Fourth, nonverbal IQ was assessed 4 years before callous traits and MRI assessments; it would have been better to have concurrent assessments of each.Despite this, intelligence is moderately stable during childhood, which supports the reliability of our analysis with adjustment for IQ at 6 years.Fifth, whereas our hierarchical analysis approach reduces the likelihood of false-positive results, it also increases chances of false-negative results—i.e., very focal findings might have been obscured if global associations were not found.Sixth, though the Generation R Study is an ethnically diverse study, most participants are of European descent.More research needs to be conducted in nonwhite populations, which is a considerable gap in the 
literature.Finally, more research should employ multimodal approaches, for example, integrating fMRI data to further characterize the neural profile of callous traits.In conclusion, we found evidence for widespread macrostructural and microstructural brain alterations in callous traits based on a large community sample of children.These results underscore that youth callous traits are not uniquely associated with brain differences in frontolimbic or frontostriatal connections; rather, structural brain differences were observed in a wide range of areas across the brain.Our study provides further support for the value of conceptualizing pediatric callous traits as a neurodevelopmental condition.Priority should be given to prospective developmentally sensitive research, which will enable examination of early environmental and neurobiological pathways to callous traits, potential gender differences, and their utility in predicting clinically relevant functional domains in later life.Finally, the current results may indicate that children with elevated callous traits show differences in brain development, which holds promise for etiologic research for a better understanding of the development of severe antisocial behavior later in life.
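To make the hierarchical, covariate-adjusted analysis described in the Methods more concrete, a minimal sketch is given below. The original analyses were carried out in R (with multiple imputation in mice); this Python version is only a schematic re-implementation, and all column names (callous, age, sex, ethnicity, maternal_edu, icv, and the subcortical volume columns) are hypothetical placeholders rather than variables from the Generation R data. It illustrates the square-root transformation of the callous score, adjustment for the stated covariates (plus intracranial volume for subcortical volumes), standardized coefficients, and the Benjamini-Hochberg false discovery rate correction across the set of subcortical tests.

```python
# Illustrative sketch (not the authors' code): covariate-adjusted linear
# regressions of brain outcomes on callous traits, with FDR correction.
# The original analyses were run in R; all column names here are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.multitest import multipletests


def zscore(series: pd.Series) -> pd.Series:
    return (series - series.mean()) / series.std()


def fit_brain_model(df: pd.DataFrame, outcome: str, extra_covariates: str = ""):
    """Regress a standardized brain outcome on the square-root-transformed,
    standardized callous score plus covariates; return the callous beta and p."""
    data = df.copy()
    data["callous_sqrt"] = zscore(np.sqrt(data["callous"]))  # skewness correction
    data["y"] = zscore(data[outcome])
    formula = ("y ~ callous_sqrt + age + C(sex) + C(ethnicity) "
               "+ C(maternal_edu)" + extra_covariates)
    fit = smf.ols(formula, data=data).fit()
    return fit.params["callous_sqrt"], fit.pvalues["callous_sqrt"]


def subcortical_scan(df: pd.DataFrame, subcortical_volumes: list) -> pd.DataFrame:
    """Step 1 of the hierarchy: test each subcortical volume (ICV-adjusted)
    and apply a Benjamini-Hochberg FDR correction across the set of tests."""
    results = {roi: fit_brain_model(df, roi, extra_covariates=" + icv")
               for roi in subcortical_volumes}
    pvals = [p for _, p in results.values()]
    reject, p_fdr, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
    return pd.DataFrame({
        "roi": list(results),
        "beta": [b for b, _ in results.values()],
        "p": pvals,
        "p_fdr": p_fdr,
        "significant": reject,
    })
```

The vertexwise surface-area analyses and the DTI models (global FA/MD first, then individual tracts and AD/RD only if the global test is significant) follow the same stepwise pattern.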
Background: Callous traits during childhood, e.g., lack of remorse and shallow affect, are a key risk marker for antisocial behavior. Although callous traits have been found to be associated with structural and functional brain alterations, evidence to date has been almost exclusively limited to small, high-risk samples of boys. We characterized gray and white matter brain correlates of callous traits in over 2000 children from the general population. Methods: Data on mother-reported callous traits and brain imaging were collected at age 10 years from participants of the Generation R Study. Structural magnetic resonance imaging was used to investigate brain morphology using volumetric indices and whole-brain analyses (n = 2146); diffusion tensor imaging was used to assess global and specific white matter microstructure (n = 2059). Results: Callous traits were associated with lower global brain (e.g., total brain) volumes as well as decreased cortical surface area in frontal and temporal regions. Global mean diffusivity was negatively associated with callous traits, suggesting higher white matter microstructural integrity in children with elevated callous traits. Multiple individual tracts, including the uncinate and cingulum, contributed to this global association. Whereas no gender differences were observed for global volumetric indices, white matter associations were present only in girls. Conclusions: This is the first study to provide a systematic characterization of the structural neural profile of callous traits in the general pediatric population. These findings extend previous work based on selected samples by demonstrating that childhood callous traits in the general population are characterized by widespread macrostructural and microstructural differences across the brain.
Are BVS suitable for ACS patients? Support from a large single center real live registry
Drug-eluting stents (DES) are the first-choice devices in percutaneous coronary interventions. Despite recent advances, shortcomings related to the use of DES are still present, such as delayed arterial healing, late stent thrombosis, neo-atherosclerosis and hypersensitivity reactions to the polymer. To overcome these limitations, coronary devices made of fully bioresorbable material were developed to provide mechanical support and drug delivery within the first year, followed by complete resorption. The first bioresorbable vascular scaffold was commercially introduced in September 2012 as the Absorb BVS. The BVS provides transient vessel support and gradually elutes the anti-proliferative drug everolimus. After degradation of the polymer, no foreign material remains, and the need for late reintervention triggered by foreign material should thus be reduced. First-in-man trials have proven the safety of the BVS up to five years, with a fully completed bioresorption process, a late luminal enlargement due to plaque reduction and a persistent restoration of vasomotion. The 1-year results of the larger ABSORB II, ABSORB Japan, ABSORB China and ABSORB III randomized controlled trials comparing BVS with DES confirmed the safety in relatively simple coronary lesions, with similar clinical event rates for both devices. In all these early studies, ACS patients were largely excluded, although BVS would be a more attractive choice in this setting: ACS patients are in general younger, have a longer life expectancy and have fewer previous MIs and revascularizations with implantation of metallic stents that would conflict with a therapy aiming at maximal recovery and restoration of normal anatomy of both the coronary artery and myocardium. Furthermore, lesions primarily consisting of soft plaque would be conceptually easy to expand, thus facilitating BVS implantation in the ACS population. On the other hand, ACS patients are in a much higher pro-thrombotic state, which might accelerate thrombus formation on the larger struts of the BVS, which affect shear stress much more than the thinner struts of current metallic DES. Few registries have focused on the performance of the BVS in patients presenting with ACS, mainly ST-elevation myocardial infarction (STEMI). BVS STEMI First examined the procedural and short-term clinical outcomes of 49 STEMI patients, revealing excellent results: procedural success was 97.9% and only 1 patient suffered an event. Kočka et al.
reported similar results in the Prague-19 study .Extending the initial Prague-19 study, the BVS Examination is currently the largest registry on BVS in STEMI with encouraging MACE rates, although with a not negligible definite/probable scaffold thrombosis rate .The recently published TROFI II randomized trial investigated arterial healing in 90 STEMI patients treated with a BVS compared to those treated with an everolimus-eluting stent.Based on OCT, arterial healing at 6 months after BVS implantation was non-inferior to that after EES implantation .In general, the previous studies on BVS in ACS are limited in size and procedural details and there is a need for more data on the efficacy of BVS in the setting of PCI for ACS.The aim of this study was to compare the angiographic and clinical outcomes of BVS in ACS patients with stable patients.Two investigator-initiated, prospective, single-center, single-arm studies performed in an experienced, tertiary PCI center have been pooled for the purpose of this investigation.Patients presenting with NSTEMI, stable or unstable angina, or silent ischemia caused by a de novo stenotic lesion in a native previously untreated coronary artery with intention to treat with a BVS were included in BVS Expand registry.Angiographic inclusion criteria were lesions with a Dmax within the upper limit of 3.8 mm and the lower limit of 2.0 mm by online quantitative coronary angiography.Complex lesions such as bifurcation, calcified, long and thrombotic lesions were not excluded.Exclusion criteria were patients with a history of coronary bypass grafting, presentation with cardiogenic shock, bifurcation lesions requiring kissing balloon post-dilatation, ST-elevation myocardial infarction patients, allergy or contra-indications to antiplatelet therapy, fertile female patients not taking adequate contraceptives or currently breastfeeding and patients with expected survival of less than one year.As per hospital policy patients with a previously implanted metal DES in the intended target vessel were also excluded."Also, although old age was not an exclusion criterion, BVS were in general reserved for younger patients, and left to operator's interpretation of biological age.Patients presenting with STEMI, were approached to participate in the BVS STEMI Registry, which started two months after the BVS Expand registry.The study design has been described elsewhere .The most important inclusion criteria were presentation with STEMI and complaints < 12 h.The remaining inclusion criteria were similar to the BVS-EXPAND registry.This is an observational study, performed based on international regulations, including the declaration of Helsinki.Approval of the ethical board of the Erasmus MC was obtained.All patients undergoing clinical follow-up provided written informed consent to be contacted regularly during the follow-up period of the study.PCI was performed according to current clinical practice standards.The radial or femoral approach using 6 or 7 French catheters were the principal route of vascular access.Pre-dilatation was recommended with a balloon shorter than the planned study device length."Advanced lesion preparation was left to the operator's discretion.Post-dilatation was recommended with a non-compliant balloon without overexpanding the scaffold beyond its limits of expansion.Intravascular imaging with the use of Intravascular Ultrasound or Optical Coherence Tomography was used for pre-procedural sizing and optimization of scaffold deployment on the discretion of the 
operator.All patients were treated with unfractionated heparin.Patients with stable angina were preloaded with 300 mg of aspirin and 600 mg of clopidogrel.Patients presenting with ACS were preloaded with 300 mg of aspirin and 60 mg of prasugrel or 180 mg of ticagrelor.The angiographic analysis was performed by three independent investigators.Coronary angiograms were analyzed with the CAAS 5.10 QCA software.The QCA measurements provided reference vessel diameter, percentage diameter stenosis, minimal lumen diameter, and maximal lumen diameter.Acute gain was defined as post-procedural MLD minus pre-procedural MLD.Survival status of all patients was obtained from municipal civil registries.Follow-up information specific for hospitalization and cardiovascular events was obtained through questionnaires.If needed, medical records or discharge letters from other hospitals were collected.Events were adjudicated by an independent clinical events committee.The primary endpoint was MACE, defined as the composite endpoint of cardiac death, myocardial infarction and target lesion revascularization.Deaths were considered cardiac unless a non-cardiac cause was definitely identified.TLR was described as any repeated revascularization of the target lesion.Target vessel revascularization was defined as any repeat percutaneous intervention or surgical bypass of any segment of the target vessel.Non-target vessel revascularization was described as any revascularization in a vessel other than the vessel of the target lesion.Target lesion failure was defined as a composite endpoint of cardiac death, target vessel MI and TLR.Scaffold thrombosis and MI were classified according to the Academic Research Consortium .Clinical device success was defined as successful delivery and deployment of the first study scaffold/stent at the intended target lesion and successful withdrawal of the delivery system with attainment of final in-scaffold/stent residual stenosis of < 30% as evaluated by QCA.Clinical procedure success was described as device success without major peri-procedural complications or in-hospital MACE.The intention-to-treat group includes all the patients regardless of whether or not the scaffold was successfully implanted.The per-treatment group consists of all patients in whom the BVS was successfully implanted.All analyses were performed in the PT group.As a measure of scaffold expansion, the expansion index was calculated as post-procedural MLD divided by nominal device diameter.A cut-off value of < 0.70 below was used to define underexpansion.Categorical variables are reported as counts and percentages, continuous variables as mean ± standard deviation."The Student's t test and the chi square test were used for comparison of means and percentages.The cumulative incidence of adverse events was estimated according to the Kaplan–Meier method.Patients lost to follow-up were considered at risk until the date of last contact, at which point they were censored.Kaplan–Meier estimates were compared by means of the log-rank test.For the endpoint MACE, a landmark survival analysis was performed with the landmark time point of 30 days.All statistical tests were two-sided and the P value of < 0.05 was considered statistically significant.Statistical analyses were performed using SPSS, version 21.A univariate logistic regression analysis was performed to look for predictors of TLF and probable/definite ST.From September 2012 up to October 2014, 452 patients were intended to be treated with one or more BVS.Thirteen 
patients were excluded based on protocol-related exclusion criteria of the BVS Expand registry and the BVS STEMI registry, and 79 patients declined to participate in one of the two follow-up registries. Thus, 360 patients remained for the purpose of this study. There were 9 cases of device failure in which a metallic stent was implanted, and the per-treatment group consisted of 351 patients. A flowchart of the study is given in Fig. 1.

Baseline characteristics are presented in Table 1. Presentation with ACS was present in 72.6% of the patients and 27.4% were stable patients. Mean age was significantly different between the two groups: 57.9 ± 10.7 years for ACS patients and 63.4 ± 8.9 years for non-ACS patients. Dyslipidemia, history of MI, history of PCI and renal insufficiency were factors that occurred significantly more frequently in stable patients. ACS patients had more single-vessel disease. Lesion characteristics are presented in Table 2. In both groups, the left anterior descending coronary artery was most commonly treated. Lesions in stable patients were more complex, with a higher percentage of AHA/ACC type B2/C lesions. Pre-procedural TIMI flow was significantly different. The mean lesion length was comparable in both groups. Pre-procedural QCA analysis revealed significant differences between the groups in MLD: 0.69 ± 0.51 mm for ACS patients versus 1.04 ± 0.40 mm in stable patients. After excluding the thrombotic total occlusions, this statistical difference remained. Pre-procedural %DS was 65.45 ± 20.91% in the ACS group versus 58.62 ± 13.84% in the non-ACS group. Post-procedural QCA measurements revealed a superior acute performance in the ACS population: remaining %DS was significantly lower. Final MLD was larger and acute lumen gain was higher.

Procedural and angiographic details are summarized in Table 3. In ACS patients, pre-dilatation was performed in 75.7% of the lesions, compared to 89.0% in stable patients. Pre-dilatation balloon-to-artery ratio was comparable. Post-dilatation was significantly less frequently performed in the ACS group. Advanced lesion preparation was less often performed in ACS patients than in stable patients. A total of 582 BVS were implanted: 399 in the ACS group and 183 in stable patients. In the ACS population, 6 cases of device failure occurred, all due to delivery failure. Main causes of these delivery failures were calcification and angulation. Eight in-hospital MACE were reported. In the stable population, 3 device failures and no in-hospital MACE were documented. Clinical device and procedural success were 98.0% and 95.4% for the ACS population and 97.7% and 96.9%, respectively, for stable patients.

Data on survival status were available for 100% of patients, with a median follow-up period of 731 days. A total of 340 patients had a follow-up duration of at least 365 days. Cumulative clinical event rates are summarized in Table 4. Clinical outcomes appeared to be comparable, with no significant difference between patients presenting with ACS as compared to stable patients. Rate of death was 0.0% in the ACS group versus 3.1% in the non-ACS group. Three patients died within the first year. One patient, with extensive cardiovascular disease, died at day 166, 4 days after he went through a definite ST and MI, most probably due to a brief interruption of his antithrombotic medication during an elective surgery. The second patient died a few days after his prostate was surgically removed. In this case, dual antiplatelet therapy was also briefly interrupted, causing an MI. The last patient died of a sudden cardiac death 66 days after the baseline PCI. MACE rate in the ACS population was comparable to that in the non-ACS population. MACE was mainly driven by MI and TLR. TLR rate was comparable in both groups. Rate of TVR was 3.2% in ACS patients versus 3.5% in stable patients. Non-TVR rate was 3.2% and 5.5% in ACS and non-ACS patients, respectively. Rate of definite ST was similar in both groups: 2.0% in the ACS group versus 2.1% in stable patients. Of note, early ST only occurred in the ACS group, whereas late thrombosis was more prevalent in stable patients. A landmark survival analysis of MACE, definite/probable ST, MI and TLR indicated a trend for higher event rates in the ACS population in the short term. Conversely, mid-term event rates were higher in stable patients, although the log-rank test did not reach significance. In a univariate analysis of TLF, the following characteristics tended to be associated with at least a twofold increase in odds ratio: renal insufficiency, bifurcation, male gender and age above 65 years. The use of intravascular imaging at baseline might be protective for TLF.

The present study reports on the comparative procedural and one-year clinical outcomes of ACS patients versus non-ACS patients treated with an Absorb bioresorbable scaffold. The main findings of this study are summarized as follows: 1) angiographic outcomes were better in ACS patients despite the fact that less aggressive lesion preparation and less frequent post-dilatation were performed; 2) the overall one-year ST rate in ACS patients was similar to that of non-ACS patients; interestingly, early definite ST occurred only in the ACS population, while late ST seemed more frequent in stable patients; 3) despite the higher rate of early complications in the ACS group, landmark analyses after one month demonstrated that event rates were lower in this group than in the stable patient group; and 4) clinical outcomes at one year were comparable between ACS and stable patients.

Differences between ACS patients and stable patients exist at multiple levels. On a patient level, patients presenting with ACS are often younger and thus have a longer life expectancy. Cardiovascular disease in this group is less extensive when compared to stable patients. Additionally, a different plaque composition is present, characterized by a lipid-rich necrotic core with a thin fibrous cap. All these factors make ACS patients very attractive for bioresorbable technologies, where full expansion is important and acute recoil a concern. Moreover, in ACS patients DAPT pretreatment is usually short and frequently has not yet resulted in active platelet function inhibition, while the thrombus burden is greater, with high platelet activation and a systemic inflammatory response. These factors might amplify the risk of acute thrombosis and cause a higher risk of MACE. For these reasons, studies like ours are important to investigate the suitability of BVS in ACS patients. To the best of our knowledge, no data are available comparing the performance of BVS in ACS patients with that in stable patients. The BVS Expand registry and the BVS STEMI registry are two single-center, single-arm registries describing procedural and clinical outcomes of patients treated with BVS. At variance with previous studies investigating the Absorb bioresorbable scaffold, all events were adjudicated by an independent clinical event committee. Also, all angiograms were analyzed using QCA. Lastly, by combining the results of the two registries, both handling less restrictive inclusion criteria, we were able to create a study population reflecting a real-world population with a considerable proportion of ACS patients.

The superior acute angiographic outcome in ACS patients compared to stable patients is an important observation. Previous studies demonstrated that the acute performance of the Absorb scaffold is somewhat inferior to that of metallic stents in stable angina patients. For example, in-device acute lumen gain in the ABSORB II trial was 1.15 ± 0.38 mm in the BVS group versus 1.46 ± 0.38 mm in the EES group. In the ABSORB III trial, the reported lumen gain was 1.45 ± 0.45 mm versus 1.59 ± 0.44 mm. Finally, in the ABSORB Japan and ABSORB China trials, acute lumen gain was 1.46 ± 0.40 mm versus 1.65 ± 0.40 mm and 1.51 ± 0.03 mm versus 1.59 ± 0.03 mm, respectively. Remarkably, in STEMI patients no difference in acute gain was observed between BVS and DES. This finding also suggests that the somewhat inferior angiographic results apply only to stable angina patients, while the current semi-compliant balloon and wide-strut BVS design are sufficient for the generally softer plaque composition of ACS patients. In the current study, post-dilatation was significantly less frequently performed in ACS patients; however, angiographic outcomes were better. Post-procedural MLD, RVD, %DS and in-scaffold acute lumen gain were all superior compared to post-procedural QCA measurements in stable patients. These promising angiographic results in ACS patients support the use of BVS in this setting, as they are predictive of clinical events.

Overall, the one-year ST rate in ACS patients was similar to that in non-ACS patients. The observed rate of early ST in the ACS population might raise some concerns. Previous studies have stated that presentation with ACS is an independent risk factor for the development of stent thrombosis. Using metal devices, multiple studies have documented that stented lesions with apparent plaque rupture are prone to delayed healing, characterized by higher percentages of uncovered, malapposed and protruding stent struts with a subsequent risk of stent thrombosis. Furthermore, underexpansion appeared to be an important predictor. This is also the case for ST in BVS patients. In ACS patients, high thrombus burden, increased platelet activation and vasospasm are mechanisms that hamper optimal sizing, resulting in higher rates of malapposition. In the acute setting, lesion preparation using pre-dilatation and intravascular imaging are less frequently performed than in stable patients. Although acute scaffold expansion is on average better in the ACS population than in the stable population, it is very important to properly size the vessel and to optimize the final scaffold expansion in order to avoid early ST. The landmark analysis beyond one month up to 12 months showed favorable results with regard to ST and TLR for the ACS patients. The somewhat higher event rates in the non-ACS group are a reflection of a more complex, real-world patient population. Therefore, the one-year MACE rates of 5.5% and 5.3% are acceptable and comparable to trials using BVS in relatively simple lesions: 5.0% in the ABSORB II trial and 3.8% in the ABSORB China trial. A comparable endpoint, target lesion failure, was 7.8% and 4.2% in the ABSORB III and ABSORB Japan trials, respectively. In these studies, STEMI patients were excluded. Compared to studies investigating clinical outcomes of metal DES in STEMI patients, event rates in our report are higher than for EES but lower than for first-generation DES. Recently, some concerns have been raised about a potentially increased incidence of ST after implantation of a BVS. Also, in our registry the rate of definite ST was higher compared to that of currently available metallic DES. The importance of patient selection, lesion preparation, pre- and post-dilatation and also the consideration of intravascular imaging has to be underlined. A pilot imaging study suggested suboptimal implantation as an important cause of BVS ST. Use of intravascular imaging could improve pre-procedural vessel sizing, optimize lesion coverage and eventually reduce adverse events. Next-generation BVS with smaller scaffold struts may reduce the early event rates in ACS patients. For the current design, using more potent P2Y12 inhibitors such as ticagrelor, a direct-acting platelet inhibitor, or cangrelor, an intravenous antiplatelet drug, could be valuable. In the ATLANTIC trial, ticagrelor was administered prehospital in the ambulance to STEMI patients, leading to a reduction in ST rate. The CHAMPION PHOENIX trial assessed ischemic complications of PCI after administration of cangrelor and showed a decrease in these complications, with no significant increase in severe bleeding. The upcoming HORIZONS-ABSORB AMI trial will compare the performance of BVS to DES when cangrelor is used on top of heparin or bivalirudin in STEMI patients. The mortality rate in ACS patients is worse compared with patients who present with stable CAD. In our patient cohort, mortality was 0% in the ACS population, probably reflecting our exclusion criteria for the STEMI population. As shown by our landmark survival analyses, events in the ACS group are especially clustered in the early phase after BVS implantation. On the other hand, one-year Kaplan–Meier curves for events are lower in ACS patients. This is probably due to patient selection, where ACS patients present with different patient and lesion factors, and the higher intake of prasugrel and ticagrelor in these patients. In summary, our results warrant further confirmation in a large-scale trial with a high number of ACS patients and an optimal implantation strategy tailored to the limitations of this first-generation fully bioresorbable scaffold. Ongoing and upcoming trials such as AIDA, Compare Absorb and HORIZON-ABSORB AMI will provide data derived from larger patient cohorts and in direct comparison to metallic DES.

These results are derived from two single-center, single-arm registries with no direct comparison with metallic DES. The total number of patients in this study was limited. Baseline differences in patient and lesion characteristics could have led to biased outcomes in clinical event rates. Furthermore, deciding which patient or lesion was suitable for BVS implantation could have led to selection bias. However, a fair number of patients presenting with ACS and with B2/C lesions were included, indicating the complexity of the present study population.

Despite the higher rate of early complications due to early ST in the ACS population, the one-year clinical outcomes for BVS implantation in ACS patients versus non-ACS patients are comparable. The early ST rate observed in ACS needs further attention, and optimized antiplatelet therapy may play a role. Angiographic outcomes for BVS in ACS patients are at least as good as in non-ACS patients. Therefore, ACS patients may be suitable candidates for treatment with the BVS if early procedure-related complications can be avoided. This study was supported by an unrestricted
grant from Abbott Vascular."Robert-Jan van Geuns, Nicolas van Mieghem and Yoshinobu Onuma received speaker's fee from Abbott Vascular.The other authors have no conflicts of interest to declare.All authors have approved the final article.
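The QCA-derived measures used throughout the registry analysis are simple arithmetic on the measured diameters; the short sketch below spells them out. It is an illustrative helper rather than part of the original SPSS analysis, and the input values in the example are made up (loosely based on the group means reported above). Acute gain is the post-procedural minus the pre-procedural MLD, and the expansion index is the post-procedural MLD divided by the nominal device diameter, with values below 0.70 flagged as underexpansion, as defined in the Methods.

```python
# Illustrative sketch of the QCA-derived metrics defined in the Methods;
# not part of the original SPSS analysis. Inputs are in millimetres.
from dataclasses import dataclass

UNDEREXPANSION_CUTOFF = 0.70  # expansion index below this value flags underexpansion


@dataclass
class QcaResult:
    acute_gain_mm: float
    expansion_index: float
    underexpanded: bool


def qca_metrics(pre_mld_mm: float, post_mld_mm: float,
                nominal_device_diameter_mm: float) -> QcaResult:
    """Acute gain = post-procedural MLD - pre-procedural MLD;
    expansion index = post-procedural MLD / nominal scaffold diameter."""
    acute_gain = post_mld_mm - pre_mld_mm
    expansion_index = post_mld_mm / nominal_device_diameter_mm
    return QcaResult(acute_gain, expansion_index,
                     expansion_index < UNDEREXPANSION_CUTOFF)


# Hypothetical example: a 3.0 mm scaffold, pre-MLD 0.69 mm, post-MLD 2.31 mm
# -> acute gain of about 1.62 mm and an expansion index of about 0.77 (no underexpansion).
print(qca_metrics(0.69, 2.31, 3.0))
```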
Objectives To investigate one-year outcomes after implantation of a bioresorbable vascular scaffold (BVS) in patients presenting with acute coronary syndrome (ACS) compared to stable angina patients. Background Robust data on the outcome of BVS in the setting of ACS is still scarce. Methods Two investigator initiated, single-center, single-arm BVS registries have been pooled for the purpose of this study, namely the BVS Expand and BVS STEMI registries. Results From September 2012-October 2014, 351 patients with a total of 428 lesions were enrolled. 255 (72.6%) were ACS patients and 99 (27.4%) presented with stable angina/silent ischemia. Mean number of scaffold/patient was 1.55 ± 0.91 in ACS group versus 1.91 ± 1.11 in non-ACS group (P = 0.11). Pre- and post-dilatation were performed less frequent in ACS patients, 75.7% and 41.3% versus 89.0% and 62.0% respectively (P = 0.05 and P = 0.001). Interestingly, post-procedural acute lumen gain and percentage diameter stenosis were superior in ACS patients, 1.62 ± 0.65 mm (versus 1.22 ± 0.49 mm, P < 0.001) and 15.51 ± 8.47% (versus 18.46 ± 9.54%, P = 0.04). Major adverse cardiac events (MACE) rate at 12 months was 5.5% in the ACS group (versus 5.3% in stable group, P = 0.90). One-year definite scaffold thrombosis rate was comparable: 2.0% for ACS population versus 2.1% for stable population (P = 0.94), however, early scaffold thromboses occurred only in ACS patients. Conclusions One-year clinical outcomes in ACS patients treated with BVS were similar to non-ACS patients. Acute angiographic outcomes were better in ACS than in non-ACS, yet the early thrombotic events require attention and further research.
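The 30-day landmark analysis mentioned in the statistical methods can also be sketched briefly. The registry analyses were performed in SPSS; the snippet below is only a schematic Python re-implementation using the lifelines package, and the column names (time_to_event_days, event, acs) are hypothetical. Events are compared between ACS and stable patients separately within the first 30 days (censoring everyone at the landmark) and from 30 days onward (restricting to patients still event-free and in follow-up at the landmark, with the clock restarted); Kaplan–Meier curves for each window could be fitted in the same way with lifelines' KaplanMeierFitter.

```python
# Schematic 30-day landmark log-rank comparison (not the registry's SPSS code).
# Assumed, hypothetical columns: time_to_event_days (event or censoring time),
# event (1 = MACE, 0 = censored), acs (1 = ACS presentation, 0 = stable).
import pandas as pd
from lifelines.statistics import logrank_test

LANDMARK_DAYS = 30


def landmark_logrank(df: pd.DataFrame) -> dict:
    """Return log-rank p values for ACS vs. stable, before and after the landmark."""
    # Early window: events within 30 days count; everyone is censored at day 30.
    early = df.copy()
    early["event_w"] = ((early["event"] == 1) &
                        (early["time_to_event_days"] <= LANDMARK_DAYS)).astype(int)
    early["time_w"] = early["time_to_event_days"].clip(upper=LANDMARK_DAYS)

    # Late window: only patients event-free and still in follow-up at day 30,
    # with time measured from the landmark onward.
    late = df[df["time_to_event_days"] > LANDMARK_DAYS].copy()
    late["event_w"] = late["event"]
    late["time_w"] = late["time_to_event_days"] - LANDMARK_DAYS

    results = {}
    for label, window in [("0-30 days", early), ("30-365 days", late)]:
        acs, stable = window[window["acs"] == 1], window[window["acs"] == 0]
        results[label] = logrank_test(
            acs["time_w"], stable["time_w"],
            event_observed_A=acs["event_w"], event_observed_B=stable["event_w"],
        ).p_value
    return results
```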
Hydrophobically Modified siRNAs Silence Huntingtin mRNA in Primary Neurons and Mouse Brain
RNA interference is a highly efficient gene-silencing mechanism in which a small interfering RNA binds a target mRNA, guiding mRNA cleavage via an RNA-induced silencing complex.1,2,This biological phenomenon is widely used as a genetic tool in biomedical research.Advances in RNA chemistry have expanded siRNA applications toward therapeutic development, with robust efficacy seen in phase 2 clinical trials for liver diseases.3,4,5,Despite its prevalence in biomedical research, the use of RNAi in neurodegenerative research has been limited.6,There is a significant unmet need for simple, effective, and nontoxic siRNA delivery methods to modulate gene expression in primary neurons and brain.A range of approaches has been evaluated,7 including AAV viruses,8,9 peptide conjugates,10 oligonucleotide formulations,11 infusion of naked or slightly modified siRNAs,12,13 ultrasound,14 and convection-enhanced based delivery.15,None of these approaches has received wide acceptance due to toxicity, a requirement for extensive repetitive dosing, and/or limited spatial distribution.Lipofection and electroporation of siRNAs are challenging in primary neurons due to low transfection efficiencies and their extreme sensitivity to external manipulation.16,Delivery of siRNA precursors has been used successfully, but viral transduction cannot readily be turned off and requires extensive formulation and experimental optimization to achieve reproducible, nontoxic silencing in neuronal cells.17,18,19,20,21,22,In this study, we describe the delivery, distribution, and silencing capacity of hydrophobically modified siRNAs in primary neurons and in mouse brain.hsiRNAs are siRNA-antisense hybrids containing numerous chemical modifications designed to promote biodistribution and stability while minimizing immunogenicity."As a model for our studies, we silenced the huntingtin gene, the causative gene in Huntington's disease.HD is an autosomal-dominant neurodegenerative disorder caused by a toxic expansion in the CAG repeat region of the huntingtin gene leading to a variety of molecular and cellular consequences.Tetrabenazine, the only FDA-approved therapy for HD, seeks to alleviate disease symptoms but does not treat the actual problem: the gain of toxic function caused by mutant Htt.Recent studies suggest that transient neuronal knockdown of Htt mRNA can reverse disease progression without compromising normal cellular function in vivo.23,At present, RNA interference via siRNA or antisense oligonucleotide is one of the most promising therapeutic approaches for transient Htt mRNA silencing.We performed a screen of hsiRNAs targeting Htt mRNA and identified multiple functional compounds.We showed that primary neurons internalize hsiRNA added directly to the culture medium, with membrane saturation occurring by 1 hour.Direct uptake in neurons induces potent and long-lasting silencing of Htt mRNA for up to 3 weeks in vitro without major detectable effects on neuronal viability.Additionally, a single injection of unformulated Htt hsiRNA into mouse brain silences Htt mRNA with minimal neuronal toxicity.Efficient gene silencing in primary neurons and in vivo upon direct administration of unformulated hsiRNA represents a significant technical advance in the application of RNAi to neuroscience research, enabling technically achievable genetic manipulation in a native, biological context.hsiRNA is an asymmetric compound composed of a 15-nucleotide modified RNA duplex with a single-stranded 3′ extension on the guide 
strand.24,25,Pyrimidines in the hsiRNA are modified with 2′-O-methyl or 2′-fluoro to promote stability, and the 3′ end of the passenger strand is conjugated to a hydrophobic teg-Chol to promote membrane binding and association.26,The single-stranded tail contains hydrophobic phosphorothioate linkages and promotes cellular uptake by a mechanism similar to that of antisense oligonucleotides.27,The presence of phosphorothioates, ribose modifications, and a cholesterol conjugate contribute to overall hydrophobicity and are essential for compound stabilization and efficient cellular internalization.Previous studies have shown that hydrophobically modified siRNAs bind to a wide range of cells and is readily internalized without the requirement for a transfection reagent.26,28,29,Here, we evaluated whether asymmetric hydrophobically modified siRNAs are efficiently internalized by primary neurons.We found that, when added to the culture medium, Cy3-labeled hsiRNAs rapidly associated with primary cortical neurons.These Cy3-labeled hsiRNAs were observed in every cell in the culture, demonstrating efficient and uniform uptake.Initially, hsiRNAs mainly associate with neurites and, over time, accumulate in the cell bodies.Treatment of primary neurons with a previously identified hsiRNA targeting Ppib26,28 reduced target mRNA levels by 90%, further supporting that the observed compound internalization results in potent gene silencing."Robust uptake and efficacy observed with hsiRNAs in primary cortical neurons encouraged us to identify functional compounds that target Htt mRNA, the single gene responsible for the development of Huntington's disease. "The hsiRNA's extensive chemical scaffold26,28 is essential for stability, minimization of innate immune response,30,31 and cellular internalization but imposes significant restrictions on sequence space by potentially interfering with the compound's RISC-entering ability.To maximize the likelihood of identifying functional Htt hsiRNAs and to evaluate the hit rate for this type of chemistry, we designed and synthesized hsiRNAs targeting 94 sites across the human Htt mRNA.The panel of hsiRNAs was initially screened for efficacy in HeLa cells by adding hsiRNA directly to the culture medium to a final concentration of 1.5 µmol/l and evaluating impact on levels of Htt and housekeeping gene mRNA expression using the QuantiGene assay.At this concentration, 24 hsiRNAs reduced Htt mRNA levels to less than 50% of control levels, including 7 hsiRNAs that reduced Htt mRNA levels below 30% of control.Unlike unmodified siRNA libraries, creating a library with extensive 2’-O-methyl and 2’-fluoro modifications introduces additional constraints on sequence selection.As a result, hit rates for modified siRNA screens are lower than that seen for conventional unmodified siRNA.32,33,34,35,Functional hsiRNAs targeted sites distributed throughout the mRNA, except the distal end of the 3′ UTR, which later was shown to be part of the alternative Htt gene isoform36 not expressed in HeLa cells.Discounting the ∼32 hsiRNAs targeting long 3′ UTR sites absent from the Htt isoform in HeLa cells, almost 40% of hsiRNAs showed some level of activity at 1.5 µmol/l, demonstrating that the evaluated chemical scaffold is well tolerated by the RNAi machinery and a functional compound can be easily identified against a wide range of targets.Half-maximal inhibitory concentrations for passive uptake of hsiRNAs ranged from 82 to 766 nmol/l.In lipid-mediated delivery, eight of the most active hsiRNAs 
had IC50 values ranging from 4 to 91 pmol/l. The best clinically active siRNAs are usually characterized by IC50 values in the low pmol/l range.37 An ability to identify highly potent compounds with low picomolar IC50 values suggests that the hsiRNA chemical scaffold does not interfere with siRNA biological activity in selected compounds. The most potent hsiRNA, targeting position 10150, and an unmodified conventional siRNA version of HTT10150 showed similar IC50 values in lipid-mediated delivery, further confirming that the hsiRNA chemical scaffold does not interfere with RISC loading or function. Only the fully modified hsiRNA, and not the unmodified version, silenced Htt mRNA by passive uptake. Thus, the chemical scaffold described here does not interfere with RISC assembly and is sufficient to support unformulated compound uptake and efficacy. HTT10150 was used for subsequent studies. HTT10150 induced concentration-dependent silencing at 72 hours and 1 week after unformulated addition to either primary cortical or primary striatal neurons isolated from FVB/NJ mice. At 1.25 µmol/l, HTT10150 induced maximal silencing, reducing both Htt mRNA levels and HTT protein levels by as much as 70 and 85%, respectively. HTT10150 hsiRNA did not affect the expression levels of housekeeping controls or the overall viability of primary neuronal cells, as measured by the alamarBlue assay, up to a 2 µmol/l concentration. Similar results were obtained with another hsiRNA targeting Htt mRNA, supporting that the observed phenomenon is not unique to HTT10150. These experiments, in conjunction with the results seen from targeting Ppib, indicate that a diversity of genes and target sequences can be silenced by hsiRNAs in primary neurons simply upon direct addition of compounds into cellular media. Since loaded RISC has a typical half-life of weeks,38 silencing is expected to be long lasting in nondividing cells. To evaluate the duration of silencing after a single HTT10150 treatment of primary cortical neurons, Htt mRNA levels were measured at 1-, 2-, and 3-week intervals. A single treatment with hsiRNA induced Htt silencing that persisted for at least 3 weeks, the longest time that primary cortical neurons can be maintained in culture. Together, these data demonstrate that hsiRNAs are a simple and straightforward approach for potent, specific, nontoxic, and long-term modulation of gene expression in primary neurons in vitro. Having shown that hsiRNAs effectively silence their targets in primary neurons in vitro, we sought to evaluate the ability of HTT10150 to silence Htt mRNA in the mouse brain in vivo. The distribution of HTT10150 was evaluated in perfused brain sections prepared 24 hours after intrastriatal injection with 12.5 µg Cy3-labeled hsiRNA in artificial cerebrospinal fluid. We observed a steep gradient of fluorescence emanating from the injection site and covering most of the ipsilateral striatum, while no fluorescence was visually detectable in the contralateral side of the brain. In high-magnification images of the ipsilateral side, hsiRNAs appeared preferentially associated with the tissue matrix and fiber tracts. In addition, efficient internalization was observed in a majority of cell bodies. Consistent with in vitro studies, we observed Cy3-labeled hsiRNA in neuronal processes and as punctae in the perinuclear space of multiple cell types, including NeuN-positive neurons.39,40 In summary, a single intrastriatal injection delivers hsiRNA to neurons in the striatum of the injected side. To measure HTT10150 efficacy
in vivo, we performed dose-response studies in wild-type FVB/NJ mice injected intrastriatally with 3.1, 6.3, 12.5, or 25 µg of HTT10150.As controls, we injected mice with a non-targeting control hsiRNA, ACSF, or PBS.In punch biopsies taken from the ipsilateral and contralateral striatum, HTT10150 reduced Htt mRNA levels in a dose-dependent manner.This experiment was repeated several times with similar results.Htt mRNA was significantly reduced in the ipsilateral striatum in all experiments.We observed robust dose-dependent silencing with up to 77% reduction in Htt mRNA expression levels at the highest dose.Interestingly, we also observed statistically significant, though less pronounced, silencing in the contralateral striatum and the cortex.This silencing reached statistical significance with both one-way and two-way analysis of variance.While some level of fluorescence is detectable in these brain regions with high laser intensity, it is very close to the tissue auto-fluorescence and thus is not reported here.We will investigate this phenomenon further, but the level of silencing clearly correlates with the sharp gradient of diffusion from the injection site.Finally, Htt mRNA silencing was observed with HTT10150 but not with NTC or ACSF.In addition, HTT10150 did not affect the expression of several housekeeping genes.Together, these results indicate that Htt mRNA silencing is caused by HTT10150 hsiRNA and not by off-target effects.Nucleic acids, including siRNAs, are potent stimulators of the innate immune response,41 but extensive chemical modifications, like 2′-O-methyl, are expected to suppress the immunostimulatory effects of siRNAs in vitro and in vivo.42,To assess innate immune response activation by hsiRNAs in vivo, we quantified IBA-1-positive microglial cells in brain sections from mice injected with 12.5 µg HTT10150 or artificial CSF.IBA-1 is specific to microglial cells and is upregulated following injury to the brain, allowing us to distinguish between resting and activated microglia.43,44,45,In the case of a major innate immune response, an increase of 200–300% in total microglia is typical.46,Total microglia counts showed only a 25% increase in the ipsilateral striatum at 5 days post-injection, indicating a lack of any major inflammatory response.Thus, the observed activation is relatively minor but reaches statistical significance, indicating some level of response.Levels of innate immune response might be more pronounced immediately following compound administration.To assess the level of stimulation in more detail, we separately evaluated the number of activated and resting microglia at both 6 hours and 5 days post-injection.At 6 hours post-injection, we observed a significant increase in the number of activated microglia in the injected side of the brain with both ACSF and HTT10150.The injection event itself causes trauma and induces a major increase in activated microglia compared to the contralateral side of the brain.12,47,In the presence of HTT10150, the number of activated microglia was additionally increased twofold compared to ACSF, indicating enhancement of trauma-related microglia activation in the presence of oligonucleotide, although the relative contribution of the oligonucleotide to the trauma-related induction is minor.HTT10150-treated mice also showed some elevation of activated microglia in the contralateral striatum 6 hours post-injection; however, after 5 days, all changes in number of microglia in the contralateral side of 
the brain disappeared, suggesting that HTT10150-dependent activation of microglia in the contralateral striatum is transient.Despite the mild immune stimulation in the brains of animals injected with HTT10150, we did not observe any overall significant reduction of DARPP-32, an established marker for striatal neuron viability48.The only observed effect was in a small area directly around the injection site in animals treated with 25 µg HTT10150.Taken together, our data show that a single intrastriatal injection of hsiRNA induces potent gene silencing with a mild immune response and minimal neuronal toxicity in vivo.Achieving simple, effective, and nontoxic delivery of synthetic oligonucleotides to primary neurons and brain tissue remains a challenge to the use of RNAi as a research tool and therapeutic for neurodegenerative diseases like HD.7,We have shown that hsiRNAs elicit potent silencing in primary neurons in culture, without effect on housekeeping gene expression, and with minimal toxicity at effective doses.Additionally, a non-targeting control hsiRNA did not silence any of the mRNAs tested, suggesting that these compounds are both sequence-specific and on-target.Interestingly, the level of silencing is more pronounced at the protein level than at the mRNA level.The mRNA plateau effect is reproducible and is specific to Htt mRNA, as housekeeping genes like Ppib can be silenced by 90%.One potential explanation is that some fraction of huntingtin mRNA is translationally inactive and poorly accessible to the RNAi machinery.We are continuing to investigate this phenomenon.Silencing in primary neurons persists for multiple weeks after a single administration, consistent with the expected half-life of active RISC.49,Moreover, efficient intracellular delivery does not require the use of lipids or viral packaging.Currently, the most impressive in vivo modulation of Htt mRNA expression is demonstrated with 2′-O-methoxyethyl GapmeR antisense oligonucleotides.A single injection of 50 µg of antisense oligonucleotides or infusion of around 500 µg results in potent and specific Htt mRNA silencing and marked improvement in multiple phenotypic endpoints.23,50,51,52,However, 2′-O-methoxyethyl GapmeR antisense oligonucleotides are not readily commercially available, making them inaccessible to the majority of academic labs.Here, we show Htt mRNA silencing in the ipsilateral striatum and cortex, two brain areas significantly affected in HD progression, with a single intrastriatal injection.As a considerably reduced level of silencing was observed on the contralateral side of the brain, bilateral injections might be necessary to promote equal gene silencing in both hemispheres.The limited distribution profile observed in vivo restricts immediate adoption of this technology for use in larger brains and eventually as a therapeutic for neurodegenerative disease.Tissue distribution can be improved by tailoring the chemical scaffold or by changing the conjugation moiety to promote receptor-mediated cellular internalization.Formulation of hsiRNA in exosomes, exosome-like liposomes, or shielding the compounds with polyethylene glycol may also provide an alternative strategy to improve tissue distribution.53,54,Here, we describe a class of self-delivering therapeutic oligonucleotides capable of targeted, nontoxic, and efficient Htt gene silencing in primary neurons and in vivo.This chemical scaffold can be specifically adapted to many different targets to facilitate the study of neuronal gene function in 
vitro and in vivo.The development of an accessible strategy for genetic manipulation in the context of a native, biological environment represents a technical advance for the study of neuronal biology and neurodegenerative disease.hsiRNA design.We designed and synthesized a panel of 94 hsiRNA compounds targeting the human huntingtin gene.These sequences span the gene and were selected to comply with standard siRNA design parameters24 including assessment of GC content, specificity and low seed complement frequency,55 elimination of sequences containing miRNA seeds, and examination of thermodynamic bias.56,57,Oligonucleotide synthesis, deprotection, and purification.Oligonucleotides were synthesized using standard phosphoramidite, solid-phase synthesis conditions on a 0.2–1 µmole scale using a MerMade 12 and Expedite DNA/RNA synthesizer.Oligonucleotides with unmodified 3′ ends were synthesized on controlled pore glass functionalized with long-chain alkyl amine and a Unylinker terminus.Oligonucleotides with 3′-cholesterol modifications were synthesized on modified solid support.Phosphoramidite solutions were prepared at 0.15 mol/l in acetonitrile for 2′-TBDMS, 2′-O-methyl, and Cy3 modifications or 0.13 mol/l for 2′-fluoro modifications.Phosphoramidites were activated in 0.25 mol/l 4,5-dicyanoimidazole in acetonitrile.Detritylation was performed in 3% dichloroacetic acid in dichloromethane for 80 seconds.Capping was performed in 16% N-methylimidazole in tetrahydrofuran and acetic anhydride:pyridine:tetrahydrofuran for 15 seconds.Oxidation was performed using 0.1 mol/l iodine in pyridine:water:tetrahydrofuran.The CPG was removed from the solid-phase column and placed in a polypropylene screw cap vial.Dimethylsulfoxide and 40% methylamine were added directly to the CPG and shaken gently at 65 °C for exactly 16 minutes.The vial was cooled on dry ice before the cap was removed.The supernatant was transferred to another polypropylene screw cap vial, and the CPG was rinsed with two 150 µl portions of dimethylsulfoxide, which were combined with the original supernatant.Oligonucleotides without 2′-TBDMS-protecting groups were lyophilized.Oligonucleotides with 2′-TBDMS-protecting groups were desilylated by adding 375 µl triethylamine trihydrofluoride and incubated for exactly 16 minutes at 65 °C with gentle shaking.Samples were quenched by transferring to a 15 ml conical tube containing 2 ml of 2 mol/l triethylammonium acetate buffer.The sample was stored at −80 °C until high-performance liquid chromatography purification.Oligonucleotides were purified by reverse-phase high-performance liquid chromatography on a Hamilton PRP-C18 column using an Agilent Prostar 325 high-performance liquid chromatography system.Buffer A was 0.05 mol/l tetraethylammonium acetate with 5% acetonitrile and Buffer B was 100% acetonitrile, with a gradient of 0% B to 35% B over 15 minutes at 30 ml/minute.Purified oligonucleotides were lyophilized to dryness, reconstituted in water, and passed over a Hi-Trap cation exchange column to exchange the tetraethylammonium counter-ion with sodium.Cell culture.HeLa cells were maintained in Dulbecco's Modified Eagle's Medium supplemented with 10% fetal bovine serum and 100 U/ml penicillin/streptomycin and grown at 37 °C and 5% CO2.Cells were split every 2 to 5 days and discarded after 15 passages.Preparation of primary neurons.Primary cortical neurons were obtained from FVB/NJ mouse embryos at embryonic day 15.5.Pregnant FVB/NJ females were anesthetized by intraperitoneal injection of 250 mg Avertin per kg 
body weight, followed by cervical dislocation.Embryos were removed and transferred into a Petri dish with ice-cold Dulbecco's Modified Eagle's Medium/F12 medium.Brains were removed, and meninges carefully detached.Cortices were isolated and transferred into a 1.5-ml tube with prewarmed papain solution for 25 minutes at 37 °C, 5% CO2, to dissolve tissue.Papain solution was prepared by suspending DNase I in 0.5 ml Hibernate E medium, and transferring 0.25 ml DNase I solution to papain dissolved in 2 ml Hibernate E medium and 1 ml Earle's balanced salt solution.After the 25-minute incubation, papain solution was replaced with 1 ml NbActiv4 medium supplemented with 2.5% FBS.Cortices were dissociated by repeated pipetting with a fire-polished glass Pasteur pipette.Cortical neurons were counted and plated at 1 × 106 cells per ml.For live-cell imaging, culture plates were precoated with poly-l-lysine, and 2 × 105 cells were added to the glass center of each dish.For silencing assays, neurons were plated on 96-well plates precoated with poly-l-lysine at 1 × 105 cells per well.After overnight incubation at 37 °C, 5% CO2, an equal volume of NbActiv4 supplemented with anti-mitotics, 0.484 µl/ml of UTP Na3, and 0.2402 µl/ml of FdUMP, was added to neuronal cultures to prevent growth of nonneuronal cells.Half of the media volume was replaced every 48 hours until the neurons were treated with siRNA.Once the cells were treated, media was not removed, only added.All subsequent media additions contained antimitotics.Direct delivery of oligonucleotides.Cells were plated in Dulbecco's Modified Eagle's Medium containing 6% FBS at 10,000 cells per well in 96-well tissue culture plates.hsiRNA was diluted to twice the final concentration in OptiMEM, and 50 μl diluted hsiRNA was added to 50 μl of cells, resulting in a final FBS concentration of 3%.Cells were incubated for 72 hours at 37 °C and 5% CO2.Based on previous experience, we know that 1.5 µmol/l active hsiRNA supports efficient silencing without toxicity.The primary screen for active Htt siRNAs, therefore, was performed at 1.5 µmol/l compound, which also served as the maximal dose for in vitro dose–response assays.hsiRNA lipid-mediated delivery.Cells were plated in Dulbecco's Modified Eagle's Medium with 6% FBS at 10,000 cells per well in 96-well tissue culture–treated plates.hsiRNA was diluted to four times the final concentration in OptiMEM, and Lipofectamine RNAiMAX Transfection Reagent was diluted to four times the final concentration.RNAiMAX and hsiRNA solutions were mixed 1:1, and 50 µl of the transfection mixture was added to 50 µl of cells, resulting in a final FBS concentration of 3%.Cells were incubated for 72 hours at 37 °C and 5% CO2.mRNA quantification in cells and tissue punches.mRNA was quantified using the QuantiGene 2.0 Assay.Cells were lysed in 250 μl diluted lysis mixture composed of 1 part lysis mixture, 2 parts H2O, and 0.167 μg/μl proteinase K for 30 minutes at 55 °C.Cell lysates were mixed thoroughly, and 40 μl of each lysate was added per well to a capture plate with 40 μl diluted lysis mixture without proteinase K. 
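The working-stock concentrations above follow from the successive 1:1 mixing steps (2× for direct delivery; 4× when the hsiRNA is first mixed 1:1 with RNAiMAX before addition to the cells), and the diluted lysis mixture is a simple 1:2 ratio of lysis mixture to water with proteinase K added to 0.167 μg/μl. A minimal sketch of this volume arithmetic is given below; the helper names, stock concentrations, well counts, and pipetting overage are illustrative assumptions and are not part of the published protocol.

```python
# Minimal sketch of the dilution arithmetic described above (hypothetical helper
# names, stock concentration, well counts, and 10% pipetting overage; these are
# illustrative assumptions, not values from the published protocol).

def treatment_mix(final_conc_umol_l, n_wells, fold=2.0, vol_per_well_ul=50.0,
                  stock_conc_umol_l=100.0, overage=1.1):
    """Volumes of hsiRNA stock and OptiMEM for a working solution at `fold` times
    the final concentration (2x for direct delivery, 4x for RNAiMAX delivery),
    of which vol_per_well_ul is added 1:1 to the cells in each well."""
    working_conc = final_conc_umol_l * fold
    total_vol = vol_per_well_ul * n_wells * overage
    stock_vol = total_vol * working_conc / stock_conc_umol_l
    return {"working_conc_umol_l": working_conc,
            "hsiRNA_stock_ul": round(stock_vol, 1),
            "optimem_ul": round(total_vol - stock_vol, 1)}

def quantigene_lysis_mix(n_wells, vol_per_well_ul=250.0, overage=1.1,
                         prot_k_stock_ug_ul=50.0):
    """Diluted lysis mixture (1 part lysis mixture : 2 parts H2O) with proteinase K
    added to 0.167 ug/ul; the proteinase K volume is treated as negligible."""
    total_vol = vol_per_well_ul * n_wells * overage
    return {"lysis_mixture_ul": round(total_vol / 3.0),
            "water_ul": round(2.0 * total_vol / 3.0),
            "proteinase_K_ul": round(total_vol * 0.167 / prot_k_stock_ug_ul, 1)}

if __name__ == "__main__":
    print(treatment_mix(1.25, n_wells=12))            # direct (2x) delivery
    print(treatment_mix(1.25, n_wells=12, fold=4.0))  # lipid-mediated (4x) delivery
    print(quantigene_lysis_mix(n_wells=96))
```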
Probe sets were diluted as specified in the Affymetrix protocol.For HeLa cells, 20 μl human HTT or PPIB probe set was added to appropriate wells for a final volume of 100 μl.For primary neurons, 20 μl of mouse Htt or Ppib probe set was used.Tissue punches were homogenized in 300 μl of Homogenizing Buffer containing 2 μg/μl proteinase K in 96-well plate format on a QIAGEN TissueLyser II, and 40 μl of each lysate was added to the capture plate.Probe sets were diluted as specified in the Affymetrix protocol, and 60 μl of Htt or Ppib probe set was added to each well of the capture plate for a final volume of 100 μl.Signal was amplified according to the Affymetrix protocol.Luminescence was detected on either a Veritas Luminometer or a Tecan M1000.Western blot.Cell lysates were separated by SDS–PAGE using 3–8% Tris-acetate gels and transferred to nitrocellulose using a TransBlot Turbo apparatus.Blots were blocked in 5% nonfat dry milk diluted in Tris-buffered saline with 0.1% Tween-20 for 1 hour at room temperature then incubated in N-terminal antihuntingtin antibody Ab158 diluted 1:2,000 in blocking solution overnight at 4 °C with agitation.After washing in TBST, blots were incubated in peroxidase-labeled antirabbit IgG diluted in blocking buffer for 1 hour at room temperature, washed in TBST, and proteins were detected using SuperSignal West Pico Chemiluminescent Substrate and Hyperfilm ECL.Blots were reprobed with anti-β tubulin antibody as a loading control.Films were scanned with a flatbed scanner, and densitometry was performed using NIH ImageJ software to determine total intensity of each band.The huntingtin signal was divided by the tubulin signal to normalize to protein content, and percent of untreated control was determined for each set of samples.Live cell imaging.To monitor live cell hsiRNA uptake, cells were plated at a density of 2 × 105 cells per 35-mm glass-bottom dish.Cell nuclei were stained with NucBlue as indicated by the manufacturer.Imaging was performed in phenol red-free NbActiv4.Cells were treated with 0.5 μmol/l Cy3-labeled hsiRNA, and live cell imaging was performed over time.All live cell confocal images were acquired with a Leica DM IRE2 confocal microscope using 63x oil immersion objective, and images were processed using ImageJ software.Stereotaxic injections.FVB/NJ mice were deeply anesthetized with 1.2% Avertin and microinjected by stereotactic placement into the right striatum.For both toxicity and efficacy studies, mice were injected with either PBS or artificial CSF, 12.5 μg of nontargeting hsiRNA, 25 μg of HTT10150 hsiRNA, 12.5 μg of HTT10150 hsiRNA, 6.3 μg of HTT10150 hsiRNA, or 3.1 μg of HTT10150 hsiRNA.For toxicity studies, n = 3 mice were injected per group, and for efficacy studies, n = 8 mice were injected per group.Mice were euthanized 5 days post-injection, brains were harvested, and three 300-μm coronal sections were prepared.From each section, a 2-mm punch was taken from each side and placed in RNAlater for 24 hours at 4 °C.Each punch was processed as an individual sample for Quantigene 2.0 assay analysis and averaged for a single animal point.All animal procedures were approved by the University of Massachusetts Medical School Institutional Animal Care and Use Committee.Immunohistochemistry/immunofluorescence.Mice were injected intrastriatally with 12.5 µg of Cy3-labeled hsiRNA.After 24 hours, mice were sacrificed and brains were removed, embedded in paraffin, and sliced into 4-μm sections that were mounted on glass slides.Sections were 
deparaffinized by incubating in xylene twice for 8 minutes.Sections were rehydrated in serial ethanol dilutions for 4 minutes each, and then washed twice for 2 minutes with PBS.For NeuN staining,39,40 slides were boiled for 5 minutes in antigen retrieval buffer, incubated at room temperature for 20 minutes, and then washed for 5 minutes in PBS.Slides were blocked in 5% normal goat serum in PBS containing 0.05% Tween 20 for 1 hour and washed once with PBST for 5 minutes.Slides were incubated with primary antibody for 1 hour and washed three times with PBST for 5 minutes.Slides were then incubated with secondary antibody for 30 minutes in the dark and washed three times with PBST for 5 minutes each.Slides were then counterstained with 250 ng/ml 4,6-diamidino-2-phenylindole in PBS for 1 minute and washed three times with PBS for 1 minute.Slides were mounted with mounting medium and coverslips and dried overnight before imaging on a Leica DM5500 microscope fitted with a DFC365 FX fluorescence camera.For toxicity studies, injected brains were harvested after 5 days.For microglial activation studies, brains were harvested after 6 hours or 5 days.Extracted, perfused brains were sliced into 40-µm sections on the Leica 2000T Vibratome in ice-cold PBS.Every sixth section was incubated with DARPP-32 or IBA-1 antibody, for a total of nine sections per brain and eight images per section.IBA-1 sections were incubated in blocking solution for 1 hour, and then washed with PBS.Sections were incubated overnight at 4 °C in primary antibody, anti-Iba1.Sections were then stained with goat antirabbit secondary antibody, followed by a PBS wash, the Vectastain ABC Kit, and another PBS wash.IBA-1 was detected with the Metal Enhanced DAB Substrate Kit.For DARPP-32 staining, sections were washed for 3 minutes in 3% hydrogen peroxide, followed by 20 minutes in 0.2% Triton X-100 and 4 hours in 1.5% normal goat serum in PBS.Sections were incubated overnight at 4 °C in DARPP-32 primary antibody made up in 1.5% normal goat serum.Secondary antibody and detection steps were conducted as described for IBA-1 staining.DARPP-32 sections were mounted and visualized by light microscopy with a 20× objective on a Nikon Eclipse E600 with a Nikon Digital Sight DSRi1 camera.The number of DARPP-32-positive neurons was quantified manually using the cell counter plug-in in ImageJ for tracking.Activated microglia were quantified by morphology of IBA-1-positive cells42,43,44,45 from the same number of sections captured with a 40× objective.Counting of both IBA-1- and DARPP-32-positive cells was blinded.Coronal section images were taken with a Coolscan V-ED LS50 35-mm Film Scanner.Statistical analysis.Data were analyzed using GraphPad Prism 6 software.Concentration-dependent IC50 curves were fitted using a log versus response–variable slope model.The lower limit of the curve was set at zero, and the upper limit of the curve was set at 100.For each independent mouse experiment, the level of knockdown at each dose was normalized to the mean of the control group.In vivo data were analyzed using a two-way repeated-measures analysis of variance with Tukey's multiple comparisons test for dose and side of brain.Differences in all comparisons were considered significant at P values less than 0.05 compared with the NTC-injected group.P values reported represent significance of the entire dose group relative to NTC and are not specific to the ipsilateral or contralateral side.For microglial activation, significance was calculated using a parametric, 
unpaired, two-tailed t-test for comparison between dose groups, and paired t-test for comparison between ipsilateral and contralateral hemispheres within the same dose group.
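The constrained dose–response fit described above (log concentration versus response with a variable Hill slope, bottom fixed at 0 and top at 100, as implemented in GraphPad Prism) can be reproduced with standard curve-fitting routines. A minimal sketch, assuming SciPy is available, is shown below; the concentration series and response values are synthetic and serve only to illustrate the fitting procedure, not to reproduce measured data.

```python
# Minimal sketch of the constrained dose-response fit described above
# (bottom fixed at 0, top fixed at 100, variable Hill slope).
# The concentration series and response values below are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def dose_response(log_conc, log_ic50, hill_slope):
    """Percent of untreated control remaining at log10(concentration, mol/l)."""
    return 100.0 / (1.0 + 10.0 ** ((log_ic50 - log_conc) * hill_slope))

conc = np.array([1.0e-9, 3.3e-10, 1.1e-10, 3.7e-11, 1.2e-11, 4.1e-12, 1.4e-12])
resp = np.array([20.0, 28.0, 40.0, 55.0, 70.0, 85.0, 94.0])

# Initial guesses: IC50 near the middle of the dilution series, negative slope
# because the response (percent mRNA remaining) falls with increasing dose.
popt, _ = curve_fit(dose_response, np.log10(conc), resp,
                    p0=[np.log10(5e-11), -1.0])
log_ic50, hill_slope = popt
print(f"IC50 = {10 ** log_ic50 * 1e12:.0f} pmol/l, Hill slope = {hill_slope:.2f}")
```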
Applications of RNA interference for neuroscience research have been limited by a lack of simple and efficient methods to deliver oligonucleotides to primary neurons in culture and to the brain. Here, we show that primary neurons rapidly internalize hydrophobically modified siRNAs (hsiRNAs) added directly to the culture medium without lipid formulation. We identify functional hsiRNAs targeting the mRNA of huntingtin, the mutation of which is responsible for Huntington's disease, and show that direct uptake in neurons induces potent and specific silencing in vitro. Moreover, a single injection of unformulated hsiRNA into mouse brain silences Htt mRNA with minimal neuronal toxicity. Thus, hsiRNAs embody a class of therapeutic oligonucleotides that enable simple and straightforward functional studies of genes involved in neuronal biology and neurodegenerative disorders in a native biological context.
73
Trace metal distributions in sulfide scales of the seawater-dominated Reykjanes geothermal system: Constraints on sub-seafloor hydrothermal mineralizing processes and metal fluxes
Recent years have seen a rapid growth in demand for specific elements and metals that are commonly present in only trace quantities in mined ores and recovered only as by-products of other metals.Some of these metals are found in ancient volcanogenic massive sulfide deposits that are mined mostly for Cu, Zn and Pb, and in similar modern seafloor massive sulfide deposits that might be mined in the future.However, the abundance, distribution, and economic potential of many trace and rare metals in submarine hydrothermal systems is poorly understood due to the inherent difficulties of directly sampling active, deep high-temperature upflow.During convective hydrothermal circulation, reactions between heated seawater and volcanic rocks occur first at low temperatures as cold seawater is drawn into the oceanic crust in the down-flowing limb of the hydrothermal convection cell, then at much higher temperatures in the deepest parts of the circulation system, reaching 400 °C in a high-temperature “reaction zone” at ~2 km depth.Metals and reduced sulfur leached from the host rock reach maximum concentrations in the reaction zone where they become major constituents of end-member hydrothermal fluids.This superheated, metal-enriched fluid rises through vertically extensive fracture networks to be discharged at the seafloor.The compositions of the venting fluids have been extensively studied, but little is known about the fate of metals in the deep sub-seafloor as the fluids escape the reaction zone and rise to the surface.Except in a few cases where the underlying stockwork zones have been drilled, subseafloor geochemical profiles of the altered and mineralized crust of actively-forming massive sulfide deposits have been difficult to study.As a result, a number of questions remain about the transfer of metals from the deeper parts of the hydrothermal system to the seafloor.Numerous authors have attempted to reconstruct deep hydrothermal upflow zones from studies of alteration in ophiolite sections and in lower oceanic crust exposed by faults.However, the conditions under which the stockwork mineralization formed cannot always be clearly determined, especially in fossil hydrothermal systems.Mineral scales from actively discharging geothermal systems offer the unique opportunity to relate details of mineral precipitation to the pressure and temperature conditions in the upflow zone, in some cases including actual samples of the mineralizing fluids.In this study, we examine the trace metal distribution in high-temperature mineral scales from downhole pipes and surface pipes in the seawater-dominated, basalt-hosted Reykjanes geothermal system on Iceland.These scales represent a nearly complete profile of sub-seafloor mineralization from 2.7 km below surface to the silica-rich sinters deposited at low temperatures on the surface.Previous studies have focused on sulfide scaling in surface pipes and, where drill cuttings have been sampled, on the mineralogy and geochemistry of the altered wall rocks.We present the first comprehensive analysis of mineral precipitates within actively discharging wells from the bottom to the top of the hydrothermal system at well-constrained pressures and temperatures.We examine the trace metals in scales precipitated from the deep, pre-boiled reservoir liquids at ~2 km depth and boiling hydrothermal fluids through the entire upflow zone and in the surface pipes to the point of discharge.These results provide a unique picture of the behavior of the metal load in fluids 
traversing the whole of the hydrothermal system.The Reykjanes Peninsula is located on the southwestern tip of Iceland and is the subaerial continuation of the offshore slow-spreading Reykjanes Ridge.The exposed basement volcanic rocks are younger than 700 ka and a significant proportion of the peninsula is covered by post-glacial Holocene, highly permeable, mafic shield or fissure-fed lavas.The young lavas were erupted from five en-echelon NE-SW trending volcanic systems, from west to east: Reykjanes, Svartsengi, Krisuvik, Brennisteinsfjöll, and Hengill, along a 65 km segment of the peninsula.The last phase of volcanism on the peninsula, between c. 940 and c. 1340 AD, consisted of basaltic fissure-fed flows.Magmatic activity is now concentrated in submerged ridge segments offshore.The Reykjanes geothermal system is hosted within the young and highly permeable basaltic formations.Evidence from the chronology of recent eruptions in the area, which cover altered volcanic units, indicates that the geothermal system has been active at least since the last glacial maximum, 18,000–20,000 yr ago.The lowest stratigraphic units below 1 km are pillow basalts considered to have erupted in deep water.Successive units are stacked to shallower depths where the eruptive style changed to a more explosive mode.The upper 1 km is dominated by subaerial to marine Pleistocene hyaloclastite, breccia, dense phreatic tuffs and marine sediments acting as a semi-impermeable cap on the hydrologic system.Reworked sediments containing shallow water fossils indicate a coastal environment, and the most recent formations include sub-glacial hyaloclastite and subaerial lavas.The stratigraphic succession is intruded by dykes, which increase in abundance at depth.A low resistivity anomaly 10 km below the surface expression of the geothermal system has been interpreted as either a dense sheeted- dyke complex or a large cooling gabbroic intrusion, and likely represents the heat engine of the shallower hydrothermal system.Frequent, but generally small earthquakes maintain good permeability for hydrothermal upflow.The primary high-temperature upflow exploited by wells RN-10, RN-12, RN-21, RN-27 is controlled by the intersection of a NE-SW trending zone of normal faults and eruptive fissures, N-S trending fractures, and a NW-SE transform fault.The surface thermal manifestation covers ~1 km2, but a low resistivity zone at 800 m depth has an area of 11 km2 and likely represents the full areal extent of the hydrothermal upflow.The Reykjanes geothermal system is the only seawater-dominated, basalt-hosted geothermal energy production site in the world.Thirty-seven geothermal wells have been drilled into the field to a maximum depth of 3028 m, drawing on a high-temperature reservoir fluid of modified seawater, which is directly analogous to vent fluid in modern black smoker systems.The surface infrastructure lies less than 40 m above sea level and is surrounded by the Atlantic Ocean on three sides.Based on liquids sampled at the surface and corrected for vapor loss, and the compositions of deep liquid sampled downhole, a fluid of essentially constant composition is considered to be feeding all of the wells at Reykjanes.The chloride concentration is essentially that of seawater; other dissolved components such as K+, Ca2+, SiO2, CO2 and H2S are enriched due to the reaction of heated seawater with the surrounding basalt.SO42- and Mg are depleted relative to seawater.A pH of ~5.3 was calculated for the Reykjanes reservoir fluid by 
Hannington et al., and is a unit or more higher compared to typical black smoker MOR vent fluids with a pH at 25 °C of 3–4.Kadko et al. estimated that the crustal residence time of circulating hydrothermal fluids sampled from well RN-12 is less than 5 years, comparable to hydrothermal fluid residence times obtained from submarine hydrothermal systems.Surface meteoric waters locally penetrate the upper 30 m of the hydrothermal upflow.However, 87Sr/86Sr values shift significantly towards seawater values with increasing depth, confirming deep penetration of seawater into the Reykjanes system.There is no strong evidence for direct magmatic input in the hydrothermal fluids.Sulfur isotope values of sulfide mineral scales in surface pipes range from 2.3‰ to 4.2‰, similar to black smoker sulfide deposits.At the time of sampling in this study, the reservoir fluid was boiling in every well from the surface to maximum depths of ~1400 m, below which the system is liquid-dominated to a minimum depth of 2.5 km.The highest temperature directly recorded during production is 320 °C in the 2054 m deep well RN-10.At this temperature, the depth level of first boiling for seawater is ~1400 m depth below the water table.In the RN-10 upflow zone, fluid below about 1400 m depth will be liquid only, above this depth it will be two-phase.In the two-phase zone, temperatures are determined by hydrostatic head and follow the boiling curve.The temperature at well depths of 1–2.5 km is between 280° and 300 °C; however, temperatures up to 350 °C have been recorded in the inclined wells RN-17B and RN-30 at depths of 3077 m and 2869 m.Inflow feed zones occur at various depths from 800 to 1200 m and 1900 to 2300 m, the latter below the boiling zone.Since the introduction of a 100 MWe power plant in 2007, intense drawdown of fluid has resulted in reservoir pressures dropping by as much as 40 bars, and depth of boiling for some wells has deepened.A steam cap is also forming in the central part of the field.Drill cuttings of altered rocks from the wells show a zoned hydrothermal system at depth.Temperatures <200 °C are indicated by smectite-zeolite facies alteration down to 500 m depth; temperatures >200–250 °C are indicated by epidote-zeolite facies alteration to ~1.2 km.Below 1.2 km, epidote-actinolite is dominant, indicating temperatures of 250–350 °C, and below ~2.4–3.0 km amphibolite facies alteration indicates temperatures of >350 °C.However, at very shallow depths, high-temperature alteration phases are not consistent with current temperatures and are interpreted to reflect a higher pressure and temperatures during Pleistocene glaciation.The extent and mineralogy of the alteration generally have many similarities to alteration beneath actively-forming SMS deposits.Producing wells supply a two-phase fluid directly to the separator station, which has two steam separators, and then to two 50 MWe turbines in the power station.Wells are classified according to the pressure of fluids emerging at the wellhead, which generally reflects proximity to the primary upflow, where one bar is equal to 0.1 megapascals.High-pressure wells have wellhead fluid pressures of between 32 and 50 bar, medium-pressure wells between 28 and 32 bar, and low-pressure wells 25–28 bar.The two-phase fluid that emerges at the surface wellhead continues to boil in the surface pipes and as it passes through several control points.These control points are hereafter referred to as the OP and FFCV.The OP and FFCV serve as throttle points for well management, 
to maintain two-phase flow to the separator station at a constant pressure of ~22 bar, and also to help minimize silica supersaturation and precipitation in the surface pipes.Abrupt changes in pressure occur at the OP and FFCV, and so significant scaling can occur at these locations.A small amount of brine from the separator station is either sent to the venthouse to control separator pressure or discharged into the Grey Lagoon and, once cooled, released into the ocean.The steam phase, which is sent to the power station, and then out to a condenser and cooling tower, is mixed with brine and re-injected in distal recharge wells to partially mitigate the drawdown of liquid from the system.The deepest exploration well drilled to date is RN-17 where the bottom hole temperature at 3082 m depth is between 320° and 380 °C.In 2017, the Iceland Deep Drilling Project utilized the existing Reykjanes well RN-15 to drill to 4659 m depth.The project aimed to find supercritical fluids below the current production zone of the Reykjanes geothermal field.IDDP-2 successfully measured temperatures of 426 °C and fluid pressures of 340 bars corresponding to supercritical conditions within permeable layers at the bottom of the well.As the reservoir liquid is drawn up well pipes for extraction, artificially induced pressure and temperature changes cause boiling and precipitation of sulfides directly from the fluid as millimeter to centimeter thick scales in both downhole and surface pipes.Previous work at Reykjanes has shown that the mineralogy of the scales is directly comparable to black smokers at active seafloor hydrothermal vents but with spectacular enrichment of some elements, such as gold.As noted above, the geometry and the conditions of the deep reservoir zone are well known from downhole measurements and from studies of the host rocks and alteration, but a complete study of the mineral scales, including in the deep subsurface, has never been carried out.Sulfide and silica-rich deposits in the geothermal wells are especially abundant where large pressure changes occur and can severely limit the fluid flow and energy production.Artificially-induced pressure changes, including ‘flashing’, during well management cause sulfides to precipitate on the inside of downhole pipes throughout the entire system.Exsolution of gases during boiling destabilizes metal complexes and promotes sulfide deposition, removing a large proportion of the metal load in the hydrothermal fluid.Even in deep wells that extend far below the bottom of the boiling zone, minor sulfide scaling is also present near the bottoms of the wells.Mineral precipitation in the deep wells in the Reykjanes geothermal system is thought to be similar to that in the upflow zones and stockworks of boiling seafloor hydrothermal systems, with the two-phase, boiled fluid emerging at the wellhead being depleted in metals due to sub-surface precipitation.The sub-surface scaling is analogous to stockwork mineralization; scaling in the surface pipes through to low-temperature discharge in the Grey Lagoon represents lower-temperature deposition similar to what may occur near the seafloor.Unboiled reservoir liquids in the Reykjanes system correspond to the deep high-temperature fluids in seafloor hydrothermal systems.Mineral scales in shallow downhole pipes and in upstream surface pipes before the first orifice plate most closely resemble the mineral assemblages in high-temperature sulfide chimneys of seafloor hydrothermal systems.Between the wellhead and the 
first orifice plate, the two-phase fluid pressure is always above ~22 bar, which maintains high temperatures in the surface pipes.Precipitation of high-temperature sulfides occurs on the walls of the upstream pipes and on the 'upstream' side of the orifice plate.Precipitation of Cu- and Cu-Fe-sulfides also occurs directly on the FFCV.Downstream from the FFCV, towards the separator station, sulfide precipitates are similar to lower-temperature seafloor massive sulfides.Closer to the separator station, scales are mainly amorphous silica with minor or trace sulfides.By the time the two-phase fluid reaches the separator station, virtually all metal has been deposited, and the solids precipitated from the brine released to the Grey Lagoon consist mostly of silica.These precipitates are analogous to low-temperature silica-rich deposits formed by diffuse hydrothermal venting in active seafloor hydrothermal systems.The scaling in the surface pipes has been extensively studied by Hardardóttir.In this paper, we focus on the downhole scales and especially the behavior of the trace metals from the top to the bottom of the geothermal system.Mineral scales examined in this study were collected by V. Hardardóttir during periodic well maintenance over an eleven-year period between 2003 and 2014 in cooperation with HS Orka HF, which operates the geothermal installation.Samples were collected from surface pipelines and the power station when the pipes were opened for cleaning and during turbine maintenance breaks.Scales inside the pipes could be accessed, and the orifice plates and fluid flow control valves could be removed.A total of 87 samples were collected between the different wellheads and the Grey Lagoon.Scales were photographed and described before removal, with the flow direction noted.Downhole scales were collected during periodic well workovers, either by rotary cleaning or by removal of the downhole liner following quenching of the well.In 2009, nine samples were collected from well RN-22 between 141 and 669 m depth during an attempt to remove sulfide scales from the casing and liner.In 2013, well RN-22 was quenched by injecting cold water, and the complete liner removed from the well.This provided a unique opportunity to sample scales from the liner to a depth of more than 1.6 km.The majority of scales, however, became dislodged during liner removal and fell to the bottom of the well.Nevertheless, eight samples were taken from scales still fixed to the liner wall from 1051 m to 1088 m and 1636 m to 1646 m.In 2014, 24 samples were collected between the surface and 1832 m depth from well RN-10 during reaming of the well while in discharge.This required a special gland to seal around the drill pipes and divert the flow from the rig.A tricone drill bit removed the scaling from the interior of the pipes, and scales were transported to the surface by two-phase flow as fragmented chips.Scale cuttings in the fluids were collected at specified intervals using a wire sieve at the surface.The depths from which the scales were dislodged and transported to the surface were recorded based on the depth of the drill bit.In 2011, a stainless steel apparatus for a fluid inclusion study was suspended by a wireline at 2700 m depth in well RN-17B for three weeks at reservoir conditions of ~170 bar and 330 °C.Sulfides that precipitated in holes in the outer housing of the apparatus were collected for analysis when it was returned to the surface.Eighty samples of sulfide-bearing scales from representative locations throughout the 
Reykjanes geothermal system were prepared for bulk geochemical analysis.To check for steel fragments from the well liner, a strong magnet was passed through the samples to remove the magnetic component.This component was examined under a binocular microscope, and any remnant steel liner fragments were removed before returning the magnetic mineral component to the bulk sample.No bulk jaw mill crushing was required.Samples were pulverized in a 250-ml capacity agate mill with 5 × 10 mm agate balls.Each sample was pulverized in 5-minute cycles, repeated as necessary up to 5 cycles to achieve a fineness of less than approximately 105 µm.Between samples the mill was cleaned with quartz sand in at least three 5-minute cycles.Samples were analyzed for major elements by ICP-OES following a sodium peroxide fusion and total acid digestion at Activation Laboratories, Ancaster, Canada.Trace elements were analyzed by a combination of instrumental neutron activation analysis and ICP-MS following sodium peroxide fusion and total digestion.Where samples contained high Cu, Zn, Pb, Cd, or Au, re-analysis was performed using a four-acid total digestion ICP-OES assay technique and gravimetric fire assay.Fluorine was determined in a subset of samples by an ion-selective electrode technique.The accuracy and precision of the bulk analyses were monitored by repeat analysis of the CCRMP certified reference material CZN-4 and duplicates inserted evenly throughout the batch.The chemical data are summarized in Tables 3 and 4 and the complete data set is reported in Table A1.Data for an additional 35 surface scales, 5 samples from the Grey Lagoon and separator station, and 8 shallow downhole samples previously analyzed by ISOR are also included.Data for the deep sulfide scales in RN-17B are from Hardardóttir et al.Polished thin sections of all scales were prepared for transmitted and reflected light microscopy and mineral analysis.The downhole samples from RN-10 consist of small chips, which were prepared as grain mounts embedded in epoxy and polished for reflected light microscopy.Discrete mineral phases in the sampled scales were analyzed for Fe, Cu, Zn, Pb, Au, Ag, and S on a JEOL-8200 Superprobe equipped with five wavelength-dispersive spectrometers and one energy-dispersive detector at the GEOMAR Helmholtz Center for Ocean Research Kiel in Germany.Copper, Fe, and S were calibrated using a natural chalcopyrite standard, Zn by a synthetic sphalerite standard, Pb by a galena standard, and Au and Ag by an Au60-Ag40 alloy.Magnetite was analyzed for Ti, Fe, Al, Cr, Si, Ni, Mg, Mn using an ilmenite standard for Ti, Fe, and Mn, a chromite standard for Al, Cr, and Mg, an anorthite standard for Si, and a synthetic NiO standard for Ni.Operating conditions were 15 kV accelerating voltage and 50 nA beam current with a 50 s count time.The full data sets are presented in Table B1 and Table C1.X-ray diffraction analysis was carried out on whole-rock powders of all downhole samples from RN-10 using a Philips X-Ray PW 1710 diffractometer and goniometer, equipped with a Co-tube and an automatic divergence slit and monochromator at GEOMAR Helmholtz Center for Ocean Research Kiel, Germany.Operating conditions were 40 kV and 35 mA, a 2-theta scanning angle of 4° to 75°, and a scan rate of 1 s per 0.02° step.The software package MacDiff was used to manually identify mineral phases.High-resolution field-emission SEM imaging at Fibics Incorporated in Ottawa, Canada, was performed on downhole samples from well RN-10: two samples from the boiling zone at 
1098 m and 1099 m, and one from below the boiling zone at 1832 m depth.The instrument used was a Zeiss Sigma HD variable-pressure field-emission scanning-electron microscope, which was operated with an accelerating voltage of 15 kV, a beam current of 7nA, and a working distance of 10 mm.Images were taken using the backscattered electron detector.Downhole and surface scales were sampled from five different zones, each with well-defined conditions: I) below the boiling zone in wells RN-10, RN-22, and RN-17B; II) within the deep part of the boiling zone, from 1504 m to 1085 m in RN-10 and 1064 to 1088 m in RN-22; III) in the central part of the boiling zone from 904 m to the wellhead in RN-10, from 669 m to 141 m in RN-22, and then from the wellhead to the first orifice plate where pressure drops to ~22 bar in each of RN-10, 11, 12, 13B, 14, 14B, 15, 22, 23, 26, and 28; IV) in the surface boiling zone from the first OP and FFCV to ~40 m downstream in each of RN-10, 11, 12, 13, 13B, 14, 14B, 15, 18, 21, 22, 23, 24, 26, and 28; V) in distal surface pipes from the separation station to the Grey Lagoon.These samples represent mineral scales that precipitated directly from unboiled fluids at depth in the Reykjanes geothermal system.Fourteen composite samples were collected from three high-pressure wells beneath the boiling zone: between 1575 m and 1832 m depth in RN-10, between 1636 m and 1646 m depth in RN-22, and at 2700 m depth in RN-17B.Scales in RN-10 are at least 1 mm thick but the exact thickness at each sampling point is unknown due to the rotary drilling method of cleaning the wells and the highly fragmented nature of the scale samples.The deepest scales in RN-22 were no more than 2 mm thick.The most common sulfides below the boiling zone in both RN-10 and RN-22 are dark-red pyramidal wurtzite and dark brown to black sphalerite.They are equally abundant and exhibit a variety of textures, most commonly as lath-shaped crystals up to 3 mm in size and as fine-grained dendrites, often with a preferred alignment or growth direction, often with chalcopyrite.In RN-10, inclusions of chalcopyrite are abundant, often as aligned crystals within dendritic Zn-sulfides.The chalcopyrite is variably associated with bornite and/or covellite, which are complexly intergrown with the host Zn-sulfide at a micron-scale, giving the sphalerite and wurtzite a mottled appearance.The lath-shaped crystals of sphalerite and wurtzite are commonly skeletal and can have high porosity.Sphalerite locally forms ‘hopper’ crystals indicative of supersaturated solutions where sphalerite crystallized rapidly, leaving gaps between crystal domains and small inclusions of chalcopyrite aligned along growth planes.Wurtzite has two distinct habits enabling clear identification in many samples, and was confirmed by XRD.It has strong red internal reflections and contains variable amounts of chalcopyrite and associated minor bornite or covellite.At and below 1680 m in RN-10, some wurtzite has a distinctive bladed ‘sawtooth’ habit, commonly with abundant coarse monomineralic chalcopyrite.Chalcopyrite is present in nearly all scale samples below the boiling zone in RN-10, most commonly as rims on massive sphalerite/wurtzite, as blebs in massive sphalerite/wurtzite, and as coarse, monomineralic blades up to 5 mm in size.Coarse, bladed chalcopyrite increases in abundance downhole, particularly between 1737 m and 1800 m depth in RN-10.Chalcopyrite blebs within, and interstitial to Zn-sulfide dendrites and laths in RN-10 are associated with 
bornite, digenite, and covellite.Unlike RN-10, the deep scales in RN-22 contain very little to no Cu- or Cu-Fe-sulfides, although this may reflect a sampling bias.Chalcopyrite- and rare pyrite-bearing scales, rock fragments, and clay were found at the bottom of the well liner when pulled to surface.Other sulfides are found in trace quantities in Group I scales.Well-crystallized pyrrhotite, minor skeletal galena, and trace pyrite occurs in scales from 2700 m depth in well RN-17B, but no pyrrhotite or galena were seen in scales below the boiling zone in wells RN-10 or RN-22.XRD analysis indicates that isocubanite is present at 1800 m and 1832 m depths in RN-10, and covellite between 1575 and 1800 m.There is a thin coating of fine-grained covellite on many samples that is removed during polishing, indicating that it may be due to late oxidation of the samples.One coarse monomineralic pyrite scale was recovered from 1832 m depth in RN-10.Elsewhere, fine-grained euhedral to subhedral pyrite is only rarely found in Zn-sulfides or chalcopyrite; where seen it is commonly brecciated, overgrown and partially replaced.Coarse, monomineralic magnetite up to 5 mm in size is present at and above 1575 m in RN-10.Below this depth, and in RN-22, magnetite decreases in abundance and only occurs as 5–20 μm disseminations in fine-grained silica.Secondary hematite and goethite occur in samples recovered from 1636 m in RN-22.Free grains of gold were found within sulfide crystals captured on the stainless steel apparatus placed at 2700 m in well RN-17B as part of a downhole experiment.The gold is particularly associated with chalcopyrite and pyrrhotite.Dendrites of native silver are also common in RN-17B.XRD analyses indicate the deep scales from RN-10 and RN-22 also contain minor amounts of amorphous silica, chlorite, epidote, enstatite, diopside, amphibole, and clinozoisite.Smectite is present at 1504 m in RN-10, but is not found at greater depths.Sulfide scales formed above boiling onset were collected in 15 samples from 1085 to 1504 m in RN-10 and 1050 to 1088 m in RN-22.The scales are dominated by sphalerite and wurtzite, similar to the massive, dendritic, and skeletal Zn-sulfides from deeper in the wells.Wurtzite is present in all samples from this group in RN-10, recognized by its dark red-brown color and hemimorphic pyramidal habit.It is distinct from the lighter-colored wurtzite in the Group I scales and increases in abundance up the well above the onset of boiling.Hopper crystals of Zn-sulfide occur between 1245 m and 1138 m in RN-10.Colloform-banded sphalerite plus chalcopyrite was found in one sample recovered from 1099 m depth.The abundance of chalcopyrite increases upwards through the lower boiling zone in well RN-10, with massive coarse chalcopyrite observed in the samples from 1168 m to 1095 m. 
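The Fe contents of these sphalerites and wurtzites are reported below as mol % FeS recalculated from the EPMA weight-percent data. A minimal sketch of that recalculation is given here, assuming Fe and Zn are the only cations on the (Zn,Fe)S metal site (minor Cu and Cd ignored); the example analysis is hypothetical.

```python
# Minimal sketch: recalculating mol % FeS in (Zn,Fe)S from EPMA weight-percent data.
# The example analysis is hypothetical; Fe and Zn are assumed to be the only
# cations on the sphalerite/wurtzite metal site (minor Cu, Cd, etc. ignored).

ATOMIC_MASS = {"Fe": 55.845, "Zn": 65.38}

def mol_percent_fes(wt_pct_fe, wt_pct_zn):
    """mol % FeS = 100 * n(Fe) / (n(Fe) + n(Zn)) for a (Zn,Fe)S solid solution."""
    n_fe = wt_pct_fe / ATOMIC_MASS["Fe"]
    n_zn = wt_pct_zn / ATOMIC_MASS["Zn"]
    return 100.0 * n_fe / (n_fe + n_zn)

# Hypothetical Fe-rich wurtzite analysis: 7.0 wt % Fe, 58.5 wt % Zn.
print(f"{mol_percent_fes(7.0, 58.5):.1f} mol % FeS")  # ~12.3 mol % FeS
```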
Chalcopyrite blebs within, and interstitial to, dendritic Zn-sulfide laths are common and occur with variable bornite, covellite and digenite.Bornite and digenite increase above 1136 m depth, and between 1136 m and 1085 m, they are more abundant than chalcopyrite.Compared to RN-10, Cu-bearing sulfides are relatively rare in RN-22.Electrum is abundant in Group II scales, between 1148 m and 1085 m depth in RN-10, and increases in grain size and abundance up the well from the bottom of the boiling zone.In all wells electrum is common in dendritic sphalerite and occurs with blebby chalcopyrite as <0.5–3 μm grains, typically at the grain boundaries between chalcopyrite and sphalerite/wurtzite and only rarely in wurtzite where chalcopyrite is absent.It is coarsest when associated with massive, coarse chalcopyrite, for example at 1098 m in RN-10.It is rarely associated with bornite or covellite.Electrum is also not present in the ‘hopper’ crystals of Zn-sulfides.Pyrite was observed as fractured crystals within chalcopyrite bands at downhole depths of 1098 m, 1198 m, and 1245 m in RN-10 but is otherwise uncommon in the lower boiling zone.Magnetite is common in Group II scales as isolated blocky grains with or without chalcopyrite inclusions, and locally with distinctive bands similar to Liesegang banding.Amorphous silica and Fe-Mg silicates are present in Group II scales.Other minerals identified by XRD include illite/montmorillonite, kaolinite, hornblende, enstatite, spinel, garnet, nontronite, saponite, and zoisite and are likely found in altered basalt chips.Thirteen samples of scales were collected from wells RN-10 and RN-22 between 904 m and the surface wellhead.Nineteen samples were also collected from the top of the boiling zone between the wellhead and the first throttle point.These samples were from high-pressure wells RN-10, 11, 14, 14B, 22, 23, and 28, a medium-pressure well RN-12, and two low-pressure wells RN-13B and 15.Sphalerite is the dominant sulfide, with minor wurtzite.It has distinctive dendritic and skeletal textures similar to those in Group I and II scales.This is interpreted to reflect early-stage nucleation and very rapid precipitation.Fine-grained sphalerite with high porosity was also observed.Pyrite and pyrrhotite are rare; trace pyrite is present at 660 m depth in RN-10 and at between 618 m and 141 m depths in RN-22.Pyrrhotite was also documented in RN-22 and in surface scales by Hardardóttir et al., but it is not directly associated with either pyrite or magnetite.Magnetite decreases in abundance up the well to ~660 m depth in both in RN-22 and RN-10.Hematite is more common in the shallow scales, particularly above 669 m in RN-22.Electrum is abundant at 660 m in RN-10, where it is associated with chalcopyrite at grain boundaries with sphalerite, as in the lower boiling zone.Fine-grained electrum also occurs in fractures in rare brecciated pyrite at 660 m depth but is not associated with pyrite in any other samples in this group.Scales from RN-22, between 669 m and 141 m, contain discrete <5 μm clusters of native silver intergrown with semi-massive sphalerite and bornite.Closer to the wellhead and first throttle point in the surface pipes, all sulfide minerals in Group III scales become more fine-grained and contain less chalcopyrite.Dendritic and lath-shaped sphalerite are intergrown with chalcopyrite, bornite, and trace digenite, often showing a preferred mineralizing direction.Mineral abundances vary between wells; the high-pressure well RN-23 has a higher abundance 
of well-crystallized sphalerite, chalcopyrite, and electrum grains.RN-12 and RN-14B contain less sphalerite but greater amounts of galena, and silver and secondary minerals filling late cooling fractures.Trace, brecciated pyrite is dispersed in chalcopyrite-rich bands in wellhead scales in RN-10 and up to the first orifice plate in RN-11, similar to downhole scales.Amorphous silica and Fe-Mg silicates are also present in the scales near the wellhead.Other minerals identified by XRD from scales in the upper part of the boiling zone in RN-10 and RN-22 include epidote, clinozoisite, amphibole, diopside, spinel, garnet, wollastonite, micas, and calcite.Similar to Group II scale samples, these minerals are likely sourced from altered basalt.Sixty one samples were collected downstream from the orifice plate in high-pressure wells, medium-pressure wells and low-pressure wells.These scales are dominated by sphalerite and minor wurtzite intergrown with digenite and bornite within fine-grained silica.The Zn-sulfides have textures that are similar to Group I, II, and III scales but are much finer-grained.Chalcopyrite is much less abundant, and bornite and digenite are the most common Cu-bearing sulfides, particularly on the OP and FFCV.Discrete mm-scale bands of the very fine-grained Cu- and Cu-Fe-sulfides are often intergrown with bands of similarly fine-grained Zn-sulfides.Where present, the sphalerite laths commonly have preferred orientations that indicate fluid flow direction, particularly on the FFCV.Abundant fine-grained galena is intergrown with bornite and digenite, particularly in samples from medium- and low-pressure wells.On the OP and FFCV and in samples collected farther downstream in low-pressure pipes, dense scales with abundant galena and chalcopyrite show distinctive cooling fractures that are filled with remobilized minerals and silver, similar to that described by Hardardóttir.Discrete grains of electrum occur with trace chalcopyrite on the FFCV; however, electrum is visible under the microscope only where Au concentrations exceed ~250 ppm.Late fine-grained covellite alters and coats the sulfides at and immediately after the OP in RN-12 and RN-23.Downstream from the OP and FFCV, Cu-bearing sulfide abundance decreases in all well types.The proportions of all sulfides in most wells decrease relative to silica immediately after the OP and FFCV and often show varying flow directions within the same samples.However, the scales from high-pressure well RN-23 still contain significant sphalerite and Cu-bearing sulfides ~32 m downstream.Group V samples are dominated by silica; sulfides are only present in trace amounts.Scales from the separator station include thin, alternating bands of dark grey or black amorphous silica with traces of disseminated sulfides.Scales from the venthouse are thicker and composed of similar < 1 mm to 1 cm layers of dark gray to black amorphous silica with traces of sphalerite alternating with nearly pure white amorphous silica layers.At the point of discharge into the Grey Lagoon, at atmospheric pressure and < 100 °C, amorphous silica is deposited in soft, unconsolidated layers in the pool.In the majority of scales, a natural paragenetic sequence is difficult to determine because mineral precipitation was artificially induced by changes in pressure and temperature during well management.However, some important relationships are evident.The coarsest sulfide grains formed at high pressures and temperatures at depth in the wells.Crystal aggregates of 
coarse-grained scales are also commonly observed as clasts or “chips” embedded in later scales, indicative of local rip-up and re-deposition.Some chips are several mm in length, often with rims of high-temperature minerals such as chalcopyrite, and they appear to have broken from the well linings in the fluid flow and re-deposited.In intact scales, complex banding and intergrowths of chalcopyrite and sphalerite, locally with well-defined sharp boundaries are common.The very fine-grained intergrowths and dendritic texture of Zn-sulfides with Cu- and Cu-Fe-sulfides are likely caused by rapid co-precipitation, commonly with chalcopyrite growing epitaxially on skeletal sphalerite or wurtzite.As the Zn-sulfide fully crystallized, blebs of chalcopyrite were preferentially aligned subparallel to crystal edges or laths of the Zn-sulfide and are typical of high degrees of supersaturation and precipitation conditions far from equilibrium.In many samples recovered from downhole in well RN-10, Cu- and Cu-Fe-bearing sulfides form an outer coating, indicating higher temperatures just prior to sampling.In strongly banded scales from well RN-10, a repeated sequence is observed of a thin, fine-grained silica layer followed by dendritic sphalerite, with or without chalcopyrite, and then a monomineralic, often coarse grained, chalcopyrite band.This suggests repeated cycles of abrupt cooling followed by progressive heating.In the majority of scales containing pyrite, the pyrite is earlier than chalcopyrite and sphalerite.Clasts of pyrite are either coated by fine-grained chalcopyrite and/or sphalerite and wurtzite, or contain these sulfides as later fracture fill.Electrum is associated with all of the primary sulfide phases, whereas native Ag and secondary minerals filling fractures are coeval with or later than galena.Iron contents of sphalerite and wurtzite range from 4.8 to 13.5 mol % FeS.Sphalerite and wurtzite from Group I scales in RN-10 collected at 1668 m and 1832 m depths have the highest Fe contents.Minor amounts of Cu in the sphalerite most likely reflect ultrafine inclusions of chalcopyrite.Sphalerite and wurtzite from Group II and III scales contain an average 6.1 mol % FeS and 7.0 mol % FeS, as well as significant Cu.Chalcopyrite in Group II and III scales from RN-10 was also analyzed.At 660 m, it contains an average of 7.1 wt.% Zn, and less Cu and Fe than stoichiometric chalcopyrite.The compositions likely represent a Cu-Zn-Fe-S intermediate solid-solution close to stoichiometric chalcopyrite, and similar to that reported in surface pipes by Hardardóttir et al.The unusual Cu-rich pyrrhotite in the upper part of the system may also be a product of an intermediate Fe-Cu-S solid solution or a metastable phase.Several analyses of bornite in RN-10, which is very fine-grained and therefore difficult to analyze, indicate a non-stoichiometric, Ag-rich composition, very similar to that reported from RN-22 by Hardardóttir et al.Despite the very high Ag concentrations of the bulk samples, no significant Ag was detected in microprobe analyses of chalcopyrite.Due to the abundance of ultrafine inclusions in the other minerals, high concentrations of certain trace elements were detected in many of the EPMA analyses.Analyses of Group II Zn-sulfides indicated the presence of locally significant Ag, Au, Pb, and Cu.Analyses of pyrite from Group I scales at 1832 m depth also indicated the presence of significant Cu, Zn, Pb, and Au, and in pyrite from Group II scales at 1098 m depth significant Pb and Au.Massive 
magnetite from 660 m in RN-10 contains significant Mn, Si, and Cr. There is an apparent increase in the Ag content of electrum from nearly pure Au at 2700 m depth to more Ag-rich electrum in Groups II and III, to native Ag in Group IV surface pipes. Electrum-bearing samples from RN-10 within the lower boiling zone and below the boiling zone were analyzed by HR-SEM. Fig. 7 shows examples of electrum at grain boundaries between chalcopyrite and sphalerite and within fractured sphalerite. In coarse, clean chalcopyrite, grains of electrum are commonly larger and brighter in backscatter, indicating higher Au and lower Ag. Free grains of gold in association with chalcopyrite and pyrrhotite were found within sulfide crystals from the downhole apparatus recovered from 2700 m in well RN-17B. The major and trace element geochemistry was determined for 129 scale samples from the deep sub-surface through to distal surface pipelines. The downhole variations in the bulk compositions of the scales are illustrated in Figs. 8 and 9. Silica: Bulk SiO2 concentrations are similar in scales from below and at the onset of boiling and increase slightly to the surface wellhead and upstream of the OP/FFCV. Beyond the throttle point, SiO2 concentrations increase to a maximum of 84.0 wt.%. The scales in medium-pressure wells contain more SiO2 than those in high-pressure wells. Group V scales contain on average 65.7 wt.% SiO2. Scales from the venthouse are almost entirely SiO2. Iron: Iron concentrations in all scales are highest downhole at the onset of boiling and decrease to the top of the boiling zone. The variation from the bottoms to the tops of the wells is highly dependent on the well pressure. Iron concentrations are highest in the deep scales of RN-17B and in the lower part of the boiling zone. Group III scales show a decrease in Fe concentrations up to the wellhead. There is a reversal at the OP and FFCV, particularly for medium-pressure wells, where Group IV scales contain up to 12.9 wt.% Fe. Iron concentrations in Group V scales are the lowest; however, there is still significant Fe in scales at the separator station. The majority of the Fe in the scales is contained in Fe-bearing sulfides such as chalcopyrite, but a significant amount of Fe occurs in Fe-silicates from the wall rock, such as amphiboles and pyroxene, in silicates precipitated with the sulfides, such as chlorite, and in clays such as saponite and nontronite. Manganese: Manganese shows relative abundances very similar to those of Fe. It is most abundant downhole and is particularly enriched in RN-22 and RN-17B. Group I scales from below the boiling zone in RN-10 have somewhat lower Mn. Manganese concentrations in scales from the lower boiling zone increase at the onset of boiling but are then highly variable to the top of the boiling zone, including on the OP and FFCV, in all wells. Concentrations in the high-pressure and medium-pressure wells are similar, but they are lower in the low-pressure wells. Manganese concentrations increase in Group IV scales downstream from the control valves, and distal Group V scales have the lowest Mn concentrations. Average concentrations of 0.40 wt.% Mn in scales from the separator station show that there is still significant Mn in the system at the most distal discharge, although absolute concentrations decrease in the Grey Lagoon owing to dilution by silica. Manganese is present in magnetite but is also positively correlated with Al, particularly in low-pressure Group III scales, and is therefore likely contained within Al-bearing minerals such as
clays.Copper: Copper concentrations in the sulfide scales increase significantly at the onset of boiling all the way to the surface and especially at the FFCV.Copper concentrations in Group I scales average 1.50–1.42 wt.%, respectively, in RN-10 and RN-22, but are low in the deepest scales from 2700 m in RN-17B.Scales in the medium-pressure wells and low-pressure wells are particularly enriched in the upper part of the boiling zone compared to the sampled material from the high-pressure wells, probably reflecting the lower temperatures of the low-pressure wells that cause the precipitation of Cu-Fe-sulfides.There is an increase in Cu contents in scales from the high-pressure wells between the wellhead and OP.The highest concentrations of Cu in every well are at the OP and FFCV reaching a maximum of 27.1 wt.% on the FFCV.Copper concentrations decrease downstream, particularly in low-pressure wells, although samples from as much as 32 m downstream locally still contain up to 13.8 wt.% Cu.Distal scales contain significantly less Cu.Zinc: Zinc is the most abundant metal in all scales, especially below the boiling zone, decreasing slightly in Group II scales and then increasing again in the upper boiling zone and surface pipelines.Zinc concentrations in Group I scales reach 51.0 wt.% at 1575 m depth in RN-10 and are somewhat lower in RN-22.However, the minor scales from the deepest part of the system contain much less Zn.In the boiling zone, Zn concentrations average 17.1 wt.% to 39.4 wt.%, with the highest concentrations at the wellhead and on the upstream side of the FFCV.Scales in the high-pressure wells are particularly enriched in Zn in the upper part of the boiling zone compared to scales from the medium- and low-pressure wells.Group IV scales from high-pressure wells are still highly enriched in Zn even ~32 m downstream from the OP, with up to 44.9 wt.% Zn.Distal Group V samples contain significantly less Zn, and no Zn was detected in the silica-rich scales from the venthouse.Lead: Concentrations of Pb are low in the Group I scales from below the boiling zone and are highest in scales deposited by surface boiling.Group I scales contain an average of 0.02 wt.% Pb, slightly higher in RN-22 than in RN-10.The deepest scales from RN-17B contain no detectable Pb.Lead concentrations remain low in the high-temperature boiling zone but sharply increase at the wellhead where scales from medium-pressure wells contain an average of 5.75 wt.% Pb.Scales from the boiling zone of the low-pressure wells contain an average of 9.54 wt.% Pb, whereas scales from the high-pressure wells have an average of only 0.05 wt.%.Lead concentrations decrease sharply downstream of the OP in all well types; ~32 m downstream in RN-23, the scales contain only 0.64 wt.% Pb.Distal Group V scales have an average of 0.29 wt.% Pb; lower than in Group IV scales but significantly higher than in Group I and II scales.Lead is below detection in the silica-rich scales from the venthouse.Sulfur: Bulk sulfide concentrations increase from below the boiling zone, through the lower and upper boiling zones to the surface throttle point.Group I scales contain an average of 17.0 wt.% S; at the onset of boiling, scales in RN-10 contain less sulfide, but Group III scales have high concentrations.The most abundant sulfide is deposited upstream of the FFCV in the high-pressure wells.Scales from low- and medium-pressure wells have comparable bulk sulfide contents.Distal Group V scales contain significantly less sulfide.Gold: Group I scales below 
the boiling zone are all enriched in Au.Gold concentrations increase at the onset of boiling, and in the upper boiling zone Group III scales contain an average of 188.5 ppm Au.The highest Au concentrations downhole are in the high-pressure well RN-22, in which there is an overall increase in Au concentration towards the top of the boiling zone.At the wellhead Au is significantly enriched in both medium-pressure and high-pressure scales, but concentrations exceed 400 ppm Au in scales on the upstream side of the OP.Scales on the FFCV in high-pressure wells are particularly enriched.Group IV scales are still highly enriched in Au, particularly in medium- and low- pressure wells, even ~32 m downstream in RN-23 and in distal Group V scales.Silver: Bulk Ag concentrations in the sulfide scales from all wells closely track Au.Silver concentrations increase from below the boiling zone, through the lower boiling zone and in the surface pipelines.Group I scales from RN-22 contain an average of 122 ppm Ag, samples from RN-10 contain 66 ppm, and the deepest samples from RN-17B contain 24 ppm.Silver is most enriched in scales near the top of the boiling zone, averaging 6612 ppm and up to 1.77 wt.% on the FFCV.After the OP, Ag concentrations decrease in all well types, but even ~32 m from the OP in RN-23, Group IV scales still contain 1290 ppm Ag.Group V scales contain an average of 58 ppm Ag.Scales from the venthouse prior to fluid release to Grey Lagoon contain 10 ppm Ag, and the low-temperature precipitates in the Grey Lagoon contain a maximum of 0.15 ppm Ag.Antimony: Group I scales below the boiling zone in RN-10 contain no Sb, and only trace amounts are found in scales from RN-22 and RN-17B.At the onset of boiling, Sb concentrations increase gradually from the base of the lower boiling zone to the upper boiling zone.The highest concentrations are found in scales from low-pressure wells, similar to Pb.Scales from the boiling zone in the surface pipes have an average concentration of 17.3 ppm Sb, but the concentrations drop sharply in scales after the FFCV and OP and are below detection at ~32 m downstream.Distal Group V precipitates contain only 1 ppm Sb on average.Arsenic: Only minor As is present in scales from below the boiling zone and in the lower boiling zone.Arsenic concentrations are somewhat higher in Group II scales from RN-22 compared to RN-10.The concentrations increase upwards through the boiling zone, similar to Sb, reaching an average of 279 ppm in Group III scales and 3030 ppm on the FFCV in the high-pressure wells.Group IV scales from the high- and medium-pressure wells also contain significant As, but scales from the low-pressure wells, even on the FFCV, contain no detectable As.At ~32 m downstream, scales contain no more than 26 ppm As, and the average concentration in Group V scales is only 6.5 ppm.Mercury: Mercury was not detected in any Group I or Group II scales.However, Hg concentrations reach 34 ppm at 160 m depth in the upper boiling zone.Scales on the downstream side of the OP and FFCV also contain 14–32 ppm Hg.Mercury reaches a maximum of 46 ppm after the OP in the medium-pressure well RN-24 and tracks Cu, Au, and Se in the surface pipes.No Hg was detected in distal Group V scales.Cadmium: Below the boiling zone, Cd concentrations in the Group I scales are generally high, reflecting the abundance of Zn-sulfide.Within the boiling zone, the concentrations are variable, decreasing slightly in Group II scales and then increasing again in the upper boiling zone and surface 
pipelines, tracking Zn. The concentrations in Group I scales average 0.42 wt.%. Concentrations of Cd in scales from the upper boiling zone are similarly high, especially in the high- and medium-pressure wells and immediately upstream of the FFCV and OP. Cadmium concentrations in the surface boiling zone are lower and decrease rapidly right after the OP, but 32 m downstream, Group IV scales still contain up to 5220 ppm. Cadmium decreases in Group V scales but is still significant in scales from the separator station. Tin: The highest concentrations of Sn were found in Group I and II scales within and below the boiling zone. Tin concentrations decrease to the top of the boiling zone and are sporadic in the surface boiling zone. Group I scales from RN-17B and RN-22 contain an average of 17.7 ppm and 13.0 ppm, respectively. Concentrations of Sn from scales in the boiling zone are uniformly ~17 ppm. Tin increases slightly near the top of the boiling zone in high-pressure wells, but it is below detection in medium-pressure wells. No Sn was detected in the surface boiling zone or in distal Group V scales. Tin appears to track the abundance of chalcopyrite in the system, in which it is most likely hosted. Bismuth: Concentrations of Bi are low in all samples. Bismuth is present only in trace quantities in Group I scales from RN-10 and RN-17B, and is not detected in any scales from the lower boiling zone. At the top of the boiling zone, the scales in low-pressure wells contain only 0.4 ppm; somewhat higher concentrations are found in scales from the high-pressure wells on the OP and FFCV. Group V scales from the separator station contain only 2 ppm Bi on average, and Bi was not detected in precipitates from the Grey Lagoon. Cobalt: The highest Co concentrations are in Group I scales from RN-10. Concentrations of Co are high in all scales from below the boiling zone but decrease at the onset of boiling to an average of 170 ppm in Group II scales and 151 ppm in Group III scales. Cobalt concentrations increase again in scales immediately upstream of the wellhead, especially in high-pressure wells. The concentrations are 1–2 orders of magnitude higher in the high-pressure wells compared to the medium- and low-pressure wells. The concentrations drop after the FFCV and OP to an average of 60 ppm in Group IV scales. Thereafter, Co contents decrease rapidly through the surface pipes, and only trace Co is present in Group V scales from the separator station, Grey Lagoon, and venthouse. Cobalt is associated with all sulfide phases but appears to most closely track Fe and Cu abundance downhole. Nickel: High concentrations of Ni are found in scales from below the boiling zone, with the deepest scales at 2700 m depth in RN-17B containing 456 ppm. Nickel concentrations increase to an average of 231 ppm in Group II scales at the onset of boiling and then decrease towards the top of the boiling zone. Scales from the high-pressure wells contain an average of 80 ppm Ni in Group III scales, but no Ni was detected in the Group III scales from the medium-pressure wells, and only minor Ni was found in the low-pressure wells. Scales on the FFCV and at the OP contain only trace Ni, although higher Ni was found in scales further downstream in high-pressure wells. Nickel is below detection in Group V scales. Selenium: High concentrations of Se are found in scales from below the boiling zone, with scales from RN-10 containing an average of 763 ppm, although the scales from 2700 m depth in RN-17B contain only 17.8 ppm Se. Selenium concentrations decrease to an average of
204 ppm in Group II scales at the onset of boiling and then increase again towards the top of the boiling zone.Group III scales from RN-22, between 669 m and 141 m, are significantly enriched in Se, reaching a maximum of 1600 ppm and then decreasing to the surface wellhead.Scales from the high-pressure wells have generally higher concentrations of Se than scales from medium-pressure wells.Selenium concentrations in scales from the surface boiling zone are comparable to those in Group I scales below the boiling zone.Scales on the FFCV average 865 ppm Se in the medium- and low-pressure wells, and then the concentrations drop sharply right after the OP.However, Se concentrations in the high-pressure wells still reach 497 ppm ~32 m downstream.There is also significant Se in scales from the distal parts of the system.No Se was detected in scales from the venthouse.Tellurium: Scales that formed below the boiling zone have high concentrations of Te, with the highest concentrations in scales from RN-10.However, the deepest scale sample from RN-17B contains only trace Te.Low concentrations were found in scales formed at the onset of boiling, and then concentrations increase towards the top of the boiling zone.Scales on the OP in medium-pressure wells contain a maximum of 63 ppm Te.Tellurium was not detected in any Group III or Group IV scales from low-pressure wells.Tellurium concentrations in scales formed by boiling in the surface pipes are 20–30 ppm, and concentrations decrease downstream rapidly away from the OP.Tellurium was not detected in Group V scales from the Grey Lagoon or venthouse; however, a single sample with 67 ppm Te was collected from the separator station.Molybdenum: Group I scales contain an average of 38.5 ppm Mo, with up to 69.9 ppm in scales from 2700 m depth in RN-17B, although Group I scales from RN-10 have low Mo.Molybdenum concentrations increase slightly at the onset of boiling in Group II scales, with average concentrations of 62.8 ppm, and then decreases to the top of the upper boiling zone in both high- and low- pressure scales.Mo was not detected in the upper boiling zone of medium-pressure wells, and only traces of Mo are present ~32 m downstream of the OP.However, Group V scales from the separator station contain an average of 22.3 ppm Mo.Gallium: Group I scales below the boiling zone contain an average of 12 ppm Ga, with 13.9 ppm in RN-17B and 19.6 ppm in RN-22.Gallium concentrations increase in Group II scales from the bottom to the top of the boiling zone and then decrease in the surface boiling zone.Gallium concentrations are generally higher in scales from low-pressure wells.Group V scales are notably enriched in Ga, and samples from the venthouse are most enriched.Samples from the separator station and Grey Lagoon also contain significant Ga.Gallium is not strongly correlated with the sulfide content of the scales and the positive correlation of Ga and Al indicates that Ga is likely hosted in Al-bearing non-sulfide phases such as clays, behavior which is similar to Mn.Vanadium: Group I scales below the boiling zone contain significant amounts of V, notably in scales from RN-22.At the onset of boiling, the concentrations in Group II scales are 98 ppm, on average; they vary in the lower boiling zone but then increase in the upper boiling zone to an average of 208 ppm V. 
Concentrations of V in scales from the surface boiling zone average 35.9 ppm and are higher, on average, in the medium- and low-pressure wells.At 32 m downstream there is still significant V in Group IV scales, but Group V scales from the separator station contain only 21.5 ppm V. Samples from the Grey Lagoon contain an average of 11 ppm, and no V was detected in samples from the venthouse.Chromium: Chromium is most abundant in scales from below the onset of boiling and decreases to the top of the boiling zone.Chromium concentrations increase again in the distal Group V scales.Group I scales contain an average of 435 ppm Cr, with 754 ppm in scales from RN-17B and an average of 371 ppm in scales from RN-22, although the scales from RN-10 contain no detectable Cr.All Group II scales contain an average of 281 ppm Cr, with an average of 699 ppm in scales from RN-22 and 176 ppm in RN-10.Scales in the upper boiling zone also contain high Cr up to the wellhead.Chromium concentrations then increase again downstream in scales from all well types, especially after the OP in low-pressure wells and in Group V scales from the separator station.Samples from the Grey Lagoon contain 95.5 ppm Cr, but no Cr was detected in the scales from the venthouse.Tungsten: Concentrations of W are low in most scales.Group I scales below the boiling zone contain an average of 7.1 ppm W. Group II scales from the onset of boiling contain notable W, averaging 86 ppm.Concentrations in scales from the upper boiling zone average less than 5 ppm, and the concentrations decrease sharply to the wellheads.Limited amounts of W were found in scales from the surface pipes in high- and low- pressure wells, including on the FFCV, and no W was detected in medium- pressure wells.Samples from the venthouse contain up to 8 ppm W, but W is below detection in other Group V scales.Germanium, Tl, In, U, and Th are present in trace quantities in all scales.Samples from the upper boiling zone contain up to 15 ppm Ge and 1.4 ppm Tl.Scales from the surface boiling zone contain up to 1.3 ppm In, 0.7 ppm U, and 0.7 ppm Th.Calcium: Scales from below the boiling zone contain an average of 2.5 wt.% Ca, with the highest concentrations in scales from RN-22.Most of the calcium is likely present in Al-bearing smectites such as saponite, chlorites, actinolite and epidote.Calcium concentrations in the scales from the boiling zone are variable; here, a higher proportion of Ca is likely contained in carbonate alteration minerals.However, higher concentrations are found in the scales on the OP and FFCV, and there is significant Ca in distal Group V precipitates, particularly at the separator station.As carbonates were not identified in these samples, the Ca is most likely present in clays.Barium: Group 1 scales in RN-17B contain 42 ppm Ba; in RN-22 the average concentration is 34 ppm and somewhat lower in RN-10.Similar concentrations of Ba are found in scales from the lower boiling zone, decreasing to the surface wellhead and in the surface pipes.Barium concentrations average 55 ppm in scales on the FFCV and OP but are generally lower in the low-pressure wells.The highest concentrations of Ba are in scales from distal Group V, with a maximum of 75 ppm in scales from the separator station.Barium is potentially hosted by Ba-bearing minerals such as barite and adularia similar to that observed by Libbey and Williams-Jones.This is supported by a strong positive correlation between Ba and Al in Group I and Group II scales, even though barite or adularia 
concentrations were likely too low to be detected during XRD analysis. Strontium: Concentrations of Sr are highest in Group I and II scales and decrease in Group III scales, where concentrations are uniformly low up to the Group IV scales. Strontium concentrations increase slightly in distal Group V scales, particularly from the venthouse. Strontium is very strongly correlated with Ca and likely substitutes for Ca in minerals such as calcite or in other Ca-bearing silicate phases identified by XRD. Aluminum: Aluminum is most abundant in Group I scales, decreasing only slightly in Group II scales at the onset of boiling. Concentrations decrease further in Group III scales and then drop significantly in Group IV scales. One anomalous sample from the venthouse contains 7.90 wt.% Al2O3. The XRD analyses confirm that the Al2O3 is mainly hosted in aluminosilicate minerals and clays. Magnesium: Similar to Al, Group I scales contain the most Mg, and concentrations decrease only slightly above the onset of boiling. Magnesium contents are lower in Group III scales and decrease significantly across the OP and FFCV. The venthouse sample contains 9.73 wt.% MgO. Magnesium is mostly present in Mg-rich silicate minerals such as enstatite and diopside, and in clay minerals in distal surface scales. Bromine: The sulfide scales in the lower part of the wells contain traces of Br, possibly from evaporated brine. Scales on the OP in the high-pressure wells contain 22 ppm Br on average; distal Group V scales contain 159 ppm, and samples from the Grey Lagoon contain up to 429 ppm, following the brine into the discharge pool. Lithium: Lithium was detected in only three Group I scales and was not detected in Group III scales until relatively shallow depths, between 141 m and 669 m in RN-22. The highest measured Li was 27.2 ppm, in a Group III scale from RN-10. A small cluster of data above detection occurs in Group IV scales on the downstream side of the OP in the high-pressure well RN-22. One sample from the venthouse contained 15.3 ppm. A weak correlation with Br in the same samples suggests that most of the Li is related to evaporated brine. Fluorine: All samples analyzed for fluorine were below the minimum limit of detection. Boron: Boron was not detected in Group I scales, and only one analysis was above detection in Group II scales, from 1168 m depth in RN-10. Boron was detected in Group III scales from RN-22 at 669 m, 449 m, 350 m, and 270 m depths. The highest measured B was 100 ppm in an upstream scale from the high-pressure well RN-10. Two downstream Group IV scales from RN-10 contained 40 ppm B; all other scales were below detection. Boron was above the detection limit, at 30 ppm, in only one Group V sample from the Grey Lagoon. Likely owing to the scarcity of samples above the detection limit, B shows no obvious correlation with any other element. Rare Earth Elements: Total REE concentrations are highest in Group I and Group II scales, decreasing in Group III. The Group IV scales are slightly enriched in HREE, but all values are low and close to detection limits. Group V scales are depleted in REE compared to all other samples; all but La, Ce, Nd, and Dy are below detection. Fig. 11 shows REE profiles for each type of scale. Overall REE concentrations are extremely low, and the profiles are irregular because many elements are close to the detection limit; however, the weak negative Ce anomaly and positive Eu anomaly in some of the scales are similar to chimney samples from seafloor hydrothermal systems.
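For readers who wish to reproduce this type of plot, the short Python sketch below illustrates how chondrite normalization and the Ce and Eu anomalies can be computed. It is only a sketch: the chondrite values are approximate CI-chondrite concentrations (e.g., McDonough and Sun, 1995), and the example analysis and column names are hypothetical rather than data from this study.

```python
import numpy as np
import pandas as pd

# Approximate CI-chondrite values in ppm (e.g., McDonough and Sun, 1995);
# values are illustrative and should be replaced by the reference set actually used.
CHONDRITE = {"La": 0.237, "Ce": 0.613, "Pr": 0.0928, "Nd": 0.457,
             "Sm": 0.148, "Eu": 0.0563, "Gd": 0.199, "Dy": 0.246}

def normalize_ree(df: pd.DataFrame) -> pd.DataFrame:
    """Divide measured REE concentrations (ppm) by chondrite values."""
    return df[list(CHONDRITE)].div(pd.Series(CHONDRITE))

def anomalies(norm: pd.DataFrame) -> pd.DataFrame:
    """Ce and Eu anomalies from the chondrite-normalized profile."""
    ce = norm["Ce"] / np.sqrt(norm["La"] * norm["Pr"])
    eu = norm["Eu"] / np.sqrt(norm["Sm"] * norm["Gd"])
    return pd.DataFrame({"Ce/Ce*": ce, "Eu/Eu*": eu})

# Hypothetical scale analysis (ppm) with a weak negative Ce and positive Eu anomaly
scales = pd.DataFrame([{"La": 0.21, "Ce": 0.35, "Pr": 0.05, "Nd": 0.22,
                        "Sm": 0.06, "Eu": 0.04, "Gd": 0.07, "Dy": 0.08}])
print(anomalies(normalize_ree(scales)))  # Ce/Ce* < 1, Eu/Eu* > 1
```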
Fowler et al. showed that REE concentrations in boiled hydrothermal fluids from the Reykjanes geothermal system are indeed lower than those in any high-temperature submarine hydrothermal fluid encountered thus far, and they appear to have a weak negative Ce anomaly and positive Eu anomaly similar to those in the chondrite-normalized bulk geochemical data in this study. Downhole scales from the high-temperature Reykjanes geothermal system have been analyzed for the first time in this study, including samples from below the boiling zone. The scales were collected during routine maintenance over several years. Deep downhole samples from RN-22 have good depth control, determined directly from the retrieved well liner, but do not form a continuous downhole profile. Scales from RN-10 were collected in situ and provide a more continuous profile with depth. The distribution of the major minerals can be calculated from the mineral compositions and the bulk geochemistry. Table 6 lists the abundances for the sulfide fraction only, normalized to zero SiO2 to remove the effects of variable dilution. For this calculation, all Pb was assumed to be in galena and all Zn equally distributed between sphalerite and wurtzite. Iron and Cu were attributed first to chalcopyrite, then to bornite, and then to the other Cu-bearing sulfides until all Cu was consumed. A portion of the remaining Fe and S was then assigned to Zn-sulfides, taking into account the Fe contents from the microprobe analyses. The remaining Fe balance was assigned to pyrite, pyrrhotite, and magnetite based on relative abundances estimated from petrography. The Zn-sulfides account for 60–70 wt.% of the total sulfide from below the boiling zone to the surface. Chalcopyrite accounts for ~20 wt.% in the deep scales and lowermost boiling zone, whereas the sulfides in the upper part of the system contain roughly equal amounts of chalcopyrite and bornite, by weight. Covellite and digenite account for < 5 wt.% of the sulfides in the upper part of the boiling zone, and galena about 6 wt.%.
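A minimal sketch of this type of normative allocation is given below. It uses simplified stoichiometries; the Cu split between chalcopyrite and bornite, the FeS content of the Zn-sulfide, and the example bulk composition are illustrative assumptions rather than the exact scheme used to produce Table 6 (which also distinguishes pyrrhotite and magnetite petrographically).

```python
# Molar masses (g/mol)
M = {"Zn": 65.38, "Fe": 55.85, "Cu": 63.55, "Pb": 207.2, "S": 32.06}

def normative_sulfides(wt, fe_in_zns=0.08, cu_split=(0.7, 0.3)):
    """Simplified normative allocation of bulk metal contents (wt.%, SiO2-free)
    to sulfide minerals, loosely following the scheme described above.

    wt        : dict of bulk concentrations, e.g. {"Zn": 30, "Cu": 5, ...}
    fe_in_zns : mole fraction FeS in (Zn,Fe)S, from microprobe data (assumed)
    cu_split  : fraction of Cu assigned to chalcopyrite vs. bornite (assumed)
    """
    out = {}
    # All Pb to galena (PbS)
    out["galena"] = wt.get("Pb", 0.0) * (M["Pb"] + M["S"]) / M["Pb"]
    # All Zn to (Zn,Fe)S, split equally between sphalerite and wurtzite
    mol_zn = wt.get("Zn", 0.0) / M["Zn"]
    mol_zns = mol_zn / (1.0 - fe_in_zns)  # total moles of (Zn,Fe)S
    m_zns = mol_zns * ((1 - fe_in_zns) * M["Zn"] + fe_in_zns * M["Fe"] + M["S"])
    out["sphalerite"] = out["wurtzite"] = m_zns / 2.0
    # Cu first to chalcopyrite (CuFeS2), the remainder to bornite (Cu5FeS4)
    mol_cu = wt.get("Cu", 0.0) / M["Cu"]
    out["chalcopyrite"] = cu_split[0] * mol_cu * (M["Cu"] + M["Fe"] + 2 * M["S"])
    out["bornite"] = cu_split[1] * mol_cu / 5.0 * (5 * M["Cu"] + M["Fe"] + 4 * M["S"])
    # Any Fe not consumed above is assigned here to pyrite (FeS2) only
    fe_used = (cu_split[0] * mol_cu + cu_split[1] * mol_cu / 5.0
               + fe_in_zns * mol_zns) * M["Fe"]
    fe_left = max(wt.get("Fe", 0.0) - fe_used, 0.0)
    out["pyrite"] = fe_left / M["Fe"] * (M["Fe"] + 2 * M["S"])
    return out  # mineral abundances in wt.% of the SiO2-free scale

# Hypothetical SiO2-free bulk composition (wt.%)
print(normative_sulfides({"Zn": 30.0, "Cu": 5.0, "Fe": 8.0, "Pb": 0.5}))
```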
The compositions of the scales can be divided into two main groups: those enriched in elements deposited at high pressures and temperatures at depth in the wells, and those enriched in elements deposited at lower temperatures and pressures in the surface pipes. Consistent enrichments and depletions of the trace elements, according to temperature-dependent solubilities, are similar to those observed in seafloor chimneys and in ancient volcanogenic massive sulfide deposits. Scales formed at the highest temperatures below the boiling zone are particularly enriched in Fe, Mn, Co, and Ni, as well as Zn, Cd, and Sn. The latter are also highly enriched in scales throughout the boiling zone and on the FFCV and OP. Copper, Se, and Te are mostly enriched in scales from the lower part of the boiling zone, but also where flashing has been induced in the surface pipes at the FFCV. Lead, Ag, and Sb are mostly enriched in the upper part of the boiling zone and immediately downstream of the FFCV. Arsenic and Hg are enriched in scales even farther downstream. Other trace elements, such as Mo, Ga, V, and Cr, are enriched in the deepest scales and also in silica-rich precipitates in the surface pipes, but are not present in abundance in the boiling zone. This behavior presumably reflects different aqueous complexing of these elements at different temperatures and precipitation in different mineral phases in different parts of the system. Scales at depth and in the high-pressure wells have low silica contents compared to the surface pipes, owing to the high temperatures. Silica deposition occurs mainly in the surface boiling zone and especially where the fluids have cooled conductively in the surface pipelines. The presence of amorphous silica even prior to boiling is consistent with the higher pH of the Reykjanes fluids compared to MOR vents. Brecciated samples indicate the possibility of clastic transport in the wells; however, the consistent downhole enrichments and depletions argue strongly against random contamination of the scales by remobilized material, or electrostatic scavenging of metals by the reducing steel liner and casing, although we have only analyzed scale material that is uncontaminated by well liners. For almost all samples, contamination from fragments of the steel pipes and drilling equipment can also be ruled out. Fowler and Zierenberg suggested that drill cuttings in well RN-17B were contaminated by Cu, Ni, Ta, and Nb from drilling equipment. In the absence of magnetite or other spinels in drill cuttings, they also attributed high Cr concentrations to Cr-rich alloys commonly used in drill bits, stabilizers, and drill collars. However, we found no consistent correlation between Cr, Cu, Ni, Ta, or Nb in the analyzed scales. One likely reason is that most of the scales in this study were removed from the pipes without cutting, whereas the wall rock samples described by Fowler and Zierenberg were obtained by drilling. An exception may be the high Cr in some scale samples from RN-10, which were removed by a rotary drilling method used to clean the well. However, metal geothermal pipe shards present in some of our samples were removed during sample preparation using a strong magnet, and there are significant amounts of naturally occurring magnetite, minor rutile, and trace spinel in the samples that can account for all or most of the Cr and Ti. Niobium and tantalum were not detected where Cr is high, ruling out the contamination suggested by Fowler and Zierenberg, and Cr is positively correlated with other elements that show a strong temperature dependence in the wells. Several samples that do contain elevated W, Nb, and Ta do appear to have been contaminated by pipe shards. These elements were removed from the dataset for this sample and for four other samples from RN-10. Importantly, scales from RN-17B were not collected from the linings of the wells but rather are scales precipitated directly from the geothermal fluid onto the housing of a downhole experiment, and they were then carefully removed in the laboratory. These scales contain magnetite and high Ni, Mo, Mn, Sn, W, and Cr, but low Ti and no Nb or Ta. Inter-element plots of selected metals and metalloids are shown in Fig. 14, and the Pearson correlation coefficients for element pairs are listed in Table 7. All element abundances are strongly influenced by dilution of the sulfide component by SiO2; therefore, the data have been normalized to zero SiO2 to eliminate this effect.
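The sketch below illustrates this dilution correction and the calculation of Pearson coefficients. It is only illustrative: the bulk analyses shown are hypothetical, and the published values in Table 7 were calculated from the full dataset rather than from this simplified example.

```python
import pandas as pd

def sio2_free(df: pd.DataFrame) -> pd.DataFrame:
    """Recast element concentrations to a zero-SiO2 (silica-free) basis so
    that inter-element correlations are not driven by variable silica dilution."""
    factor = 100.0 / (100.0 - df["SiO2"])          # SiO2 in wt.%
    return df.drop(columns="SiO2").mul(factor, axis=0)

# Hypothetical bulk analyses (wt.% for SiO2/Zn/Cu/Fe, ppm for Ag)
scales = pd.DataFrame({
    "SiO2": [5.0, 20.0, 60.0, 80.0],
    "Zn":   [45.0, 30.0, 12.0, 3.0],
    "Cu":   [2.0, 10.0, 4.0, 0.5],
    "Fe":   [6.0, 3.0, 2.0, 1.0],
    "Ag":   [120.0, 4000.0, 900.0, 50.0],
})

corr = sio2_free(scales).corr(method="pearson")    # element-pair coefficients
print(corr.round(2))
```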
Zinc concentrations are strongly correlated with elements commonly incorporated in Zn-sulfides, as observed in SMS deposits. The Zn:Cd ratio of ~100:1 is consistent with a uniformly high Cd concentration in sphalerite of 0.61 wt.%. Selenium has a robust positive correlation with Zn in the majority of scales, reflecting primary incorporation into sphalerite. Positive correlations of Cu, Zn, and Se, and the prevalence of Zn-sulfide mineralization at high temperature, have previously been hypothesized to reflect, in part, the relatively high pH of these higher-temperature vent fluids compared to most MOR vent fluids. Selenium is strongly correlated with Fe and Cd in downhole Group I and II scales, reflecting deposition in high-temperature Fe-rich sphalerite and wurtzite. Selenium is also positively correlated with Cu and Hg in Group IV scales, and this may reflect the similar behavior of Se, Cu, and Hg as minor volatile species in the hydrothermal fluids. A weak positive correlation of Se with Pb in low-pressure Group IV scales likely reflects Se incorporation into Se-bearing galena or trace clausthalite in surface pipe scaling. Copper does not correlate with Fe, except in samples where only one Cu-sulfide dominates, as in the bornite-rich scales. Copper correlates well with Zn in scales from low-pressure wells and from the upper boiling zones, but it does not correlate with elements such as Te or Co, as it does in many SMS deposits. Instead, in many scales, Cu correlates most strongly with Ag. Cobalt correlates positively with Fe, Zn, and Cd in all scales and is particularly enriched in Group I scales below the boiling zone. Cobalt and Se are positively correlated in higher-temperature scales below the boiling zone and in the lower boiling zone. The unexpected correlation of Co with Cd and Zn may reflect the substitution of Co2+ for Zn2+ in ZnS, especially in high-temperature wurtzite. Gold and Ag, which are enriched in scales throughout the Reykjanes system, are variably correlated with Cu, Cd, Te, Bi, and Pb in some but not all samples. Although no Au-tellurides have been observed, Te is correlated with Au throughout the Reykjanes system. In Group I scales, Au shows strong positive correlations with Cd, Zn, Se, and S and negative correlations with Pb, As, Fe, Mn, and SiO2. In Group II scales, Au is correlated with Cd and Ag. In Group II, III, and IV scales, Au is correlated with Hg. The latter reflects lower temperatures of deposition and is consistent with the strong correlation observed between Ag and Pb in samples from the top of the boiling zone in medium- and low-pressure wells. In these samples, Ag is also positively correlated with both Sb and As, and Ag is likely hosted by Sb-bearing galena in high-pressure wells. Silver is strongly associated with Cu in Group IV scales dominated by bornite. Iron is most strongly correlated with Mn, Ni, Cr, Mo, and Sn. The correlation with Cr, in particular, is consistent with these elements being hosted by both silicate and oxide phases, and both elements are weakly correlated with Ca and Ti. Manganese has a positive correlation with Fe and Mo but is not correlated with Ti. This observation is consistent with Libbey and Williams-Jones, who reported that hematite in the Reykjanes host rock is enriched in Mn relative to sulfide phases. The occurrence of Ni is
enigmatic: even though it is enriched, Ni does not directly correspond to any major metal or sulfide mineral throughout the system and is likely primarily associated with a non-sulfide phase. The non-sulfide phases of the scales dictate the behavior of other elements, such as Ga and V. Gallium and V are positively correlated with each other and with SiO2 and negatively correlated with Cu, Zn, and S, confirming a fundamentally different, non-chalcophile behavior for these elements. In the lower-temperature distal parts of the system, both Ga and V are positively correlated with strongly lithophile elements, the lanthanide REE, and all major oxides except for MnO. Thus, they are most likely present in silicates or clays, although Hardardóttir also noted increased V concentrations in association with maghemite in scales from RN-22. The behavior of Ga and V contrasts with that in seafloor sulfide deposits, where these elements are generally correlated with Cu and Zn. A principal components analysis of the bulk geochemical data on the scales is summarized in Table 8, which lists the loadings on the first six factors, accounting for 60% of the total variance in the dataset. Large positive loadings on Factor 1 for Fe, Ni, Cr, Mo, Sn, Ga, Ca, Ti, and SiO2 reflect the co-enrichment of these elements in the oxide and silicate phases. Negative loadings for Zn, Se, S, Ag, Cu, and Pb reflect their co-enrichment in sulfides both below the boiling zone and at the OP in the upper boiling zone. Positive loadings on Factor 2 for Co, Se, and Cd reflect the association of these elements with the Zn-sulfides in all scales. Positive loadings on Factor 3 for SiO2 and Na2O correspond to low-temperature precipitation in distal fluids from which most of the metals have already been lost. High positive loadings for Mo, As, and Sb on Factor 4 reflect metal deposition immediately downstream of the OP in the surface boiling zone.
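Loadings of this kind can, in principle, be obtained with a standard PCA on the standardized bulk compositions. The sketch below is only illustrative, because the exact pre-treatment applied to the data behind Table 8 (e.g., any log transformation or rotation of the factors) is not reproduced here.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def pca_loadings(df: pd.DataFrame, n_factors: int = 6) -> pd.DataFrame:
    """Return element loadings on the first n principal components of the
    standardized bulk geochemical data (rows = samples, columns = elements)."""
    z = StandardScaler().fit_transform(df.values)      # mean 0, unit variance
    pca = PCA(n_components=n_factors).fit(z)
    # Conventional loadings: eigenvectors scaled by sqrt of explained variance
    loadings = pca.components_.T * np.sqrt(pca.explained_variance_)
    cols = [f"Factor {i + 1}" for i in range(n_factors)]
    print("Total variance explained:", pca.explained_variance_ratio_.sum().round(2))
    return pd.DataFrame(loadings, index=df.columns, columns=cols)

# Usage (hypothetical): pca_loadings(sio2_free(scales), n_factors=6)
```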
The trace metal associations evident in the sulfide scales are similar to those previously reported for the altered and mineralized host rocks sampled in drill cuttings. The Reykjanes system can be viewed as a mostly closed system, as there is limited or no influx of cold seawater reacting with the hot hydrothermal fluid. Scaling is caused by pressure decrease due to boiling during power production. The main ore minerals are precipitated due to abrupt changes in temperature and pH during phase separation, with the acid-generating gases CO2 and H2S partitioning into the vapor phase. Cooling in response to boiling is an effective depositional mechanism for the major metal sulfides. Zinc is precipitated as the hexagonal polymorph wurtzite at higher temperatures and mainly as the cubic polymorph sphalerite at and below ~250 °C. Chalcopyrite precipitation occurs mainly at temperatures between 280 °C and 320 °C, coincident with the onset of boiling in the Reykjanes system. In the high-pressure wells, more of the Cu remains in solution up to the FFCV, where the largest pressure and temperature decrease in the boiling zone occurs, producing scales with the highest Cu concentrations. In many wells, unordered intermediate Cu-Fe-S solid solutions are produced as a result of rapid quenching, and in many cases, the Cu- and Cu-Fe-sulfides appear to have formed by exsolution from an original Cu-Fe-S solid solution. These microscopic intergrowths are particularly characteristic of Group III and IV surface scales. The formation of optically visible exsolution products has been ascribed to coalescing of sub-microscopic domains initially formed during the quenching process. Conversely, galena and other Pb-bearing sulfides are most abundant in medium-pressure wells immediately upstream from the OP, and somewhat further downhole in lower-pressure wells. Abundant galena is first observed in the scales at the wellhead. Because the Au is likely transported as aqueous sulfur complexes, any process that causes a loss of reduced sulfur, such as sulfide precipitation, boiling, or fluid mixing, will destabilize the Au complexes. Boiling is the dominant precipitation mechanism, and the abundance of Au in the wells increases dramatically with the onset of boiling. Whereas Ag-chloride complexes predominate at aquifer temperatures, at boiling temperatures Ag may also be partly transported as aqueous sulfur complexes, like Au. In the deeper scales, Ag is mainly present with Au in electrum, rather than as native Ag, and commonly in association with chalcopyrite. Native Ag becomes increasingly abundant at shallow depths in Group III scales in the upper boiling zone. The native Ag filling fractures in scales from RN-21 may have exsolved from Ag-rich Cu- and Cu-Fe-sulfide phases and precipitated in fractures that formed during cooling. The skeletal habit of the native Ag suggests that precipitation was a response to rapid changes in hydrothermal fluid conditions. The spectacular enrichment of Ag at the top of the boiling zone and the presence of visible native silver must reflect processes related to flash boiling. However, native silver is also common in late fractures in the scales on the OP and FFCV and immediately upstream or downstream. One interpretation is that Ag-bearing Cu- and Zn-sulfides first precipitated in the scales, and the Ag was then remobilized into the cooling fractures. The distribution of the major and trace metals is mainly due to the temperature-dependence of their aqueous complexes. Zinc, Cd, Pb, and Mn are highly soluble as neutral to weakly-charged chloride complexes in high-temperature saline hydrothermal fluids above 300 °C and therefore are not expected in the high-temperature scales. However, fluctuations in pressure during artificial well management may have caused the precipitation of these elements even in the deepest parts of the system. At depths deeper than the boiling zone, Co and Sn, which are transported mainly as chloride complexes in high-temperature, acidic, reduced, saline hydrothermal fluids above 300 °C, are consistently enriched in the Group I scales. Libbey and Williams-Jones also noted increased Co concentrations in drill cuttings of mineralized rocks from below the boiling zone. The concentrations decrease with decreasing temperature towards the top of the boiling zone. Iron, Mn, and Ni are transported to lower temperatures than Co and Sn, reflecting the relative stabilities of their aqueous complexes. Elevated concentrations of Ag, Au, As, Cd, Pb, Sb, and Zn occur at and above the boiling zone, and this was also observed in drill cuttings of altered rocks. In general, higher temperatures result in Cu-rich scales with low Ag but some As, whereas lower temperatures, especially in the low-pressure wells, produce Pb-rich scales enriched in Ag in the surface pipes. Virtually all elements are enriched in Group II scales at the onset of boiling, in response to volatile loss from the liquid, pH change, and the cooling caused by the heat loss required for vaporization. The nearly quantitative deposition of Ag and Cu in all Group IV scales, on the downstream side of the OP and FFCV, may reflect the behavior of their aqueous sulfur
complexes destabilized by rapid changes in pressure and temperature, compared to chloride complexes of these metals, which may be dominant in the deeper liquids. Abrupt pressure and temperature decreases at the OP, to ~22 bar and 220 °C, also promote significant deposition of Bi, Se, As, and Sb at the transition from Group III scales in the upper boiling zone to Group IV scales in the surface pipes. Arsenic, Sb, and Bi, which are all likely transported as neutral hydroxide complexes, are only deposited on the OP and FFCV. After the wellhead, a high proportion of the fluid in the surface pipes is steam. Hardardóttir described distinctive blue Cu-rich scales in the upper half of upstream high-pressure pipes, which were interpreted as deposition from the vapor phase. This is where the steam phase and any volatile metals would be expected to accumulate. Samples of this type of scale from as much as 32 m downstream still contain up to 13.8 wt.% Cu. The strong association of Se and Hg with Cu in these scales supports the inference that all three were in the vapor phase in these pipes, in contrast to downhole scales, where Se is negatively correlated with Hg. These findings are consistent with the suggestion that Cu can partition into the vapor during phase separation as HS−-bearing complexes in the presence of significant concentrations of sulfur. High concentrations of Se and Te in the higher-temperature sulfides in Group I scales are consistent with their behavior in seafloor sulfide deposits. Selenium enrichment to hundreds of ppm throughout the system, particularly in the high-pressure wells, reflects the strong temperature- and pH-dependence of the dissociation of H2Se, resulting in near-complete removal of Se from the fluid phase above 300 °C. Selenium is also enriched in distal Group IV and V scales, where it is positively correlated with Hg and Cu, as noted above. Tellurium, which closely tracks Se in the deep scales of high-pressure wells, likely reflects the similar behavior of H2Te and H2Se. The single sample with 67 ppm Te collected from the separator station could also reflect the volatility of Te. Several elements, such as Mo and W, which are found both in the deep high-temperature scales and in the most distal pipes, are transported as oxyanions and hydroxide complexes. Their unusual bimodal distribution is controlled by higher-temperature Cu-rich sulfides at depth and by oxides and silicates at the surface. Manganese, which is highly mobile, remains in solution and is deposited in the silica-rich scales at the end of the pipelines. Gallium is transported by neutral to weakly-charged hydroxide complexes (Wood and Samson, 2006), and therefore its behavior is most similar to that of elements such as Mo. Similarly, there appears to be a strong redox control on the precipitation of V. High Ba in the separator station may reflect partitioning of the Ba into the chloride-rich brine during phase separation. The behavior of the major and trace elements in the sulfide scales of the Reykjanes geothermal system has many similarities to other basalt-hosted seafloor hydrothermal systems, although there is a range of trace metal distributions in MOR vent fluids and deposits, reflecting highly variable reaction zone temperature, pressure, rock composition, and the possible influence of sediment or organic material. Conversely, the scales are pyrite-poor, Zn- and Si-rich, and contain far more magnetite than most mid-ocean ridge systems. The TAG massive sulfide deposit at 26°N on the Mid-Atlantic Ridge is a possible analog. The TAG deposit is
the only sediment-free basalt-hosted active, high-temperature seafloor hydrothermal complex that has been drilled from the seafloor to the bottom of its stockwork zone.However, unlike the Reykjanes geothermal system, the TAG upflow zone is completely open to seawater and therefore has been affected by mixing, and the high pressures at 3400–3600 m water depth prevent extensive boiling of the hydrothermal fluids in the sub-seafloor.The compositions of the deepest scales at Reykjanes are similar to the highest temperature sub-seafloor stockwork mineralization at TAG, which is enriched in Cu, Co, Se, Bi, Sn, and Ni compared to the seafloor sulfides.The behavior of Se, however, which is locally enriched in the surface pipes, contrasts with typical seafloor hydrothermal systems, in which Se is only found in the highest-temperature sulfide assemblages.An atypical lack of correlation in Reykjanes scales versus typical seafloor hydrothermal systems is also observed for Zn and Ag due to the high pH of the Reykjanes reservoir fluids.In many/most MOR systems, wurtzite and sphalerite are saturated at significantly higher temperatures at a higher fluid pH. Since Ag is dominantly associated with lower temperature portions of deposits, under these conditions of high temperatures and an elevated fluid pH, there is a lack of correlation between Ag and Zn in the Reykjanes system.The abundance of Zn-sulfides at depth, compared to Cu--sulfides, mainly reflects the bulk compositions of the deep fluids, which have a high Zn/Cu ratio compared to black smoker vents.Similarly high Zn/Cu ratios have been observed in some MOR and back-arc systems with lower reaction zone temperatures than EPR systems.The influence of a higher pH is also reflected by precipitation of Zn-sulfide minerals either first or together with Cu--sulfides in the deep scales.If the pH is high, under these conditions sphalerite is precipitated at a much higher temperature than normally expected for MOR vents.Lower-temperature parts of the seafloor mound at TAG are enriched in Zn, Cd, In, Pb, Ag, Sb, and Tl, similar to the lower-temperature scales in the surface pipes at Reykjanes.For the most part, Mn behaves conservatively in the TAG deposit, similar to its behavior in the Reykjanes system.Local enrichment of Mn in the deepest scales at Reykjanes likely reflects processes similar to the Mn enrichment in non-sulfide phases in the alteration zones of some seafloor hydrothermal systems.Silica deposition at Reykjanes mainly is a consequence of conductive cooling of the brine phase in the surface pipelines, a process that is similar to the modeled deposition of silica by conductive cooling in the TAG deposit.A key difference between TAG and Reykjanes as noted above is the lack of seawater entrainment in the Reykjanes system.The higher pH at Reykjanes however, likely explains the elevated abundances of amorphous silica in scales compared to most mid-ocean ridge systems.Fluids in the TAG hydrothermal systems have only boiled at very high temperatures, owing to the pressure of the overlying water column.However, boiling hydrothermal vents have now been documented widely and at a range of water depths comparable to the Reykjanes system.Trace metal enrichments in the boiling chimneys are very similar to those documented in the scales at Reykjanes, with different metal associations at high- and lower-temperature boiling.Fig. 
20 compares the average trace element contents of Reykjanes scales to the bulk compositions of analogous high-temperature black smoker chimney samples from elsewhere along the Mid-Atlantic Ridge. The majority of trace elements are enriched in the Reykjanes scales compared to black smokers, in part because of dilution by silica and sulfate in the seafloor vents. However, Fe, Co, Ga, Ge, In, and Mo are relatively depleted, implying a significant difference in fluid or source-rock concentrations. Copper, Se, and Te concentrations are similar. Gold, and the chalcophile elements Ag and Pb, are extremely enriched compared to MAR black smokers. Elements such as U, Tl, and Ba are depleted in the Reykjanes scales compared to seafloor black smokers because of the lack of seawater mixing in the wells. The major differences in the trace metal concentrations between the Reykjanes scales and black smokers reflect the important role of boiling as a depositional mechanism; however, this cannot account for some of the high metal concentrations, as previously observed by Hardardóttir et al. Hannington et al. recently suggested that some metal enrichment may be due to accumulation of the metals in the deep geothermal reservoir prior to discharge into the hydrothermal system – a process that has not yet been observed in active seafloor hydrothermal systems. This is supported by the orders-of-magnitude differences in the ratios of Au, Ag, and Pb to the major elements for Reykjanes scales versus MAR black smokers. Average Ag concentrations in the deep scales are typical of what has been observed in black smokers and in drilled stockworks; by contrast, it seems likely that most of the Au and Pb in seafloor hydrothermal systems are lost to hydrothermal venting. The mass accumulation of metals in the downhole scales of the high-pressure well RN-10 between 2009 and 2014 can be estimated by assuming a uniform thickness of 0.5 cm from the wellhead to 1832 m depth. In the 5 years between cleanings of the well, approximately 24 tonnes of scales had been deposited, consisting of 15.1 t sphalerite, 5.3 t chalcopyrite, 3.2 t bornite, 95 kg galena, and 44 kg electrum. Hardardóttir estimated that ~1 t of sulfide scale per year is deposited in a single high-temperature surface well pipe; combined with the downhole scales, this corresponds to an estimated total mass accumulation rate of ~5.7 t/yr for RN-10. A sulfide accumulation rate of ~5.7 t/yr in RN-10 corresponds to metal fluxes of 1.7 t/yr Zn, 0.3 t/yr Cu, 22.5 kg/yr Pb, 4.1 kg/yr Ag, and 0.5 kg/yr Au.
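The arithmetic behind this order-of-magnitude estimate is outlined below. The liner diameter, scale density, and average grades are assumed values chosen to roughly reproduce the quoted figures; they are not given explicitly above, so the sketch only shows how such an estimate is assembled.

```python
import math

# Assumed inputs for illustration only: the liner diameter and bulk scale
# density are not quoted above and are back-calculated guesses.
LINER_DIAMETER_M = 0.21     # m, inner diameter of the production liner
SCALE_DENSITY    = 4.0      # t/m3, porous sulfide scale
THICKNESS_M      = 0.005    # 0.5 cm uniform scale thickness
DEPTH_M          = 1832.0   # wellhead to deepest sampled downhole scale
YEARS            = 5.0      # interval between well cleanings (2009-2014)
SURFACE_T_PER_YR = 1.0      # ~1 t/yr in the surface pipe (Hardardottir)

# Thin annular shell volume ~ pi * D * t * L
volume_m3 = math.pi * LINER_DIAMETER_M * THICKNESS_M * DEPTH_M
downhole_t = volume_m3 * SCALE_DENSITY                   # ~24 t over 5 years
total_t_per_yr = downhole_t / YEARS + SURFACE_T_PER_YR   # ~5.7-5.8 t/yr for RN-10

# Illustrative average grades, roughly consistent with the fluxes quoted above
grades = {"Zn": 0.30, "Cu": 0.05, "Pb": 0.004, "Ag": 7.2e-4, "Au": 8.8e-5}
fluxes = {m: g * total_t_per_yr for m, g in grades.items()}  # t/yr of each metal

print(f"downhole scale: {downhole_t:.1f} t over {YEARS:.0f} yr")
print(f"total accumulation: {total_t_per_yr:.1f} t/yr")
print({m: round(v, 3) for m, v in fluxes.items()})
print(f"16 wells: ~{16 * total_t_per_yr:.0f} t/yr of sulfide scale")
```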
Hardardóttir et al. calculated metal fluxes for the whole of the Reykjanes system from the metal concentrations in the deep fluids and estimated fluxes of ~47 t/yr Cu, 47 t/yr Zn, ~740 kg/yr Pb, 180 kg/yr Ag, and ~9 kg/yr Au. The data from RN-10 indicate that, relative to the metal contents of the deep fluids feeding the entire geothermal system, less Cu in particular may be accumulating in the RN-10 scales than would be expected. However, across the entire system, accumulation of Zn, Pb, Ag, and particularly Au may be roughly comparable. By comparison to a large SMS deposit, the calculated mass accumulation rates for the TAG active mound are 7.6–15.2 t/yr Cu, 1.5–3.0 t/yr Zn, 38–76 kg/yr Pb, 5.3–10.6 kg/yr Ag, and 0.19–0.38 kg/yr Au, assuming metal contents of 76 kt of Cu, 15 kt of Zn, <380 t Pb, 53.2 t Ag, and 1.9 t of Au accumulated over the estimated 5000–10,000 years during which the mound has been active. Calculated metal fluxes for one high-temperature Reykjanes production well show higher fluxes of Au compared to the entire TAG active mound, comparable fluxes for Zn, Pb, and Ag, and a much lower flux for Cu. Table 9 shows the calculated amount of potential metal accumulation in well RN-10 during the life of the Reykjanes system, if we assume continuous deposition of scales and no limitation on the space that could be filled with sulfides. The mass accumulation and metal flux calculations for only one production well indicate that the Reykjanes system is highly enriched in the trace elements Pb, Au, and Ag compared to the mature TAG mound, similar to that shown in Fig. 20. The calculated mass accumulation rates in Table 9 are also reasonable for mid-ocean ridge seafloor hydrothermal systems. There are currently 16 wells in operation, and assuming that all deposit the same amount of scale, ~91 t of sulfide is deposited in the geothermal wells at Reykjanes every year. Jamieson et al. calculated sulfide mass accumulation rates ranging between 1 and 794 t/yr for selected hydrothermal fields; a rate of ~91 t/yr for the whole Reykjanes system is geologically reasonable and is comparable to estimates for numerous high-temperature basalt-hosted seafloor hydrothermal systems. Direct observations from this study show that at least three quarters of the metal budget of a similar boiling seafloor hydrothermal system may be deposited at depth or in the upflow zone, before ever reaching the surface. A comparison of published data by Hannington et al. on the composition of the deep liquids with the surface discharge shows that Fe, Zn, and Ni are nearly quantitatively precipitated in the sulfides downhole, 70–90% of the Mg is deposited, mainly as clay minerals, but only 30% of the As is deposited. This is consistent with the relative mass accumulations for these metals in downhole pipes versus surface pipes in Table 9. The discovery of a supercritical fluid reservoir at 4.5 km depth at Reykjanes, and the accumulation of Au and potentially Ag and Pb in deep reservoirs, further highlight the potential for metal enrichment and accumulation in the deep parts of the geothermal system. The deposition and residence of metals within the lower oceanic crust are supported by mass balance calculations in other deep oceanic crust profiles, such as Hess Deep, Pito Deep, and Hole 1256D. Metal trapping efficiencies ranging from 4 to 37% were estimated by Patten et al.
in the Troodos ophiolite between lower sheeted dyke sections and associated VMS deposits. Deep trapping processes may be much more common in the subseafloor than previously thought. Indeed, ODP drilling at the sedimented Middle Valley deposit found deep Cu-rich zones with grades between 8.0 and 16.6 wt.% below well-developed stringer zones; these concentrations fall within the range of Cu grades of Reykjanes scales. This paper focuses on observations of geochemical enrichment and depletion within the upflow zone of the Reykjanes geothermal system, a system directly analogous to basalt-hosted seafloor hydrothermal systems. Sulfide-rich scales in the geothermal wells provide a snapshot of metal precipitation from the hydrothermal fluids, in the absence of the significant mixing with seawater that occurs in active seafloor hydrothermal systems. Well-constrained conditions of formation enable a rigorous interpretation of the behavior of trace elements, with Co, Se, Cu, and Sn deposited in the highest-temperature scales; Mo, As, Ni, and Te in high- to intermediate-temperature scales; and Ag, Pb, Sb, Zn, Cd, and Mn in lower-temperature scales. Distal hydrothermal precipitates, downstream of the OP and FFCV and in the separator station, are enriched in Ga, V, Br, and SiO2. The majority of these element associations show many similarities to trace element distributions in sub-seafloor SMS mineralization, as observed in the large, actively-forming TAG deposit. However, the mineralogical and trace element associations also show several differences compared to known SMS sub-seafloor mineralization: i) fluid pH is slightly acidic but high compared to most MOR vent fluids; ii) this high pH may explain the high abundances of Zn-sulfide minerals and amorphous silica in the scales; iii) high Cu, Zn, and Ag concentrations in the sulfide scales likely reflect, in part, a combination of the relatively high Cu, Zn, and Ag concentrations and lower Fe concentrations of the Reykjanes fluids compared to most MOR fluids, but also the efficiency of deposition of the metals and the lack of dilution by minerals precipitated during mixing with seawater; iv) Fe is surprisingly scarce; and v) Cu and Se, which normally demarcate the highest-temperature mineralization, are also present in appreciable quantities in lower-temperature scales. Additionally, and of greater significance, the Reykjanes scales show the significant influence that boiling has on the “subseafloor” deposition of the majority of trace elements. At least three quarters of the Reykjanes metal budget is deposited at depth or in the upflow zones of the boiling system. Deposition of a significant proportion of the metal budget deep in submarine hydrothermal systems has profound implications for metal enrichment and accumulation at depth and supports previous observations in ophiolites. The spectacular concentrations of Au in the scales from all parts of the boiling zone and throughout the surface pipelines reflect the efficiency of gold deposition due to boiling, but also the high concentrations of Au that have accumulated in the reservoir liquids. A calculated mass accumulation rate of ~91 t/yr for the Reykjanes geothermal system is comparable to other large, high-temperature basalt-hosted seafloor hydrothermal systems elsewhere on the Mid-Atlantic Ridge. Estimates of the total metal accumulation over the 20,000 year lifetime of the Reykjanes system indicate significant enrichment of Zn, Pb, Au, and Ag relative to the metal contents of both modern and ancient mafic-dominated seafloor
massive sulfide deposits.
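The accumulation and flux figures quoted above reduce to simple rate arithmetic, and the short Python sketch below reproduces them from the rounded values given in the text (24 t of downhole scale over 5 years, ~1 t/yr in a surface pipe, 16 producing wells, the quoted RN-10 metal fluxes, the ~20,000 yr system lifetime, and the TAG metal inventories spread over 5000–10,000 years of activity). It is an illustrative sketch only, not part of the original study; small differences from the quoted totals (e.g., ~91 vs. ~93 t/yr for the whole field) reflect rounding in the inputs.

```python
# Illustrative re-calculation of the accumulation rates and metal fluxes quoted
# in the text. All inputs are the rounded values given above.

DOWNHOLE_SCALE_T = 24.0        # t of scale removed from RN-10 after 5 years
CLEANING_INTERVAL_YR = 5.0
SURFACE_SCALE_T_PER_YR = 1.0   # Hardardottir's estimate for one surface pipe
WELLS_IN_OPERATION = 16

downhole_rate = DOWNHOLE_SCALE_T / CLEANING_INTERVAL_YR    # ~4.8 t/yr downhole
rn10_total_rate = downhole_rate + SURFACE_SCALE_T_PER_YR   # ~5.7-5.8 t/yr for RN-10
downhole_fraction = downhole_rate / rn10_total_rate        # ~0.83, i.e. "at least three quarters"
field_rate = rn10_total_rate * WELLS_IN_OPERATION          # ~91 t/yr for 16 wells

# RN-10 metal fluxes quoted in the text (kg/yr), and the kind of lifetime totals
# summarised in Table 9 if deposition were continuous over ~20,000 years.
rn10_flux_kg_per_yr = {"Zn": 1700, "Cu": 300, "Pb": 22.5, "Ag": 4.1, "Au": 0.5}
LIFETIME_YR = 20_000
lifetime_totals_t = {m: f * LIFETIME_YR / 1000 for m, f in rn10_flux_kg_per_yr.items()}

# TAG active mound: accumulation rates from the quoted metal inventories spread
# over the estimated 5000-10,000 yr of activity.
tag_inventory_t = {"Cu": 76_000, "Zn": 15_000, "Pb": 380, "Ag": 53.2, "Au": 1.9}
tag_rate_kg_per_yr = {m: (1000 * t / 10_000, 1000 * t / 5_000) for m, t in tag_inventory_t.items()}

print(f"RN-10 total rate: {rn10_total_rate:.1f} t/yr "
      f"({downhole_fraction:.0%} deposited downhole); whole field: ~{field_rate:.0f} t/yr")
for metal, flux in rn10_flux_kg_per_yr.items():
    lo, hi = tag_rate_kg_per_yr[metal]
    print(f"{metal}: RN-10 {flux:g} kg/yr vs TAG {lo:g}-{hi:g} kg/yr; "
          f"20 kyr total ~{lifetime_totals_t[metal]:g} t")
```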
Mineral precipitation in the seawater-dominated Reykjanes geothermal system on the Mid-Atlantic Ridge, Iceland is caused by abrupt, artificially induced pressure and temperature changes as deep high-temperature liquids are drawn from reservoir rocks up through the geothermal wells. Sulfide scales within these wells represent a complete profile of mineral precipitation through a seafloor hydrothermal system, from the deep reservoir to the low-temperature silica-rich surface discharge. Mineral scales have formed under a range of conditions, from high pressures and temperatures at depth (>2 km) to boiling conditions in the upflow zone and at the surface. Consistent trace element enrichments, similar to those in black smoker chimneys, are documented: Cu, Zn, Cd, Co, Te, V, Ni, Mo, W, Sn, Fe and S are enriched at higher pressures and temperatures in the deepest scales, while Zn and Cu, Bi, Pb, Ag, As, Sb, Ga, Hg, Tl, U, and Th are enriched at lower temperatures and pressures nearer to the surface. A number of elements (e.g., Co, Se, Cd, Zn, Cu, and Au) are deposited in both high- and low-pressure scales, but are hosted by distinctly different minerals. Other trace elements, such as Pb, Ag, and Ga, are strongly partitioned into low-temperature minerals, such as galena (Pb, Ag) and clays (Ga). Boiling and destabilization of metal-bearing aqueous complexes are the dominant control on the deposition of most metals (particularly Au). Other metals (e.g., Cu and Se) may also have been transported in the vapor phase. Very large enrichments of Au, Ag and Pb in the scales (e.g., 948 ppm Au, 23,200 ppm Ag, and 18.8 wt.% Pb) versus average concentrations in black smoker chimneys likely reflect that some elements are preferentially deposited in boiling systems. A mass accumulation of 5.7 t/yr of massive sulfide was calculated for one high-temperature production well, equating to metal fluxes of 1.7 t/yr Zn, 0.3 t/yr Cu, 23 kg/yr Pb, 4.1 kg/yr Ag, and 0.5 kg/yr Au. At least three quarters of the major and trace element load is precipitated within the well before reaching the surface. We suggest that a similar proportion of metals may be deposited below the seafloor in submarine hydrothermal systems where significant boiling has occurred. Mass accumulation estimations over the lifetime of the Reykjanes system may indicate significant enrichment of Zn, Pb, Au, and Ag relative to both modern and ancient mafic-dominated seafloor massive sulfide deposits, and highlight the potential for metal enrichment and accumulation in the deep parts of geothermal systems.
74
Innovative technologies to manage aflatoxins in foods and feeds and the profitability of application – A review
Food security is effectively achieved when the food pillars, including food availability, food access, food utilization, and food stability, are at levels that allow all people at all times to have physical and economic access to affordable, safe, and nutritious food to meet the requirements for an active and healthy life. When one of these four pillars weakens, then a society undermines its food security. Factors related to food insecurity and malnutrition not only influence human health and welfare, but also affect social, economic, and political aspects of society. With regards to the previous points, pre- and post-harvest losses due to mycotoxin contamination are documented as one of the driving factors of food insecurity, since these substances occur along most food chains from farm to fork. Among the different types of mycotoxins, aflatoxins are widespread in major food crops such as maize, groundnuts, tree nuts, and dried fruits and spices, as well as milk and meat products. When animal feeds are contaminated with AF-producing fungi, AFs are introduced into the animal-source food chain. AFs are toxic metabolites produced via a polyketide pathway by various species and by unnamed strains of Aspergillus section Flavi, which includes A. flavus, A. parasiticus, A. parvisclerotigenus, A. minisclerotigenes, Strain SBG, and less commonly A. nomius. Normally, A. flavus produces only B-type aflatoxins, whereas the other Aspergillus species produce both B- and G-type aflatoxins. The relative proportions and level of AF contamination depend on the Aspergillus species, growing and storage conditions, and additional factors. For instance, genotype, water or heat stress, soil conditions, moisture deficit, and insect infestations are influential in determining the frequency and severity of contamination. M-type aflatoxins are normally not found on crops; rather, they occur as metabolites in both the meat and milk of animals whose feedstuffs have been contaminated by AF-B1 and AF-B2. Recently, emphasis on the health risks associated with consumption of AFs in food and feedstuffs has increased considerably. As a result, many experimental, clinical, and epidemiological studies have been conducted showing adverse health effects in humans and animals exposed to AF contamination, depending on exposure. High-dose exposure to the contaminant can result in vomiting, abdominal pain, and even possible death, while chronic exposure to small quantities may lead to liver cancer. The International Agency for Research on Cancer has classified both B- and G-type aflatoxins as Group 1 carcinogens, whereas AF-M1 is classified in Group 2B. Furthermore, AFs may contribute to altered and impaired child growth. Together with other mycotoxins, AFs are commonly suspected to play a role in the development of edema in malnourished people as well as in the pathogenesis of kwashiorkor in malnourished children. Moreover, AF contamination negatively impacts crop and animal production, leading not only to natural resource waste, but also to decreased market value that causes significant economic losses. Due to these effects, different countries and some international organizations have established strict regulations in order to control AF contamination in food and feeds and also to prohibit trade of contaminated products. The regulations on “acceptable health risk” usually depend on a country’s level of economic development, the extent of consumption of high-risk crops, and the susceptibility to contamination of the crops to be regulated. Indeed, the
established safe limits of AFs for human consumption range from 4 to 30 μg/kg. The EU has set the strictest standards, which establish that any product for direct human consumption cannot be marketed with a concentration of AF-B1 or total AFs greater than 2 μg/kg and 4 μg/kg, respectively. Likewise, US regulations have specified the maximum acceptable limit for AFs at 20 μg/kg (these limits are applied in the short screening sketch at the end of this review). However, if the EU aflatoxin standard is adopted worldwide, lower-income countries such as those in Asia and Sub-Saharan Africa will face both economic losses and additional costs related to meeting those standards. This situation requires alternative technologies at pre- and post-harvest levels aimed at minimizing contamination of commercial foods and feeds, or at least at ensuring that AF levels remain below safe limits. Implementation of innovative technologies is invaluable to address the challenges related to AFs and their effects. Reduction of AF contamination through knowledge of pre- and post-harvest management is one of the first steps towards an appropriate strategy to improve agricultural productivity in a sustainable way. This has direct positive effects on enhancing the quality and nutritional value of foods and conserving natural resources, as well as on advancing local and international trade by increasing competitiveness. It is important to identify and document available technologies that can effectively control and minimize aflatoxin contamination to sustain healthy living and socioeconomic development. There exists ample literature on tools for AF control and their benefits. Therefore, this review compiles data on innovative pre- and post-harvest technologies that can manage AF contamination in foods. The benefits of these technologies are also discussed in terms of food security, human health, and economic value. Finally, implications for research and management policies addressing AF issues are highlighted. A wide range of AF management options exists in the literature. Depending on the “type” or mode of application, management options have been classified in this review into a pre-harvest stage, specifically biological control, while sorting technology, treatments with electromagnetic radiation, ozone fumigation, chemical control agents, biological control agents, and packaging materials are grouped under the post-harvest stage. Each of these groups of control/management options is discussed in this section. Non-aflatoxin-forming strains of A.
flavus have been used as a biological control for long-term crop protection against AF contamination under field conditions.Cotty stated that when the spore number of nontoxigenic strains in the soil is high, they will compete with other strains, both toxigenic and other atoxigenic, for the infection sites and essential nutrients needed for growth.Moreover, soil inoculation with nontoxigenic strains has a carryover effect, which protects crops from contamination during storage.The ability of fungus to compete with closely related strains depends on several factors such as pH and soil type as well as the availability of nitrogen, carbon, water, and minerals.The International Institute of Tropical Agriculture and the United States Department of Agriculture - Agriculture Research Service together with other partners have been researching in Africa on non-toxigenic biocontrol fungi that act through competitive exclusion strategy.They have successfully developed several country-specific indigenous aflatoxin biocontrol products generically named as Aflasafe™, which can be used on maize and groundnut.This product is an eco-friendly innovative biocontrol technology that utilizes native non-toxigenic strains of A. flavus to naturally out-compete their aflatoxin-producing cousins.Aflasafe™ has been shown to consistently reduce aflatoxin contamination in maize and groundnut by 80–99% during crop development, post-harvest storage, and throughout the value chain in several countries across Africa.Aflasafe products have been registered for commercial use in Kenya, Nigeria, Senegal and Gambia, while products are under development in seven other African nations.Each Aflasafe™ product contains four unique atoxigenic strains of A. flavus widely distributed naturally in the country where it is to be applied.Another study on biological control has been reported by Anjaiah, Thakur, and Koedam who found that inoculation of antagonistic strains of fluorescent Pseudomonas, Bacillus and Trichoderma spp. on peanuts resulted in significant reduction of pre-harvest seed infection by A. flavus.Garcia, Ramos, Sanchis, and Marin also demonstrated that the extract of Equisetum arvense and a mixture 1:1 of Equisetum arvense and Stevia rebaudiana is effective against growth of A. flavus and subsequent production of aflatoxin under pre-harvest conditions.Alaniz Zanon, Chiotta, Giaj-Merlera, Barros, and Chulze also observed 71% reduction in AF contamination in soils and in groundnuts when an AF competitive exclusion strain of A. flavus AFCHG2 was applied to Argentinian groundnuts.Similarly, Weaver et al. showed that non-toxigenic strains of A. flavus mitigated AF contaminations in maize through pre-harvest field application.Furthermore, Accinelli, Abbas, Vicari, and Shier evaluated the efficacy of a bioplastic-based formulation for controlling AFs in maize.The results showed that bio-control granules inoculated with A. 
flavus NRRL 30797 or NRRL 21882 reduced AF contamination by up to 90% in both non-Bt and Bt hybrids. Sorting processes seek to eliminate agricultural products with substandard quality. Normally sorting, especially for grains, can be achieved based on differentiation of physical properties such as colour, size, shape, and density, as well as visible identification of fungal growth in affected crops. By rejecting damaged and discoloured samples, sorting operations reduce the presence of AFs as well as contaminating materials in food and feed. Phillips, Clement, and Park mentioned that floating and density separation could reduce AFs in stored groundnut kernels by up to 95%. In another report, Dickens and Whitaker and Zovico et al. reported that AF-contaminated groundnuts were eliminated by colour sorting processes, while fluorescence sorting was effective in reducing levels of AF contamination in pecan and pistachio nuts. These observations were validated by a recent study, which showed that AF contamination in pistachio nuts is reduced by more than 95% by colour sorting. Nonetheless, such physical methods are often laborious, inefficient, and impractical for in-line measurements. The application of computer-based image processing techniques is one of the most promising methods for large-scale screening of fungal and toxin contamination in food and feed. Grains and other agricultural products contain various nutritional substances that are degraded by fungal growth, which in turn influences the absorbance spectra of the material. For instance, Pearson, Wicklow, Maghirang, Xie, and Dowell reported that scattering and absorbance characteristics are influenced by the presence of A. flavus in the kernel, since fungal development causes the endosperm to become powdery. Berardo et al. also showed that it was possible to quantify fungal infection and metabolites such as mycotoxins produced in maize grain by Fusarium verticillioides using Near Infrared Spectroscopy. Wicklow and Pearson found that NIRS successfully identified kernels contaminated with AFs. Moreover, Fernández-Ibañez, Soldada, Martínez-Fernández, and de la Roza-Delgado highlighted the NIRS technique as a fast and non-destructive tool for detecting mycotoxins such as AF-B1 in maize and barley at a level of 20 ppb. Nevertheless, NIRS only produces an average spectrum, which lacks spatial information on the distribution of the chemical composition within the sample. Hyperspectral imaging is another method that can be employed to monitor both the distribution and composition of mycotoxins in contaminated food samples, especially grains. This method can produce both localized information and a complete NIR spectrum in each pixel. Yao et al. used hyperspectral imaging techniques to estimate AF contamination in maize kernels inoculated with A. flavus spores. Wang et al. also demonstrated the potential of HSI in the Vis/NIR range for quantitative identification and distinction of AFs in inoculated maize kernels. Pearson et al. mentioned that the spectral reflectance ratio 735/1005 nm, which is located in the transition between Vis and NIR, can be analysed to distinguish highly AF-contaminated corn kernels from those contaminated at less than 10 ppb (this band-ratio rule is also illustrated in the sketch at the end of this review). The observation was in agreement with other studies by Del Fiore et al.
and Singh, Jayas, Paliwal, and White.They reported that Aspergillus fungi in maize and wheat were detectable by analysing the HSI in 400–1000 nm or the fusion of HSI and digital images.Another image based sorting technology has been proposed by Özlüoymak, who reported that approximately 98% of the AFs in contaminated figs were successfully detected and separated by a UV light coupled with colour detection system.This method used the viability of bright greenish-yellow fluorescence, which is produced by A. flavus via the oxidative action of peroxidases in living plant tissue as an image screening technique for the classification of AF contaminated crops.Gamma radiation has been considered as an effective tool for preserving and maintaining quality of agricultural and food products.Very high-energy photons generated by a gamma source such as cobalt-60 are used to destroy pathogenic and spoilage microorganisms by causing direct damage to DNA in microbial cells.An additional effect of γ-irradiation is the interaction of energy with water molecules present in substrates or foods, producing free radicals and ions that attack the DNA of microorganisms.However, the efficiency of γ-irradiation depends on many factors, namely the number and type of fungal strain, radiation dose, composition of food, and air humidity.Several studies have reported that γ-irradiation can be performed to decrease AF contamination as exhibited in Table 1.The results on the potential of γ-irradiation for AF mitigation are somewhat conflicting.Some authors reported that AF content could be reduced even with a low-dose γ-irradiation.For example, Mahrous observed that using 5 kGy of γ-irradiation is sufficient to inhibit the growth of A. flavus and production of AF-B1 in soybean seeds over 60 days of storage without any noticeable changes in chemical composition.Similarly, Iqbal et al. mentioned that a dose of 6 kGy reduced total AFs and AF-B1 content by more than 80% in red chilies.However, some claimed that such reductions can be achieved only using high-dose γ-irradiation.Kanapitsas, Batrinou, Aravantinos, and Markaki for instance showed that the γ-irradiation at dose of 10 kGy led to an approximately 65% decrease of the initial AF-B1 accumulation in raisins samples inoculated by A. parasiticus, compared to the non-irradiated sample on the same day.The experiments done on naturally contaminated maize samples by Markov et al. 
also indicated that the irradiation with a 10 kGy dose can be used to reduce the amount of AF-B1 to an acceptable level without compromising animal and human health.Nevertheless, some authors argued that even more than 20 kGy of γ-irradiation is not effective in reducing AFs.The efficacy of γ-irradiation at high doses to decontaminate black and white peppers from AF-B1, AF-B2, AF-G1, and AF-G2 was reported by Jalili et al.They mentioned that a gamma irradiation of 30 kGy in samples at 18% moisture content was not sufficient to completely eradicate AFs.Some reports can be found in literature about the application of ultraviolet irradiation as a non-thermal, economical technology for AF destruction in different food products.Atalla, Hassanein, El-Beih, and Youssef showed that AF-B1 and AF-G1 in wheat grain were completely eliminated after UV short wave and long wave was applied for 30 min, while AF-B2 was decreased by 50 and 74% when exposed to UV short wave and long wave for 120 min, respectively.A study of UV-C irradiation on groundnut, almond, and pistachio was performed by Jubeen, Bhatti, Khan, Hassan, and Shahid.After treatment with UV-C at 265 nm for 15 min, all nut samples showed 100% degradation of AF-G2, while the complete elimination of AF-G1 was observed only in almond and pistachio.The level of AF-B1 was reduced by approximately 97% after UV-C irradiation for 45 min.García-Cela, Marin, Sanchis, Crespo-Sempere, and Ramos showed the potential of UV-A and UV-B irradiation, which can be used to reduce mycotoxin production from A. carbonarius and A. parasiticus in grape and pistachio media.Another non-thermal technology called pulsed light has also been used in AF reduction.Normally, PL generates short, high-intensity flashes of broad-spectrum white light.The synergy between full spectra of ultraviolet, visible, and infrared light destroys both the cell wall and nucleic acid structure of microorganisms present on the surface of either food or packaging materials in a few seconds.Wang et al. investigated the effect of PL treatment of 0.52 J cm−2 pulse−1 on the production of AF-B1 and AF-B2 in rough rice inoculated with A. flavus.Application of PL treatment for 80 s reduced AF-B1 and AF-B2 in rough rice by 75 and 39%, respectively.Additionally, the mutagenic activity of AF-B1 and AF-B2 was completely eliminated by PL treatment, while the toxicity of these two aflatoxins decreased significantly.Dielectric processes of radio frequency and microwave are additional alternative methods for controlling AFs contamination in agricultural products.Vearasilp, Thobunluepop, Thanapornpoonpong, Pawelzik, and von Hörsten used the RF to reduce AF-B1 in Perilla frutescens L. highland oil seed.They revealed that A. niger, A. flavus, and AF-B1 in seeds with an initial moisture content of 18% w.b. were highly inhibited by RF heat treatment at 90 °C for 7 min.For microwave application, 2.45 GHz MW was applied directly to hazelnuts contaminated with A. parasiticus by Basaran and Akhan, who then documented MW effects on post-harvest safety and quality of the product.The results showed that MW treatment for 120 s was able to reduce fungal count of A. 
parasiticus on in-shell hazelnut without any noticeable change in the nutritional and organoleptic properties.Unlike microbial inhibition, MW treatment was not effective to decrease AFs in hazelnuts.Perez-Flores, Moreno-Martinez, and Mendez-Albores tested the effect of MW application during alkaline-cooking of AF contaminated maize.A 36% reduction of AF-B1 and 58% reduction of AF-B2 were observed after the maize was treated at 1650 W power output and 2450 MHz operating frequency for 5.5 min.In addition, the effectiveness of MW heating on the reduction of AF contamination in groundnuts and respective products was evaluated by Mobeen, Aftab, Asif, and Zuzzer.Samples heated with MW up to 92 °C for 5 min resulted in a maximum AF-B1 reduction of 51.1–100%.Ozone, the triatomic form of oxygen, is one of the most powerful disinfectants and sanitizing agents.It has been approved as Generally Recognized as Safe meaning it can be directly applied as an antimicrobial agent in the food industry.Normally, ozone can be produced by several methods such as electrical discharge in oxygen, electrolysis of water, photochemical, and radiochemical.A primary attractive aspect of ozone is that, after reaching its half-life, decomposition products do not represent any hazard for the treated materials.In post-harvest treatment, gaseous and aqueous ozone phases are applied to inactivate bacterial growth, prevent fungal decay, destroy pesticides and chemical residues, control storage pests, and degrade AFs.The mechanisms of ozone to inhibit microbial populations in food occur via the progressive oxidation of vital cellular components.Ozone oxidizes polyunsaturated fatty acids or sulfhydryl group and amino acids of enzymes, peptides, and proteins to shorter molecular fragments.In addition, ozone degrades the cell wall envelope of unsaturated lipids resulting in cell disruption and subsequent leakage of cellular contents.The mechanism of ozone on the degradation of AF-B1 and AF-G1 involves an electrophilic reaction on the C8-C9 double bond of the furan ring causing the formation of ozonide.These compounds are then rearranged into monozonide derivatives such as aldehydes, ketones, acids, and carbon dioxide.Since there is no C8-C9 double bond in the structure, AF-B2 and AF-G2 are more resistant to ozonisation than AF-B1 and AF-G1.Even though the efficiency of ozone as a chemical detoxifier is high, a greater concentration is required to kill fungi or contaminated surfaces, while low concentration of ozone and short fumigation time is generally considered necessary in order to preserve product properties like colour, flavour, aroma, and vitamins.Ozone detoxification has been found by some studies to be useful to reduce AFs in food commodities as summarized in Table 2.Inan et al. observed that ozone treatment degraded AF-B1 in red peppers, while no significant variation in colour quality was found.Zorlugenç et al. investigated the effectiveness of gaseous ozone against microbial flora and AF-B1 content in dried figs. The results exhibited that Escherichia coli, mould, and AF-B1 were inactivated after ozone application.Using groundnut samples, de Alencar, Faroni, Soares Nde, da Silva, and Carvalho demonstrated the efficacy of the fungicidal and detoxifying effects of ozone against total AFs and AF-B1.In their study, ozone could control potential aflatoxin producing species, A. flavus and A. parasiticus, in groundnuts.The concentration of total AFs and AF-B1 was also reduced.A study conducted by Diao et al. 
showed that AF-B1 levels in groundnuts tend to decrease with ozone application, however the ozonolysis efficiency on AF-B1 was not further improved after 60 h. Moreover, in the sub-chronic toxicity experiment, they also found that ozone did not show any toxic effects in male and female rats.Chen et al. treated groundnut samples with ozone and observed that the detoxification rate of AFs increased.In addition, the results demonstrated that ozone application did not influence the contents of polyphenols, resveratrol, acids, and peroxide in treated samples.Luo et al. examined the effect of ozone treatment on the degradation of AF-B1 in maize and found that the toxicity of AF-B1 contaminated maize was diminished by ozone treatment.A number of studies have determined the effect of synthetic and natural food additives on AF reduction in food products.A prime example of this effect is citric acid on AF-B1 and AF-B2 degradation in extruded sorghum.Jalili and Jinap investigated the effect of sodium hydrosulphite and pressure on the reduction of AFs in black pepper.The study reported that the application of 2% Na2S2O4 under high pressure resulted in a greater percentage reduction of AF-B1, AF-B2, AF-G1, and AF-G2, without damage to the outer layer of black pepper.Nevertheless, AF-B2 was found to be the most resistant against the applied treatment.Apart from that, it is evident that respiration from insects increases the temperature and moisture content of grains providing favourable conditions for fungal growth.For this reason, Barra, Etcheverry, and Nesci evaluated the efficacy of 2, 6-di-p-cresol and the entomopathogenic fungus Purpureocillium lilacinum on the accumulation of AF-B1 in stored maize.The results clearly showed that the highest reduction of AF-B1 in stored maize occurred with the combination of BHT and Purpureocillium lilacinum.In addition, the effects of organic acids during soaking process on the reduction of AFs in soybean media were studied by Lee, Her, and Lee.The highest reduction rate of AF-B1 was obtained from tartaric acid followed by citric acid, lactic acid, and succinic acid, respectively.These acid treatments convert AF-B1 to β-keto acid that subsequently transforms to AF-D1, which has less toxicity than that of AF-B1.Zhang, Xiong, Tatsumi, Li, and Liu reported another novel technology that has been applied to inhibit AF contamination called acidic electrolyzed oxidizing water, which is an electrolyte solution prepared using an electrolysis apparatus with an ion-exchange membrane, used to decontaminate AF-B1 from naturally contaminated groundnut samples.The content of AF-B1 in groundnuts decreased about 85% after soaking in the solution.Remarkably, the nutritional content and colour of the groundnuts did not significantly change after treatment.To overcome the development of fungal resistance as well as residual toxicity posed by synthetic additives, the actions of some plant-based preservatives toward AF reduction have been studied in various food products.Hontanaya, Meca, Luciano, Mañes, and Font evaluated the effect of isothiocyanates, generated by enzymatic hydrolysis of glucosinolates, contained in oriental mustard flour.The findings showed that isothiocyanates reduced A. 
parasiticus growth in groundnut samples, whereas the AF-B1, AF-B2, AF-G1, and AF-G2 reduction ranged between 65 and 100%.Similar results were obtained by Saladino et al., who reported the inhibition of AFs by isothiocyanates derived from oriental and yellow mustard flours in piadina contaminated with A. parasiticus.These results can be explained by the electrophilic property of isothiocyanates, which can bind to thiol and amino groups of amino acids, peptides, and proteins, forming conjugates, dithiocarbamate, and thiourea structures leading to enzyme inhibition and subsequently to cell death.However, it is worth noting that ρ-hydroxybenzyl isothiocyanate, which is formed in yellow mustard flour, is less stable than allyl isothiocyanate from oriental mustard.In substitution of common commercial preservatives, Quiles, Manyes, Luciano, Mañes, and Meca also applied active packaging devices containing allyl isothiocyanate to avoid the growth of A. parasiticus and AF production in fresh pizza crust after 30 days.Another study used neem leaves to inhibit the growth of AFs in wheat, maize, and rice during storage for 9 months.Due to fungicidal and anti-aflatoxigenic properties of neem leaves, the application of 20% neem powder fully inhibited all types of aflatoxins synthesis for 4 months in wheat and for 2 months in maize, whereas the inhibition of AF-B2, AF-G1, and AF-G2 was observed for 3 months in rice.Essential oils of different aromatic plants have been also used as food preservatives due to their antimicrobial properties.However, the antibiotic functions of essential oils are not yet clearly understood.Bluma and Etcheverry stated that the anti-aflatoxigenic activity of essential oils may be related to inhibition of ternary steps in AF biosynthesis involving lipid peroxidation and oxygenation.Komala, Ratnavathi, Vijay Kumar, and Das determined the antifungal potential use of eugenol, a compound derived from essential oils, against AF-B1 production in stored sorghum grain.Prakash et al. presented the efficacy of Piper betle L. essential oil against the AF-B1 production in some dried fruits, spices, and areca nut.Kohiyama et al. showed the inhibiting effect of thyme essential oil against fungal development and AF production on A. flavus cultures.Likewise, Salas, Pok, Resnik, Pacin, and Munitz reported the possible utilization of flavanones obtained as by-products from the citrus industry to inhibit the production of AFs from A. flavus.Overall, few studies exist about chemical control of AFs in milk and dairy products.Firmin, Morgavi, Yiannikouris, and Boudra investigated the effect of a modified yeast cell wall extract on the excretion of AF-B1 and AF-M1 in faeces, urine, and milk.They observed that feed supplementation with modified extract cell walls of yeasts reduced the absorption of AF-B1, and decreased the concentration of AF-B1 and AF-M1 in ewe faeces.The results indicated that this organic material could be used to protect ruminants from chronic exposure to AFs present in feeds.Another study by Maki et al. examined the effect of calcium montmorillonite clay in dairy feed on dry matter intake, milk yield, milk composition, vitamin A, riboflavin, and AF-M1.The calcium montmorillonite clay was found to reduce AF-M1 content in milk samples without affecting milk production and nutrition qualities.Similarly, Awuor et al. 
suggested that inclusion in the human diet of calcium silicate 100, a calcium montmorillonite clay, may reduce aflatoxin bioavailability and potentially decrease the risk of aflatoxicosis in aflatoxin-prone areas such as Kenya. These results can be explained by the fact that calcium montmorillonite clay binds tightly to AFs in the gastrointestinal tract, thereby reducing AF bioavailability and distribution to the blood, liver, and other affected organs. Physical and chemical detoxification methods have some disadvantages, such as loss of nutritional value, altered organoleptic properties, and undesirable effects in the product, as well as high equipment costs and practical difficulties that make them infeasible, particularly for lower-income countries. However, biological methods based on competitive exclusion by non-toxigenic fungal strains have been reported as a promising approach for mitigating formation of mycotoxins and preventing their absorption into the human body. Among various microorganisms, lactic acid bacteria, namely Lactobacillus, Bifidobacterium, Propionibacterium, and Lactococcus, are reported to be active in binding AF-B1 and AF-M1. The binding is most likely a surface phenomenon with a significant involvement of lactic acid and other metabolites such as phenolic compounds, hydroxyl fatty acids, hydrogen peroxide, reuterin, and proteinaceous compounds produced by LAB. Ahlberg et al. reported that AF binding seems to be strongly related to several factors such as LAB strain, matrix, temperature, pH, and incubation time. Elsanhoty, Ramadan, El-Gohery, Abol-Ela, and Azeke found that Lactobacillus rhamnosus was the strain best able to bind AF-B1 in contaminated wheat flour during the bread-making process. Similar results were observed in yogurt cultured with 50% Streptococcus thermophilus and Lactobacillus bulgaricus and 50% Lactobacillus plantarum, with the greatest AF-M1 reduction observed at the end of storage. Asurmendi, Pascual, Dalcero, and Barberis mentioned that LAB could inhibit AF-B1 production in brewer’s grains used as raw material for pig feed. More recently, Saladino, Luz, Manyes, Fernández-Franzón, and Meca investigated the effect of LAB against AF development in bread, with the results showing that AF content was reduced by 84–100%, allowing up to 4 days of additional shelf life. Other microorganisms have also been reported to bind or degrade aflatoxins in foods and feeds. Shetty, Hald, and Jespersen tested the AF-B1 binding abilities of Saccharomyces cerevisiae strains in vitro in indigenous fermented foods from Ghana. The results indicated that some strains of Saccharomyces cerevisiae have high AF-B1 binding capacity. These binding properties could be useful for the selection of starter cultures to prevent high AF contamination levels in relevant fermented foods. Topcu, Bulat, Wishah, and Boyaci showed that 20–38% of AF-B1 was eliminated using a probiotic culture of Enterococcus faecium. A study by Fan et al. also reported the protective effect of Bacillus subtilis ANSB060 on meat quality due to its ability to prevent AF residue absorption in the livers of broilers fed with naturally mouldy groundnut meal. Moreover, some bacteria such as Rhodococcus erythropolis, Bacillus sp., Stenotrophomonas maltophilia, Mycobacterium fluoranthenivorans, and Nocardia corynebacterioides have been found to degrade AF-B1. Even so, many Bacillus species are still avoided due to their potential to produce toxic compounds. Farzaneh et al.
recently showed that the non-toxic enzymes produced by Bacillus subtilis strain UTBSP1 can be used to reduce AF-B1 from contaminated substrates.In post-harvest management, packaging materials are frequently considered as the final step of product development in order to extend the preservation of food and feed products.During storage and distribution, food commodities can be affected by a range of environmental conditions, such as temperature and humidity as well as light and oxygen exposure.Overall, these factors have been reported to facilitate various physicochemical changes such as nutritional degradation and browning reactions with the latter causing undesirable colour changes.The interaction of these factors can also elevate the risks of fungal development and subsequent AF contamination.Many smallholder farmers in lower-income countries traditionally store agricultural products such as grains in containers typically made from wood, bamboo, thatch, or mud placed and covered with thatch or metal roofing sheets.Recently, metal or cement bins have been introduced as alternatives to traditional storage methods, but their high costs and difficulties with accessibility make adoption by small-scale farms limited.Hell, Cardwell, Setamou, and Poehling stated that even though polypropylene bags are currently used for grains storage, they are still contaminated by fungal and AFs especially when those reused bags contain A. flavus spores.Several studies have reported the application of Purdue Improved Crop Storage bags to mitigate fungal growth and resulting AF contamination.Williams, Baributsa, and Woloshuk indicated that the PICS bags successfully suppressed the development of A. flavus and resulting AF contamination in maize across the wide range of moisture contents in comparison to non-hermetic containers.These results correspond with Njoroge et al. who mentioned that grains stored in PICS bags absorbed less moisture than grains stored in woven polypropylene bags.This could be a result of PICS bag construction consisting of triple bagging hermetic technology with two inner liners made of high-density polyethylene and an outer layer woven PP.In addition, PICS bags reduced the oxygen influx and limited the escape of carbon dioxide, which can prevent the development of insects in stored grain.In Benin, Ghana, Burkina Faso, and Nigeria, Baoua, Amadou, Ousmane, Baributsa, and Murdock used PICS bags to store locally infested maize.Although 53% of maize had AF levels above 20 ppm, samples from PICS bags tended to have less accumulation than those from woven bags.Sudini et al. also evaluated the efficacy of PICS bags for protecting groundnuts from quality deterioration and aflatoxin contamination caused by A. 
flavus and found that there was less toxin production in PICS bags compared to cloth bags under similar conditions. Many innovative management strategies that can potentially reduce AF contamination in food and feed chains have been identified by this review. These strategies have the potential to mitigate the adverse effects of AF contamination on food security, public health, and economic development. An understanding of these benefits can motivate policy makers and value chain actors to explore effective ways of managing AFs during pre- and post-production processes. The quantity and quality of agricultural products are degraded by the presence of AFs, while the opposite is true when AF contamination is effectively prevented. The use of biocontrol methods, for instance, has been shown to reduce contamination by up to 90%, which potentially avoids complete loss of harvested or stored crops. As mentioned earlier, the use of PICS technology for grain storage can reduce AF contamination due to the controlled environment in the hermetic bags. For subsistence households, such measures can potentially increase the availability of harvested food crops for family consumption. Farmers can even afford to sell their excess produce and use the proceeds to purchase other food ingredients they do not produce themselves. Moreover, applications of innovative control technologies can ensure that products are safer to consume, thereby improving utilization efficiency. By reducing significant losses during storage, the control of AFs can ensure that foodstuffs are available over extended periods of time, thereby ensuring consistent food availability. Effective control of AF contamination therefore has the potential to enhance food availability, food access, food utilization, and food stability. AFs are a serious risk to public health, especially in low-income countries where most people consume relatively large quantities of susceptible crops such as maize or groundnuts. According to the estimation of the US Centers for Disease Control and Prevention, about 4.5 billion people are chronically exposed to mycotoxins. Prolonged exposure to even low levels of AF contamination in crops could lead to liver damage or cancer as well as to immune disorders. In children, stunted growth and kwashiorkor pathogenesis are linked to consumption of AF-contaminated breast milk or direct ingestion of AF-contaminated foods. Controlling AF contamination through the application of effective technologies could potentially avoid such health risks and have significant benefits in a number of ways. First, chronic diseases can be prevented, minimizing pressure on the health facilities of an economy through savings on the cost of medication and treatment. People will have access to good quality food ingredients for healthy living, making an efficient labour force available for the economy. The economic benefits of AF reduction are observed through both domestic and high-value international trade markets. At domestic and regional levels, markets might not reward reduced AF in crops, but avoiding contamination could, in ideal cases, increase the volume of sales, which would lead to elevated incomes as well as greater returns on investment for producers. Farmers who successfully inhibit AF contamination can also benefit from increased income due to greater product acceptance, higher market value, or access to high-value markets. In reality, there are numerous factors that have to be enhanced in order to create premium class products, such as aflatoxin control, consumer
awareness, marketing channels, aflatoxin testing, and stricter enforcement of production and market regulations. When such enabling conditions are met, it has been shown that aflatoxin-conscious markets can pay a premium for aflatoxin-safe products, even in domestic markets in Africa. Moreover, the control of AF contamination could reduce the costs associated with consequent effects on humans, such as medical treatment, primarily of individuals suffering from liver cancer, as well as indirect costs such as pain and suffering, anxiety, and reduction in quality of life associated with exposure to AFs. At the international level, many developed countries have established regulations to limit exposure to AFs. Some countries have different limits depending on the intended use, with the strictest applying to human consumption, exports, and industrial products. Although stringent measures make meeting phytosanitary standards seemingly more expensive, once suppliers internalize the economic costs of compliance, greater economic benefits for society can in reality be achieved. This is due to access to larger and more stable markets, and a lower incidence of disease. Controlling AF contamination in exportable agricultural commodities could maintain or even increase trade volumes and foreign earnings for exporting economies. Furthermore, the savings from such control measures could be channelled or invested in other economic sectors in order to generate additional income and propel growth and development. AFs are a critical problem for food safety in many lower-income countries, where AF formation in key staple crops causes significant post-harvest losses and negative impacts on human life. Currently, several innovative AF control technologies have shown potential to improve health and economic outcomes for farmers and other actors in commodity value chains. However, the efficacy, safety, and quality of these technologies must be verified prior to adoption. The feasibility of using biocontrol products depends not only on safety regulations in each individual country, but also on the accessibility of such biocontrol tools, like Aflasafe™, to smallholder farmers. The ability to develop and maintain biocontrol strains from local resources, particularly in the production of Aflasafe™, is highly cost-effective and facilitates availability. Meanwhile, non-profit governmental or non-governmental organizations can also promote such products, which are particularly suitable for sustainable development. Bandyopadhyay and Cotty have mentioned that application of biocontrol technologies in conjunction with other AF management tools can profitably link farmers to markets, improve human and animal health, and increase food safety. However, biocontrol adoption still requires a flexible system that allows the use of biopesticides, together with favourable policy and institutional support. Furthermore, other techniques have been developed, such as sorting technologies, that offer numerous advantages including rapid, real-time product information via non-destructive measurement, reduction of laborious and destructive analytical methods, continuous monitoring, and integration into existing processing lines for control and automation. However, investment costs are usually the main factor determining whether such technologies are adopted or not. For simplicity, development of cheap and portable diagnostic techniques that are adaptable to different field networks is imperative. In addition, future research should still be conducted in cooperation with final
users to achieve full adoption potential.Despite technological advances, hand sorting may still be more suitable in lower-income countries where access to equipment is limited.The culls from sorting must be disposed in a manner that they do not enter the food chain, particularly of economically vulnerable populations.Still, with regulatory approval, irradiation and ozone fumigation could effectively reduce aflatoxin levels in crops, but these interventions are less applicable due to higher costs and safety concerns.Moreover, naturally infected grains have both internal and external colonization.While external contamination can be decontaminated, ozone cannot penetrate the internal sites of colonization and AF formation.Therefore, large ozone doses for a long time might be required for effective ozone decontamination.The application of chemical and biological control agents has been shown to reduce AF contamination both in animals and humans.Nonetheless, little information is available regarding the effective doses and frequencies as well as costs and efficacies.Generally, individual countries with their own specific cultural context, especially those with higher risks of AF, face public aversion to these technologies.Regarding storage methods, there is evidence that suggests hermetic technologies like PICS triple layer bags could be cost effective against key grain storage pests.They may also provide an improved alternative for insecticide-free, long-term storage of grains with minimal grain damage.However, these PICS bags may not be suitable and affordable for small-scale farmers over very large areas.The technology is also limited to cereals and grain crops.Although there are many initiatives that aim to reduce AF contamination in lower-income countries, the lack of regulation enforcement, or even definition of acceptable limits, does not allow for their full development and implementation.In order to reduce AF contamination, it is necessary to have policies focused on: raising awareness of public health impacts associated with AF contamination to all actors along the entire value chain, including families, farmers, consumers, processors, and traders; estimating the lifespan of each technology and calculating their respective social and economic costs of diminishing the contamination risk at different intervention points; reducing the harmful effects of AFs by implementing the appropriate pre- and post-harvest technologies; investing in infrastructure with such capacity that allows to support further activities both in order to reduce AFs and to monitor contamination levels in different agricultural products; establishing of reliable and effective low-cost testing methods to monitor AF contamination levels in rural areas; and providing the required data and risk management tools for driven policy reforms, which create an effective regulatory environment to ensure domestic food safety in rural and urban areas and also facilitates trade opportunities in the region.Finally, governments need to solve the issue of how agricultural businesses can be enabled to operate profitably while complying with existing standards and limits of AF contamination.This review has focused on different scientific research results regarding AF control in food and feeds at pre- and post-harvest levels.It is clear that high AF levels pose human health risks and also represent a barrier to expand trade in both domestic and international contexts.Overall, it is necessary to tackle existing global food insecurity 
issues by adopting and implementing cutting-edge technologies. Biocontrol technologies, in conjunction with other aflatoxin-management tools such as sorting technologies, storage, irradiation, ozone fumigation, chemical and biological control agents, and improved packaging materials, have the potential to link farmers to markets, enhance international trade, improve the health of people and animals, and increase food safety and security. However, multidisciplinary and comprehensive research is still required to assess the potential benefits of these technologies. Overall, AF control interventions should be considered in order to improve food security, raise public health awareness, increase economic benefits, and reduce related costs for all actors in commodity value chains.
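Two of the quantitative rules cited earlier in this review lend themselves to a brief illustration: screening a measured aflatoxin result against the EU and US limits quoted above, and the Vis/NIR reflectance ratio at 735/1005 nm reported by Pearson et al. for flagging highly contaminated maize kernels. The Python sketch below is illustrative only and is not from the reviewed studies: the function and variable names are ours, the band-ratio cut-off (and the direction of the comparison) is a placeholder that would have to be calibrated for a given instrument and commodity, and real regulations differ by commodity and intended use.

```python
# Illustrative sketches only; apart from the EU/US limits quoted in the review,
# all thresholds and names here are placeholders.

EU_LIMIT_AFB1_UG_KG = 2.0    # EU limit for AF-B1, products for direct human consumption
EU_LIMIT_TOTAL_UG_KG = 4.0   # EU limit for total aflatoxins
US_LIMIT_TOTAL_UG_KG = 20.0  # US maximum acceptable limit for aflatoxins

def screen_sample(afb1_ug_kg: float, total_af_ug_kg: float) -> dict:
    """Return which of the cited market limits a measured sample satisfies."""
    return {
        "EU": afb1_ug_kg <= EU_LIMIT_AFB1_UG_KG and total_af_ug_kg <= EU_LIMIT_TOTAL_UG_KG,
        "US": total_af_ug_kg <= US_LIMIT_TOTAL_UG_KG,
    }

def flag_kernels(reflectance_735: list, reflectance_1005: list, ratio_cutoff: float) -> list:
    """Flag kernels whose 735/1005 nm reflectance ratio exceeds a calibrated cut-off.

    The 735/1005 band ratio follows Pearson et al.; the cut-off value and the
    direction of the comparison are calibration-specific assumptions here.
    """
    return [r735 / r1005 > ratio_cutoff
            for r735, r1005 in zip(reflectance_735, reflectance_1005)]

if __name__ == "__main__":
    # A sample at 3 ug/kg AF-B1 and 10 ug/kg total AFs meets the US limit
    # but fails the EU marketing limits.
    print(screen_sample(afb1_ug_kg=3.0, total_af_ug_kg=10.0))
    # Hypothetical reflectance readings for three kernels.
    print(flag_kernels([0.52, 0.61, 0.47], [0.50, 0.48, 0.49], ratio_cutoff=1.1))
```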
Aflatoxins are mainly produced by certain strains of Aspergillus flavus, which are found in diverse agricultural crops. In many lower-income countries, aflatoxins pose serious public health issues since the occurrence of these toxins can be considerably common and even extreme. Aflatoxins can negatively affect the health of livestock and poultry due to contaminated feeds. Additionally, they significantly limit the development of international trade as a result of strict regulation in high-value markets. Due to their high stability, aflatoxins are not only a problem during cropping, but also during storage, transport, processing, and handling steps. Consequently, innovative evidence-based technologies are urgently required to minimize aflatoxin exposure. Thus far, biological control has been developed as the most innovative potential technology for controlling aflatoxin contamination in crops, which uses competitive exclusion of toxigenic strains by non-toxigenic ones. This technology is commercially applied to groundnuts, maize, cottonseed, and pistachios during pre-harvest stages. Some other effective technologies such as irradiation, ozone fumigation, chemical and biological control agents, and improved packaging materials can also minimize post-harvest aflatoxin contamination in agricultural products. However, integrated adoption of these pre- and post-harvest technologies is still required for sustainable solutions to reduce aflatoxin contamination, which would enhance food security, alleviate malnutrition, and strengthen economic sustainability.
75
Cutaneous Nod2 Expression Regulates the Skin Microbiome and Wound Healing in a Murine Model
Skin is colonized by diverse microorganisms, collectively termed the skin microbiome.Recent methodological advances in high-throughput sequencing have shown the complexity of microorganisms associated with skin and have begun to directly implicate a microbial imbalance, a so-called dysbiosis, in skin health and disease.Our skin is also routinely exposed to potentially pathogenic microorganisms, such as Staphylococcus aureus and Pseudomonas and Enterobacter species, and has therefore evolved a tightly regulated innate immune response to actively manage the interactions with the skin microbiome.After injury, it is essential that the skin repairs itself effectively and rapidly.Exposed subcutaneous tissue provides a perfect niche for adventitious pathogens to override the natural microbiome, colonizing the wound.Skin cells respond to bacterial invasion via cutaneous pattern recognition receptors, including toll-like receptors and the NOD leucine-rich repeat-containing receptors.PRRs recognize and bind to conserved, pathogen-associated molecular patterns, which ultimately leads to induction of proinflammatory cytokines and secretion of antimicrobial peptides.NOD2 is an intracellular receptor that recognizes the muramyl dipeptide motif from bacterial peptidoglycans of both Gram-positive and Gram-negative bacteria.Mutations in the leucine-rich region of the NOD2/CARD15 gene are associated with the pathogenesis of several chronic inflammatory diseases of barrier organs including Crohn’s disease, asthma, and Blau syndrome.Recognition of muramyl dipeptide via NOD2 leads to the activation of the NF-κB pathway, inducing a variety of inflammatory and antibacterial factors.Although a number of studies have highlighted roles for PRRs during cutaneous repair, including members of the toll-like receptor and NOD-like receptor families, the role of PRRs modulating the wound microbiome during repair remains unclear.Although key studies have provided insight into the regulation of the host-microbiome axis, what we now must understand is how cutaneous microorganisms interact with the host and their impact on wound repair.Our previous work showed a previously unreported intrinsic role for murine NOD2 in cutaneous wound healing.NOD2 has also been implicated in the regulation of the gut microbiome.Given the potential importance of host microbiota/skin interactions during tissue repair, we hypothesized a major link between the NOD2 delayed healing phenotype and the role of NOD2 in cutaneous bacteria modulation.Using a NOD2 null murine model, we show fundamental insights into the role of the innate host response in modulating skin bacteria, with direct effects on tissue repair.To investigate the role of the PRR Nod2 in the skin, we used the murine Nod2–/– model.Histologically, the skin of Nod2–/– mice was comparable to that of wild-type mice.However, through density gradient gel electrophoresis, we observed major differences in the Nod2–/– skin microbiome from birth through to adulthood.16S rDNA sequencing data of differentially expressed bands indicated enrichment in Pseudomonas species, and this was confirmed by quantitative real-time PCR, which showed increased relative abundance of Pseudomonas aeruginosa in Nod2–/– skin and a trend toward reduced commensal species such as Staphylococcus epidermidis.P. 
aeruginosa is a Gram-negative opportunistic pathogen.Histological Gram staining of skin sections showed no significant difference in the total number of bacteria visualized in the epidermis or dermis.There was, however, a trend toward increased overall numbers of bacterial cells in the dermis of Nod2–/– skin and a corresponding propensity toward increased abundance of Gram-negative bacteria.We next addressed the potential contribution of altered skin microbiome to the healing delay observed in Nod2–/– mice.Injury increased the total eubacterial abundance in Nod2–/– but not WT mice.Fluorescence in situ hybridization confirmed this increased total eubacterial DNA abundance in Nod2–/– mouse wounds.Despite this increase, the bacterial diversity induced by injury was less pronounced in Nod2–/– than WT mice, which agrees with recent observations from Loesche et al. that wound microbiota stability is associated with delayed healing.Thus, in the absence of Nod2, injury leads to increased relative bacterial abundance, but reduced injury-induced changes in bacterial profile.qPCR showed that specific pathogenic species, such as P. aeruginosa and Propionibacterium acnes, were increased in Nod2–/– mouse wounds, which was confirmed by 16S rDNA sequencing.Opportunistic pathogenic species of Pseudomonas are linked to chronic inflammation and wound infection and are thus clear candidates to confer delayed wound healing.A key component of the antimicrobial host response is the production of AMPs, predominantly members of the defensin family.Studies in Crohn’s disease patients and Nod2-deficient mice showed reduced α-defensin expression in the intestinal mucosa.Although α-defensins are absent in the skin, specific AMPs including β-defensins are strongly induced in response to cutaneous injury.Unwounded skin of newborn Nod2–/– mice had greater expression of both mBD-1 and mBD-14 than matched WT mice.Adult injury-induced changes in defensin expression also differed between genotypes, with mBD-1 significantly up-regulated in WT wounds, whereas Nod2–/– wounds displayed abnormal induction of mBD-3 and mBD-14 in response to injury.IL-22, a known regulator of mBD-14 expression, was strongly increased in Nod2–/– wounds.Finally, we confirmed increased mBD-14 at the protein level in vivo, showing a greater extent of keratinocyte expression and an increased number of mBD-14–positive dermal inflammatory cells in adult wound tissue.Altered expression of AMPs in the absence of Nod2 may contribute to an altered microbial community, but equally it may reflect the host response to changes in the composition of the skin microbial community, overall bacterial burden, or the cutaneous location of microbes in the tissue.In experiments analyzing mice born by cesarean, the data showed that cutaneous defensin expression was similar between WT and Nod2–/– mice, suggesting that defensin profiles change in response to microbial challenge.As Nod2–/– mice had an altered microbiome, an important question was then whether skin dysbiosis was sufficient to alter healing outcome and whether this phenotype was transferable.To address causation and to investigate a potential link between bacterial dysbiosis and healing outcome, we mixed newborn WT and Nod2–/– litters from birth with a Nod2–/– mother.WT mice reared in this mixed environment displayed a clear healing delay with significantly increased wound area and increased local immune cell recruitment.The reverse experiment was performed whereby newborn WT and Nod2–/– litters were 
co-housed with WT mothers, and although there was no rescue of delayed healing in Nod2–/– mice, the WT mice had a variable response, with five mice out of eight having delayed healing but all showing significantly greater inflammation, suggesting that the maternal microbiome contribution mediated a partial rescue effect in the WT mice.Next we analyzed the microbial communities in wound tissue from the co-housing experiment using 16S rRNA Illumina high-throughput sequencing.Nonmetric multidimensional scaling analysis showed statistically significant segregation based on environment, that is, separately housed mice versus co-housed mice.There was also a trend toward reduced alpha diversity in each group when compared with WT, as calculated by the Shannon-Wiener index.When focusing on specific skin microbiota at the phylum level, again using the Shannon-Wiener index, there was a significant change in the diversity of Bacteroidetes species between environments: separately housed WT versus separately housed Nod2–/– and separately housed Nod2–/– versus the co-housed mice.Furthermore, phylum- and genus-level taxonomic classification of the wound microbiome showed a significantly altered microbial community in separately housed versus co-housed mice, including common skin-associated taxa such as Corynebacterium and Brevibacterium.Moreover, the microbial community compositions varied between WT and Nod2–/– mice, including the genera Actinobacillus and Campylobacter.The taxonomic information for all mapped reads at the genus level can be found in the Supplementary Materials online.Finally, to confirm these differences we also performed DGGE, which showed that mixing of pups resulted in a major shift in the skin microbiome of both genotypes, establishing an intermediate skin bacterial profile.qPCR confirmed that mixed WT wounds acquired increased abundance of specific bacterial species characteristic of Nod2–/– mice such as P. aeruginosa, accompanied by an overall increase in total eubacterial abundance.Thus, these data provide compelling experimental evidence that the skin microbiome directly influences healing outcome.Although we report wide-ranging changes in bacteria in Nod2–/– mice, a common theme across experiments was increased relative abundance of Pseudomonas species.To confirm a direct role for Pseudomonas species in wound repair, we treated full-thickness excisional wounds in WT mice with P. aeruginosa biofilms and assessed subsequent healing.Significantly delayed healing was observed after direct application of P. 
aeruginosa to mouse wounds versus nontreated controls.Treated wounds were larger, with delayed re-epithelialization and increased local inflammation.These data confirmed that the presence of pathogenic bacteria, similar to wound infection, directly delays murine wound healing and establishes a link to the Nod2–/– phenotype, where a delay in wound repair is associated with an increased cutaneous presence of the genus Pseudomonas.A wealth of literature has characterized the role of the host response in regulating the gut microbiome, with wide-ranging implications for normal physiology and disease.By contrast, comparatively few studies have addressed the role of the cutaneous host response-microbiome axis in skin physiology and pathology.We hypothesized that the skin microbiome plays an important role in the cutaneous healing response.Our results show that skin bacterial profiles profoundly influence wound healing outcome.Direct experimental manipulation of the Nod2 gene leads to bacterial dysbiosis associated with local changes in AMPs and ultimately delays healing.Moreover, when WT mice were co-housed from birth with mice lacking Nod2, they acquired an altered microbiome and developed delayed healing.Cutaneous dysbiosis, as shown by eubacterial DNA profiling, 16S high-throughput sequencing, and qPCR, implicated the genus Pseudomonas in murine delayed wound repair, and WT mice infected with P. aeruginosa biofilms confirmed this.These results suggest that microbial therapy directed at manipulation of the genus Pseudomonas, in addition to other bacterial species previously identified as delaying wound repair, including S. aureus and S. epidermidis, might be an effective strategy to improve wound healing in the future.A growing body of literature links NOD2/CARD15 polymorphisms with a dysregulated innate immune response and susceptibility to diseases, including Crohn’s disease, Blau syndrome, early-onset sarcoidosis, and graft-versus-host disease.In the gut, NOD2 has a well-characterized role in host recognition of bacteria and muramyl dipeptide, which is widely expressed by a variety of commensal and pathogenic gut bacteria.Studies in patients with Crohn’s disease and Nod2-deficient mice showed that intestinal changes in bacterial composition are associated with altered α-defensin expression in the intestinal mucosa.α-Defensins are not expressed in skin; however, the cutaneous effects of NOD2 are associated with altered β-defensins, yet the exact role these AMPs are playing in our Nod2-null mice remains to be elucidated.Changes in skin β-defensins have previously been linked to skin infection and skin disease.Thus, a picture is emerging across multiple epithelial tissues whereby a loss of NOD2-mediated surveillance activity inhibits local host responses to pathogenic challenge, resulting in aberrant inflammation and bacterial dysbiosis.All wounds will be rapidly colonized by resident bacteria, but only some wounds will become “infected.”Considerable recent interest has been focused on the potential ability of these colonizing bacteria to form and exist as highly AMP-resistant polymicrobial biofilms.A number of bacterial genera/species, such as Streptococcus, Enterococcus, S. aureus, and P. 
aeruginosa, have already been linked to infected chronic wounds.However, the clinical diagnosis of wound infection is based on the basic criteria of heat, odor and appearance.In this study we show that similarly appearing murine acute wounds display differences in wound microbiota profile that clearly influence healing outcome.Arguably the most important finding in this study comes from the newborn mouse co-housing experiments, where passive transfer of skin bacteria from Nod2-null to WT mice conferred a “NOD2-like” delayed healing phenotype.The concept of transferring signature bacterial profiles to closely related individuals has now been established clinically.For example, unaffected relatives of Crohn’s disease patients reportedly share some features of the disease-associated microbiome composition.Fecal transplant, also referred to as gut microbiome transplant, a procedure in which fecal bacteria from a healthy donor are transplanted into a patient, has shown promise in the treatment of Crohn’s disease and ulcerative colitis.Similarly, cross-strain murine relocation/uterine implantation studies showed that environmental influences dominate the gastrointestinal tract microbiome.Our data strongly suggest that the cutaneous microbiome is also highly susceptible to environmental influences, with clear functional consequences.Finally, our data suggest a potential therapeutic opportunity for the treatment of cutaneous dysbiosis in relation to wound repair via microbial manipulation of the skin microbiome.Indeed, mounting research suggests the profound benefits of probiotic supplementation for gut microbiota in health and disease.These may now be extended to other epithelia, including the skin.All animal studies were performed in accordance with UK Home Office Regulations.All mice used in this study were bred in the same room under the same conditions at the University of Manchester’s Biological Services Facility, where they have been housed for 10 or more generations.Mice were housed in isolator cages with ad libitum food and water.The room was maintained at a constant temperature of 21 °C, with 45–65% humidity on a 12-hour light-dark cycle.Nod2-null mice were bred from homozygous matings and have been described previously.WT mice were bred from WT × WT matings onsite to generate controls for experiments.Eight-week-old female mice were anesthetized and wounded following our established protocol.Briefly, two equidistant 6-mm full-thickness excisional wounds were made through both skin and panniculus carnosus muscle and left to heal by secondary intention.For co-housing experiments, mice were marked by tattooing, and then 2 or 3 tattooed pups of one genotype were placed in the same cage with 2 or 3 tattooed pups of the other genotype and fostered onto WT or Nod2–/– mothers for at least 5 weeks before separation.After weaning, only mice of the same sex were housed together before wounding at 6 weeks.Bacterial DNA and/or total RNA was isolated from frozen skin or wound tissue as previously described or by homogenizing in Trizol reagent using the Purelink RNA kit according to the manufacturer’s instructions.cDNA was transcribed from 1 μg of RNA using AMV reverse transcriptase, and qPCR was performed using the SYBR Green I core kit and an Opticon quantitative PCR thermal cycler (an illustrative relative-abundance calculation is sketched below).The primer sequences for real-time qPCR are listed in Supplementary Table S2.Histological sections were prepared from normal skin and wound tissue fixed in 10% buffered formalin saline and embedded in paraffin.5-μm sections 
were stained with hematoxylin and eosin or subjected to immunohistochemical analysis using the following antibodies: rat anti-neutrophil polyclonal and chicken anti-BD-14 polyclonal antibodies.Primary antibody was detected using the appropriate biotinylated secondary antibody followed by ABC-peroxidase reagent with NovaRed substrate and counterstaining with hematoxylin.Images were captured using an Eclipse E600 microscope and a SPOT camera.Total cell numbers, bacterial counts, granulation tissue wound area, and re-epithelialization were quantified using Image Pro Plus software.The deparaffinized tissue sections were systematically analyzed by fluorescence in situ hybridization using peptide nucleic acid probes.A mixture of a CY3-labelled universal bacterial peptide nucleic acid probe in hybridization solution was added to each section and hybridized in a peptide nucleic acid fluorescence in situ hybridization workstation at 55 °C for 90 minutes.Slides were washed for 30 minutes at 55 °C in wash solution, mounted in DAPI-containing mountant, and stored in the dark at –20 °C.Slides were visualized using a DMLB 100s Leica Microsystems microscope attached to a Leica Microsystems fluorescence system.Images were captured using an RS Photometrics Coolsnap camera and overlaid using Adobe Photoshop Elements version 6.5.Samples were processed as previously described, with the exception that 4% paraformaldehyde and 2 mmol/L CaCl2 were used in the primary fixative and 2% OsO4 in the secondary fixative.Images were acquired using the Orius CCD SC1000 camera.All data are presented as mean + standard error of the mean.Normal distribution and statistical comparisons between groups were determined using the Shapiro-Wilk test, Student t test, or two-way analysis of variance with Bonferroni posttest where appropriate using GraphPad Prism version 7.01 as indicated in the figure legends.For all statistical tests, the variance between each group was determined, and probability values of P less than 0.05 were considered statistically significant.An overnight broth culture of P. 
aeruginosa was diluted to a turbidity equivalent to a 0.5 McFarland standard in Mueller-Hinton broth.A total of 50 μl of the diluted culture was applied to 6-mm–diameter sterile 0.2-μm filter membranes placed on Mueller-Hinton agar plates.These were then incubated at 37 °C for 72 hours, with transfer to a new agar plate every 24 hours.The resultant biofilms were applied to 6-mm excisional wounds and covered with a nonwoven Sawabond 4383 dressing.Excisional wounds were harvested at 1, 3, and 5 days after wounding and bisected, with one half placed on dry ice for DGGE analysis or fixed in formalin for histology and the remaining half snap-frozen in liquid nitrogen and stored at –80 °C.Skin swabs from an area of intact contralateral skin were also collected using sterile Dual Amies transport swabs and inoculated into 1 ml of transport medium and processed within 3 hours of collection.All biological specimens were incubated in enzymatic lysis buffer and lysozyme for 30 minutes at 37 °C.DNA was extracted using a Qiagen DNeasy blood and tissue kit in accordance with the manufacturer’s instructions but with the added step of using 0.1-mm sterile zirconia/silica beads to homogenize the samples.The V3 variable region of the 16S rRNA gene was amplified from purified DNA by PCR using GC-rich eubacterium-specific primers P3_GC-341F and 518R as previously described, using a PTC-100 DNA Engine thermal cycler.Samples were purified using a Qiagen MinElute purification kit, in accordance with the manufacturer’s instructions.Polyacrylamide electrophoresis was performed using the D-CODE Universal Mutation Detection System according to the manufacturer’s instructions for perpendicular DGGE.Denaturing gradient gels of 10% acrylamide-bisacrylamide were made containing a 30–70% linear gradient of denaturants, increasing in the direction of electrophoresis as described previously.DGGE gel images were aligned and analyzed with BioNumerics software version 4.6 in a multistep procedure following the manufacturer’s instructions.After normalization of the gels, individual bands in each lane of the gel were detected automatically, allowing matching profiles to be generated and used to produce an unweighted pair group method with arithmetic mean dendrogram.Selected bands were sterilely excised from the gel under UV illumination into 20 μl of Nanopure H2O in nuclease-free tubes.PCR products were purified using a QIAquick PCR purification kit, and re-amplified using the reverse 518R primer.Sanger sequencing was performed using BigDye terminator chemistry on an ABI 3730 genetic analyzer.Sequences obtained were compared with those in the European Molecular Biology Laboratory nucleotide sequence database using Basic Local Alignment Search Tool searches to identify closely related gene sequences.16S amplicon sequencing targeting the V3 and V4 variable regions of the 16S rRNA gene was performed on the Illumina MiSeq platform.The raw amplicon data were further processed using Quantitative Insights Into Microbial Ecology (QIIME) version 1.9.0 and R version 3.3.1.The non-metric multidimensional scaling plot and the Shannon-Wiener index were created using the isoMDS function in the MASS package in R, and the statistical analysis was performed using the Adonis function in the vegan package in R (an illustrative re-implementation of this diversity analysis is sketched below).The Hucker-Twort Gram stain was used to distinguish Gram-positive and Gram-negative bacteria in formalin-fixed tissue.Tissue was flooded with crystal violet stain for 3 minutes and rinsed with running H2O.Gram’s iodine was added for 3 minutes and 
washed with H2O.After differentiation in preheated acetic alcohol at 56 °C, tissue was immersed in Twort’s stain for 5 minutes and washed with H2O.Slides were rinsed in alcohol, cleared in xylene, and mounted with DPX mountant; the slides were imaged using a 3D Histech Pannoramic 250 Flash Slide Scanner.The authors state no conflict of interest.
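The species-level qPCR comparisons described above can be illustrated with a minimal, hedged sketch. The Python snippet below shows one common way to express a species-specific 16S qPCR signal relative to a universal eubacterial signal using a 2^-ΔCt convention; it is not the authors' exact pipeline, and the sample names and Ct values are hypothetical.

```python
# Illustrative only: relative abundance of a target species from qPCR Ct values,
# expressed against a universal eubacterial 16S signal (2^-dCt convention).
# Sample names, Ct values, and the precise normalization used in the study are assumptions.

def relative_abundance(ct_target: float, ct_universal: float) -> float:
    """Return 2^-(Ct_target - Ct_universal), i.e. target signal relative to total eubacteria."""
    return 2.0 ** -(ct_target - ct_universal)

samples = {
    # sample_id: (Ct for P. aeruginosa primers, Ct for universal eubacterial primers)
    "WT_wound_1": (29.8, 18.2),
    "Nod2KO_wound_1": (26.1, 17.9),
}

for name, (ct_pa, ct_eub) in samples.items():
    ra = relative_abundance(ct_pa, ct_eub)
    print(f"{name}: relative P. aeruginosa abundance = {ra:.2e}")
```

Fold differences between genotypes or housing conditions would then be compared with the statistical tests listed in the methods.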
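The ordination and diversity analysis above was performed in R with isoMDS (MASS package) and adonis (vegan package). As a rough illustration only, the sketch below computes the Shannon-Wiener index, H' = -Σ p_i ln p_i, and a non-metric MDS of Bray-Curtis dissimilarities in Python with scipy and scikit-learn; the toy OTU count table and the substitution of scikit-learn for the R functions are assumptions, not the published workflow.

```python
# Hedged illustration: Shannon-Wiener diversity and NMDS on Bray-Curtis distances.
# The real analysis used QIIME 1.9.0 plus R (MASS::isoMDS, vegan::adonis);
# this Python analogue and its toy OTU counts are assumptions for illustration.

import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

def shannon_wiener(counts):
    """H' = -sum(p_i * ln p_i) over taxa with nonzero counts."""
    counts = np.asarray(counts, dtype=float)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log(p)).sum())

# Toy OTU count table: rows = wound samples, columns = taxa (hypothetical values).
otu = np.array([
    [120,  30,  5, 0],   # WT, separately housed
    [100,  45, 10, 2],   # WT, co-housed
    [ 40, 150, 60, 8],   # Nod2-/-, separately housed
    [ 55, 130, 40, 6],   # Nod2-/-, co-housed
])

print("Shannon-Wiener per sample:", [round(shannon_wiener(row), 2) for row in otu])

# Non-metric MDS on Bray-Curtis dissimilarities (analogous to MASS::isoMDS).
bray = squareform(pdist(otu, metric="braycurtis"))
nmds = MDS(n_components=2, metric=False, dissimilarity="precomputed", random_state=0)
coords = nmds.fit_transform(bray)
print("NMDS coordinates:\n", coords)
# Group separation would then be tested with a PERMANOVA (vegan::adonis in the study).
```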
The skin microbiome exists in dynamic equilibrium with the host, but when the skin is compromised, bacteria can colonize the wound and impair wound healing. Thus, the interplay between normal and pathogenic skin microbial interactions in wound repair is important. Bacteria are recognized by innate host pattern recognition receptors, and we previously showed an important role for the pattern recognition receptor NOD2 in skin wound repair. NOD2 is implicated in changes in the composition of the intestinal microbiota in Crohn's disease, but its role in the skin microbiota is unknown. Nod2-deficient (Nod2–/–) mice had an inherently altered skin microbiome compared with wild-type controls. Furthermore, we found that the Nod2–/– skin microbiome dominated and caused impaired healing, as shown in cross-fostering experiments of wild-type pups with Nod2–/– pups, in which the wild-type pups acquired altered cutaneous bacteria and delayed healing. High-throughput sequencing and quantitative real-time PCR showed a significant compositional shift, specifically in the genus Pseudomonas in Nod2–/– mice. To determine whether Pseudomonas species directly impair wound healing, wild-type mice were infected with Pseudomonas aeruginosa biofilms and, akin to Nod2–/– mice, were found to exhibit a significant delay in wound repair. Collectively, these studies show the importance of the microbial communities in skin wound healing outcome.
76
Memtein: The fundamental unit of membrane-protein structure and function
Membrane protein structural and functional integrity depends on a layer of lipids that surround much of their surface.However, available membrane protein structures typically lack this biological lipid layer, despite its requirement for their correct operation.In fact, there is no word for the biologically intact unit formed by complexes of the lipid-coated proteins in a membrane.This gap persists despite the growing evidence that direct interactions with structured lipids are widespread and integral to membrane protein folding, structure, stability, ligand binding, activation, signal transduction and trafficking.Our language has not caught up with the tremendous science being done in the field of membrane structural biology.The lack of an ability to name the concept not only creates unnecessary ambiguity and confusion, but also limits our ability to focus on the important challenges and tasks ahead.Recombinant technology cannot necessarily solve these problems, as memteins cannot necessarily be formed by recombining the isolated parts.Instead memteins are assembled through the stepwise trafficking and processing of lipids and proteins in a series of subcellular compartments that are difficult to recapitulate.Here we propose the word memtein to describe the functionally intact unit of a membrane protein bound to a continuous layer of biologically relevant, structured lipids.As such, memteins represent minimal and stable operational states of membrane proteins packed against a perimeter of endogenous lipids that engage as they would in vivo.We argue that the stereospecifically packed and dynamically restricted lipid headgroups and tails are as integral to these entities as a protein’s residues, being key determinants of physiologically relevant folding, stability, specificity, dynamics, structure and function as well as being required for moving towards rational drug design.The development of styrene maleic acid polymers for making native nanodiscs has allowed memteins to be isolated directly from membranes, cells and tissues without conventional detergents.Such detergents strip away endogenous lipids and destabilize membrane proteins, thus leading to a variety of documented artifacts in membrane protein structures.SMA copolymers are designed to bypass these problems and provide a simpler and less expensive approach to memtein solubilization, and have led to an expanding set of membrane types being incorporated into nanodiscs.Generally applicable methods are being developed by an open innovation community known as the SMALP Network, and show compatibility with most biophysical tools and biochemical methods, although limitations with pH ranges, polyvalent cations and absorbance have been identified and are being overcome with new formulations.These developments are revealing new membrane complexed structures including large ligand-bound machines and post-translational modifications that could not be resolved with any other technology, suggesting an avalanche of potential impacts and applications.However, this field is still young.Further acceleration of the field rests on the design of new polymer chemistries that improve performance and progress our understanding of how native nanodiscs work.The drive to further refine the materials and tools results from several observations and known limitations.These are described below, and are considered in light of the properties of the SMALP system and allied technologies.Membranes are clearly integral to the development and behavior of cells, and 
may be reduced to fundamental functional units known as memteins which accurately mimic their properties for detailed analysis.Bilayer thickness varies to complement protein interfaces, and lipids segregate to proteins to facilitate hydrophobic matching of their long tails and polar headgroups with transmembrane regions in a manner that can dramatically influence activity.Retaining such a pool of closely packed lipids within an adaptable nanoparticle of variable dimensions would be desirable, given the diversity of protein and lipid sizes and shapes.The lateral pressure profile across a bilayer depends on the mix of contained molecules, which can be included when SMA polymers wrap around a section of membrane.This pressure differential exerts significant forces across the membrane, influencing the stability and conformational equilibria of contained proteins and lipids.Membrane proteins with between 1 and 48 transmembrane helices have now been isolated via SMALPs from many eukaryotic and prokaryotic cell types.Such memteins maintain their native-like stability and ligand binding activity as well as a surrounding layer of lipids, while those in detergent micelles are typically destabilized to a significant degree and have lost their endogenous lipid content.Hence, SMALPs appear to be advantageous for the isolation of intact memteins, especially those that are unstable or depend on having a lipid bilayer that assumes the correct thickness and pressure profile.The orientation of protein elements at the membrane-water interface is inherently dependent on the local distribution of the surrounding lipids.Belts of Tyr and Trp residues offer aromatic groups that engage lipid carbonyls at membrane-water interfaces.Positively charged sidechains of Arg and Lys residues snorkle out of the hydrophobic layer to contact negatively charged phosphate groups.These interactions continue around the perimeter on each side of transmembrane proteins, providing balanced anchoring within the bilayer.The presence of this continuous shell of lipid in nanodiscs allows such interactions to be preserved as the system moves and works.This can be seen by the packing of the aromatic residues of the ActE subunit of the respiratory Alternative Complex III into the lipid bilayer of a SMALP, as resolved by cEM.This ex situ structure shows how a native-state complex composed of six subunits, bound cofactors and attached lipids mediates electron transport.Its purification and structure determination had previously proven elusive but could be accomplished with either of two different SMA types that yielded more stable and active forms than could be prepared with detergents such as dodecylmaltoside.Structural interactions of a triacylated cysteine residue are evident, showing how post-translational modifications can be resolved in native nanodiscs.This would be very difficult, if not impossible, to re-assemble from isolated component parts.Hence it is best to directly isolate memteins from biological sources, which can be readily and cost-effectively accomplished in mass quantities using SMA.Preparing large amounts of many diverse memtein types with their amino acid residues engaging a network of bound biological lipids, co-factors and post-translationally modified subunits may currently only be possible with SMALP methods.It is hoped that this nanodisc technology will lead to a much wider universe of complex membrane machineries becoming experimentally accessible.The folding of outer membrane proteins depends on local 
disordering and thinning of the lipid bilayer due to hydrophobic mismatch with the β-barrel machine structure.When the BamA subunit is present in membrane mimetics such as micelles, bicelles or nanodiscs bounded by membrane-scaffolding protein, it is unable to correctly fold its juxtamembrane domains, suggesting a need for further accessory factors.Indeed, BAM-mediated assembly requires phosphatidylethanolamine to correctly assemble outer membrane proteins into their folded structures, which are otherwise kinetically trapped.Hence biologically relevant lipids such as PE are required to guide membrane proteins through their folding pathways into native states, and are integral to BAM memteins.Lipids have chiral carbon centers, and are made as various enantiomeric and diastereomeric forms with important biological significance that would ideally be resolvable.The phospholipids from biological sources are typically a single stereoisomer with different acyl chains and unsaturated groups, and differ from racemic mixtures or synthetic versions with identical chains.This results in established differences in membrane ordering and lipid bilayer structure as well as a variety of effects on membrane protein activity.Membrane-associated enzyme function depends on stereospecific lipid interactions, as is evident in the cases of phosphatidylinositol-specific phospholipase C and the phosphatases PP1 and PP2A.There are also clear effects of different lipid bilayer systems on the influenza A M2 protein and phospholipase A2.Thus lipids are not innocent bystanders in the membrane.Rather their stereospecific interactions make vital contributions to signaling and metabolic processes.Membrane protein structures are approaching the atomic resolution needed to show how stereospecific lipid recognition occurs.Further optimization of chemical properties and dispersity of SMA polymers, as well as covalent circularization of membrane scaffolds, would enhance the structural resolution of memtein nanodiscs.Once removed, sets of natural lipid ligands cannot typically be replaced into their in situ positions in memteins.Multiple lipid molecules dynamically associate with inducible pockets and dynamic surfaces that are found within unique protein conformers and multimer interfaces, and rely on the membrane’s lateral pressure profile.Lipids behave as cooperative groups that exhibit both repulsive and attractive forces between headgroups and acyl chains.They can be stereospecifically layered onto proteins in non-random and non-uniform ways.The only way to see such cooperative lipid-protein assemblies at high resolution may well be to isolate the entire memtein directly without removing the lipids via detergent extraction or any other chemical or physical means.The identities of lipids bound to proteins can be discerned by ion mobility MS in the gas phase.For example, aquaporin Z is stabilized by lipids including cardiolipin, which modulates its ability to transport water molecules.Crystals of aquaporin AQP0 show nine nonspecifically attached lipids lying in grooves on each monomer’s hydrophobic surface, while simulations indicate a single dynamic layer of ∼70 lipid molecules nonspecifically associated with these tetrameric channels.The ammonia channel AmtB from E. 
coli binds phosphatidylglycerol molecules, 8 of which are seen to decorate the periplasmic leaflet of the 2.3 Å resolution trimeric structure.The lipids induce conformational changes by structuring the binding loops that would normally engage the local bilayer.The mechanosensitive channel of large conductance from Mycobacterium tuberculosis interacts non-selectively with lipids such as PIs, which stabilize it and may play functional roles.The E. coli MscS channel associates most tightly with PE lipids, and its 3.0 Å resolution structure reveals interhelical packing of bound lipid-like acyl chains which decrease in number upon channel opening and can be exchanged with the bilayer.The structure of the TRAAK potassium channel suggests that the binding of lipid-like acyl chains induces structural changes which regulate the channel.Current structural tools have not yet revealed the stereospecific contacts of individual lipids with proteins, let alone complete bilayer shells, and usually cannot even determine whether a bound molecule is a detergent or a particular lipid species.Hence retention of a layer of lipids and concomitant avoidance of harsh detergents are needed in order to see and understand memteins.The studies described below consistently demonstrate that SMA polymers efficiently convert diverse types of membranes into nanodiscs without requiring any conventional detergent at any step.This differs fundamentally from other methods such as those based on amphipols or membrane scaffold proteins, in which lipid must be re-introduced and which thus cannot prepare memteins ex situ.Decades of study of SMA polymers attached to anticancer agents have demonstrated their biocompatibility.These SMA conjugates can penetrate into tumours to deliver their payload.They are inexpensive to produce at scale, thus allowing potentially large amounts of plant, yeast, mammalian or bacterial biomass to be turned into nanoparticles for industrial applications.The discovery that SMA polymers could spontaneously liberate functionally intact membrane proteins into nanodiscs was reported in 2009.Since then hundreds of groups have used the SMALP system and over 115 publications have described how SMA can be used to prepare nanodiscs of proteins, liposomes and membranes.The interested scientific community coalesced into the SMALP network in order to share tools and resources, and is pushing through remaining technical limitations.This open innovation effort has learned from the foundation laid by decades of progress on MSP-based nanodiscs, bicelles and amphipols.This builds on a century of experience with conventional detergents, with SMA offering a fundamental advantage by incurring a much lower free-energy penalty for membrane dissolution.Although each system offers unique strengths and weaknesses, all can be used to generate high resolution structures of stably overexpressed membrane proteins.However, only the SMA approach offers a detergent-free and scalable way to isolate and resolve memteins from any cell, tissue or organism.Different types of SMA can be used to isolate memteins from various environments by inserting into and removing only the unstructured lipids.These copolymers all contain non-alternating sequences of styrene and maleic acid residues which appear in statistically defined patterns within their linear chains.The most widely used ratios of S to MA groups are between 2:1 and 3:1.This pattern offers enough hydrophobicity to allow rapid insertion into lipid bilayers as well as sufficient polarity and 
dynamics to fragment membranes at critical polymer concentrations of around 1%.The MA residues ensure pH-dependent solubility due to their single negative charge at neutral pH values.The SMA polymers are synthesized commercially by conventional radical copolymerization, which provides a narrow chemical composition distribution as well as a chain length dispersity of about 2.The SMA polymerization reaction can also be carried out by reversible addition fragmentation chain transfer methods, which reduce chain length dispersity but result in co-monomer gradients along the chain.Acid hydrolysis is used to form the charged maleic acid copolymer which is water soluble and able to insert into membranes.They spontaneously form nanodiscs when a copolymer solution and membrane suspension are mixed to yield a clear emulsion.The resulting SMALPs are very stable in aqueous solutions, and can be freeze-dried and resuspended.The first wave of papers in the SMALP field utilized hydrolyzed versions of SMA2000 and SMA3000.These have 2:1 and 3:1 styrene to maleic acid ratios, respectively, as do related Lipodisq™ reagents.Polyscope offers XIRAN 30010 and 25010 reagents to the research community, and these have similar activities and S:MA ratios of 2.3:1 and 3:1, respectively, while lacking cumene endgroups and offering a wider distribution of distinct molecular sizes.Comparison of many studies indicates that the commercially available XIRAN reagents and SMA2000 versions are the most useful for solubilizing diverse membrane proteins, including those that contain single transmembrane helices to multimers with large bundles of 36 or 48 helices, to which the polymers can adapt to form larger disc-like particles.Comparative studies have optimized solubilization of different membrane proteins after overexpression in E. 
coli, Sf9 insect cells and H69AR cancer cells.The proteins include ZipA, which spans the membrane once, the homodimeric BmrA multidrug efflux pump with six transmembrane helices, and the LeuT symporter which contains 12 transmembrane helices.Approximately 55% of total protein is recovered into nanodiscs, which is comparable to DDM and better than octyl glucoside detergents, with SMA giving the highest yields, purities, and activities.Smaller discs with 5 nm diameters were found using SMA polymers.Divalent cations are better tolerated by the SMA-based nanodiscs, with precipitation observed at calcium concentrations over 4 mM.Such studies conclude that the SMA copolymer having an average molecular mass of 7.5–10 kDa generally offers the best performance.The photoreaction center can be isolated from the Rhodobacter sphaeroides membranes along with ∼150 CL, PC, PE, PG, and sulphoquinovosyl diacylglycerol lipids using SMA2000.This contrasts with DDM and lauryldimethylamine N-oxide, which have destabilizing effects and strip away the lipids.The SMA treatment results in elliptical particles with 12–15 nm diameters containing stably folded memteins which can be recognized by gold nanoparticles and seen by negative stain transmission EM.A survey of eight SMA polymers shows that 10 kDa range polymers with S:MA ratios of 2:1 and 3:1 perform best, while larger oligomers or tighter packing of proteins reduces solubilization efficiency.Yields can be increased by fusing cellular membranes with synthetic or biological source lipid, and lower concentrations of SMA can increase solubilization of large oligomers into discs having diameters of 50–100 nm.A limitation of the original SMA series is its restricted pH range.The optimal pH varies for each SMA type, but is generally between 7 and 9.Lower pH levels lead to polymer aggregation, although keeping salt conditions low helps maintain SMA solubility.SMA polymers with lower relative styrene content are less perturbing of membranes upon insertion.Lipids can exchange through diffusion of monomeric lipids between nanodiscs, particularly if their acyl chains are short.The molecular exchange also occurs via disc collisions, especially at high disc concentrations.Due to their inherent dynamics, lipids can be introduced into or removed from a SMALP, and proteins can also be moved back into a liposome or membrane.The lipid interactions of various SMAs are promiscuous, showing comparable solubilization activities with different bilayer compositions and various biological source material.This is a critical advantage for unbiased preparation of memteins from the diversity of membranes that proteins traffic through in cells.There is, however, a clear preference for fluid phases, which are more loosely packed and allow easier insertion of polymer groups.The rates of SMA-mediated solubilization differ for lipid bilayers exhibiting short, unsaturated or cylindrical acyl chains, which facilitate insertion.The fragmentation of plasma and intracellular membranes occurs at discrete rates due to different levels of order.The plasma membrane perforates first when XIRAN 30010 is added to HeLa cell cultures, followed by intracellular membranes, which then release contained fluorescent test proteins.All organelles in the cell are equally vulnerable, although more fluid membranes are dispersed sooner.This indicates that SMA first binds and then penetrates into fluid sections of the outer bilayer, allowing leakage of cytosolic contents, and affording entry to intracellular membranes.The 
rate of memtein release then presumably depends on whether the surrounding lipid is disordered or ordered.Solubilization of highly ordered memteins can require fluidizing lipids such as dimyristoyl phosphatidylcholine or elevated temperatures, thus providing methods for selective protein release from various compartments.Further handles are provided by derivatized SMA.A thiol group can be added to make SMA-SH to which fluorescent dyes and molecular tags can be attached.Such Alexa Fluor 488 groups on the polymer allow measurement of distances to lipids labelled with an acceptor based on the transfer of excitation energy from a donor fluorophore to an acceptor chromophore by Förster Resonance Energy Transfer methods.Biotin tags allow SMALPs to be affinity-purified, as well as enabling measurement of intermolecular distances.Memteins in rafts can be purified from cells using SMA, obviating the concerns with detergent-based extraction.Fragments of T cell membranes formed by adding 1% concentrations of SMA can be immunoprecipitated to collect those with glycosylphosphatidylinositol-anchored proteins and Src family kinases.Being over 250 nm in diameter, they are the largest SMALPs seen to date.Their lipids are more ordered and enriched in cholesterol, PS, sphingomyelins, ceramides and monohexosylceramides than smaller particles of under 20 nm, which have more PI, PG, PC, PE, and their ether-linked lipids.Thus SMA appears to provide a detergent-free approach to solubilize even large native rafts and structured nanodomains of lipids and receptors.More homogeneous disc populations with diameters of approximately 28 and 10 nm can be made using reduced polydispersity SMA versions having 2:1 and 3:1 subunit ratios.Increasing the relative amount of styrene compromises aqueous solubility, while decreasing styrene content compromises solubilization activity.In this case a 3:1 subunit ratio of S:MA appears preferable for the formation of discs from liposomes, while a 2:1 ratio more effectively solubilizes larger protein complexes.In this study, the length of the polymer does not affect disc size significantly.Steep gradients of styrene and maleic acid monomers in short SMA polymers increase solubilization efficiency.Zwitterionic SMA polymers offer utility over broader buffer, pH and polycation ranges, and have solubilization activities comparable to regular SMAs.These PC lipid-headgroup containing polymers are designed for analysis of proteins which require calcium or magnesium or extreme pH values.The sizes of the resulting nanoparticles are relatively uniform with diameters that range from 10 to 30 nm depending on the length of the polymer used, thus allowing small or large proteins to be contained.Nanodiscs with diameters between 10 and 50 nm can be generated using a short SMA polymer derivative bearing ethanolamines on its polar sidechains.The resulting SMA-EA nanodiscs are stable under a wide range of pH values, temperatures, and divalent cation and salt concentrations, yielding resolvable solution NMR signals of 15N-labelled, folded cytochrome b5 protein.Larger SMA-EA nanodiscs can be aligned in magnetic fields through addition of lanthanide ions such as Yb3+, allowing transmembrane helix tilt angles to be measured by solid state NMR methods.In particular, characteristic patterns of resonances observed in the 2D PISEMA spectra of 15N-labeled cytochrome b5 indicate similar tilt angles for helices oriented in bicelles or SMALPs.Adding ethylenediamine groups to this SMA yields a 
zwitterionic copolymer, SMA-ED, which solubilizes multilamellar vesicles outside the pH range of 5–7.Upon dehydration, this polymer, which is termed SMAd-A, solubilizes DMPC vesicles at pH values under 6.Both of these latter derivatives tolerate high salt and divalent cation levels.A positively charged derivative with a quaternary ammonium group termed SMA-QA can also form nanodiscs.This copolymer turns vesicles into nanodiscs with 10 to 30 nm diameters at pH values from 2 to 10 with metal ions present at up to 200 mM.A positively charged dimethylaminopropyl sidechain has also been incorporated within the styrene maleimide to form “SMI” copolymers that operate at pH values under 7.8.The resulting nanodiscs have diameters of 6 nm, are stable up to 80 °C, and do not bind divalent cations.The E. coli cell division protein ZipA can be isolated with SMI albeit at lower yields than SMA, while the human adenosine A2A and V1a vasopressin receptors purified from human embryonic kidney 293 cells with SMI remain able to specifically bind their ligands.Thus, the charge state of SMA-related copolymers can be varied, allowing a broader range of polar groups, solution conditions and disc sizes to be explored, while avoiding any nonspecific electrostatic interactions.The aromatic groups of SMA can be replaced by aliphatic chains while maintaining solubilization capability, thus further increasing the chemical universe available for nanodisc production.Using alternating diisobutylene and maleic acid sidechains results in a “DIBMA” polymer that is milder and offers advantages for isolating large labile protein complexes with native nanodiscs having diameters of 12–29 nm.Advantages of DIBMA include its transparency in ultraviolet and circular dichroism spectra, and its compatibility with higher levels of divalent cations.A polymethacrylate random copolymer series has been developed as an alternative to SMA and converts DMPC liposomes into ∼17 nm discs.This polymer lacks a light absorbing aromatic group and hence works well in circular dichroism, UV/vis, and fluorescence spectroscopy experiments.Nanodiscs made of PMA as well as DMPC and DMPG in a ratio of 9:1 stabilize a helical structural intermediate state of the islet amyloid polypeptide, rather than allowing the formation of amyloid fibrils.Together this constitutes a growing family of amphipathic polymers for solubilizing virtually any memtein under physiological conditions into nanodiscs of various sizes.SMALPs are designed to co-purify any memtein-associated molecules, with the short styrene sidechains intended to be nonspecific solubilizers of bound hydrophobic material.The retained lipid species can be identified by MS. 
For example, SMALPs containing the equilibrative nucleoside transporter-1 overexpressed in insect cells show that it holds 16 PC and 2 PE molecules but does not bind polyunsaturated lipids based on electrospray ionization MS analysis.Approximately 50 PG, PE, and CL molecules of different degrees of saturation and chain length associate with each monomer of the GlpG rhomboid protease, but vary by cell type and temperature, as detected by electrospray ionization and collision-induced dissociation MS.The 13C signals of lipid substrates and products of PagP could be resolved by solution state NMR, although the 1H NMR signals of SMA polymer and contained proteins are broadened due to polydispersity and complex sizes.The conversion of liposomes into lipid nanodiscs during polymer titrations can be readily tracked by NMR, as conversion into rapidly tumbling nanodiscs sharpens lipid 31P resonances to reveal the critical polymer concentrations.Recent developments in cEM technology have led to an avalanche of high resolution structures, including of memteins in SMALPs.The structure of the 464 kDa ACIII supercomplex, discussed in Section 2 and Fig. 2, represents a landmark in the field.In particular this study demonstrates how SMA can be used to prepare active complexes of native state transmembrane protein assemblies for determination of atomic resolution structures of post-translationally modified subunits bound to multiple lipids and cofactors.This was preceded by the 8.8 Å cEM structure of AcrB in an SMA nanodisc, with its soluble domain showing higher resolution than its transmembrane portion.Eukaryotic ATP-binding-cassette transporters in SMALPs also exhibit improved activity, purity and stability compared with those in traditional detergents, and reveal the molecular envelope of the dimeric P-glycoprotein.The trimeric AcrB multidrug transporter can be purified in SMALPs via a His8 tag and low salt to prevent undesirable particle association.Dimers and trimers are apparent by sedimentation velocity, and negative stain EM and 3D reconstructions show an annulus of > 40 lipid molecules and polymer encircling the inner vestibule of the protein.The first 3D structure of a protein isolated using SMA was resolved by XRC with a resolution of 2.0 Å, which is superior to that of comparable crystals prepared using detergents.The process involved the use of DMPC to liberate the protein from the tightly packed membrane into XIRAN 25010-based SMALPs, and inclusion of a pair of histidine tags for improved purification.The seven transmembrane helices of the microbial rhodopsin bind trans-retinal and form a trimer.Their hydrophobic surfaces are shown to bind to monoolein molecules, which were used to form lipidic cubic phases for in meso crystallization.Bound bacterial lipids could not be resolved, presumably due to their displacement by the excess monoolein molecules.Nanodiscs constructed with SMA typically have diameters of 10 nm but can range from 6 to 30 nm, depending on the polymer and method.Use of XIRAN 25010 polymer and synthetic lipids can yield smaller nanodiscs.An inner section of the bilayer is surrounded by an annulus of polymer, which can be contrasted by using hydrogenated and deuterated lipids in small angle neutron scattering experiments.The styrene groups pack against lipid acyl chains based on distances detected by NMR.The integral membrane protein KCNE1 solubilized in SMA nanodiscs has been studied by electron paramagnetic resonance experiments.These EPR studies show that sidechains of the spin-labeled residues 
which are located in the aqueous phase are more mobile than those which are located within the bilayer.Such studies indicate that SMA boosts the relative signal to noise and phase memory time in double electron–electron resonance spectra, providing more precise measurements of distances.NMR can be used to characterize structures, dynamics and interactions of transmembrane proteins in SMALPs.A stable helical structure is formed by the Pf1 bacteriophage coat protein in magnetically-oriented ∼30 nm discs composed of XIRAN 25010, DMPC and DMPG.The 15N-resolved solid state NMR signals are sharper than those that can be obtained in bicelles or peptide-based discs, and indicate larger order parameters.The solid state NMR spectra of the magnetically aligned 15N-labeled cytochrome b5 protein in ∼50 nm discs formed by SMA-EA and DMPC indicate a stable transmembrane helix while the signals of the soluble domain are motionally averaged out.The bacterial zinc diffusion facilitator CzcD is a 34 kDa protein that, when solubilized with XIRAN 25010 or XIRAN 30010, retains a layer of bound lipid.The resulting 10–15 nm nanodiscs exhibit resolvable solid-state NMR signals for amide or methyl groups that are deuterated and selectively re-protonated.Resonance assignments can be transferred by comparison with the NMR spectra of the solution state of the isolated cytoplasmic domain.The elucidation of novel 3D structures of proteins in SMALPs by solution NMR methods remains a challenge due to polymer polydispersity and broad linewidths.New formulations are being actively developed to reduce heterogeneity and aid in the assignment and structural characterization of transmembrane and attached soluble domains.Enzymes which work together are often organized as co-localized units in membranes.Isolating them intact has been technically difficult in the past.The plant metabolon that produces the glucoside dhurrin is a case in point.It comprises three membrane proteins along with a soluble glycosyltransferase protein.The entire complex can be isolated from microsome membranes by treating with SMA2000 to form nanodiscs with diameters of 10–25 nm at yields of 80%, while the complex is broken down by traditional detergents such as cholate.The soluble subunit binds to and modulates the nanodisc-based memtein, the activity of which depends on bound negatively charged phospholipids.As memteins containing G protein-coupled receptors remain the highest value superfamily of therapeutic targets, their solubilization in SMALPs has been a major goal.The adenosine receptor is solubilized by SMA treatment of expression hosts including Pichia pastoris and HEK 293 T cells, and is stable and binds ligand normally.This GPCR can be stored and freeze-thawed in SMALPs, and is stable for seven times as long as detergent preparations.The melatonin and ghrelin receptors can be placed into 13 nm nanodiscs by applying either of two SMA copolymers to liposomes or Pichia pastoris membranes, and still bind their ligands and transduce signals as expected.The nucleoside transporter hENT1, which also transports chemotherapeutic agents, can be isolated from insect cells using XIRAN 30010.The polymer was added at low levels with cholesteryl hemisuccinate at low temperature in order to avoid protein degradation and inactivation.The SMALP’d protein displays the expected level of inhibitor binding, while it is destabilized by conventional detergents such as decyl maltoside.Together this indicates that SMALPs are a viable way to present drug targets, including those that may be too 
unstable, scarce or dependent on lipids for detergent-based extraction.Tetraspanins are challenging targets due to their small sizes, oligomeric states, disulfide bonds, glycosylation and palmitoylation sites and the paucity of biochemical activity assays.Five human members of this family were expressed in S. cerevisiae and solubilized by three different SMA copolymers.The yield of TSPAN7 was comparable to that obtained with conventional detergents, while CO-029, TSPAN12 and TSPAN18 yields were higher in detergents, and CD63 was resistant to any approach.The organized superstructure of such memteins emphasizes the need for further development of tools for efficient dispersal of very large, ordered networks.The mitochondrial cytochrome c oxidase from yeast contains 19 transmembrane helices.This multicomponent complex can be isolated from Saccharomyces cerevisiae using SMA.The resulting complexes are active in the ∼12 nm discs, which contain PC, PE and CL, and display the expected ligand binding and reaction kinetics.However, free polymer acts as an inhibitor, and thus ways to remove it after disc formation would be desirable, leading to plans to develop affinity tags.The entire biological assembly contains two more weakly bound respiratory supercomplex factor proteins that dissociate upon exposure to detergent but are retained in SMALPs.The solubilized particles exhibit dimensions of 11 nm and 14 nm, which can fit the entire proton-pumping supercomplex, including 11 protein subunits and assorted mitochondrial lipids.Proteins are secreted through bacterial inner membranes by the holo-translocon assembly, which comprises SecYEG, SecDF, YajC and YidC subunits.All these components in addition to associated bacterial lipids are solubilized together from the E. 
coli membrane by SMA copolymer and can be detected with available antibodies.The biologically relevant complex of the SecYEG translocon copurifies with the motor protein SecA and essential CL, PE and PG lipids when treated with SMA polymer but not with conventional detergents.When this memtein is transferred into proteoliposomes using Bio-Beads the complex is able to translocate transmembrane proteins and interact with the ribosome.The physiological structures and functions of α-synuclein remain mysteries, yet are important for tackling Parkinson’s disease.The protein assumes unstructured or helical conformations as monomers and tetramers and is present on membranes and in cytosol.Its biological states can be delineated by incubating with PC-containing SMALPs.In so doing the protein disaggregates and maintains a helical conformation, but loses the ferrireductase activity associated with its form in biological membranes.This infers that copolymers with reduced protein affinity would be beneficial, and highlights the need to compare SMALPs with other systems used to study memteins.Membrane protein structures in detergent-lipid mixtures continue to provide useful insights into the structures and interactions of memteins, as exemplified by the series of structures of complexes that follows.Detergents are particularly useful for structural analysis of stable and abundant recombinant membrane proteins that have assayable biochemical activities that can be monitored.Careful optimization of the LDAO and dodecylphosphocholine detergent-based purification of the photosynthetic LHC- reaction centre supercomplex from a thermophilic purple sulfur bacterium yielded its calcium-stabilized crystal structure.As seen at a resolution of 1.9 Å, 16 heterodimers surround the reaction center, which is comprised of 4 subunits, as well as six quinone cofactors in the midst of what appear to be ten PG, nine CL, and two PE molecules.The lipids are asymmetrically distributed, with CL molecules lining the cytoplasmic side of the memtein and head groups positioned towards the membrane surface.The other two lipid types are present on leaflet surfaces and engage with Arg and Lys residues on the cytoplasmic interface.SMALPs have also been used to characterize the photosystem I complex that is found in spinach thylakoids, which exhibits dimensions of 140 × 180 Å, i.e. bigger than the typical 10 nm nanodiscs.Nonetheless this complex solubilizes with SMA3000 with its 17 protein subunits interfacing with LHC protein trimers that deliver excitation energy, demonstrating the adaptability of the polymer encasement.The mammalian two-pore channel TPC1 protein is modulated by its endolysosomal ligand PIP2.This phosphoinositide can be added back after extraction with DDM and cholesteryl hemisuccinate from overexpressing HEK 293 T cells.The resulting 3.2 Å cEM structure shows a pair of 6 transmembrane bundles along with the positions of acetylglucosamine groups attached to a pair of asparagines.A set of Lys and Arg residues mediate binding of a single PtdInsP2 molecule which induces channel opening.The Gloeobacter violaceus ligand-gated ion channel GLIC4 can be overexpressed in E. 
coli and extracted with DDM.There are 6 DDM and 15 partial PC lipid molecules arranged in a bilayer-like fashion inside and around the perimeter of the apparently open pentameric channel, as seen in its 2.9 Å resolution crystal structure.Hence bound detergents can be inferred to block channel function, with some exposed lipid headgroups being disordered and difficult to resolve.The detailed conformations of such memteins including their channels remain of great interest.The Saccharomyces cerevisiae oligosaccharyltransferase resides in the endoplasmic reticulum membrane.This complex of eight proteins has been resolved by cEM following solubilization with digitonin.This 3.5 Å resolution structure shows how one phospholipid sits at the substrate-binding surface while seven others mediate inter-subunit interactions via lipid headgroup interactions with aromatic and polar amino acid residues, while their acyl chains are nestled among hydrophobic sidechains.An N-glycan also “glues” subunits together, indicating a structural role for lipids in this complex that mediates co-translational protein N-glycosylation.Structures of the PagP phospholipase are available in three different detergent systems, providing insights from many angles.The 1.4 Å XRC structure of PagP crystallizes directly from a cosolvent system composed of 2-methyl-2,4-pentanediol and sodium dodecyl sulfate.Its eight-stranded barrel structure is similar to that solved in LDAO or DPC and octyl glucoside.However, the conformations of the four extracellular loops differ, reflecting their flexibility and involvement in substrate binding and catalysis as well as their crystal lattice contacts.The interior pocket that provides access to palmitate chains at the sn-1 position of substrates is occluded by detergent molecules, which hence are inhibitory.Five SDS molecules are arrayed across the hydrophobic surface of the PagP protein, with their aliphatic chains being more ordered than their headgroups and packed into similar crevices occupied by LDAO.When solubilized into 11 nm SMALPs, PagP co-purifies with 11 bound DMPC molecules, is more stably folded than in micelles, and is active in phospholipase assays, consistent with the absence of inhibitory detergents that could occlude the active site.Thus the stability of this cooperatively folded barrel generally withstands the destabilizing forces of detergents, while its flexible loops and active site are vulnerable.A similar barrel, OmpF, forms very stable trimers that pack tightly in the outer membrane of bacteria in vivo.However, unlike in detergent, this protein is entirely wrapped in lipid molecules in two different in meso crystal forms which were determined to resolutions of 1.9 and 2.0 Å.While monoolein molecules form extensive van der Waals contacts and engage with exposed Trp and Tyr residues, there are no direct inter-protein contacts, and physiological lipids had been eliminated.In a further development, a detergent-solubilized OmpF relative forms trimers that bind four lipopolysaccharide molecules, as seen in an XRC structure of OmpE36 solved to 1.45 Å resolution.Although not required for folding, the LPS molecules form extensive van der Waals and polar interactions with intramembrane and interfacial residues, respectively.These lipid-protein interactions recur across the densely packed outer membrane, allowing these memteins to form close-knit networks on gram-negative bacterial surfaces to ensure an impenetrable barrier.Mixed micelles can be used as realistic mimics of
endogenous membrane surfaces to deduce stereospecific organelle recognition mechanisms by stable peripheral membrane domains.This approach has been demonstrated with FYVE, phox homology and pleckstrin homology domains.The FYVE domains are the most structurally efficient phosphatidylinositol phosphate binding domains, consisting of only 65 residues in a fold stabilized by two zinc binding clusters that almost exclusively recognizes phosphatidylinositol 3-phosphate molecules on endosomes.The recognition of early endocytic membranes by the FYVE domain of the EEA1 protein depends on stereospecific binding of the PI3P headgroup and simultaneous insertion of proximal hydrophobic residues into the bilayer.These interactions are supported by binding of accessory phosphatidylserine and PC headgroups, and are strengthened by the low pH that is encountered around these compartments.The insertion of the FYVE domain into PI3P-containing nanodiscs involves a slightly more extensive surface than seen in micelles, potentially reflecting attraction to a flat rather than convex bilayer.The dimers of FYVE domain-containing proteins employ a similar orientation for membrane docking, and this then provides avidity and opportunities for downstream membrane fusion events which are regulated by partner proteins.PX domains generally recognize PI3P in endocytic membranes but utilize a fold that differs from FYVE domains.Their ligand binding mode is apparent from the structure of the p40phox PX domain bound to a short chain PI3P molecule, as well as by hydrophobic insertion of a neighbouring loop into mixed micelles, as shown with the Vam7p PX domain.Phosphoinositide binding occurs within a pocket containing conserved Arg, Lys and Tyr residues.The Grd19p PX structure contains a bound PI3P molecule in the same pocket.A similar binding mode is found in the Snx9 PX domain, which can accommodate either PI3P or PIP2 through a larger pocket, with the attached BAR domain offering a broad concave basic surface that recognizes curved bilayers.A secondary site is separated from the first pocket that binds PIPs by the membrane insertion loop, and can simultaneously accommodate acidic phospholipids, as demonstrated for the p47phox PX domain.This multivalent interaction with proximal lipids boosts membrane avidity.The mixed micelle-inserted Vam7p PX domain displays extensive interactions with headgroups and tails of several PI3P and phosphocholine molecules, as resolved by NMR.The Snx3 protein utilizes a similar mechanism.It dips a membrane insertion loop nonspecifically into the bilayer, being guided by its electrostatic polarity that exposes its retromer-binding termini.It can then diffuse laterally in the membrane until a PI3P ligand is encountered and stereospecifically recognized, leading to deeper insertion as well as synergistic contacts with neighbouring lipid molecules.Its PX domain, as well as those of many other sorting nexins, also exhibit a so-called PIP stop.This regulatory feature consists of a conserved Ser or Thr residue at the rim of the PI3P binding site that becomes phosphorylated in order to release the protein from the membrane and into the cytosol.Solution NMR structures of lipid and mixed micelle-complexed states of Snx3 reveal how this switch operates to determine localization of the associated retromer complex, thus controlling receptor traffic in cells.The PH superfamily comprises 285 human proteins that contain 334 PH domains, about 61% of which associate with membranes based on Membrane Optimal 
Docking Area analysis.These interactions involve insertion by the first β hairpin loop and lipid docking to either one or both sides of this protruding loop via basic and hydrophobic residues.Lipid, micelle and bicelle binding by the PH domain of FAPP1 has been studied by biophysical, NMR and computational methods.This pinpoints the conserved aromatic and aliphatic sidechains that insert into the membrane and provide PIP specificity.A two pronged network of electrostatic interactions and hydrogen bonds bind a patch of lipid, as is reminiscent of FYVE and PX domains, which also are attracted to negatively charged lipid bilayers such as those that include PS.The FAPP1 PH domain and related FAPP2 protein both insert selectively into disordered PI4P-containing membranes, suggesting that the bilayer must be deformable to allow ready access to the PI4P headgroup.The dynamin protein binds to lipid bilayers via its PH domain and self-assembles through its activated guanine nucleotide-binding domain to form a membrane fission machine.The 3.75 Å resolution cEM structure of the protein assembly formed on PS-containing liposomes displays different interdomain orientations than the lipid-free state, with the PH and helical stalk domains positioned within membranes in order to mediate tubulation reactions, highlighting the complex interplay between lipid-protein and protein-protein interactions in memtein assembly and activity.A different mode of membrane binding via a pair of surfaces is exhibited by extracellular metalloproteases.These enzymes flip like pancakes on the surface of a cell using two separate membrane binding surfaces, thus localizing themselves by their sites of action.The MMP-12 catalytic domain prefers membranes with negatively charged lipids or unsaturated lipids like palmitoyloleoyl, forming an extensive array of hydrogen bonding, salt bridges and hydrophobic contacts with the lipid head groups.This ambidextrous binding process does not occlude the active site, but rather excludes binding of a protein inhibitor.The MMP-7 zymogen binds zwitterionic bicelles superficially via loops around the edges of its β sheet, leading to allosteric modulation of the active site.The presence of anionic lipids such as cholesterol 3-sulfate draws the protein deeper into the bilayer, altering its insertion angle and broadening its interface.By doing so the rocking of protein in the membrane becomes more restricted, while affording access to pericellular protein substrates.These studies show parallels between the membrane binding mechanisms of cytosolic and extracellular proteins, and suggest that common principles will emerge that could also provide insight into how transmembrane proteins behave.Over 525 in meso structures of proteins have been solved using LCP methods to date.These show how memteins could be modelled using crystal structures although physiological lipid ligands are typically displaced by, for example, added monoacylglycerol molecules.The 1.95 Å structure of the cobalamin transporter, BtuB, as solved by LCP methods, contains 11 monoolein molecules in a bilayer arrangement.No native E. 
coli lipids are evident due to extraction in OG detergent and the use of LDAO detergent during purification.One set of five lipids is arrayed where the outer leaflet would lie against the barrel, while another five are arrayed where the inner leaflet would lie.The headgroups are oriented toward the expected membrane-water interfaces of the protein, while the acyl chains interdigitate at their termini and align across the hydrophobic interface of the barrel perpendicular to the β-strands.Another lipid molecule lies near the mid-plane of the bilayer, suggesting an unnatural pose and illustrating an inherent risk of artifacts resulting from membrane protein crystallization.The crystal structure of bacteriorhodopsin reveals that native purple membrane lipids occupy the central region within its trimeric complex.Lipids are arranged within crevices in the hydrophobic surface around the trimer, indicative of their positions within a bilayer.The 1.8 Å resolution structure of the light-driven chloride pump halorhodopsin also shows bound monoolein molecules.Ten lipids collect on the periplasmic half of this helical protein in the trimer interface with spacings consistent with their packing density, which differs from that of the branched, phytanol-containing lipids of haloarchaea.Three palmitates are deep inside the cytoplasmic side of the trimer’s transport site, with their C10-C16 acyl tails lying within a channel lined by Ala, Ile, Pro, and Thr residues.Hence this demonstrates how endogenous lipids can be displaced and how active sites can become occupied by monoolein, thus compromising function.Undecaprenyl pyrophosphate phosphatase recycles the lipid carrier that is used to build the bacterial cell wall.Its crystal structure determined at a resolution of 2.0 Å shows all ten transmembrane α-helices that form an active site deep in the membrane.A pair of monoolein molecules are oriented at the cytoplasmic side of the dimer interface, potentially stabilizing it, while another monoolein sits in the substrate binding cleft and two more decorate the portal to the active site.These examples demonstrate how physiological lipids can be displaced by monoolein.This limits the insights that can be gathered about the mechanism of stereospecific ligand recognition.Amphipols are a set of amphipathic polymers that are based on a hydrophilic backbone with hydrophobic sidechains that are intended to associate with hydrophobic surfaces of membrane proteins, stabilizing them for solubilization in aqueous environments.However, they require that proteins go through a detergent phase, and issues regarding polymer aggregation and scaffold-function interference have been raised.Nonetheless, several structures have been resolved by cEM using amphipols and reveal positions of bound lipids.The human γ-secretase complex resolved to 3.4 Å in amphipols reveals the positions of several N-linked glycans and lipid molecules.One lipid is found at the interface between the catalytic subunit presenilin 1 and the scaffolding protein Aph-1, and another bridges the transmembrane elements of the nicastrin and Aph-1 subunits via acyl chain contacts with several hydrophobic residues and hydrogen bonding of its phosphate group via Arg and Gln residues.The Polycystic Kidney Disease 2 protein overexpressed in HEK 293 cells, extracted with DDM and stabilized by either amphipol A8-35 or MSP nanodiscs, yields similar cEM structures of the homotetrameric ion channel.An annulus of lipid bilayer is evident around the protein, and the positions of
several ordered lipids can be seen between the crevices at subunit interfaces, as well as structured N-acetylglucosamine groups attached to three asparagines.The amphipol-stabilized trimer retains 12 tightly bound lipids, while its structure in MSP nanodiscs could be resolved to 3 Å resolution, indicating the complementarity of the two solubilization methods.The first nanodisc system developed for biochemical applications employs membrane scaffold protein constructs of different sequences and lengths that are originally derived from apolipoprotein A-1 structures.A pair of largely helical MSPs encircle the lipid bilayer and are zippered together by a series of salt bridges and cation-π interactions.The resulting discs are stable particles in aqueous solution and possess diameters ranging from 6 to 17 nm.They can hold in the range of 65 POPC or 90 DPPC molecules in the case of 9.6 nm nanodiscs made by MSP1D1, or 85 POPCs or 115 DPPCs in the case of 10.5 nm diameter nanodiscs made by MSP1E1D1 polypeptides.The stability of MSP-based nanodiscs and the ability to attach affinity tags have led to their use for functional and structural analysis of many membrane proteins including oligomers or complexes between membrane proteins.The optimization of MSP systems for both solution NMR and solid state NMR applications is also benefiting other applications.Cryo-EM is increasingly used to structurally characterize membrane proteins in nanodiscs.Several cEM structures solved using nanodiscs reveal positions of individual lipids.For example, the E. coli SecYEG complex reconstituted into nanodiscs reveals phospholipid headgroups that appear to directly contact ribosomal protein L24 and rRNA helix H59, suggesting an integral role of lipids in ribosome function.High resolution insights into memtein function are given by the TRPV1 ion channel.The recombinant protein overexpressed in mammalian cells could be solubilized with DDM, which needed to be replaced with soybean polar lipids while being incorporated in nanodiscs.Both annular and regulatory lipids form ordered contacts with the TRPV1 as resolved in the ∼150 Å nanodiscs formed by MSP2N.The 2.9 Å resolution of TRPV1 tetramer reveals how Arg, Phe, Tyr and Trp sidechains from the channel and toxin molecules bind to PC lipid headgroups and acyl tails, which must reorient from bilayer poses to become engaged.Importantly, bridging contacts between PC molecules and the channel contact the toxin, which inserts almost halfway through the position of the outer leaflet of the bilayer.This shows that antagonists are directly recognized by both lipid and protein moieties in memteins, and underscores the integral role of lipids in protein function.A PIP headgroup engages a charged pocket inside the channel which is lined by Arg, Lys and Glu residues.This supports a key role of this signaling lipid as a competitive antagonist, negative allosteric modulator and positively acting co-factor that primes the channel for activation.Hence protein and lipid function are interdependent based on an increasing number of structures, and point to the crucial role that memteins play in biology.In the past decade, the field of membrane structural biology has grown tremendously through the emergence of SMA-related polymers and native nanodiscs to isolate sections of membrane, and the concurrent development of cEM, NMR, XRC and MS technologies to resolve their structures.Their confluence has led to this proposal that much of biology is fundamentally driven by the specific activities of 
proteins in contact with a layer of biological lipid molecules, i.e. memteins.A wealth of accumulated data from many research groups and methods has been considered here, and is distilled into a single new word.Without such a word it is difficult to discuss, focus on and refine this concept.It is hoped that giving complexes of a continuous layer of biological lipids with proteins an intuitive label could spark further interest and studies that allow the research community to more fully explore how many more memteins work at atomic resolution across a wide spectrum of timescales and cellular contexts.New approaches for exploring the lipidomics and proteomics of membrane compartments could expose the intricacies of biological machines such as those responsible for fibril and cytoskeletal assembly, cell adhesion and organelle biogenesis.As the tools become adapted for high throughput screening of native nanodiscs, novel ligands including lead molecules and lipid modulators may emerge.As much of drug discovery and pharmacology depends on an accurate understanding of proteins situated inside membranes, there is the possibility of entirely new classes of therapeutic agents arising from the discovery of memtein targets.There is much room for further improvement, given that the physical mechanisms of how the polymers bind, concentrate and disperse membranes into consistently sized discs with a layer of lipid surrounding a protein remain largely unknown.The chemical optimization of SMA is increasingly guided by the knowledge gained from synthesis and testing of an ever-growing family of polymers.This is starting to reveal what appear to be optimal properties such as polymer sizes, residue charge and hydrophobicity, steric restraints, blockiness and polydispersity.However, the universe of synthetic polymers is larger than that of proteins, and hence we have probably not yet converged on the best system, nor is there likely to be a single best solution for the array of possible biological systems, target types and assays.Already the dimensions of reported SMALPs span 5–100 nm with shapes varying from circular to ellipsoid to irregular, depending on the polymer used, conditions and cargo.How these nanodiscs could be deployed as protein- and lipid-containing sensors, transducers and delivery vehicles to harness various biological machines remains largely unexplored.Designing and making new polymers and tags is technically demanding, and scaling production of standardized polymers for reproducible results requires having industry on board.Hence an open innovation approach that involves academic researchers, pharmaceutical companies, and manufacturers of novel chemical products, analytical equipment and assay kits is needed.The SMALP Network is a grassroots community that has attempted to bring these parties together, has fertilized the field with new ideas, collaborations and polymers, and is intended to continue to foster support of the field as it enters a new decade and becomes increasingly mainstream.M.O. and M.E. wrote the manuscript and prepared the figures.
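The per-leaflet lipid capacities quoted above for MSP1D1 and MSP1E1D1 nanodiscs follow from simple disc geometry. The sketch below is only a back-of-envelope check: the ~1 nm MSP belt width and the areas per lipid (~0.68 nm² for fluid-phase POPC, ~0.50 nm² for gel-phase DPPC) are assumed values drawn from common literature estimates, not from the text above.

```python
import math

def lipids_per_leaflet(disc_diameter_nm, area_per_lipid_nm2, belt_width_nm=1.0):
    """Estimate lipids in one leaflet of an MSP nanodisc from its outer diameter.

    The bilayer radius is taken as the disc radius minus the width of the
    encircling MSP belt; each leaflet then holds pi*r^2 / area_per_lipid.
    """
    bilayer_radius = disc_diameter_nm / 2.0 - belt_width_nm
    return math.pi * bilayer_radius ** 2 / area_per_lipid_nm2

# Assumed areas per lipid (nm^2): fluid-phase POPC ~0.68, gel-phase DPPC ~0.50
for name, diameter in [("MSP1D1", 9.6), ("MSP1E1D1", 10.5)]:
    popc = lipids_per_leaflet(diameter, 0.68)
    dppc = lipids_per_leaflet(diameter, 0.50)
    print(f"{name} ({diameter} nm): ~{popc:.0f} POPC or ~{dppc:.0f} DPPC per leaflet")
```

With these assumptions the estimates land within a few lipids of the counts cited above, consistent with reading the quoted POPC/DPPC numbers as per-leaflet values.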
The concept of a memtein as the minimal unit of membrane function is proposed here, and refers to the complex of a membrane protein together with a continuous layer of biological lipid molecules. The elucidation of the atomic resolution structures and specific interactions within memteins remains technically challenging. Nonetheless, we argue that these entities are critical endpoints for the postgenomic era, being essential units of cellular function that mediate signal transduction and trafficking. Their biological mechanisms and molecular compositions can be resolved using native nanodiscs formed by poly(styrene-co-maleic acid) (SMA) copolymers. These amphipathic polymers rapidly and spontaneously fragment membranes into water-soluble discs holding a section of bilayer. This allows structures of complexes found in vivo to be prepared without resorting to synthetic detergents or artificial lipids. The ex situ structures of memteins can be resolved by methods including cryo-electron microscopy (cEM), X-ray crystallography (XRC), NMR spectroscopy and mass spectrometry (MS). Progress in the field demonstrates that memteins are better representations of how biology actually works in membranes than naked proteins devoid of lipid, spurring on further advances in polymer chemistry to resolve their details.
77
SOX7 expression is critically required in FLK1-expressing cells for vasculogenesis and angiogenesis during mouse embryonic development
The development of the vascular system involves a complex array of processes necessary to regulate the dynamic nature of the emerging vascular network.During development, the first blood vessels form in the extra-embryonic yolk sac via vasculogenesis, which initiates following the formation of blood islands from mesodermal progenitors.Cells on the inside of the blood islands differentiate into blood cells, whereas cells on the outside differentiate into endothelial precursor cells, which migrate and associate to form a primitive vascular plexus.In the embryo proper, EPCs migrate to form endothelial chords that differentiate into the major arteries and veins.The primitive extra and intra-embryonic vascular network subsequently undergoes angiogenesis involving the remodelling and expansion of blood vessels resulting in the formation of a hierarchically organized vascular network.The transcriptional network regulating the identity and behaviour of EPCs involved in vascular development is extremely complex and remains poorly understood.The Sox family of genes encodes a group of transcription factors that all share a high mobility group DNA binding domain and recognise the AACAAT consensus sequence.The SOX F subgroup contains SOX7, SOX17 and SOX18, and a growing body of evidence indicates that they have important roles in cardiovascular development.However, SOX17 has pleiotropic functions and regulates a variety of processes including: definitive endoderm specification, fetal hematopoietic stem cell proliferation, oligodendrocyte development and arterial specification during cardiovascular development.The role of SOX18 appears to be more restricted with deficiency in this factor leading to specific defects in lymphangiogenesis.In contrast, the role and function of SOX7 is still poorly defined.SOX7 is expressed in primitive endoderm and in endothelial cells at various stages of vascular development.These include the mesodermal masses that give rise to blood islands in gastrulating embryos, and the vascular endothelial cells of the dorsal aorta, intersomitic vessels and cardinal veins in more developed embryos.Gross morphological examination of Sox7−/− mouse embryos suggests potential vascular defects; more recently, it was shown that the conditional deletion of Sox7 in Tie2 expressing endothelial cells results in branching and sprouting angiogenic defects at E10.5.Despite these recent advances, a comprehensive analysis of the developing vascular network encompassing both vasculogenic and angiogenic processes in SOX7 deficient embryos has not yet been undertaken.Here, we performed a detailed analysis of the vascular defects resulting from either a complete deficiency in Sox7 expression or from the conditional deletion of Sox7 in FLK1-expressing cells.Embryonic stem cells were cultured and differentiated as previously described.Embryoid bodies were routinely maintained up to day 3, and FLK1+ cells were isolated and cultured as previously described.Targeted Sox7 ESC clone B9 was injected into mouse blastocysts.Resultant chimaeras were crossed with C57BL/6 mice.Subsequent generations were crossed with PGK-Cre mice to excise the neomycin cassette and exon 2 of the Sox7 gene, resulting in the generation of a LacZ-tagged null allele.Alternatively to generate the Sox7-floxed allele, mice were crossed with an actin-FLP transgenic line resulting in the excision of both IRES-LacZ and neomycin cassettes that are flanked by FRT site.After eight backcrosses on C57BL/6, mice were either inter-crossed to 
generate Sox7fl/fl or crossed with Flk1-cre or Vav-Cre transgenic lines to excise in a tissue-specific manner the exon 2 of Sox7 that is flanked by LoxP sites.Timed matings were set up between: heterozygous male and female Sox7LacZ/WT mice, heterozygous Sox7fl/wt Flk1-Cre male and Sox7fl/fl or Sox7fl/− female mice.The morning of vaginal plug detection was embryonic day 0.5.All animal work was performed under regulation governed by the Home Office Legislation under the Animal Scientific Procedures Act 1986.Total RNA was isolated using the RNeasy Mini/Micro Plus Kit, 2 μg of which was used to generate cDNA using the Omniscript reverse transcriptase kit, according to the manufacturer's instructions.Real-time PCR was performed on an ABI 7900 system using the Exiqon universal probe library.Gene expression was calculated relative to β-actin using the ΔΔCt method.Embryos were stained using a rat anti-mouse CD31 antibody and a goat anti-rat AF555 secondary antibody as previously described.Z-stack images were taken using a two-photon confocal microscope with a 5× objective.E10.5 embryo sections were stained as previously described using a goat anti-SOX7 antibody and a donkey anti-goat AF647 antibody.Subsequently, embryos were stained with a rat anti-cKit antibody, and a rabbit anti-pan-RUNX antibody before staining with a goat anti-rat AF488 and a goat anti-rabbit antibody.Yolk sacs were isolated and flat mounted with DAPI as previously described before imaging.Specific labelling of primary antibodies was determined by comparison with controls stained without primary antibody.Cells were disaggregated by trypsinisation, and incubated with combinations of conjugated monoclonal antibodies on ice.Analyses were performed on a BD LSRII.Data were analysed with FlowJo, gating first on the forward scatter versus side scatter to exclude non-viable cells.Sample sizes were chosen based on previous experimental experience.Student's t-test was used to assess the differences between two populations in embryo experiments.* P-value < 0.05, **P-value < 0.01, *** P-value < 0.001.
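The relative quantification described above (normalisation to β-actin by the ΔΔCt method, with Student's t-test for two-group comparisons and the stated significance thresholds) can be written out explicitly. The snippet below is a minimal, hypothetical sketch: the Ct values, group sizes and gene are invented for illustration and are not data from this study.

```python
import numpy as np
from scipy import stats

def ddct_fold_change(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^-ddCt method.

    dCt  = Ct(target) - Ct(reference gene, e.g. beta-actin)
    ddCt = dCt(sample) - mean dCt(control group)
    fold change = 2 ** (-ddCt)
    """
    dct_sample = np.asarray(ct_target, dtype=float) - np.asarray(ct_ref, dtype=float)
    dct_control = np.mean(np.asarray(ct_target_ctrl, dtype=float) - np.asarray(ct_ref_ctrl, dtype=float))
    return 2.0 ** -(dct_sample - dct_control)

# Hypothetical Ct values (target gene vs beta-actin) in knockout and wild-type embryos
fc_ko = ddct_fold_change([26.1, 25.8, 26.4], [17.0, 16.9, 17.2],
                         [24.9, 25.1, 25.0], [17.1, 17.0, 16.9])
fc_wt = ddct_fold_change([24.9, 25.1, 25.0], [17.1, 17.0, 16.9],
                         [24.9, 25.1, 25.0], [17.1, 17.0, 16.9])

t_stat, p = stats.ttest_ind(fc_ko, fc_wt)  # Student's t-test between the two populations
stars = "***" if p < 0.001 else "**" if p < 0.01 else "*" if p < 0.05 else "ns"
print(f"mean fold change (KO vs WT): {np.mean(fc_ko):.2f} vs {np.mean(fc_wt):.2f}, P = {p:.3g} {stars}")
```

The same 2^−ΔΔCt transformation underlies the transcript-level comparisons reported below, for example for Sox17 and Sox18 in Sox7-deficient versus wild-type embryos.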
To define at the cellular level the expression of SOX7 during the earliest step of cardiovascular specification from mesoderm, we used an embryonic stem cell line carrying a BAC transgene with the first exon of Sox7 replaced by a Gfp reporter cDNA.These ESCs were differentiated in vitro to mesoderm via embryoid body formation.This differentiation process led to the generation of a FLK1+ mesoderm population that was isolated and further differentiated to a TIE2+ cKIT+ cell population containing both hemogenic endothelial and EPCs as previously described.FLK1+ cells sorted from Sox7-GFP EBs were cultured as a monolayer and analysed after 2 days of culture.The SOX7-GFP+ fraction was strongly enriched for the expression of the endothelial markers TIE2, VE-Cadherin and CD31, and to a lesser extent, for c-KIT when compared to the SOX7-GFP− fraction.Furthermore, the SOX7-GFP+ fraction had significantly higher transcript levels of Flk1, Gata2, and Scl genes, while there were also higher transcript levels of Fli1 and Cdh5.Collectively, these data indicate that in vitro, SOX7 is expressed in a very large fraction of EPCs at the onset of endothelial specification from mesodermal precursors.To investigate the pattern of SOX7 expression during in vivo development, mouse embryos heterozygous for a Sox7-LacZ null allele were generated.Whole mount S-gal staining of E7.5 embryos revealed the widespread presence of SOX7::LACZ-expressing cells in the yolk sac region of the developing conceptus in agreement with previously published data.Further S-gal staining of E7.5 embryo sections confirmed the expression of SOX7 in the blood islands, allantois and primitive endoderm.In E10.5 embryos, S-gal staining highlighted SOX7::LACZ-expressing cells in the endothelium lining of the dorsal aorta.Additionally, immunostaining revealed that CD31+ c-KIT+ hematopoietic clusters expressed SOX7 whereas few hematopoietic cells within the aortal lumen also expressed SOX7.These data confirm that SOX7 is expressed in the blood islands during the emergence of the first EPCs, as well as at later stages in endothelial cells during vascular development.It is interesting to note that SOX7 also appears to be expressed in hemogenic endothelium of the dorsal aorta since emerging clusters of blood cells do express SOX7.Given the very early onset of Sox7 expression during the specification of the cardiovascular system, it is important to define how early during development this transcription factor is required for vasculogenesis and angiogenesis.In order to elucidate the role of Sox7 during embryonic development, we first generated complete Sox7 knockout embryos on a homogenous genetic background by backcrossing Sox7lacZ/+ mice on C57Bl/6 and then inter-crossing these transgenic mice.The LacZ cassette was inserted at the beginning of exon 2 and therefore fully disrupts the expression of Sox7.Complete deficiency in Sox7 on this homogenous background led to a fully penetrant embryonic lethality phenotype by E10.5 characterised by severe growth retardation as well as an absence of large blood vessels in the yolk sac as previously observed.To understand how these defects occurred, we investigated the formation of the vascular system prior to E10.5, a developmental time point at which Sox7 deficiency resulted in lethality in all embryos examined.Whole mount PECAM1 staining at E7.5 revealed that Sox7 deficiency did not affect the overall generation of PECAM1+ primordial EPCs.However, one day later by E8.5, Sox7−/− embryos already displayed notable defects in the developing vascular networks which are formed by vasculogenesis.The development of the anterior region of the paired dorsal aorta was relatively unaffected, but the posterior region displayed areas of highly unorganised endothelial cords rather than a distinct paired dorsal aorta.In addition, the posterior regions of the dorsal aorta were not lumenized in Sox7−/− embryos.Finally, whilst a primitive vascular plexus formed within the yolk sac of Sox7−/− embryos, the vascular network appeared disorganized compared to that of the control embryos.
By E10.5, whole mount PECAM1 staining revealed that Sox7 deficiency led to extremely severe vascular defects both in the embryo proper and in the yolk sac.In Sox7−/− embryos there was an absence of a definitive dorsal aorta in the posterior region of the embryo, indicating that the dorsal aorta did not recover from the initial vasculogenic defects observed at E8.5.Furthermore, the highly unorganised nature of the vascular networks indicates considerable angiogenic remodelling defects resulting from Sox7 deficiency.At E10.5, the yolk sac vasculature of Sox7−/− embryos was arrested at the primitive vascular plexus stage, with a complete absence of vascular remodelling.Together these findings demonstrate that SOX7 is critically required for both vasculogenesis and vascular remodelling during angiogenesis.A detailed study of sprouting defects in the retina upon Cdh5-CreER-induced deletion of Sox7 has recently been published by Kim et al., suggesting that SOX7 is important for both remodelling and sprouting during angiogenesis.Unlike other intra-embryonic vessels and arteries, the nascent dorsal aortae originate from paired lateral cords that are formed by the migration and aggregation of angioblasts.The fact that the posterior part of the dorsal aorta is affected in Sox7 deficient embryos strongly suggests an early defect in cord assembly at a vasculogenesis stage.It has been well characterised that there is redundancy and compensation between SOXF family members in controlling vascular development.The conditional knockout of SOX7 in TIE2-expressing endothelial cells using mice from a mixed genetic background resulted in relatively minor vascular defects such as a decrease in the diameter of the dorsal aorta.These relatively minor defects reported by Zhou and collaborators are most likely due to compensation by SOX17 and SOX18 rather than truly a result of Sox7 deletion in TIE2-expressing cells.Indeed, it was shown that in Sox18−/− mice of a mixed genetic background SOX7 and SOX17 were upregulated and substituted for SOX18.Given this known redundancy and compensation among the three SOXF genes, we examined transcript levels for Sox17 and Sox18 in Sox7−/− embryos relative to Sox7+/+ embryos.This analysis revealed an increase in the expression of both Sox17 and Sox18, suggesting possible compensation of SOX7 deficiency by SOX17 and SOX18.However, even with the compensatory activity by SOX17 and SOX18, Sox7 deficiency resulted in massive vasculogenic and angiogenic defects.Together these data support a critical and unique role for SOX7 in the development of the vascular system.In addition to its expression in endothelial progenitors, Sox7 has been previously detected in primitive endoderm, in the earliest specified hematopoietic progenitors and in emerging hematopoietic clusters in the dorsal aorta.To define a possible role and
requirement for Sox7 in specific compartments, we generated a Sox7 conditional allele in which the exon 2 of Sox7 is flanked by LoxP sequences and can be excised upon CRE expression.First, we analysed the requirement for Sox7 expression in the vascular compartment using a Flk1-cre transgenic mouse line.The conditional deletion of Sox7 in FLK1-expressing cells resulted in early embryonic lethality and a phenotype very similar to that of the complete Sox7 knockout embryos.At E8.5, whole mount PECAM1 staining revealed that Flk1-Cre Sox7fl/fl embryos already displayed noticeable defects in the developing vascular networks including a disorganized yolk sac vascular plexus and areas of highly unorganised endothelial cords in the posterior region of the dorsal aorta.By E10.5, the highly unorganised nature of the vascular networks was indicative of considerable angiogenic defects resulting from the deletion of Sox7 in FLK1-expressing cells, in agreement with the recently published phenotype of Tie2-specific Sox7 knockout embryos generated on a homogenous genetic background, unlike the study by Zhou and collaborators.In contrast to Tie2-specific Sox7 deletion, the endothelial Flk1-specific deletion of Sox7 revealed marked defects in the formation of the major blood vessels in the embryo indicative of vasculogenic defects.In Flk1-Cre Sox7fl/fl embryos, there was a lack of an observable vitelline artery in the embryo proper.Furthermore, the functional dorsal aorta in Flk1-Cre Sox7fl/fl embryos was extremely short, with the posterior region of the dorsal aorta resembling a cord of endothelial cells, suggesting major defects in the vasculogenic events leading to the formation and organization of the angioblast cords giving rise to the posterior region of the dorsal aorta.The lack of a vitelline artery is most likely a direct consequence of the absence of the posterior dorsal aorta, as the vitelline artery arises from the dorsal aorta.It is likely that the differences between the Sox7 Tie2-deleted and Flk1-deleted embryonic phenotypes result from the earlier expression of FLK1 during development and therefore the earlier deletion of Sox7 in Flk1-cre than in Tie2-cre embryos.Unlike Tie2, Flk1 expression is detected in mesoderm and mesenchyme; it is therefore possible that Sox7 deletion in these tissues contributes to the stronger phenotype observed.These findings demonstrate that the expression of SOX7 is required earlier than previously described and that SOX7 is an important transcriptional regulator of vasculogenesis.To further analyse the vascular defects in Flk1-Cre Sox7fl/fl embryos, we performed PECAM1 staining on sections of E9.5 embryos.The complete disorganisation of the vascular network made comparisons of specific blood vessels with control embryos impossible.However, there was clear evidence of lumenization defects in a major blood vessel, as a mass of disorganized endothelial cells was observed that in the subsequent section formed a blood vessel with a lumen.At E10.5 the yolk sac vasculature of Flk1-Cre Sox7fl/fl embryos was arrested at a primitive vascular plexus stage, with a complete absence of vascular remodelling.In control yolk sacs, venous and arterial areas were easily identified along with the vitelline vein and vitelline artery; in contrast the yolk sac of Flk1-Cre Sox7fl/fl embryos only contained a homogenous plexus of vessels with relatively large diameters as shown by blood vessel diameter measurement.Furthermore, measurement of the space between capillaries identified that
Flk1-Cre Sox7fl/fl plexuses have a decreased avascular space compared to capillaries of control yolk sacs.This is in contrast to embryos with hemodynamic flow deficiencies which show larger avascular space between non-remodelled yolk sac plexus blood vessels when compared to control embryos.Together, this suggests that the phenotype of the Flk1-Cre Sox7fl/fl embryos is largely endothelial based and not due to cardiac defects causing decreased blood flow.Taken together, these data demonstrate that SOX7 is critically required in FLK1-expressing cells for both vasculogenesis and angiogenesis.In particular, SOX7 seems to be critical for the formation of a fully lumenized dorsal aorta, suggestive of an incomplete circulatory loop: a phenotype similar to that of SOX7−/− zebrafish.However, unlike the zebrafish model, SOX7 deficiency in the mouse embryo also results in angiogenesis defects as demonstrated by the absence of remodelling and unorganised patterning of the entire vascular network.The conditional deletion of Sox17 in TIE2-expressing cells also resulted in 100% embryonic lethality by E12.5 with major alterations in vascular remodelling, in the development of large arteries and in sprouting angiogenesis.Sox17 was also implicated in the acquisition and maintenance of arterial identity, while this role was shown to be performed by Sox7 in Zebrafish.It should be noted however that in Zebrafish Sox17 lacks the β-catenin binding domain which might explain these differences between mouse and zebrafish models.Together, these data suggested that both Sox7 and Sox17 are essential for the development of the vascular system and that they cannot compensate for each other, at least on homogeneous genetic background, and that they have slightly different roles in vasculogenesis and arterial specification, in line with their distinctive pattern of expression.Our findings also establish that while SOX7 is expressed in primitive endoderm, SOX7 is dispensable for the formation, maintenance or function of this lineage.Indeed, we observed an identical phenotype in Sox7 complete knockout embryos in which Sox7 is deleted in primitive endoderm and FLK1-specific Sox7 deficient embryos in which Sox7 is not deleted in primitive endoderm.Together this suggests that SOX7 deficiency in primitive endoderm in the complete knockout does not affect the early steps of embryonic development in which primitive endoderm plays a critical role in body plan formation and tissue induction.It is possible however that in Sox7 complete knockout embryos SOX17 compensates for SOX7 loss in primitive endoderm since SOX17 is also expressed in this lineage.The generation of Sox7 and Sox17 double knockout embryos would be required to address the specific role of SOXF factors in primitive endoderm formation and function.Finally, we analysed the consequence of the specific deletion of Sox7 in the hematopoietic compartment given the observed expression of SOX7 at all sites of hematopoietic emergence during embryogenesis.Indeed, this was further confirmed by the co-expression of SOX7 with RUNX and cKIT, both marking emerging blood cells in the ventral aspect of the dorsal aorta in wild type embryos at E10.5.To determine a possible role of SOX7 in hematopoiesis, Sox7fl/fl mice were crossed with Vav-Cre transgenic mice, which resulted in Sox7 deficiency in all definitive hematopoietic cells.Interestingly, Vav-Cre Sox7fl/fl pups were viable and lived to adulthood without any phenotypic abnormalities or observable defects in the hematopoietic 
system.These data demonstrate that while Sox7 is expressed in the earliest blood progenitors, this transcription factor is not required for definitive hematopoiesis which encompasses all blood cells generated from E8.5 onward, including hematopoietic stem cells.However, it remains possible that the two other SOXF factors are providing enough compensation to allow for the emergence of blood cells in developing embryos deficient for Sox7 expression in Vav-expressing cells.The generation of triple SoxF conditional embryos would be required to address this possibility.The development of new vascular networks via vasculogenesis and angiogenesis is an important factor in the pathophysiology of all solid tumours.Neoplastic vascularisation facilitates the proliferation and subsequent metastasis of tumour cells, making angiogenic processes attractive targets in combating cancer.The role of SOX7 in promoting tumour progression and angiogenesis is poorly understood.Recent data suggest that SOX17 is an important regulator of tumour angiogenesis.Together, these findings warrant further investigation into whether SOXF factors act redundantly or compensate for each other to promote tumour angiogenesis, which may offer novel therapeutic targets for the treatment of cancer.The authors declare no competing financial interests.The following are the supplementary data related to this article.Sox7−/− embryos have defects in the dorsal aorta at E8.5.Whole mount PECAM1 staining of E8.5 Sox7+/− and Sox7−/− embryos.Arrows indicate posterior section of dorsal aorta.Supplementary data to this article can be found online at http://dx.doi.org/10.1016/j.mod.2017.05.004.
The transcriptional program that regulates the differentiation of endothelial precursor cells into a highly organized vascular network is still poorly understood. Here we explore the role of SOX7 during this process, performing a detailed analysis of the vascular defects resulting from either a complete deficiency in Sox7 expression or from the conditional deletion of Sox7 in FLK1-expressing cells. We analysed the consequence of Sox7 deficiency from E7.5 onward to determine from which stage of development the effect of Sox7 deficiency can be observed. We show that while Sox7 is expressed at the onset of endothelial specification from mesoderm, Sox7 deficiency does not impact the emergence of the first endothelial progenitors. However, by E8.5, clear signs of defective vascular development are already observed with the presence of highly unorganised endothelial cords rather than distinct paired dorsal aorta. By E10.5, both Sox7 complete knockout and FLK1-specific deletion of Sox7 lead to widespread vascular defects. In contrast, while SOX7 is expressed in the earliest specified blood progenitors, the VAV-specific deletion of Sox7 does not affect the hematopoietic system. Together, our data reveal the unique role of SOX7 in vasculogenesis and angiogenesis during embryonic development.
78
Introducing nature-based solutions into urban policy – facts and gaps. Case study of Poznań
Cities, as places of population concentration and high economic potential, with diverse natural conditions, face many challenges connected with the simultaneous provision of a high quality of life for the inhabitants and the resilience of the city to human pressure and extreme events, including changing climatic conditions.The key role in solving these problems is played by the local authorities.From the point of view of city management, the concept of Nature-based Solutions seems to be particularly interesting, giving multidimensional benefits.NbS are recognised as a transdisciplinary umbrella concept that builds upon pre-existing concepts, such as blue-green infrastructure, natural capital, ecosystem services, and landscape functions in environmental planning.Eggermont et al. point out that even though NbS are often referred to as innovative, they also encompass existing ideas that build on lessons from the past.Against this background Nature-based Solutions are defined as “actions which are inspired by, supported by or copied from nature”.They involve “the innovative application of knowledge about nature, inspired and supported by nature, and they maintain and enhance natural capital”, and “address societal challenges effectively and adaptively, simultaneously providing human well-being and biodiversity benefits”.Thus NbS are treated as a solution to contemporary societal challenges that meets environmental, social and economic objectives of sustainable development, similarly to other pre-existing approaches.The NbS concept has emerged as an operationalisation of the ecosystem services concept within spatial planning policies and practices.Since NbS are based, in large part, on natural areas and features in and around cities, the presence of green infrastructure, which is a strategically planned network of natural and semi-natural areas with other environmental features designed and managed to deliver a wide range of ecosystem services, is fundamental.As NbS may result from increased provisioning and improved availability of urban green spaces, development of GI elements is a key issue.In this respect, GI can be recognized as a physical structure, whereas NbS are activities aimed at the conscious, goal-oriented development and/or use of GI potential to solve urban problems.They rely on the application of knowledge about environmental processes, ecosystem services and benefits provided by GI.In this context, if GI planning, design and management are intentionally oriented to tackle environmental, social and economic challenges, they can be recognized as NbS.In summary, the term GI describes the spatial structure of ecosystems, whereas NbS are seen as human actions oriented towards GI functions.However, it has to be borne in mind that the flow of ecosystem services that benefits people often requires human investment.Therefore it is crucial to consider what should be understood as “nature”, especially in the urban context.A mixture of green and man-made infrastructure of different proportions and configurations can sometimes make the distinction between green and grey solutions ambiguous.As broad framings can lead to uncertainty and a lack of clarity in the common understanding of NbS and its relation to pre-existing concepts, in each case it needs to be explicitly explained what is meant by NbS.With the ambition of positioning Europe as a world leader in responsible innovation in Nature-based Solutions, the EU’s research & innovation policy introduces the concept of NbS to the family of approaches
building on the usefulness of ecosystems to humans.The EU supports and encourages local authorities to implement an integrated approach to the urban economy through, among others, the Thematic Strategy on the Urban Environment.At the same time, it indicates solutions based on green infrastructure that, having diverse functions, provide a bundle of ecosystem services, which is particularly beneficial in the urban environment.In order to promote good practices, the EU facilitates their dissemination and supports cooperation and exchange of experiences between cities at various levels.Oriented towards transition in cities, projects focusing on NbS, for example CONNECTING Nature, are executed within the framework of Horizon 2020.In urban environment management, an integrated strategic approach is required, based on long-term and medium-term plans of activities, including connections between different policies.Cities take various actions to improve living conditions; however, they often do not make use of the potential of the natural environment and the resulting benefits, the source of which is problem- or opportunity-oriented GI.We are convinced that the multiplication of NbS largely depends not only on the understanding of their benefits and cost-effectiveness by policymakers but also on the position of NbS in urban policy.Against this background, to support cities in their transition process towards more sustainable development through NbS, we provide a procedure to identify existing and potential NbS inclusion in the strategic, planning and programming documents developed for cities.The objectives of the paper are: (1) diagnosis of the position of NbS in the tasks and directions of planning, strategic and programming documents; (2) characterisation of activities related to NbS according to the form of human-nature interaction; (3) determining the potential of including NbS in the local policy; and (4) identifying the role of NbS in facing 4 main challenges in urban policy: resilience and climate change adaptation, health and well-being, social cohesion, and economic development potential.In this paper, we intend to bridge the gap in knowledge of the extent to which planning, design and management of GI are intentionally oriented to face urban challenges such as resilience and climate change adaptation, human health and well-being, social cohesion as well as green economic growth.To recognise how far actions recognized as NbS are currently present in the directions of development and tasks, and what the potential is to include them, we conducted a review of planning, strategic and programming documents of Poznań City as a case study.Poznań is an interesting city for study, as it is in the transition process towards sustainable innovations through participation in the CONNECTING Nature Project under HORIZON 2020, in which it leads actions on NbS for being ‘Smart and Sustainable’.Poznań is the fifth largest city in Poland with a population of over 540 000 inhabitants and an area of 262 km2.Simultaneously it is one of the most important economic centres and most urbanised areas in Poland.In addition, it has a well-established green infrastructure and faces contemporary urban challenges such as suburbanization and urban pressure on GI on the one hand and a depopulation process on the other.These challenges are reflected in the city’s vision and directions of development.The strategic goal of "improving the quality of life for all residents" is included in urban strategies, plans and programs.Although Poznań is a city rich in
green infrastructure, with the total share of 58,5% of green and blue areas in land use structure, their distribution is spatially diversified across the city.The characteristic system of green wedges and rings developed in the 1930s in order to provide proper ventilation of the city and protect surface waters stands out on the urban fabric.The main green wedges extend from the suburban areas to the city centre along the Warta river valley and its tributaries providing spatial continuity, internal diversity and connection with the surrounding ecosystems.The concentric green rings formed by the ramparts of the city and the Prussian fortifications are less visible in the urban structure.On the other hand, densely populated areas such as the historical center of the city, districts of the dense quarter buildings and multi-family block buildings have the smallest share of green areas.Considering these conditions, Poznań faces an opportunity to use and develop existing green infrastructure as a solution for improving urban living conditions and facing related demographic challenges, while supporting urban resilience.We focused on planning, strategic and programming documents referring to various aspects of the environment management, which are particularly relevant for practical implementation of the Nature-based Solutions.Such documents define the policy of development of the city through the identification of key challenges, problems and needs of its inhabitants.They also specify the goals and directions of the activities aiming at the improvement of the state of the environment and quality of life in the city.Ten interrelated urban documents concerning urban policy of Poznań were reviewed for the identification of NbS-related content.From the analysed set of documents, development strategy is one of the most important documents that comprehensively programs a city’s development.Together with a study of conditions and directions of spatial development and a long-term financial forecast, they represent document of strategic character, which ensure implementation of development policy.Simultaneously goals of strategies are transposed to the programming documents, which are more sectoral in nature and are linked to specific and narrow issues of development policy.Programming documents like plans and programs have an executive character.In order to achieve defined goals, we developed an operational definition of Nature-based Solutions and criteria for identification of NbS-related contents in the urban policy documents.We assumed that the Nature-based Solutions are the conscious activities that increase the ecosystem services capacity of green infrastructure that contributes to solving urban problems.They include creation of new GI elements, strengthening its quality and/or multifunctionality and supporting its usage in diverse ways.NbS-related contents were identified in the text of urban policies and grouped taking into account types of human actions within GI and potential to the inclusion of GI in solving urban problems.The procedure for grouping the activities related to NbS and those with potential for introducing NbS in local policy documents consisted of a few steps.We treated the directions of development and tasks in the document as being related to NbS when the contents of the text clearly showed that they are based on a green infrastructure.The main criterion in grouping the activities related to NbS was the qualitative character of the human-nature interaction, which could be 
identified as a physical change or activities that do not lead to physical changes within GI element.We distinguished activities, in which new GI elements were physically created and, as a result, the area of GI was increased.Such solutions include, for example, creating new parks, tree planting or soil unsealing.Another NbS subgroup includes the activities that lead to physical changes in existing GI elements in order to make them more multifunctional or to improve their quality.As a result, qualitative changes within GI occur, for example, creating rain gardens and absorptive hollows extending water cycle.Within this subgroup, the surface of GI may be slightly decreased, for example, through building bicycle and pedestrian paths in the green areas.The last subgroup covers the activities that increase the usage of GI without physical changes.An example of such activity can be organising or promoting leisure in the green areas.The directions of the development and tasks that have the potential to use NbS were analysed as a separate group.Into group B we included provisions, which do not indicate the use of GI for solving urban problems, but on the basis of well known expert-based knowledge can be realised innovatively as NbS.Yet, these contents cannot be treated directly as NbS.An example of such a task is thermo-modernisation of buildings, which typically use artificial materials.However, it has the potential to include the green roofs and vertical greenery systems at least as a complement to artificial insulation.Reymond et al. found that NbS can have environmental, social and economic co-benefits and/or costs within and across 10 societal challenges including: climate mitigation and adaptation, water management, coastal resilience, green space management, air quality, urban regeneration, participatory planning and governance, social justice and social cohesion, public health and well-being, economic opportunities and green jobs.In this work, all NbS-related activities were assessed taking into account their influence on selected urban challenges.We focused on recognising their direct influence on 4 key urban challenges: resilience and climate change adaptation, health and well-being, social cohesion, economic development potential.We identified the direct influence of NbS, only when it was explicitly expressed in the text of the document.In this study, resilience is treated as the capacity of a system to absorb disturbance and reorganise while undergoing change so as to retain essentially the same function, structure, identity, and feedbacks.Climate change adaptation is understood as an adjustment of ecological, social or economic systems in response to observed or expected changes in climatic stimuli and their effects and impacts in order to alleviate adverse impacts of change or take advantage of new opportunities.In relation to human well-being, the authors consider it as a context- and situation-dependent state, comprising basic material for a good life, freedom and choice, health and bodily well-being, good social relations, security, peace of mind, and spiritual experience, while health, is defined in accordance with WHO as a state of complete physical, mental and social well-being and not merely the absence of disease or infirmity.The third area of influence relates to social cohesion that creates a sense of belonging, promotes trust and offers its members the opportunity for upward social mobility.It also fights with exclusion and marginalisation.Finally, as an economic 
development potential, we considered the impact of implementing a given NbS in terms of its possibilities to strengthen the economic development of the city. As Maes and Jacobs noticed, NbS have a high potential to generate considerable socioeconomic benefits in a resource-efficient way, supporting the real-life transition towards a green and sustainable economy. The identification of NbS-related contents was based on document analysis by skimming, reading and interpretation. A popular variant of this method is content analysis. Four reviewers conducted the review using the previously described criteria and a spreadsheet in which the primary results were entered. The database contained the following attributes: name of the analysed document, NbS-related content (original and aggregated record), NbS-related content group, NbS-related content influence on the 4 selected urban challenges, the coder's ID and potential comments. The data collected in the spreadsheet were the basis for all calculations. Inconsistencies in coding encountered when identifying the NbS-related contents were resolved through joint consultations. We repeatedly compared, interpreted and corrected the partial results to improve the proposed procedure of document analysis. Helpful in this process were the guidelines presented by Stevens et al. In total, 356 provisions referring to Nature-based Solutions were distinguished from the contents of the analysed documents. 146 provisions proposing solutions with the use of green infrastructure were identified. From this group, 99 provisions refer to the physical transformation of existing GI, and 41 to the creation of new GI elements. Only 6 provisions refer to activities that include using or promoting the use of existing greenery. As many as 210 expressions of a general character, not indicating the type of solution, were identified as having the potential to include NbS. These provisions have been included in group B. The number of provisions and their structure differed depending on the document. General documents such as the Development Strategy and the Study of Conditions and Directions of Spatial Development contain a relatively small number of provisions; however, their recommendations are of citywide character, determining the directions of development. The largest number of provisions was included in the Environmental Protection Program, which contains detailed tasks to be executed in this field. The documents are diversified in terms of the structure of NbS-related contents. NbS-related activities most often occur in the Environmental Protection Program and in the Municipal Revitalisation Program. Actions with potential for NbS inclusion occur most often in the Environmental Protection Program. The documents were also analysed in terms of the perceived impact of the activities related to NbS classified to group A on resilience and climate change adaptation, health and well-being, social cohesion and economic development potential. The provisions directly expressing the contribution of NbS to solving urban problems were identified, and each of them could refer to more than one area of influence. The contribution of NbS to resilience and to health and well-being is most often present in the urban policy. The use of NbS for supporting social cohesion was less visible. References to economic development potential were identified in only 12% of the provisions. The most significant emphasis in solving urban problems by NbS is put on physical changes in green infrastructure; less stress is put on the creation of new
green areas and the lowest on various use or promotion of existing greenery.The thematic scope of one hundred and forty six NbS-related provisions showing the role of GI in solving urban problems is very diverse, also within three main subgroups.To have a more detailed insight into provisions and to expose the activities characteristic for the city of Poznań, we grouped them according to their thematic similarity.Next we identified their relations to the main challenges in urban policy.This shows that the most thematically diverse are activities transforming GI elements.The tasks related to the creation of new green infrastructure elements occur mainly in SCDSD and EPP.The provisions increasing the share of green areas towards the development of an ecological system refer to the city scale.Here, the activities focus on the reconstruction of continuity and strengthening the existing elements of ecological system, mainly green wedges, through increasing the surface of green areas and maintaining their connectivity in the particular urban units.The provisions expressed also the need for afforestation of areas not suitable for agricultural production or wasteland and reclaimed areas as well as planting of street trees or trees and mid-field shrubs that protect against soil erosion.Increasing the surface of greenery is treated in the documents as a way to solve urban problems resulting from functional and spatial conflicts, accumulation of air pollution and acoustic discomfort.The activities in this regard focused on introducing screening greenery in the housing areas located along the streets, tramway routes and railway tracks, planting trees in the avenues, creating green belts along traffic routes and separating bicycle paths and pavements from the roadways by planting trees, bushes or lawns.With reference to the improvement of the acoustic climate of the city, sound-screening greenery in the form of green walls or green acoustic screens should be introduced.However, such solutions are not indicated in EPPN.The provisions classified in subgroup A1 include increasing the share of green areas for recreational and educational purposes.Such NbS include: creating new parks in the city that form connected system linking recreational areas around urban lakes with the largest residential areas, introducing cultivated greenery to the areas of residential and service development, increasing the acreage of forests and creating school gardens.The activities specific to Poznań include an adaptation of green spaces associated with the fortification objects for recreational purposes as parks.In the provisions of the documents, shaping the hydrographic network of the city through solutions based on increasing retention surface of existing rivers, creating additional watercourses within the floodplains, carrying water in the periods of swelling, restoring historical watercourses are also underlined.In subgroup A1, we also identified the actions related to NbS concerning the development of GI for the purposes of rainwater management, aimed at increasing absorptive surfaces allowing infiltration of rainwaters and meltwaters.NbS are the least present in remediation.One provision in the EPP concerned grass seeding and planting trees in the landfill site.The provisions concerning NbS classified to subgroup A2, mostly refer to the revival and restoration of existing green areas.The activities in this regard are, on the one hand, of general character and refer to green areas in the city, indicating mainly the need 
of restoration, revitalisation and development of urban parks, areas near the Warta river and lakes, or the need of protection, maintenance and reconstruction of habitats of the species.On the other hand, the activities refer to site specific locations.In terms of frequency of occurring in the documents, another group of the provisions concerning NbS is a development of a network of bicycle paths, bicycle and pedestrian shared paths, tourist and didactic trails running through green areas.Just like in the previous group, the provisions refer to both: site and city scale.The activities in this respect show the need for the development of a system of didactic trails in the forested areas, development of a system of bicycle and walking paths, the need for building new footbridges and bridges for the cyclists across the Warta river.The provisions referring to the development of waterside areas and restoration of watercourses are also highlighted in the documents.The goal of the following activities: building marinas, development of waterside infrastructure, river transport, an organisation of beaches near the lakes and Warta, is to develop leisure and recreation on the basis of blue infrastructure.Only in the two analysed documents can we find NbS-related activities connected with water purification, remediation and modernisation of water reservoirs.The activities in this regard concern mainly modernisation and rebuilding water reservoirs to improve the quality of water and their retention.The activities related to development and modernisation of sports facilities in the green areas, included in DSRW, also belong to subgroup A2.Finally, one provision in this subgroup refers to the use of NbS in agriculture and concerns the reclamation to obtain optimal water conditions for agriculture through retention.Much less NbS-related contents were recognised in a subgroup A3.They are connected with supporting the active use of green infrastructure.They refer to promoting an active way of spending time in the riverside areas, strengthening its tourist attractiveness and using the historical forts in order to create new tourist routes.The provisions supporting activation and social integration through the execution of research and application projects connected with green infrastructure, e.g. 
CONNECTING Nature were also identified.Different, single NbS-related provisions concerning the possibility of using compost from maintaining green infrastructure to reduce organic waste landfilling are also part of subgroup A3.Within the thematic scope, the directions of activities and tasks in the urban policy are focused on society, broadly defined infrastructure, environment and space.Within social tasks, the urban policy assumes supporting the mental and physical health of the inhabitants, activation, and integration of society, ecological education, as well as the promotion of the development of innovativeness.They are undoubtedly areas of activities, in which, apart from standard solutions, NbS can be introduced.The provisions concerning the development of infrastructure with the potential to use NbS include both the building industry and transport.It is a group of activities, in which NbS can be broadly applied, but they have not been popularised yet.Within the scope of the building industry, nature-based engineered solutions can be used both in building and modernisation, including thermo-modernisation of the buildings.The provisions concerning road infrastructure are focused both on road investments and creating bicycle and pedestrian paths.The location of bicycle and pedestrian paths in the GI or introducing greenery along existing communication routes that shall provide diverse regulating and cultural services are a combination of technical and natural solutions that can contribute to solving urban problems, therefore, they have premises to be classified as NbS.Area of activity in the city of significant potential of application of NbS is spatial development, including space restoration and revitalisation, especially on the wasteland such as post-military and post-industrial areas.Undeveloped wastelands may be transformed into new multifunctional green space.Moreover, single provisions, in which NbS would be used include animal species protection, management of rainwaters and meltwaters, protection of air and acoustic climate, as well as development of using renewable sources of energy and reduction of waste landfilling.The adequately shaped system of greenery allows strengthening resilience and adaptation to climate change, which makes it a Nature-based Solution.In the urban policy, multifunctional role of the system of green spaces is well recognised and understood.It strengthens urban resilience through protection of nature and waters, provision of proper ventilation of the city, as well as the provision of attractive landscape and natural recreational space for the inhabitants.Introduction of greenery as an element of space development is recognised as a way of creating an screening barrier from the noise or pollutants, as well as a mean of land remediation.However, different documents refer to such solution to a different extent.In this light, the activities aiming at increasing the surface of greenery and its restoration must be treated as supporting the natural system of the city and contributing to resilience.The activities aiming at strengthening the resilience with the potential for application of NbS also include: development of a network of bicycle lanes, modernisation of public transport and improvement of housing conditions limiting the impact on the environment.The issue of adaptation to climate change is present in the policy of the city of Poznań to the limited extent.It is clearly indicated by the Strategy for the Development of the River Warta, where the 
consequences and threats caused by climate change are presented and, as a response, the execution of tasks to provide flood safety was proposed. NbS showing a direct impact on health and well-being most often occur in activities oriented towards strengthening the quality and/or multifunctionality of GI. They predominantly concern general forms of GI development and restoration, as well as the development of a network of bicycle and pedestrian paths and tourist and didactic trails running through green areas. The provisions show the importance of these solutions in shaping space for recreation, leisure and tourism, as a significant element affecting the quality of mental and physical life of the inhabitants. The provisions from subgroup A2 of an operational character are present in DSRW and MRP and indicate specific activities concerning the development of riverside areas, reconstruction of watercourses, and construction, development and modernisation of sports facilities within the GI. The tasks that include creating new green areas and refer to health and well-being are focused on creating green areas for recreational purposes, development of an ecological system of the city and the possibility of introducing screening greenery for improvement of the acoustic climate. Single provisions referring to H&W also emerged in subgroup A3 and referred to supporting active use of GI. The provisions directly indicating the possibility of using NbS for social cohesion emerge in two analysed documents. In the first document, the references to SC emerge in most of the provisions, but in the second document, only in one provision. They refer predominantly to the tasks from subgroup A2 within the scope of development of green areas, including restoration and development of bicycle paths and tourist and didactic trails in the green areas (the only LFF provision was found in this category of tasks). In the provisions of MRP, there are also tasks that would be executed based on nature, and they are primarily connected with activation and social integration. Single provisions that include SC can be found in subgroups A1 and A3, and they concern tasks connected with building or planning parks, playgrounds, squares, outdoor gyms and social gardens. In the remaining planning documents, there are no provisions directly showing the possibility of solving problems within the scope of SC with the use of NbS. The direct impact of NbS-related contents on economic development potential was identified in several provisions in MRP and SCDSD. It concerned the provisions related to the development of green areas and their restoration, creating green areas for recreational purposes and development of green infrastructure for the management of rainwaters and meltwaters. It was indicated in both documents that comprehensive execution of the scheduled activities shall contribute to the improvement of working and living conditions in Poznań and make it more attractive for current and potential inhabitants. This in turn may inhibit the process of depopulation of the city and the related decrease in budget revenue from taxes. A decreasing number of inhabitants of Poznań contributes to the deterioration of the city's economic situation because the city incurs significant costs of providing public services, maintaining infrastructure and urban development. The case study of urban policy in Poznań shows that Nature-based Solutions can be identified in directions of development and specific tasks, even though they are not explicitly expressed
in the documents. Our approach allows us to identify the contents of urban policies related to NbS and to assess their diversification regarding the type of human actions within green infrastructure and their application in solving selected urban problems. An interesting division of NbS in terms of their goals was also presented by Lafortezza et al., for example: urban regeneration through NbS; NbS for improving well-being in urban areas; establishing NbS for coastal resilience; multifunctional nature-based watershed management and ecosystem restoration; NbS for increasing the sustainable use of matter and energy; NbS for enhancing the insurance value of ecosystems; and increasing carbon sequestration through NbS. A different classification of NbS, in terms of the character of activities, was proposed by Eggermont et al. NbS were characterised as activities changing along two gradients: (1) "How much engineering of biodiversity and ecosystems is involved in NbS?" and (2) "How many ecosystem services and stakeholder groups are targeted by a given NbS?". Type 1 consists of no or minimal intervention in ecosystems, with the objective of maintaining or improving the delivery of a range of ES. Type 2 corresponds to the definition and implementation of management approaches that develop sustainable and multifunctional ecosystems and landscapes and improve the delivery of selected ecosystem services compared to what would be obtained with a more conventional intervention. Type 3 consists of managing ecosystems in very intrusive ways or even creating new ecosystems. Against this background, our approach combines elements of both these classifications, taking into account the type of activities within GI as well as their goals in facing urban challenges. Comparing our approach with the classification presented by Eggermont et al., we can find some coincident categories, for example subgroup A1 corresponds to Type 3 and subgroup A2 to Type 2. We also took into consideration activities aimed at using and promoting existing GI (subgroup A3) and the potential for implementation of NbS in provisions not specifying particular solutions (group B). Our research shows that significant attention and a number of actions target existing GI and focus on changes towards its multifunctionality and better quality. This illustrates the significant role of NbS in urban regeneration and the revitalisation of existing urban spaces, which improves spatial and environmental quality in cities. It also has to be highlighted that, despite the already high share of GI in the city, intensive development and limited space availability, new green spaces are still planned to be created. Relatively low attention is placed on the possibility of increasing the usage of green spaces by organising soft actions. Moreover, the Poznań case study shows a disproportion between the number of NbS-related provisions and the more numerous provisions not specifying particular solutions, which indicates a not fully exploited potential for further implementation of NbS. The urban policy documents express the intention of municipal governments and their awareness concerning sustainable development. Therefore, the presence of references to GI in the text can be recognised as important evidence testifying to the extent to which city management perceives its potential to solve urban problems. In the policy documents of Poznań, the emphasis is put on urban resilience, which is strengthened through the use of the existing structure of green space for: air quality (ventilation of the city, mainly in areas particularly
exposed to an accumulation of pollutants); water management (retention of rainwaters and meltwaters in a drainage area and reduction of rain and melting wastewaters carried to the rain drain system or watercourses, contributing to the protection of water resources); and green space management (maintenance of environmental functions). The role of GI in increasing resilience is well documented not only in Poznań. It has been noticed in many cities that appropriate development of GI is important for the achievement of specific resilience goals and for solving urban problems. Creative ways of thinking about planning the structure of the city to support its resilience should include GI and NbS. One of the most popular approaches is a transition from grey to green infrastructure. An example is the city of Los Angeles where, as part of the improvement of rainwater management, the Los Angeles River was restored to its natural state through unsealing of the channel and the Green Alley program was implemented. The concept of resilience is also used in activities aimed at adaptation to climate change, but, as mentioned above, its application is broader. Adaptation to climate change combines social, economic, natural and spatial dimensions, which can adopt coping, incremental and transformative approaches. In Poznań, the issue of adaptation to climate change is not a priority direction of the activities in the urban policy, similarly as in Europe several years ago. Only the Strategy of Development of the Warta River directly indicates the necessity of adaptation to climate change in order to improve flood safety. Although it is the only document referring to the issue of climate change, its provisions are innovative and based on nature. The provisions of the strategy within the scope of river safety are of a transformational character, reaching beyond the current application of grey infrastructure and including, among others, the creation of more space for the water to absorb peak flows, mainly after heavy rains; the creation of additional river channels; and bringing back the historic river. Management of blue infrastructure is therefore seen as a Nature-based Solution towards the reduction of flood risk and a way of adapting to climate change. Other references to climate change mitigation can be found in LCEP and PSDPT, where the need for the development of alternative transportation is mentioned. Although the issue of adaptation to climate change is not highlighted in the urban documents, it must be emphasised that adaptation plans are now being prepared for 44 cities in Poland. The general urban policy provisions create a wider possibility for the inclusion of NbS-related actions towards urban resilience. In particular, they could promote nature-based engineered solutions that are currently used in building construction and modernisation. Solutions such as green roofs provide thermal benefits and reduce energy consumption, which is particularly beneficial in retrofitting older buildings. Also, vertical greenery systems may decrease ambient temperature; however, the effect depends on the adopted solutions. Additionally, green roofs increase water storage capacities, reduce surface run-off and can support air purification, contributing in this way also to the adaptation of cities to climate change. The introduction of such solutions is still not widespread, although they could be used more frequently, for example in thermo-modernisations. The benefits of green roofs were noticed in the urban policy of London, where a part of the London Plan
supports green roof development. Similarly, in Copenhagen, green roofs have become integrated into the Climate Plan and the Strategy for Biodiversity, as well as into the guidelines for sustainability in construction and civil works. In North America, it is Toronto where requirements and standards concerning the installation of green roofs on new commercial, institutional and multifamily residential developments were set in a municipal bylaw. The next provisions with potential for NbS inclusion are those regarding the development and modernisation of the road network, which provide an opportunity to introduce sustainable water management such as bioswales and rain gardens. Another example can be the unsealing and greening of tramway tracks with lawns. The urban policy could more directly highlight the role of GI in water purification and flood protection as an attractive alternative to grey infrastructure, which, at a similar cost, provides additional benefits. Documents devoted to air protection could refer more strongly to the importance of green spaces for air purification, climate regulation and noise reduction, as well as for shaping the acoustic climate at the city level. The last area of urban policy in which potential for the inclusion of NbS was identified is the development of the use of renewable energy sources and the reduction of landfill. Current provisions do not mention the opportunity to use green infrastructure's waste biomass for bioconversion and biofuel production, which would simultaneously contribute to multifunctional use of green space and resource efficiency and could provide green jobs and economic benefits. The results of the analysis of urban documents in Poznań showed that emphasis is put on improving the quality of life through the development of greenery and recreational infrastructure, but the presence of health goals is rare. However, the general provisions of the urban policy referring to health have a huge potential for including NbS. The presence of GI and its usage may have a positive impact on physical or mental health. Appropriate creation and arrangement of green spaces give possibilities for horticulture therapy, which is recognised as an NbS for improving mental health and contributing to recuperation from stress, depression and anxiety. Therefore, it is crucial to emphasise the currently unnoticed role of GI planning as a valuable tool in health policy. However, it has to be borne in mind that actions concerning the introduction of new green spaces are not sufficient at the policy level. Recent research shows that health is not among the main reasons for using public green spaces, and soft actions are required to inform and encourage people to use them for health purposes. Although it is well recognised and proven that systematic physical activity has a positive effect on physical and mental health, physical inactivity is a global pandemic and one of the leading causes of death in the world. The awareness that constructing bicycle and pedestrian paths in green areas indirectly contributes to the improvement of the inhabitants' health, through reducing the emission of car pollutants and providing space for active outdoor activities, should be raised more strongly in urban policy documents. Similarly, provisions concerning the creation of school gardens should highlight their contribution to children's health. In the context of insufficient attention put on educational and promotional actions in using GI, we therefore agree with Corburn et al.
that integrating health issues into the agendas of policymakers who have not previously identified health as their responsibility is a challenge.In cities Health in All Policies approach is increasingly seen as an solution that brings the health issue into policy outside the health sector.The guidelines on the integration of public health and well-being into the implementation of NbS is proposed by van den Bosch and Sang.In the context of growing social and economic inequality in the urban communities, the challenge for urban policy is to strengthen social cohesion.The case study of Poznań shows that although social cohesion is one of the urban policy areas it is rarely seen as an issue that can be solved or supported by GI and related NbS.Whereas, GI, apart from regulating ecosystem services often provides attractive and convivial meeting places.Green outdoor common spaces contribute to the strength of social ties and sense of community that, in particular, benefit older adults.Moreover, GI could also be used for nature-based education that benefits citizens of all ages.In this respect, the city of Poznań plans actions in creating school gardens but their social and educational benefits are not sufficiently emphasised in policy.What is missing in Poznań’s urban policy are provisions supporting the creation of community or social gardens that are widely recognized as socially important.In order not to limit the issues of strengthening social cohesion to single solutions, they must be included in the urban documents.Current policies rarely include the issues of a fair and equal greening urban areas to make them serve for the whole society, in all parts of the city.Despite recommendations for implementation of NbS within GI and in order to face various social challenges and many examples showing that urban spaces poor in greenery deepen the feeling of social inequality and environmental injustice, GI and NbS are still insufficiently included in the urban policy.On the other hand, it has to be mentioned that the recent research of Haase et al. 
showed that the relations between greening and social integration are not unambiguous and social effects of developing GI and NbS still require thorough analysis.It refers both to the chances of improvement of the quality of life in the city and potential negative effects.The interesting part of Poznań urban policy is highlighting importance of society activation and the development of innovativeness.Those general provisions from group B, if included in planning and managing GI could contribute to participatory planning and governance that is one of 10 challenges identified by Expert Working Group on NbS to Promote Climate Resilience in Urban Areas.The green spaces and related NbS are still rarely seen as contributing to economic development.The references to this issue were found only in 2 out of 10 analysed documents of Poznań.It has to be highlighted that potential effects of incorporating NbS into the urban policy need to be recognised.According to Nesshöver et al., they may stimulate, but also inhibit economic development.In order to assess the impact of NbS on the economic development of the cities, the ecosystem services concept may be applied.Urban ecosystems provide the inhabitants with a wide spectrum of services, including provisioning, regulating and cultural services.The benefits they bring to people can be attributed ecological, socio-cultural and economic value.For example, air purification and temperature regulating services of Beijing’s forest ecosystems have been valued at 7.72 billion yuan annually based on avoided air pollution charges and electricity savings.The value of greenery in the city is also reflected in real estate prices that depend on the distance from various forms of green spaces.The impact on the real estate market is undoubtedly the sign of the role of greenery in the quality of life in the city.On the other hand, potential limitations for economic development of the city connected with the development of NbS within GI may result from the provision of ecosystem disservices.The disservices can be, for example, damages caused by falling trees during storms, bio-emissions from trees, irrigation requirements.NbS are also expected to create business opportunities.The connections between NbS and local economy can be emphasised in the urban policy, however it should be preceded by an intensification of research on the impact of NbS on the economic development of urban areas.Their results and good practices from other cities should be popularised among stakeholders.The examples of good practices, including changes in policy, planning and society together with transition initiatives from Brighton, Budapest, Dresden, Genk and Stockholm are presented by Frantzeskaki et al.Convincing the stakeholders to implement NbS may happen through presenting real benefits resulting from appropriate problem or opportunity oriented planning, design and management of GI.The document analysis was limited to the Poznań city development directions and tasks expressing the role of actions within GI in selected urban documents.Yet, one has to keep in mind that those documents also include other solutions that may support solving urban problems."What's more, NbS that go beyond GI are not identified, and these are not included in the analysis.Importantly, the city also takes NbS-related actions and participates in bottom-up initiatives not directly included in urban policy documents.Therefore the results of conducted analysis should be seen as a piece of the city’s involvement.Other 
actions undertaken in line with urban policies concerning NbS should be a subject for further research. The identification of relevant document contents has required a thorough analysis, especially in the case of the NbS influence areas. These have been expressed in the lists of development directions or tasks, in document aims or in conclusions of the analysis of the city's state. In each case, a direct reference to the urban challenges was taken into account. The analysis of urban policy documents is hindered by their diverse spatial scales, timeframes and different levels of detail in their contents. General directions for the development of GI and detailed tasks, like the planting of street trees, are difficult to compare. However, despite these limitations, both illustrate the policymakers' awareness of the potential of GI to solve urban problems and of the benefits of NbS. Bearing in mind that consideration of NbS in urban documents does not guarantee their implementation, another challenge is monitoring of the urban policy effects. Daniel defines it as evaluating the extent of town planning policies' implementation or impact. In this regard, systematic monitoring and evaluation of strategic and programming documents are conducted according to legal requirements or at decision makers' request. However, the key question is how and to what extent evaluation results are implemented. The evaluation usually has a quantitative and a qualitative character. In the second aspect, it has to be underscored that, as NbS often rely on a mixture of green and grey infrastructure, the balance between those components in the final execution of tasks and development directions will be one of the aspects showing whether the created solutions are nature-based. The evaluation practices in this field are understudied and their recognition should be developed. Monitoring and evaluation of policy efficiency are key parts of the planning cycle, also within the scope of NbS implementation; therefore, they should be a subject for further investigation. Cities look for solutions to tackle contemporary challenges. Their striving for development is expressed in planning, strategic and programming urban policy documents. The case study of Poznań shows that urban policy documents contain a wide range of proposals for improvements using NbS-related actions, although they are not expressed with this term. The identified NbS-related contents include mainly activities that lead to physical changes of existing GI elements, to a lesser extent the creation of new GI elements, and several activities that increase the usage of GI. However, the potential for NbS inclusion is not fully exploited. The urban policy of Poznań, to a large extent, relies on the natural conditions in the city's development and treats them as a tool for building urban resilience. This aspect is also increasingly promoted in politics and academia. The contribution of GI to inhabitants' quality of life and related well-being is also clearly addressed in the documents, while climate change adaptation has only recently been gaining more attention. The other urban challenges are expressed in the city's policy to a lesser extent. The main gaps were identified in the fields of: (1) supporting the development of NbS in the construction and transportation sectors, which could influence all four urban challenges; (2) appropriate planning, design and management of GI towards building and strengthening social cohesion; (3) consideration of NbS as aiming at supporting citizens' health; and (4) the influence of NbS on the economic development potential. In reducing
these gaps, we find the potential to strengthen and develop the role of NbS in planning urban development. The recommendations that can be formulated for the next generation of policy documents focus on: (1) supporting the development of nature-based engineered solutions such as green roofs in building construction and sustainable urban drainage systems in the modernisation of roads; (2) supporting co-creation of common multifunctional green spaces such as community or social gardens, pocket gardens and other local-scale solutions, not only within open public spaces but also associated with institutions such as schools, kindergartens, senior centres, hospitals or cooperatives; (3) actions activating and educating citizens to take advantage of the existing potential of GI and co-creation of NbS; and (4) acknowledging NbS business opportunities and their influence on the economic development potential. The recommended NbS should be seen as an alternative or a supplement to other solutions. We agree with Eggermont et al. that NbS should not be considered "the one and only" possible solution to urban challenges. However, it is crucial to embed NbS in urban policy in a comprehensive way. Since, in many cases, the same urban problems may be solved through grey or green solutions, it is crucial to raise awareness of multi-beneficial NbS. Although fostering NbS in urban areas receives more attention on the political agenda, realising them still faces political, economic and scientific challenges. In this context, the willingness to apply solutions based on GI expressed in local urban policy documents can be seen as a driving force for accelerating the transformation of cities using Nature-based Solutions. Including NbS in city document contents is an essential signal that can facilitate the transition in the long-term perspective. It is also evidence of policymakers' awareness of the usefulness of such solutions. We argue that to support large-scale Nature-based Solutions implementation in cities, the crucial step is to bring them into the local urban agenda. Evaluation of urban policy documents based on the presented approach can serve as a guideline for identifying gaps and potentials for NbS inclusion. As a result, it can help the better organisation of urban policy and the harmonisation of different sectors through NbS.
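The tallies reported above (356 provisions in total, of which 146 fall into group A: 99 transforming existing GI, 41 creating new GI and 6 increasing its use, with the remaining 210 in group B) follow directly from the coding spreadsheet described in the methods. The following minimal Python sketch illustrates how such coded provisions could be aggregated by group, by document and by the four urban challenges; it is not the authors' actual workflow, and the column names and example records are hypothetical stand-ins for the spreadsheet attributes described in the text.

import pandas as pd

# Hypothetical coded records: one row per identified provision, mirroring the
# spreadsheet attributes described in the methods (names are illustrative only).
records = [
    {"document": "EPP",   "group": "A2", "resilience": 1, "health": 1, "cohesion": 0, "economy": 0},
    {"document": "SCDSD", "group": "A1", "resilience": 1, "health": 0, "cohesion": 0, "economy": 1},
    {"document": "MRP",   "group": "A3", "resilience": 0, "health": 1, "cohesion": 1, "economy": 0},
    {"document": "LCEP",  "group": "B",  "resilience": 0, "health": 0, "cohesion": 0, "economy": 0},
]
df = pd.DataFrame(records)

# Number of provisions per group (A1 = new GI, A2 = transformed GI,
# A3 = increased use of existing GI, B = potential for NbS inclusion).
print(df["group"].value_counts())

# Provisions per document and group, used for comparing the analysed documents.
print(pd.crosstab(df["document"], df["group"]))

# Share of group-A provisions that directly address each of the four challenges.
group_a = df[df["group"].str.startswith("A")]
print((group_a[["resilience", "health", "cohesion", "economy"]].mean() * 100).round(1))

Keeping each provision as a single row with one flag per challenge makes it straightforward to reproduce figures such as the 12% share of group-A provisions referring to economic development potential.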
Cities often do not sufficiently appreciate the benefits of green infrastructure (GI). To recognise the extent to which green infrastructure and nature-based solutions (NbS) are present in urban policy, we conducted a review of planning, strategic and programming documents of Poznań City as a case study. The study is aimed at 1) diagnosing the current position of NbS in the tasks and directions of planning, strategic and programming documents; 2) characterising activities related to NbS according to the form of human-nature interaction; 3) determining the potential of including NbS in the local policy; and 4) identifying the role of NbS in facing 4 main challenges in urban policy: resilience and climate change adaptation, health and well-being, social cohesion, and economic development potential. The results show that a significant number of actions focus on GI changes towards its multifunctionality and better quality, while there are not many actions towards supporting citizens in using it. Also, despite urban pressure, new green spaces are still planned to be created. The role of NbS within GI in urban resilience is well recognised. Yet, adaptation to climate change has gained a low priority so far. Linkages between GI and the well-being of inhabitants are well understood. However, the possibility of building and strengthening social cohesion based on GI is noticed rather marginally. The least recognised is the influence of NbS on the economic development potential. It is an area that still needs to be investigated to bring evidence in this field. We conclude that to support large-scale nature-based solution implementation in cities, the crucial step is to bring them into the local urban agenda. An evaluation of urban policy documents based on the presented approach can serve as a guideline for identifying gaps and potentials for NbS inclusion. As a result, it can help the better organisation of urban policy and the harmonisation of different sectors through NbS.
79
Amelioration of thyroid dysfunction by magnesium in experimental diabetes may also prevent diabetes-induced renal impairment
Diabetes and thyroid disorders are common endocrine disorders that have been shown to mutually influence each other. Studies have shown that thyroid hormones contribute to the regulation of carbohydrate metabolism and pancreatic function. Thyroid hormones have also been observed to directly control insulin secretion, with hypothyroidism causing a reduction in glucose-induced insulin secretion by beta cells, while the response of beta cells to glucose or catecholamine is increased in hyperthyroidism as a result of an increase in beta cell mass. Furthermore, thyroid hormones have been reported to exert pre-renal and direct renal effects resulting in alterations in cardiac blood flow, glomerular filtration rate (GFR), tubular secretory and re-absorptive processes as well as renal tubular physiology. Specifically, hypothyroidism is associated with increased creatinine and reduced GFR, while hyperthyroidism results in increased GFR as well as increased renin–angiotensin–aldosterone activation. Altered thyroid states have also been reported to occur in diabetes mellitus, resulting in reductions in thyroid stimulating hormone and triiodothyronine levels. It is thus likely that in diabetes mellitus, thyroid and renal functions need to be continually monitored so as to prevent metabolic anomalies that may arise from thyroid dysfunction as well as to prevent the development of chronic kidney disease (CKD). Oral magnesium supplementation in diabetes has been reported to exert beneficial effects in both rat and human studies. Hypomagnesaemia has been observed in most diabetics and has been reported to increase the susceptibility to developing long-term complications of diabetes mellitus, including thyroid dysfunction and CKD. Furthermore, hypomagnesaemia has been independently associated with thyroid dysfunction, especially hypothyroidism, as magnesium is essential for iodine utilization by the thyroid gland and conversion of inactive thyroxine to active triiodothyronine. Hypomagnesaemia has also been reported to be a novel predictor of renal disease. Hence it is likely that hypomagnesaemia as observed in diabetes could cause hypothyroidism, which in turn may cause impaired renal function. This is however unsubstantiated; in addition, whether oral magnesium supplementation in diabetes mellitus ameliorates diabetes-induced thyroid and renal dysfunction is yet to be investigated. This study was thus designed to investigate thyroid and renal functions in experimental type 2 diabetic rats treated orally with magnesium, metformin, or simultaneous treatment with both metformin and magnesium. Fifty male Wistar rats with an average weight of 128.9 ± 5.5 g were housed in well-ventilated cages, exposed to alternate light and dark cycles, maintained at 25–28 °C and low relative humidity, fed standard rat chow and allowed free access to drinking water, in accordance with guidelines and a protocol approved by the Animal Care and Use Research Ethics Committee of the University of Ibadan, Nigeria, as well as guidelines given by the National Research Council, USA. Experimental type-2-diabetes was induced using the method of Srinivasan et al.; briefly, experimental animals were maintained on 30% high-fat diet feeding for 2 weeks followed by a single intraperitoneal injection of streptozotocin. The fifty rats were randomly divided into five equal groups consisting of control, diabetes untreated, diabetes treated with either magnesium alone or metformin, and diabetes treated with both metformin and magnesium simultaneously. All
treatments were carried out orally for 14 days post diabetes induction. Body weight was assessed throughout the duration of the study using a laboratory scale, while blood glucose was monitored using the tail tipping method before diabetes induction and thereafter on days 1, 7 and 14 post-treatment. Blood glucose was analyzed using an Accu-Chek Active glucometer that used the glucose oxidase method as the basis for its analysis. At the end of the experiment, blood samples were collected by cardiac puncture, after light diethyl ether anesthesia, into plain and EDTA sample bottles. Serum was separated from the samples in the plain bottles and analyzed for insulin, while plasma was obtained from blood collected into EDTA-lined sample bottles and analyzed for total protein and albumin in all samples collected. Globulin level was derived mathematically from the total protein and albumin levels obtained (globulin = total protein - albumin). Serum was further analyzed for thyroid stimulating hormone, thyroxine and triiodothyronine levels using ELISA kits, while blood urea nitrogen and creatinine were evaluated in plasma. Kidney samples were also obtained from five animals in each group, weighed and homogenized on ice in 1.15% KCl buffer. The kidney homogenates were centrifuged at 10,000 rpm for 15 min at 4 °C and the clear supernatant obtained was analyzed for lipid peroxidation, superoxide dismutase and reduced glutathione levels. Kidney samples were obtained from the remaining five animals in each group and analysed for structural changes using haematoxylin and eosin stains, while tubular changes were evaluated using Periodic Acid Schiff reaction techniques. Data were presented as mean ± standard error of mean. Statistical significance at P < 0.05 was established using one-way Analysis of Variance and the Newman-Keuls post-hoc test. Animals in the control group had a 22.69% increase in body weight by day 14 compared to day 0 values. The diabetic untreated, magnesium-treated diabetic, metformin-treated diabetic and magnesium with metformin co-treated diabetic groups had 35.28%, 26.11%, 35.07% and 37.37% increases in body weight, respectively, compared to their day 0 values. Animals in the control group had an increase in blood glucose at the end of the study compared to initial values; however, these values were still within the normal range. Animals in group 2 had a significant increase in blood glucose level by day 1, and this was sustained up to day 14 of the study and was significantly increased compared to control and all other experimental treatment groups. Groups 3, 4 and 5 had significantly increased blood glucose levels on day 1 after diabetes induction compared to their respective day 0 values. On day 14, their blood glucose values were significantly reduced compared to diabetic untreated animals. Animals in the diabetic untreated group had insulin values that were significantly increased compared to the control, magnesium-treated diabetic, metformin-treated diabetic and magnesium with metformin co-treated groups. There was an increase in renal lipid peroxidation, reduced SOD activity and a decline in reduced GSH level in the diabetic untreated group compared to control. Reduced glutathione was increased in the diabetic animals treated with magnesium, metformin, and the magnesium and metformin co-treatment group compared to the diabetic untreated group. Superoxide dismutase values obtained in the diabetic animals treated with magnesium, metformin, and in the magnesium and metformin co-treatment group showed a 54.2%, 37.5% and 48.6% increase
respectively, compared to the diabetic untreated group. Renal tissue lipid peroxidation was significantly reduced in the diabetic animals treated with magnesium, metformin, as well as in the magnesium and metformin co-treatment group compared to the diabetic untreated group. Diabetic untreated animals exhibited a reduction in thyroid stimulating hormone and triiodothyronine levels compared to control and other experimental groups, while the thyroxine values obtained were not significantly different between control and all other experimental groups. Assessment of renal function showed BUN, creatinine, total protein and albumin values that, though increased in the diabetic untreated group, were not significantly different from control and other experimental groups. Globulin levels, though reduced in the diabetic untreated group, were also not significantly different from values obtained in control and all other experimental groups. Histological evaluation of the kidney for morphologic changes using H and E stains indicates that the diabetic untreated group had kidney samples showing poor architecture. The renal cortex in this group also showed some glomeruli with sclerosis and fused mesangial cells, some renal tubules showing diffusely collapsed lumina and others showing the presence of eosinophilic renal casts within their lumina. Mild vascular congestion was also observed and the interstitial spaces seen appear limited. Kidney samples from control, diabetic animals treated with metformin and diabetic animals co-treated with magnesium and metformin showed normal architecture, with the renal cortex showing normal glomeruli with normal mesangial cells and capsular spaces; the renal tubules, including distal convoluted tubules and proximal convoluted tubules, appear normal, and the interstitial spaces appear normal. Furthermore, no pathological lesions were seen in these treatment groups. Diabetic animals treated with magnesium only had kidneys with poor architecture; however, the renal cortex in this group showed several normal glomeruli with normal mesangial cells and capsular spaces. Furthermore, the renal tubules observed demonstrate diffuse collapse of the lumen and luminal spaces are not seen; the interstitial spaces appear limited as the tubules appear compact. Mild to moderate vascular congestion was also noted. Evaluation of the kidney samples using Periodic Acid Schiff stains for tubular aberrations showed that the diabetic untreated group had poor architecture; their glomeruli show degrees of sclerosis and the mesangial cells in this group appear fused. A few glomeruli appear void, lacking apparatus; the basement membrane of the glomeruli appears thickened and there is considerable loss of brush borders within the proximal convoluted tubules. Kidney samples in the control and experimental treatment groups indicate samples with normal basement membranes of the glomeruli and renal tubules. The mesangial cells and the brush border of the proximal convoluted tubules were also PAS positive. Despite recent advances in the management of diabetes mellitus, there still exists a high prevalence rate in the world population. The increased morbidity and mortality associated with diabetes mellitus may be a result of the multifaceted pathology of the syndrome, which results in it exerting its effects on various organ systems in the body. According to Srinivasan et al. and Binh et al.
, the induction of experimental type-2-diabetes mellitus using high-fat diet and low-dose streptozotocin injection proceeds with increased body weight, increased insulin secretion and insulin resistance often resulting in hyperglycaemia.These manifestations were noted in the untreated diabetic group in this study and suggest that these animals had diabetes mellitus.Magnesium supplementation in this study, either alone or in combination with metformin, exerted a hypoglycemic effect, as did metformin in diabetic treated animals, which for magnesium may be ascribed to its documented potentiation of insulin secretory activities, its glucose regulatory and blood glucose stabilizing effects .Similarly the effects of Metformin, the most frequently prescribed first line therapy for individuals with type 2 diabetes , seen in this study may be linked to its ability to increase insulin mediated glucose utilization in peripheral tissues and thus improve glycemic control .Prolonged hyperglycemia as observed in diabetes mellitus has been reported to exert deleterious effects via various mechanisms some of which include the polyol pathway, activation of the diacylglycerol/protein kinase C pathway, increased oxidative stress, increased advanced glycation end products formation and action, and increased hexosamine pathway .A prevalence of altered thyroid status has been observed in diabetes mellitus with little or no information on the precise mechanism of its occurrence.It is however known that in diabetic patients, the nocturnal TSH peak is blunted or abolished, TSH response to thyroid releasing hormone is impaired and T3 levels are reduced .This study suggests the presence of hypothyroidism in the diabetic untreated animals as TSH and T3 values seen were reduced compared to control and other experimental groups, which is in accordance with Baydas et al and may partially be ascribed to an impaired peripheral conversion of T4 to T3 resulting from decreased activity of type 1 liver monodeiodinase that has been reported in diabetic conditions .The reduction in TSH level in the untreated diabetic animals is also consistent with the report of Pasupathi et al and may be ascribed to impairment in the negative feedback control of TSH secretion by T3.Oral magnesium supplementation, either alone or in combination with metformin appeared to correct the TSH and T3 reductions seen in the diabetic untreated group and this may be attributed to either an improvement in glycemic control following magnesium and metformin administration or a potentiation of glutathione activity by magnesium , which has been reported to facilitate improved conversion of T4 to T3 .However, thyroxine level across groups was not changed.This is consistent with Donckier who also observed near- normal serum T4 level despite poor glycemic control.In hypothyroidism, it has been reported that there is usually a reduction in renal blood flow and glomerular filtration rate arising from a reduction in cardiac output, increased peripheral vascular resistance, reduced renal response to vasdilators and a reduced expression of renal vasodilators .Furthermore, pathologic changes in the glomerular structure such as glomerular membrane thickening and messangial matrix expansion have also been reported to contribute to a reduced RBF and GFR in hypothyroidism.This study shows a histological profile in the diabetic untreated rats that is consistent with the manifestations of hypothyroidism on renal structures that can lead to a reduction in GFR.This suggests 
that aside from the direct effects of diabetes on the kidneys, hypothyroidism that accompanies diabetes mellitus may also contribute to renal pathologies seen in diabetes.Furthermore, oxidative stress was observed in the renal tissue as renal antioxidants assessed were depleted and accompanied by an increase in lipid peroxidation.This is consistent with other studies and suggests impairment in renal antioxidant balance and likely renal impairment.Magnesium has been reported to act as an antioxidant, be a precursor molecule for glutathione, as well as potentiate glutathione production in experimental diabetic animals .Glutathione has been described as a powerful intracellular antioxidant and detoxifier; hence, its potentiation by magnesium may partly be responsible for the improved renal histology and antioxidant status in the magnesium and metformin treated diabetic groups respectively.Furthermore, the combination of normoglycemia and amelioration of thyroid dysfunction after magnesium or metformin treatment might also account for the observed alleviation of renal oxidative stress seen in the treated diabetic groups.Quantitative assessment of renal function in the diabetic untreated group via evaluation of blood urea nitrogen, creatinine, total protein, albumin and globulin levels showed slight elevations in renal function indices, particularly for creatinine, which was elevated in the diabetic untreated group compared to diabetic animals co-treated with magnesium and metformin.This suggests impaired kidney function or kidney disease in the diabetic untreated group.It is speculated that, had the duration of this study been longer, clear quantitative renal function differences would have been more apparent.This study, however, has a few limitations: first, basal and final magnesium status was not ascertained in this study, as magnesium deficiency is a proposed factor in the pathogenesis and progression of diabetic complications.Hypomagnesaemia has also been reported in diabetics.Evaluation of final magnesium status may have given information as to whether magnesium status was restored after supplementation or not.Secondly, glomerular filtration rate has been described as the gold standard for measuring renal function.Its measurement in this study would have given further credence to the histological finding within the renal tissue that suggests that GFR would have been altered in the diabetic untreated animals.In subsequent studies on renal function, this would be taken into consideration and factored into the experimental procedures.In conclusion, this study suggests that in uncontrolled experimental type-2-diabetes there is impairment in thyroid function leading to reductions in serum thyroid stimulating hormone and triiodothyronine but not thyroxine levels.The observed thyroid dysfunctions may contribute to the renal impairment that usually accompanies experimental type-2-diabetes.The study also suggests that treatment with oral magnesium may cause a partial restoration of thyroid function that may impede the development of renal dysfunction in experimental type-2-diabetic rats.Abayomi Ige: Conceived and designed the experiments; Performed the experiments; Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data; Wrote the paper.Rachel Chidi, Evelyn Egbeluya, Rofiat Olajumoke Jubreel: Performed the experiments; Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data.Ben Adele: Performed the experiments; Analyzed and interpreted the data; Wrote the 
paper.Elsie Adewoye: Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data; Wrote the paper.This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.The authors declare no conflict of interest.No additional information is available for this paper.
Background: Diabetes mellitus has been reported to cause thyroid dysfunction, which may also impair renal function. Magnesium has been reported to exert ameliorative effects in diabetes mellitus. This study investigated thyroid and renal functions in experimental type-2-diabetic Wistar rats. Methods: Experimental type-2-diabetes was induced using short duration high-fat (30%) diet feeding followed by single-dose streptozotocin (35 mg/kg i.p.). Fifty rats were randomly divided into five equal groups consisting of control, diabetes untreated, diabetes treated with either magnesium (250 mg/kg) or metformin (250 mg/kg) and diabetes treated with both metformin and magnesium simultaneously. All treatments were carried out orally for 14 days post-diabetes induction. Body weight and blood glucose were monitored using the tail tipping method before diabetes induction and thereafter on days 1, 7 and 14 post-treatment respectively. Thereafter, blood samples were collected by cardiac puncture after light anesthesia into plain and EDTA sample bottles. Total protein, albumin, globulin (plasma) and insulin (serum) were assayed in all samples obtained. Thyroid stimulating hormone (TSH), triiodothyronine and thyroxine were also evaluated (n = 5/group) in serum while blood urea nitrogen (BUN) and creatinine were assessed (n = 5/group) in plasma. Kidney homogenates were obtained per group and analyzed for renal superoxide dismutase (SOD), reduced glutathione (GSH) and lipid peroxidation (MDA). Kidney histology was also evaluated per group using both Haematoxylin and Eosin and periodic acid Schiff stains. Results: Body weight, blood glucose, insulin and renal MDA were increased in the diabetic untreated group compared to other groups. Reductions (P < 0.05) in TSH, triiodothyronine, renal SOD and GSH levels were observed in the diabetic untreated group compared to other groups. Renal histology in the diabetic untreated group showed glomerular sclerosis, fused mesangial cells and either collapsed tubular lumina or lumina containing eosinophilic renal casts. These pathologies were partially reversed in the other experimental groups. Conclusion: This study suggests that thyroid and renal impairment may be present in experimental type-2-diabetes. Treatment with oral magnesium may cause a partial restoration of thyroid function that may impede the development of renal dysfunction.
80
Mortality of people with chronic fatigue syndrome: A retrospective cohort study in England and Wales from the South London and Maudsley NHS Foundation Trust Biomedical Research Centre (SLaM BRC) Clinical Record Interactive Search (CRIS) Register
Chronic fatigue syndrome is an illness characterised by persistent or relapsing fatigue of a debilitating nature, which is present for at least 6 months, in addition to at least four symptoms from a range including memory loss, poor concentration, joint pain, and tender glands.1,2,Patients with chronic fatigue syndrome usually have extensive investigations to ensure any potential treatable medical causes of fatigue are addressed.By definition, at diagnosis, individuals with chronic fatigue syndrome are free of prespecified major medical and psychiatric disorders leading to prolonged fatigue, and therefore might be expected to have a mortality risk similar to, or indeed lower than, the general population.1–3,Although claims have been made on the basis of small, uncontrolled, clinical case series of higher overall death risks for heart failure, cancer, and suicide in people with chronic fatigue syndrome,4,5 a review of descriptive studies that reported follow-up or outcome data from patients with a primary diagnosis of chronic fatigue syndrome showed no convincing evidence of increased all-cause mortality or suicide-specific mortality.6,Specifically, only one study has compared the mortality within a cohort of individuals with chronic fatigue syndrome with that of the general population,7 and reported no significant increase in all-cause mortality adjusted for sex, race, age, and calendar time.Although the relative risk of suicide was raised in this cohort, limited statistical power made it difficult to draw conclusions about excess suicide risk in people with chronic fatigue syndrome.By contrast, anecdotal accounts in various internet and patient forums repeatedly report increased all-cause mortality.8,9,Evidence before this study,We searched Embase, MEDLINE, and PsycINFO for all studies published from database inception to April 1, 2015, using the following search terms:We initially assessed the titles and abstracts identified by the search and excluded articles that were deemed not relevant.We reviewed the full text of the remaining articles for inclusion, and all relevant references were checked for additional citations.We included studies that compared mortality in individuals with chronic fatigue syndrome with that of controls without the disorder.The search process identified 121 unique records.We assessed three full text articles for eligibility and included one study comparing the mortality of individuals with chronic fatigue syndrome with that of a general population control.The results suggested no increased all-cause or suicide-specific mortality in 641 individuals with chronic fatigue syndrome; however, the study was limited by its small sample size, with both all-cause and suicide-specific mortality outcomes receiving a GRADE rating of very low quality evidence.As such whether chronic fatigue syndrome is associated with a differing mortality compared with the general population remains unknown.Added value of this study,We report the largest study of mortality in patients with chronic fatigue syndrome available so far, with 2147 individuals diagnosed with chronic fatigue syndrome.Our findings showed that although the overall and cancer-specific mortality of patients with chronic fatigue syndrome was not significantly different to that of the general population, we noted an increased risk of completed suicide in patients with chronic fatigue syndrome when compared with a population control.Individuals with a diagnosis of chronic fatigue syndrome and a lifetime diagnosis of 
depression might be at increased risk of completed suicide.Implications of all the available evidence,The evidence highlights the need for clinicians to be aware of the increased risk of completed suicide and to assess suicidality adequately in patients with chronic fatigue syndrome.Future studies should focus on identification of protective measures that can reduce suicide-related mortality in patients with chronic fatigue syndrome.Mortality associated with chronic fatigue syndrome remains uncertain, and larger studies are needed to further address the issue of chronic fatigue syndrome as a risk factor for all-cause and specific causes of mortality."Therefore, we investigated a retrospective cohort consisting of people diagnosed with chronic fatigue syndrome, using data from the national research and treatment service for chronic fatigue at the South London and Maudsley NHS Foundation Trust and King's College London Hospital.The cohort of patients assessed in this study was assimilated from the Clinical Record Interactive Search,10 a case register system that provides de-identified information from electronic clinical records relating to secondary and tertiary mental health care services across SLaM."SLaM is a National Health Service mental health trust that provides secondary mental health care to a population of roughly 1·3 million residents of four London boroughs, and additionally in collaboration with King's College Hospital, provides a single secondary and tertiary care national referral service for individuals with suspected chronic fatigue syndrome, accepting referrals from general practitioners, general and specialist physicians, occupational physicians, consultant psychiatrists, and community mental health teams.As is the system in the NHS in England and Wales, all referrals need to be approved by the local clinical commissioning groups before they can be seen at the service.Electronic clinical records have been used comprehensively across all SLaM services since 2006.CRIS was established in 2008 to allow searching and retrieval of full but de-identified clinical information for research purposes with a permission of secondary data analysis, approved by the Oxfordshire Research Ethics Committee C.10,The chronic fatigue syndrome service follows a routine assessment procedure, in which all patients undergo medical screening to exclude detectable organic illness, including a minimum of physical examination, urinalysis, full blood count, urea and electrolytes, thyroid function tests, liver function tests, tissue transglutaminase, and erythrocyte sedimentation rate.Patients were interviewed with a semi-structured diagnostic interview to establish whether they had fatigue and whether they met the 1994 case definition or Oxford criteria for chronic fatigue syndrome.1,11,12,Additionally, we had information about whether patients fulfilled chronic fatigue syndrome criteria as defined by the National Institute of Health and Care Excellence.13,Patients with the 1994 case definition-specified exclusionary psychiatric disorders and also somatisation disorder were excluded from this study.For this study we adopted the most inclusive criteria, and thus included all patients with a clinical diagnosis of chronic fatigue syndrome.A subsample of 755 patients had full diagnostic criteria applied prospectively of which 65% met Oxford criteria, 58% the 1994 case definition criteria, and 88% NICE criteria.All patients in this sample met at least one criterion.All were clinic attendees referred within 
the UK NHS and have been shown to be a representative sample of patients with chronic fatigue syndrome in secondary and tertiary care, similar to those in Australia, the USA, Scotland, England, and Northern Ireland.14–16,Study participants were included if they had had contact with the chronic fatigue service and received a diagnosis of chronic fatigue syndrome from Jan 1, 2007, to Dec 31, 2013.Anyone who was active as a patient with chronic fatigue syndrome or newly diagnosed as a patient with chronic fatigue syndrome at any point of this period was followed up until their death or the end of the observation period.The diagnosis was ascertained from having received the prespecified clinic code for a chronic fatigue syndrome diagnosis, which was the ICD-10 code for neurasthenia in structured fields within CRIS, and was supplemented by a bespoke natural language processing application developed at SLaM using General Architecture for Text Engineering software, which extracts and returns diagnostic statements from open-text fields of the source electronic health records.17,We emphasise that we do not use the category or criteria for neurasthenia in either our clinical or research practice—it is just a computer code imposed by our data or financial management systems that run across the trust and which are based on the ICD-10.The analysis outcome is mortality over a 7-year observation window.In each NHS trust, a list of deceased people is obtained on a monthly basis from the “Service User Death Report” of “the Spine”, maintained by NHS Care Records Service.18,Therefore, the date of death of each deceased patient ever served by SLaM is recorded.Further routine checking occurs for details, including the cause of death, which were retrieved from the diagnosis in death certificate via linkage with nationwide data from the UK Office of National Statistics, and classified by code of the 10th edition of the WHO International Classification of Diseases.ICD-10 codes for cause of death were searched and ascribed to malignant neoplasm, suicide, or other causes.Date of birth, sex, and ethnic origin were routinely recorded in NHS medical records.For the classification of age bands, the index date was set as July 1, 2010, or at death, whichever came first, to define age.Ethnic group was divided into four categories: white, black, Asian, and mixed, unknown, or other.Presence of a lifetime diagnosis of depression was defined as having had a recorded depressive episode or recurrent depressive disorder.Multiple deprivation score, a measure of socioeconomic status developed by the UK Office of National Statistics, which combines various indicators to include a range of economic, social, and housing dimensions into one deprivation score for each small area in the UK, was also available for analysis.19,SMRs were calculated for the cohort of patients with chronic fatigue syndrome during the 7-year observation period, using number of deaths observed in SLaM records as the numerator.The denominator was the expected number of deaths, estimated by 5-year age bands, and sex-specific mortality rates for the England and Wales population in 2011 multiplied by the weighting of average person-years in the at-risk period experienced by chronic fatigue syndrome patients in each age and sex category.20,We also did stratified analyses of SMRs by splitting the target population into groups for ethnic category, presence or absence of a lifetime diagnosis of depression, and tertiles of multiple deprivation scores.Focusing on 
suicide-related mortality of particular interest, we adapted competing risk regression, a modified Cox modelling method developed by Fine and Gray21 for univariate and multivariate analysis, with suicide-specific deaths as the target events and other causes of death as competing outcomes.Subhazard ratios and their 95% CIs were thus generated with the existence of lifetime diagnosis of depression as the major exposure of interest.This time-to-event analysis method accounts for the fact that cohort members are subject to various potential competing causes of death, which might occur ahead of the specific cause of interest.The main purpose of the modification of the Cox model was to discriminate censoring between deaths from other causes and end of follow-up or loss to follow-up to have a better estimation of relative risk on the specific event of interest within the chronic fatigue syndrome cohort.We regarded age and sex as potential confounders in the multivariate analysis.Tertile of multiple deprivation score and ethnic origin were too extreme to be imputed as potential confounders in multivariate analysis.All analyses were done by STATA SE and the significance level was set as 0·05.The funders had no role in study design; in the collection, analysis, or interpretation of data; in the writing of the report; or in the decision to submit the paper for publication.ER and C-KC had full access to all the data in the study and all authors had final responsibility for the decision to submit for publication.We identified 2147 cases of chronic fatigue syndrome in CRIS with 17 deaths.Of them, 1533 patients were women of whom 11 died, and 614 were men of whom six died.Eight deaths were from malignant neoplasm, five from suicide, and four from other causes.There was no significant difference in age-standardised and sex-standardised mortality ratios for all-cause mortality or cancer-related mortality.This remained the case when stratified by sex, and when those deaths from external causes were removed from the analysis.However, there was a significant increase in suicide mortality with an SMR of 6·85.Although the suicide-specific SMR was significantly increased compared with the general population, if there had been two fewer deaths by suicide, this result would have been non-significant, although the effect size would still be indicative of a strong effect.Table 1 shows detailed SMRs for the study cohort.1583 patients were white, 93 black, 48 Asian, and 423 other, mixed or unknown ethnic origin.One patient who died from cancer had a missing ethnicity value and was excluded from the analysis.All other patients who died were white.When restricted to only include white patients, there remained no significant difference in age-standardised SMR for all-cause mortality or cancer-specific mortality.Suicide-specific mortality remained significantly elevated.When stratified by lifetime diagnosis of depression, 216 patients had a recorded lifetime diagnosis of F32.x or F33.x.Four of 17 patients who died had a lifetime diagnosis of depression, for two of whom the cause of death was suicide.No significance was identified for all-cause, suicide-specific, or cancer-specific mortality by the presence of lifetime diagnosis of depression.The mean multiple deprivation score was 22·4%, suggesting that the average patient in our cohort lived in less deprived areas than 78% of the UK population.64 of 2147 patients had missing MDS values of whom one died from suicide and was excluded from the analysis.There was no 
significant difference in age-standardised and sex-standardised SMRs for all-cause or cancer-related mortality in any tertile of the MDS.Suicide-specific mortality remained significantly increased in the lower and middle MDS tertile.There were no deaths from suicide in the upper MDS tertile.With regards to the outcomes of competing risk regression using the significantly increased suicide-related mortality as the specific event of interest, univariate analyses showed that women with chronic fatigue syndrome had a highly raised but not significant relative risk of death from suicide-specific causes.Patients with a lifetime diagnosis of depression had a high risk of dying from suicide.The result remained significant when age and sex were controlled as confounders.Although the all-cause and cancer-specific mortality of patients with chronic fatigue syndrome in specialist care was not significantly different to that of the general population, the risk of suicide was higher.This is the first study to show a specific increased risk of suicide in a population of patients with chronic fatigue syndrome compared with the general population; however, if there had been two fewer deaths by suicide, this risk would not be significantly increased.22,There are limitations to our data including that, despite being the largest study of mortality in chronic fatigue syndrome available so far, the sample size is still modest.The all-cause mortality gave an estimation of SMR close to 1, and the study had insufficient statistical power to identify such a small effect size with a wide confidence interval.The SMR for suicide is greatly increased and although the estimate is imprecise, it is highly unlikely that the result is due to chance.The modest sample size limited our ability to explore other cause-specific mortality, or the effect of chronic fatigue syndrome on mortality in subgroups of patients.In view of the observational nature of the study design, and the limited number of confounders measured and controlled, it is possible that the findings are a result of confounding.For example, because we relied on population mortality rates, we were unable to control for smoking, BMI, and a range of chronic diseases that might affect mortality risk.Although the joint chronic fatigue syndrome service offered by SLaM and KCH is a national referral service, more than 80% of patients in the cohort were resident in the south of England, and as such national mortality statistics may not be representative of this region.However, previous work to establish sensitivity between mortality in “England and Wales” and London concluded there was no significant difference between the mortality estimates.18,We also accept that the cohort is quite young and it might, at least theoretically, be possible that differential mortality rates could have emerged after the 7-year observation window.Patients concerned by ongoing fatigue symptoms might not wish to be referred to mental health services or be assessed by a psychiatrist."Reasons for this are multifactorial but include the perceived stigma of psychological and rehabilitation treatment, and some patients' views that the cause of their symptoms is biological precludes any form of psychological treatment.Because the referral pathway for this centre includes a full assessment including a psychiatric evaluation, an argument could be made that cases referred to the joint SLaM and KCH service may not be representative of chronic fatigue syndrome cases seen in secondary and tertiary care, 
and may include a referral bias, favouring patients with more severe chronic fatigue syndrome, psychiatric comorbidity, and higher socioeconomic status.However, the study sample has previously been shown to be typical of secondary and tertiary care cases in the UK and internationally.15,We recognise the sample might not be generalisable to primary-care or community-based samples of patients with chronic fatigue syndrome,14,23 or generalisable to health-care settings in which services are not free to consumers.However, due to the no-cost setting, we are more likely to capture a broad coverage of source population when compared with insurance-based national services, and only moderate to severe cases not seen by the service would be those that can afford private medical care in the UK.Because this study is restricted to patients aged over 15 years the results cannot be extrapolated to children with chronic fatigue syndrome.Finally, our results might be affected by prevalence bias whereby cases known to a service within a given time are dominated by those with prolonged clinical courses; therefore, they cannot be taken to generalise to incident cases.Much research has been done to investigate the association between chronic fatigue syndrome and psychiatric disorder comorbidity.A significant cross-sectional and prospective association exists between chronic fatigue syndrome and non-exclusionary psychiatric disorder comorbidity,24–26 with depression and anxiety disorders being strongly associated with chronic fatigue syndrome.The incidence of detected comorbidity of psychiatric disorder are similar to those seen elsewhere.27,The lack of increase in all-cause mortality within the chronic fatigue syndrome cohort compared with the general population contrasts with that observed in most psychiatric disorders, which show increased mortality especially due to accidents, cancer, and cardiovascular disease.18,The reasons for the normal all-cause mortality may be that these patients might have good health behaviours, an inherently smaller effect size, or that all-cause mortality is confounded by the higher socioeconomic status of the patient cohort.Although the suicide-specific SMR is raised compared with the general population, it is lower than for psychiatric disorders including affective disorders, personality disorders, and alcohol dependence reported in other population-based studies.28,This study highlights the importance of adequate assessment of mood and other psychiatric symptoms in patients with chronic fatigue syndrome, because lifetime diagnosis of depression is an independent risk factor for increased risk of completed suicide in this population.Although completed suicide was a rare event, the findings strengthen the case for robust psychiatric assessment by mental health professionals when managing individuals with chronic fatigue syndrome.
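The age- and sex-standardised mortality ratio (SMR) used above is, in essence, the ratio of observed deaths in the cohort to the deaths expected if the England and Wales age- and sex-specific rates applied to the cohort's person-years, with an exact Poisson confidence interval placed around the observed count. The short sketch below illustrates only that calculation; it is not the authors' Stata code, and the stratum rates and person-years are hypothetical placeholders (only the total of 17 observed deaths is taken from the text above).

# Minimal sketch of an SMR with an exact Poisson (Ulm) 95% CI.
# The strata, reference rates and person-years below are illustrative only.
from scipy.stats import chi2

def smr_with_ci(observed, expected, alpha=0.05):
    """Return the SMR and exact Poisson confidence limits."""
    smr = observed / expected
    lower = chi2.ppf(alpha / 2, 2 * observed) / (2 * expected) if observed > 0 else 0.0
    upper = chi2.ppf(1 - alpha / 2, 2 * (observed + 1)) / (2 * expected)
    return smr, lower, upper

# Hypothetical strata: (reference death rate per person-year, cohort person-years)
strata = {
    ("female", "40-44"): (0.0012, 1800.0),
    ("female", "45-49"): (0.0019, 1500.0),
    ("male", "40-44"): (0.0020, 700.0),
    ("male", "45-49"): (0.0031, 600.0),
}
expected = sum(rate * person_years for rate, person_years in strata.values())
observed = 17  # total deaths observed in the cohort, as reported in the text

print(smr_with_ci(observed, expected))

The competing-risks analysis itself would normally be fitted with a Fine and Gray model, for example Stata's stcrreg command or the crr function in the R cmprsk package, rather than computed by hand, so that step is not sketched here.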
Background Mortality associated with chronic fatigue syndrome is uncertain. We investigated mortality in individuals diagnosed with chronic fatigue syndrome in secondary and tertiary care using data from the South London and Maudsley NHS Foundation Trust Biomedical Research Centre (SLaM BRC) Clinical Record Interactive Search (CRIS) register. Methods We calculated standardised mortality ratios (SMRs) for all-cause, suicide-specific, and cancer-specific mortality for a 7-year observation period using the number of deaths observed in SLaM records compared with age-specific and sex-specific mortality statistics for England and Wales. Study participants were included if they had had contact with the chronic fatigue service (referral, discharge, or case note entry) and received a diagnosis of chronic fatigue syndrome. Findings We identified 2147 cases of chronic fatigue syndrome from CRIS and 17 deaths from Jan 1, 2007, to Dec 31, 2013. 1533 patients were women of whom 11 died, and 614 were men of whom six died. There was no significant difference in age-standardised and sex-standardised mortality ratios (SMRs) for all-cause mortality (SMR 1.14, 95% CI 0.65-1.85; p=0.67) or cancer-specific mortality (1.39, 0.60-2.73; p=0.45) in patients with chronic fatigue syndrome when compared with the general population in England and Wales. This remained the case when deaths from suicide were removed from the analysis. There was a significant increase in suicide-specific mortality (SMR 6.85, 95% CI 2.22-15.98; p=0.002). Interpretation We did not note increased all-cause mortality in people with chronic fatigue syndrome, but our findings show a substantial increase in mortality from suicide. This highlights the need for clinicians to be aware of the increased risk of completed suicide and to assess suicidality adequately in patients with chronic fatigue syndrome. Funding National Institute for Health Research (NIHR) Biomedical Research Centre at South London and Maudsley NHS Foundation Trust and King's College London.
81
How do cities support electric vehicles and what difference does it make?
It is generally agreed that the global atmospheric concentrations of Greenhouse Gases such as CO2 have increased markedly as a result of human activities since the industrial revolution and humans are clearly influencing the climate system.The largest growth in anthropogenic GHG emissions between 1970 and 2004 was attributed mainly to energy supply, industry and transport.Transport in particular relies strongly on fossil fuels and accounts for about a quarter of global energy-related GHG emissions.Transport is a key enabler for economic growth that supports the productivity within conurbations and their catchment areas, by getting people to work and allowing the transfer of goods and services, which are all keystones of the economy.It is therefore important to reconcile the need for travel with the need to reduce carbon emissions from transport.This is particularly challenging in a post-2008 age of austerity where economic growth and productivity have, at least, as high a political priority as decarbonisation.There is an urgent need to concentrate on cities and their sustainable transport strategies for dealing with the challenges that climate change may bring."Today 54% of the world's population live in urban areas which is anticipated to increase to 66% by the year 2050.International and national commitments influence European city strategies positively.Urban areas in general and cities in particular are the hub of innovation, power and wealth and can shape socio-technical transitions, but are also responsible for some 70% of global energy related carbon emissions.Nevertheless, a self-reported survey of 36 megacities demonstrated that cities believe that they have the power and opportunities to take action to mitigate climate change.In the UK, the Climate Change Act, 2008) placed a duty onto the country to ensure that net carbon account for the year 2050 is at least 80% lower than the 1990 baseline."The Act aims to improve carbon management and help the UK's transition towards a low carbon economy. 
"Whilst the country's total GHG emissions were 29% lower in 2013 compared to 1990 levels, the emissions from the transport sector remained nearly constant in 2013 compared to 1990 levels.58% of the GHG emissions from the transport sector are attributed to cars and taxis, 12% to light vans and 21% to other road vehicles such as buses and Heavy Good Vehicles.It is evident that emission reductions from the transport sector are required to meet the overall reduction targets and since a large share of the emissions are coming from cars and light vans, climate change mitigation strategies are promoting the uptake of ultra-low carbon vehicles for road transport.One potential strategy for the reduction of emissions from cars and light vans is the electrification of the fleet through the replacement of existing vehicles with an electric equivalent.Research within this field has shown that EVs produce a decrease in the well to wheel emissions for CO2 in a country with a less carbon intensive power grid but demonstrate a reduced benefit when the full life cycle assessment of the EV is considered.Despite the larger amount of embedded carbon within the life-cycle of the EV it is possible that recent developments regarding the acknowledged gap between test cycle and real world emissions may show that the on-road benefits of EVs may be even greater than previously calculated.A recent study has highlighted the current state on policy goals in the UK and Germany to decrease GHG emissions with the fast introduction and diffusion of low emission vehicles and simultaneously the development or preservation of their automotive industry and its competitiveness.In 2011, the DfT committed £400 million for the development, supply and use of ultra-low emission vehicles.This package included over £300 million funding for the Plug-in car grant which reduces the upfront cost of purchasing EVs and qualifying Plug-in Hybrid Electric Vehicles plus £30 million for recharging infrastructure provision through the Plugged in Places Programme.The first eight Plugged in Places Project aimed to install up to 8500 charging posts across Central Scotland, the East of England, Greater Manchester, London, the Midlands, Milton Keynes, the North East of England and Northern Ireland.Since then, the UK Government has announced a further £37 million investment into public recharging infrastructure at train stations, on public sector estate and on-street and rapid charging networks.Despite Government efforts to promote the uptake of EVs, their market share is falling short of Government and industry expectations, with some authors suggesting that they will remain a niche market over the next 20 years.The UK market share of EVs in 2015 was just over 1%.If the UK is to meet its reduction targets, the Committee on Climate Change estimates that the ultra-low emission vehicles should reach a market share of 60% by 2030 indicating that drastic measures are needed to reach these market shares.A range of studies have investigated the incentives and policy requirements that can increase EV uptake, but little is known if and how local policies and/or strategies do impact on EV usage and its supporting infrastructure.To our knowledge, this paper reports for the first time the impact local climate change mitigation strategies have on the EV uptake and the provision of public charging infrastructure.To achieve this aim, the paper addresses the following objectives:Report on climate change mitigation strategies published by 30 UK cities,Analyse car 
ownership, EV registrations and the provision of public EV infrastructure,Conduct statistical testing and modelling to determine the impact EV strategies have on the uptake of EVs and the charging infrastructure provided at the city-levels,Provide explanations of the findings and recommendations for cities to promote EV and infrastructures effectively.To facilitate the analysis of mitigation efforts, the climate change policies and/or strategies were collected at the city level, i.e. the city is defined by its administrative and/or political boundaries and can be referred to as an Urban Area.Cities were selected following the Urban Audit Methodology.The Urban Audit aims to provide a balanced and representative sample of European cities and applies the following rules for including cities in the database:Approximately 20% of the national population should be covered by Urban Audit;,National capital cities and where possible regional capitals are included;,Some large and medium-sized cities are included; and,Cities should be geographically dispersed within countries.The Urban Audit lists 30 UK cities/urban areas that are deemed a good representation of the UK as a whole and we included all these cities in our research.The Urban Audit Cities represents a population of around 17,300,000, including two Welsh, three Scottish and two cities from Northern Ireland alongside 23 English cities.By far the largest city is London with a population of 7.6 million and the city with the smallest is Stevenage with a population 81,000.The greater area of London is most densely populated and Wrexham the least densely populated city with 257 residents per km2 in 2006.The 8 largest economies in England are referred to as Core Cities.These cities, forming the economic and urban cores of their surrounding areas, are major centres of regional and national economic growth, are part of this research.We gathered and analysed the climate change policies and/or strategies from the 30 UK urban areas by retrieving them from the website and/or by contacting the city directly.Of the 30 UK Urban Audit cities, 28 have published climate change policies or strategies outlining how they will tackle climate change mitigation.In the UK, cities are part of larger Metropolitan, District and County Councils and some cities do refer to regional strategies."For example Stoke on Trent Council does refer to the “South Staffordshire Council Climate Change Strategy” and Gravesham Council to the “Kent's Adaptation Plan Action Plan 2011-13”.In total, 307 documents were provided by the local authorities.Based on an assessment of suitability for analysis, 52 documents were analysed in detail.The documents are published at various dates and by different departments, for example, the Climate Change action programme for Aberdeen is the oldest ‘live’ document, published in 2002.The mitigation and adaptation strategies for London underwent various stages of consultation over recent years and were finally approved and published in October 2011.Out of the 52 documents, 18 defined the scope as the activities that are controlled by the council and 32 are covering activities across the council i.e. household, industry and business activities.Only documents from Gravesham and Stoke have not stated the scope of the strategy i.e. 
if the strategy is for the councils own operation only or if it does cover households, industry etc.Derry-Londonderry and Wrexham have not published an official document at the time of writing.Car ownership data and household composition data were collected from the Office for National Statistics for England and from the National Records of Scotland for Scottish cities."National travel data were used from the DfT's National Travel Survey.In order to evaluate the effectiveness of climate change strategies, the number of charging points from the National Charge Point Register, the proportion of EVs registered and the relative change in registered EVs from the DfT and the SMMT were analysed for the cities which have an EV strategy and those cities who do not using the Shapiro-Wilk test and a multi-variable regression model.All 30 cities acknowledge climate change being a threat and that their city is tackling this issue by adapting and mitigating with various levels of planning and success.Transport is listed in 45 documents by 26 of the cities with the aim of mitigating climate change by improving transportation.Transport measures proposed are wide ranging from providing green travel plan for its staff, introducing flexible working hours and low carbon vehicle fleet to developing a specific project such as the Bristol Rapid Transit Project and supporting EVs as mentioned by 46% of cities and 33% of the documents.With regard to EVs, 12 of the 25 Local Authorities had strategies promoting EVs in one shape or form.For example Aberdeen council stipulated in its Carbon Management Programme that part of its 13 Business Travel Projects one will be responsible for the installation of EV charging points in selected Council car parks.In Cambridge the Waste and Fleet Management would trial electric powered vans and introduce recharging facilities for EVs in car parks.Another example is Exeter City Council which recognised in its climate change strategy that 21% of its carbon emissions in 2004 came from road transport and, in partnership with the Transport Authority, wants to encourage public transport providers to invest in transport fleet to deliver carbon efficiencies using e.g. 
hybrid models.Finally, Manchester City Council stipulated that EVs would be the vehicles of choice and that highly visible charging stations would be made available across the city.To see how these strategies had an impact on EV uptake and infrastructure, Table 1 summarises UK Urban Audit cities, their number of registered cars, their climate change strategies and whether these strategies explicitly mention EVs as one means to mitigate climate change.In addition to that, the number of installed charging points within 5 miles of those cities has been listed.5 miles was the smallest searchable area for each city, which means that for smaller cities charging points in the surrounding areas were also counted.Table 1 also shows the number of EVs registered per city or region as reported by the DVLA to the House of Commons Transport Select Committee.This column however does not tell the full story.Many EV drivers lease their cars rather than buying them outright.Those vehicles are often registered by leasing companies which are located in London and the South East of Britain.The stark figure here is for Newcastle where over 500 Nissan LEAFs are leased by workers at the nearby Nissan factory but their home location is recorded at Nissan's head office elsewhere.The local EV charging service provider in Newcastle and the local area had over 800 EV owning members in 2014, confirming that Newcastle is a major EV hub.It was possible to use Electric Vehicle Registration data encompassing quarterly registrations up to January 2016.This data is shown in the final four columns of Table 1 and was used to create an additional metric representing the rate of increase in EV uptake.The argument is that if the climate change strategies are effective, then they will not only lead to an increase in the absolute number of EVs, but they will also contribute to the rate of increase in uptake for EVs.The data in Table 1 was used in the statistical tests.The limited range of electric vehicles is still seen by many as the key barrier to the mass uptake of EVs.This could be addressed in one of two ways: either the actual range of the cars needs to be improved, or an abundance of public charging infrastructure is needed, which would give drivers the confidence that they could complete their journeys and top up their charge as and when it was needed.Even though cities can address the lack of public recharging infrastructure, this has not been followed through by the cities which mentioned EVs in their mitigation strategy documents, as demonstrated by the analysis undertaken in this paper.Moreover, even in cities with significant EV charging infrastructure such as Newcastle, many EV drivers still believe that more public infrastructure is needed.It was found that 30% of charge events took place at public charging infrastructure, with 20% of EV drivers using public charging infrastructure as their primary means of charging.Yet, lack of public charging infrastructure was still quoted as one of the main barriers to the uptake of EVs, even by those drivers who extensively used public charging facilities.This suggests that cities may have to rethink the locations they choose for EV charging points and choose highly visible and strategic locations for the placement of new charging infrastructure.Fig.
2 shows both the average electric vehicle density and the number of installed charge points within a fixed distance of each urban audit city.Due to a lack of comparable data it was not possible to visualise data for Wales, Scotland or Northern Ireland.The data for this visualisation was retrieved from the same source as Table 1, from the DfT Statistical data set "All licensed vehicles and new registrations VEH01".The number of charging points, the proportion of EVs registered and the relative change in registered EVs were analysed.The Shapiro-Wilk test showed that both the number of charging points and the number of EVs were not normally distributed.This and the small sample size meant that non-parametric tests were used to test whether mentioning EVs in their climate change strategies influenced the uptake of EVs at the city level.The Wilcoxon rank sum test was used to compare the two groups of cities (a brief illustrative sketch of these tests follows this article's text).As shown in Table 2, there was no statistical difference between those cities that had an EV strategy and those that did not in terms of the uptake of EVs and the number of public charging posts.There is therefore no statistical difference between those cities that promote EVs in their climate change mitigation strategies and those that do not; and although it is still possible that there is an effect on EV take-up, it is clear that this is either smaller than the noise within the data or it is being masked by the effect of other variables.This is a worrying trend as reaching mitigation targets anticipates the uptake of EVs as a new means of urban transport.It is therefore important that cities begin to actively and effectively encourage the uptake of EVs and start to remove some of the barriers by, for example, providing improved infrastructure or running promotional campaigns.To test for the potential masking effect of other factors over the existence of an EV strategy, a multi-variable regression model for EV uptake was created.The variables used in the model were those that were thought to have an impact on the uptake of EVs.The variables included the total number of all cars within each urban audit city, the level of local traffic flow, the presence of an EV/CC strategy, the local car population, the local number of jobs, the average local income and the vehicle turnover in the local area.Other variables, such as local population totals or local population growth, were also investigated but were found to have little effect.Table 3 shows an example group of variables with their respective P-values.The linear regression model in Table 3 produced an R2 of 0.775 with an adjusted R2 of 0.60.It was found that with a simple regression model it was possible to predict the uptake of EVs with an adjusted R2 of 0.46 using just the local car population, the Local Job Level, the average local income and the Average Vehicle Turnover in the local area.Removing variables from the model with a low P-value improved the adjusted R2 without significantly decreasing the raw R2 value.The conclusion to be drawn from this is that local EV growth is being strongly driven by factors which are not related to local city EV or CC strategies and hence the conclusion drawn from Table 2 remains.This paper summarises findings from research into the mitigation strategies as published by 30 UK Urban Audit cities, their influence on the uptake of EVs and the future prospects for affecting the vehicle fleet.The analysis presented in this paper has shown that having a climate change mitigation strategy which includes EVs 
has no statistically significant impact on the uptake of EVs or the introduction of public charging infrastructure.Our findings suggest that cities may pay lip-service mentioning EVs in their climate change mitigation strategies.Cities must begin to actively encourage the uptake of EVs, to improve the infrastructure required for the ergonomic use of EVs and to remove some of the factors preventing drivers from purchasing these cars, whether those factors are directly related to EVs or not.In this work, we have argued that if there is an EV specific policy within the climate change mitigation strategy of a city, then it could only be judged successful if it leads to an increase in the number of EVs within that city.Unfortunately, separating the exact causes behind any particular variation in EV numbers within a city would be almost impossible due to the number of contributing factors.However, by looking at the effect of EV polices en masse, it is possible to assess if they have led to an increase in EV usage."Looking at the list of cities, their relative size and the measures they have in place, it is clear that the cities of London and Birmingham have the largest number of EV's, partially due to the Government/Corporate location of London and possibly due to the centre of the West Midlands car industry, which is likely to increase overall car turnover rates with the concomitant effect of increasing EV sales. "Moreover, it is evident that there is more experience and knowledge of EV's and their operation in these cities which means the city authorities themselves have more expert advice on how to introduce effective measures to encourage uptake of EV's.This has been corroborated through the new policy initiatives from OLEV announced in mid-2015 through their GUL Cities initiative – where cities aspiring to foster more EV ownership are encouraged to learn from those cities that have been successful at this in the period 2010 to 2014 and add new innovations to these.From the data shown here, none of the three main indicators of EV usage show any reliable statistical relationship with the presence of a specific climate change and EV policies by the cities investigated.We can assume two possible reasons:Either the motivating factors behind EV purchase and use are fundamentally beyond the abilities of cites to alter, or,The climate change policies published by the cities are ineffectual.Aspects for the first point will be true for all cities.For example, in multiple surveys the price and range of EVs has been brought up as a limiting factor in the purchase of such a vehicle.Indeed, it may be the case that in the future consumers, specifically city dwellers, will move towards an alternative transport system such as electric bikes.These are factors which an individual city is not able to alter.If the limiting factors for EV purchase are all on the national scale then it would be justified if cities did not include specific policies targeting EVs.It is possible that as the technology behind EVs improves, issues such as range and the general ergonomics of ownership will become less problematic.The second possible reason is more difficult to quantify.Whilst it may be possible to assign a cost for the implementation of any given strategy, its effectiveness is more difficult to determine.Untangling the web of behavioural influences, financial decisions and unconscious biases mean that finding the “levers” that cities can pull and their effect on the populace is a complicated task.One possibility for 
future research would be to further split each city's climate change policy and strategy into its constituent parts and then separate each policy into a series of specific actions that were planned and taken.If an action was taken by a city then there should be a corresponding expected result, such as free parking and charging, EV access to 'no car lanes' and other policy friendly incentives.Any action with either no expected result or a result that cannot be measured would be flagged as a non-workable action.From this it should be possible to build up a picture of how individual actions taken by cities affect aspects of EV uptake.Cross-sectoral implications for alternative transport strategies, including additional power generation and infrastructure requirements for electric vehicles, have been highlighted as a constraint.It has been concluded by Mazur that additional research on quantifying the environmental benefits is required and potential local transition policies do need to be consistent with governmental targets.In the UK, OLEV, the cross-departmental Government body tasked with providing national policy tools to support the roll-out of EVs and other Ultra Low Carbon Vehicles, has recognised the need to communicate the 'message' that EVs are not just a niche vehicle but something that is suitable for and would benefit much of the driving population.Hence they have been awarded a significant budget for 'communication', with the aim of providing lucid and compelling publicity that provides the public and business community with information on the benefits of owning an EV, including running costs, decarbonisation and reduction in air pollution.Much of this money will be provided to individual local authorities and in particular those who have received recent GUL City funding, helping cities to solidify and take forward their mitigation strategies.There is also the possibility that the EV specific policies at the city level are not necessarily the most important incentives for EV uptake.From the results showing the significance of the variables it could be seen that the most important factor to increase EV uptake was the number of local cars.It may be that to increase the proportion of EVs in a city, a city will have to implement policies that are "car friendly" in general rather than being seen as EV friendly specifically.We recommend that, due to the failure of current policies to increase uptake, cities must consider local characteristics to tailor their policies to increase EV uptake, whether this draws on individual aspects of the policies already used or on city wide policies enacted both within the UK and further afield.In addition it must be considered whether there are aspects of EV uptake that are out of the control of cities, e.g. 
consumer driven adoption of EVs that is motivated by either technological misgivings or cost considerations.For example, one apparently successful policy has been to invest in a public charging infrastructure which is highly visible easily accessible for drivers.Yet, many cities do not seem to actively invest in public EV charging infrastructure despite their stated aims of supporting EV uptake as part of their climate change mitigation strategies.Two notable exceptions are London and Newcastle upon Tyne.Both cities have been at the forefront of the introduction of significant public charging infrastructure and have seen a subsequent uptake of EV.Others are now following this and we are beginning to see corresponding rapid charging infrastructure on the inter-urban network too.The case study from the Switch EV trial in the North East of England has shown that electric vehicles could form a substantial part of a more sustainable urban transport system with proven carbon benefits.Expanding this into other cities and regions will allow the UK to meet its transport carbon commitments whilst delivering a user friendly transport system.In addition there is a general public support for unilateral climate policies in India and the US, which has been recognised by central government.In the UK new resources are now being allocated to ‘communications’ and media campaigns to inform the public more on the benefits of owning an EV and debunking some of the myths regarding range, purchase and running costs and performance."Much of this is targeted at supporting the cities climate mitigation policies, as there is a need to illustrate the benefits of EV's more clearly.As our analysis has shown city planners and Government are running out of measures in their tool-boxes to enable them to meet their targets.Thus even more radical thinking and policies may become necessary.It may be the case that in order for the UK to increase the fleet proportion of electric vehicles, it will need to look to other countries which have been successful in increasing their overall proportion.One example would be Norway which has seen electric vehicle market share rise to 29.1% in 2016.However, the rapid increase in EVs in Norway has come through extensive subsidisation and multiple “perks” many of which are out of the ability of local government authorities in the UK to implement.Finally, it should be noted that many of the perks for EVs are beginning to show many unintended consequences, such as congestion in bus routes, and as such could be seen as both an exemplar and as a warning on the creation of strategies designed towards a single transport goal rather than viewing the whole system.
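The statistical steps reported above (a Shapiro-Wilk check for normality, a Wilcoxon rank-sum comparison of cities with and without an EV strategy, and a multi-variable regression model for EV uptake) can be illustrated with the short sketch below. This is not the authors' code, and the paper does not state which software was used for these tests; the city values and column names are hypothetical placeholders chosen only to show the workflow.

# Minimal sketch of the analysis pipeline described above, on made-up city data.
import pandas as pd
from scipy.stats import shapiro, ranksums
import statsmodels.api as sm

# Hypothetical city-level data (placeholders, not the study's values)
df = pd.DataFrame({
    "ev_strategy":    [1, 1, 0, 0, 1, 0, 1, 0],   # 1 = EVs mentioned in the strategy
    "charge_points":  [40, 12, 8, 30, 55, 5, 22, 9],
    "ev_count":       [310, 90, 60, 250, 400, 30, 150, 70],
    "car_population": [210e3, 90e3, 75e3, 180e3, 260e3, 50e3, 120e3, 80e3],
    "avg_income":     [31e3, 27e3, 25e3, 30e3, 33e3, 24e3, 28e3, 26e3],
})

# Normality check that motivates the use of non-parametric tests
print(shapiro(df["ev_count"]))

# Wilcoxon rank-sum comparison of EV counts with and without an EV strategy
with_strategy = df.loc[df["ev_strategy"] == 1, "ev_count"]
without_strategy = df.loc[df["ev_strategy"] == 0, "ev_count"]
print(ranksums(with_strategy, without_strategy))

# Multi-variable OLS: does the strategy dummy add anything beyond other factors?
X = sm.add_constant(df[["ev_strategy", "car_population", "avg_income"]])
model = sm.OLS(df["ev_count"], X).fit()
print(model.summary())  # includes R-squared and adjusted R-squared

In this layout the coefficient and p-value on the ev_strategy dummy play the role of the strategy variable in Table 3, while model.rsquared and model.rsquared_adj correspond to the reported R2 and adjusted R2.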
Many cities publish climate change mitigation strategies and other policy measures to support the widespread uptake of Electric Vehicles (EVs). This paper investigates the effectiveness of these strategies and the provision of infrastructure in 30 UK cities, with a specific emphasis on those strategies that are within the remit of cities and local authorities. The climate mitigation strategies and supporting documents were obtained from 30 UK cities recommended by the Urban Audit Methodology. We show that 13 cities mention EVs in their strategies. Analysing EV registrations and the EV infrastructure that is provided by cities, we found that there is no statistical difference in the number of charging points or EVs between the cities that have EVs as part of their climate change mitigation strategy and those that do not. It was shown that EV uptake was more generally associated with other factors (such as local vehicle population or income) rather than any documented EV/climate mitigation strategy. We demonstrate that local strategies are failing to achieve the much needed step change and make suggestions on how to improve EV uptake, as even more radical thinking and policies may become necessary to achieve carbon reduction targets.
82
Characterization of sheep pox virus vaccine for cattle against lumpy skin disease virus
Lumpy skin disease virus, sheep pox virus and goat pox virus comprise the Capripoxvirus genus within the Poxviridae family.Sheep pox and goat pox are endemic in northern and central Africa and in large parts of Asia.Lumpy skin disease occurs across Africa and has recently been aggressively spreading in the Middle East, despite excessive vaccination campaigns carried out in the region.The latest outbreaks of LSD were reported to the World Organization for Animal Health Wahid database from Turkey and Iraq, raising concerns that the disease will continue to spread to Europe and Asia.All cattle breeds, ages and sexes are affected, although the disease is more severe in young animals and cows in the peak of lactation, causing severe production losses throughout the cattle industry.It is widely agreed that vaccination is the only effective way to control the spread of LSDV in endemic countries.In previously disease-free countries, slaughter of infected and in-contact animals and movement restrictions have been effective, as long as the disease is detected at a very early stage and control measures are implemented without delay.However, if the disease has accidentally gone unnoticed, allowing time for vectors to become infected, it is difficult if not impossible, to eradicate the disease without vaccination.In resource-limited countries, slaughter of infected and in-contact animals is seen as a waste of a valuable source of food and is not usually feasible.In addition, in affected regions, it is often impossible to effectively implement movement restrictions for small and large ruminants.Cross-immunity is known to occur between the members of the genus Capripoxvirus.Because SPP and GTP do not occur in southern Africa, only attenuated LSDV vaccines are used against LSDV in the region.Whereas, in central and northern Africa and in the Middle East, where the distribution of SPP, GTP and LSD overlap, attenuated SPPV vaccines, such as KSGP O-240, Yugoslavian RM65 and Romanian SPPV strains, have been used against LSDV.Because the strain KSGP O-240 infected sheep and goats, causing only mild clinical disease, it was long considered as an ideal vaccine candidate against both SPP and GTP.In addition, it was surprisingly easily attenuated, after only 6 passages on cell cultures.Incomplete protection against LSD has been reported in cattle vaccinated with all SPP vaccines.On the other hand, the KSGP O-180 strain, collected from sheep during the same epizootics but at different time points than the KSGP O-240 strain, was successfully used in Kenya as a vaccine against SPPV, GTPV and LSDV without adverse reactions.The difference was that KSGP O-180 isolate had been attenuated by passaging the virus 18 times on bovine fetal muscle cells.The efficacy of the vaccine for sheep, goats and cattle was demonstrated by a challenge experiment and in the field.Lumpy skin disease was reported in Kenya for the first time in 1957.The disease was introduced to a mixed cattle and sheep farm near Nakuru by indigenous sheep infected with SPPV originating from the nearby Baringo district of Kenya.Sheep and Ayrshire calves were penned together at night.Soon after arrival, the lambs started to show clinical signs of SPP followed by a similar condition in the calves.During the same time period, SPPV was isolated from SPP samples from the Isiolo district and the Kedong Valley.The Isiolo strain is known to experimentally infect cattle and the Kedong strain has been used as a vaccine for cattle against LSDV in Kenya.In general, 
capripoxviruses are considered to be very host-specific.In addition to the isolate KSGP O-240, only a few other SPPV and GTPV strains have been known to affect both sheep and goats.However, no reports exist on CaPV infecting all three species: sheep, goats and cattle.The major difference between the African and the Middle Eastern and Indian SPP and GTP strains seems to be the wider host range of the African isolates.The Kenya sheep-1 strain is derived from the attenuated KSGP O-240 vaccine strain.Recent molecular studies have reported a close relationship between the KS-1 and LSDV, suggesting that KS-1 is actually LSDV.Later this finding was confirmed by sequencing the host-specific G-protein-coupled chemokine receptor, or RNA polymerase genes, which revealed the phylogenetic grouping of CaPVs.Members of the Capripox genus cannot be distinguished using serological methods.A recently published real-time PCR assay provides a simple tool for differentiation of CaPV strains.Here we report the molecular characterization of the virulent Kenyan KSGP O-240 field strain, Isiolo and Kedong SPP isolates and the attenuated KS-1 and KSGP O-240 vaccine strains held in The Pirbright Institute reference virus collection.Selected commercially available SPPV vaccines against LSDV, used for cattle in the Middle East and northern and central Africa, were also analyzed.Virulent KSGP O-240 field strain of 3rd passage, Isiolo SPPV and Kedong SPPV and the following attenuated vaccine viruses: KS-1 isolate, Kenyavac and Jovivac vaccines by Jordan Bio-Industries Centre; and Sheep Pox Vaccine by Saudi Arabian Veterinary Vaccine Institute, were included in this study.DNA was extracted from virus suspensions using DNeasy Blood & Tissue Kit following the manufacturer’s instructions.The presence of viral DNA in the samples was quantified using a previously described general CaPV real-time PCR.Primers and a probe were used in combination with a QuantiFast Probe PCR Kit in a Mx3005p Multiplex Quantitative PCR System.In order to identify which of the three CaPV strains were present in each sample, a species-specific real-time PCR method was used.The PCR assay detects differences in the melting point temperatures for SPPV, GTPV and LSDV, obtained after fluorescence melting curve analysis.It targets a 200 bp region within the GPCR gene.Samples were run on the Mx3005p Multiplex Quantitative PCR System and melting curves were analyzed to determine the CaPV strain.Mongolian GTPV was used as a positive control for GTPV, Mongolian SPPV for SPPV and South African LSDV Neethling strain for LSDV.RNase free water was used as a negative control in all PCR runs.Full length GPCR and RPO30 genes were generated by amplification of overlapping fragments using primers pairs described by Gelaye et al.In each reaction, 4 μl of the viral DNA was mixed with 12.5 μl KOD Hot Start Master Mix and 1 μl of each forward and reverse primer in total volume of 25 μl.The DNA was initially denaturated at 95 °C for 2 min and amplification was carried out in 35 cycles of 95 °C for 20 s, 65 °C for 10 s, 70 °C for 20 s.The amplification products were visualized and assessed for size by agarose gel electrophoresis.All PCR products were purified by GFX™ PCR DNA and Band Purification Kit.The amplicons were sequenced using the BigDye Terminator v3.1 Cycle Sequencing kit in a 3730 DNA Analyzer according to the manufacturer’s instructions using the same primer sets as for PCR amplification.The resulting sequences were assembled with the SeqMan Pro™ program and 
aligned with each other using the CLUSTAL W algorithm in BioEdit 7.0.5.3.Molecular phylogenetic analyses were performed using MEGA 5.2.The evolutionary history was inferred using the Neighbor-Joining method and confidence on branching was assessed using bootstrap resampling.The evolutionary distances were computed using the Kimura 2-parameter method.Using the species-specific CaPV real-time PCR method, the virulent KSGP O-240 isolate was characterized as LSDV.Isiolo and Kedong SPPV isolates were identified as GTPVs.The attenuated KSGP O-240 vaccine virus present in the Kenyavac and KS-1 isolates were identified as LSDVs.The RM65 strain in Jovivac and the Romanian SPPV strain in the Saudi Arabian Sheep Pox Vaccine were confirmed as SPPVs.The sequences of RPO30 and GPCR genes were determined for the six capripoxviruses under study and submitted to GenBank.These were compared with the sequences of capripoxviruses already available on the public sequence databases.Molecular phylogenetic analyses were performed on the coding regions of the RPO30 gene and the GPCR gene.These results confirmed the identifications made using the real-time PCR.Additionally, the sequence of KS-1 RPO30 gene revealed two A to G nucleotide substitutions between the KS-1 Pirbright isolate and the published sequence.These are clearly both A in the new sequence but do not result in any amino acid substitutions.The KSGP O-240 strain has long been considered as the SPPV reference virus for comparison with LSDV.This strain was chosen for use in vaccines by many vaccine producers because it was one of the CaPV strains listed as a possible seed virus for LSD vaccine in the LSDV chapter of the OIE Manual of Diagnostic Tests and Vaccines for Terrestrial Animals.Tulman et al. were the first to report similarities in the pattern of open reading frames of the KS-1 virus and LSDV: ORF 002, 155 and 013 were intact in both KS-1 and LSDV strains, while these regions were disrupted in other SPPVs.In general, the importance of this finding has not been fully appreciated because the origin of the KS-1 strain was not widely recognized to be an attenuated strain of KSGP O-240 strain and therefore LSDV.Because the whole genome of the KSGP O-240 has not yet been sequenced or published, the final confirmation of relationships between KSGP O-240 and LSDV is still to be investigated.The findings of our study are in agreement with previously reported results: the virulent KSGP O-240, the attenuated KSGP O-240 strain as well as the KS-1 isolate were identified as LSDV.The real identity of the vaccine virus explains the easy attenuation of the virus for safe use in sheep and goat vaccines.It is however clear that the level of attenuation of the virus was insufficient for the use of KSGP O-240 for cattle, in which clinical disease was observed post-vaccination.The level of attenuation in Kenyavac is 13–27 passages on lamb testis cells.In a similar KSGP O-240 vaccine, “Tissue Culture Sheep Pox Vaccine”, that was used against LSDV during the LSD outbreak in 2005–2006 in Egypt, was attenuated three times on choroid plexus cells, followed by three times on lamb fetal lung cells and three times on Vero cells.The level of attenuation is considerably lower than reported for safe use of LSDV in cattle.The LSD Neethling strain required 60 passages on lamb kidney cells and 20 on chorioallantoic membrane.The LSD Madagascan strain was passaged 101 times in rabbit kidney and 5 times in fetal calf kidney cells.After experimental infection with LSDV only half of 
the infected cattle developed clinical disease and silent infections without skin lesions are known to commonly occur in field outbreaks of LSDV.In above mentioned animal experiments, a minimum number of six, highly susceptible, naïve animals were required in order to produce clinical disease in cattle challenged with LSDV via an intravenous and/or intradermal route.This gives guidelines on the animal numbers required for safety and efficacy experiments for CaPV vaccines.It was believed that due to the cross-protection within the genus, any CaPV isolate could be used as a vaccine against LSDV.However, experience in the field setting indicates the superiority of LSDV vaccines when compared to SPPV vaccines against LSDV.In addition, according to the previous recommendations for SPPV vaccines for cattle against LSDV, the suggested titre for RM65 or Romanian SPPV vaccines is 10–50 times the recommended dose for sheep, whereas for KSGP O-240 strain an immunizing dose of 103.5 TCID50 was considered desirable for field vaccination campaigns.However, these recommendations may be out-of-date and the efficacy of the vaccine should be re-tested by a challenge experiment in a controlled environment, using sufficiently sensitive testing methods such as real-time PCR and a sufficient number of fully susceptible cattle.Due to difficulties controlling LSDV by vaccination in the Horn of Africa and the Middle East, and taking into consideration the distinct threat of incursion of all CaPV diseases to Europe and Asia, a new generation of effective and safe vaccines against LSDV, SPPV and GTPV are urgently required.Ideally the vaccine should be affordable and available for use both in endemic and non-endemic countries without adverse effect on global trade of live animals and their products.None of the currently available CaPV vaccines provides total protection against LSDV for all vaccinated individuals, which is a clear disadvantage for control of a vector-borne disease.In the OIE’s Manual Chapter 2.4.14 – Lumpy skin disease, the KSGP O-240 strain is mentioned as one of the four CaPV vaccine strains used for cattle against LSDV.The aim of this report was to confirm and highlight the most recent molecular findings, indicating that the KSGP O-240 vaccine strain is LSDV which at the low level of attenuation is still virulent for cattle.Consequently, the identity of the virus in all of the commercially available KSGP O-240 vaccines is likely to be LSDV instead of SPPV and characterization of the vaccine virus should be carried out before use in cattle.Clinical disease detected in KSGP O-240 vaccinated cattle is more likely to be caused by insufficient attenuation of the vaccine virus than incomplete protection and therefore the safety of the vaccines should be re-evaluated before the vaccine is used for cattle.Additionally, the use of virulent vaccine may lead to the spread of the vaccine virus itself via arthropod vectors.However, sufficiently attenuated KSGP O-240 strain is likely to afford protection for cattle equivalent to other LSDV vaccines.Due to their broad host-range the Isiolo and Kedong isolates may provide an alternative vaccine candidate that is effective against all capripox diseases.Both isolates were collected from infected sheep, molecular studies identified both as GTPVs.Most phylogenetic studies suggest that GTPV is more closely related to LSDV than SPPV is to LSDV.In addition, the Isiolo strain has been shown to experimentally infect cattle, while the Kedong vaccine strain protects cattle 
against LSDV.This warrants further investigation of the suitability, efficacy and safety of the Isiolo and Kedong GTP strains, as well as sufficiently attenuated KSGP O-240 and O-180 strains as a basis for affordable broad-spectrum vaccines against LSDV, SPPV and GTPV.
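As a compact illustration of the distance measure underlying the phylogenetic analysis above, the Python sketch below computes the Kimura 2-parameter (K2P) distance between two aligned sequences. It is not the study's MEGA 5.2 / Neighbor-Joining workflow, and the two short fragments in the example are hypothetical; the point is only to show how transition and transversion proportions enter the K2P formula.

```python
import math

def kimura_2p_distance(seq1, seq2):
    """Kimura 2-parameter distance between two aligned, equal-length sequences.

    d = -0.5 * ln((1 - 2P - Q) * sqrt(1 - 2Q)),
    where P and Q are the observed proportions of transitions and transversions.
    Gapped or ambiguous sites are skipped.
    """
    purines, pyrimidines = {"A", "G"}, {"C", "T"}
    transitions = transversions = compared = 0
    for a, b in zip(seq1.upper(), seq2.upper()):
        if a not in "ACGT" or b not in "ACGT":
            continue  # ignore gaps and ambiguous bases
        compared += 1
        if a == b:
            continue
        if {a, b} <= purines or {a, b} <= pyrimidines:
            transitions += 1      # A<->G or C<->T
        else:
            transversions += 1    # purine <-> pyrimidine
    p, q = transitions / compared, transversions / compared
    return -0.5 * math.log((1 - 2 * p - q) * math.sqrt(1 - 2 * q))

# Hypothetical 24-bp fragments, for illustration only
print(round(kimura_2p_distance("ATGGCGTACCTTGACAATCGGTAC",
                               "ATGGCATACCTTGGCAATCGGTAC"), 4))
```

Pairwise distances of this kind would then feed a Neighbor-Joining tree builder; in practice a package such as MEGA, or an equivalent library, handles both steps.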
Lumpy skin disease has a significant economic impact on the cattle industry in Africa. The disease is currently spreading aggressively in the Near East, posing a threat of incursion to Europe and Asia. Due to cross-protection within the Capripoxvirus genus, sheep pox virus (SPPV) vaccines have been widely used for cattle against lumpy skin disease virus (LSDV). In the Middle East and the Horn of Africa these vaccines have been associated with incomplete protection and adverse reactions in cattle post-vaccination. The present study confirms that the real identity of the commonly used Kenyan sheep and goat pox vaccine virus (KSGP) O-240 is not SPPV but is actually LSDV. The low-level attenuation of this virus is likely not sufficient for safe use in cattle, causing clinical disease in vaccinated animals. In addition, the Isiolo and Kedong goat pox strains, capable of infecting sheep, goats and cattle, are identified for potential use as broad-spectrum vaccine candidates against all capripox diseases.
83
Performance assessment of chemical mechanical planarization wastewater treatment in nano-electronics industries using membrane distillation
In the 1980s chemical mechanical planarization was introduced at IBM for integrated circuit fabrication, later emerging as a crucial polishing technique in nano-electronics industries .In CMP processes abrasive materials are used in combination with ultrapure water and a range of chemical additives: complexing agents ; oxidizers ; corrosion inhibitors ; pH adjustors ; surface active agents ; high molecular weight polymers ; and biocides .According to Babu around 0.2–0.8 L of CMP slurry is employed per produced wafer, leading to a wastewater stream of about 7–10 L per wafer.The produced wastewater contains approximately 3–12% solids by weight and pH levels range from 6.8 to 10.Moreover, the total organic carbon levels are distributed between 2 and 15 mg/L.Total concentrations of used silica and alumina in wastewater as reported in the literature are 98–4000 mg/L and 0.01–11.8 mg/L respectively.With an ever-growing application of CMP technology in nano-electronics industries, the amount of CMP wastewater has increased exponentially and is thus attaining considerable attention.Typically, the fresh water demand of a nano-electronics manufacturing plant is approximately 1000 m3/day where 30–40% of total is accounted by CMP processes .To observe EU Directive 91/271/EEC concerning urban wastewater treatment , CMP-derived wastewater needs to be treated by removing nano-sized amorphous silica, alumina, ceria, and other chemical contaminants prior to discharge.Traditional chemical coagulation/flocculation treatment processes involve high dosage of chemicals to help solids content agglomerate and settle down for removal through subsequent filtration .These processes lead to high operational costs due to high chemical demand and sludge disposal costs.Moreover, such operations are unable to ensure a high separation efficiency for the typical contaminant concentrations.A number of alternate CMP wastewater treatment methods have been presented and analyzed by several researchers and include processes involving electro-chemical separation, membrane separation, and other methods.Within the first category, Belongia et al. studied electro-decantation and electro-coagulation for removing and reusing alumina and silica from CMP waste.The findings illustrated that the coupled method has the potential to agglomerate and recover alumina and silica.Lia et al. used an aluminium/iron electrode pair and observed 96.5% turbidity removal and 75–85% chemical oxygen demand reduction with an effluent COD of below 100 mg/l.Yang et al. investigated electro-microfiltration considering pulsed mode, no electric field mode and continuous mode operations for treatment of CMP wastewater.The outcomes showed that the continuous mode operation displayed the optimized results i.e., high quality filtrate having turbidity as low as 0.39 NTU.Further, coupled electro-microfiltration and electro-dialysis has also been examined by these researchers .Coupling the two methods yielded a permeate/filtrate suitable for high level recycling; obtained permeate exhibited turbidity <1 NTU, TOC <3 mg/l, and total dissolved solids <50 mg/l.Hu et al. performed experiments with aluminium electrodes for electro-coagulation and flotation process.A turbidity reduction of 90% was observed while adding cationic surfactant cetyltrimethylammonium bromide.Den et al. 
presented the effect of hydraulic retention time and applied current density on turbidity reduction.Iron anodes and stainless steel cathodes were used in this study to reach removal efficiency of 95%.Liu et al. studied electro-coagulation using iron electrodes for treating the CMP wastewater.The outcomes showed that the particles removal efficiency was ~ 99% at a current density of 5.9 mA/cm2.Wang et al. revealed that iron/aluminium electrode pair is relativity an efficient choice as compared to other typical electrode pairs for electro-coagulation in terms of energy demand.Finally, Chou et al. investigated thermodynamic aspects of the electro-coagulation for oxide CMP wastewater treatment and demonstrated that the system operation was endothermic and spontaneous between 288 and 318 K.Membrane based processes have also been investigated for CMP wastewater treatment.Brown et al. , Lin et al. and Juang et al. showed the performance of ultrafiltration and reverse osmosis in this regard.Brown et al. demonstrated that metal and mixed oxide CMP wastewater can be treated and reused using ultrafiltration.Lin et al. considered chemical coagulation and reverse osmosis for CMP wastewater treatment for reuse.High quality permeate has been recovered after removing 99% of alumina and silica, and lowering the CMP wastewater COD <100 mg/l.Juang et al. investigated an arrangement of integrated UF and RO for CMP wastewater treatment for reuse.The results showed permeate having turbidity ~0.01 NTU, conductivity ~6 µS/cm and TOC ~1.6 mg/L.Other approaches for CMP wastewater treatment include use of magnetic seeds along with chemical coagulant to enhance aggregation and precipitation of alumina and silica.Wan et al. indicated that turbidity of the CMP wastewater could be reduced from 1900-2500 NTU to 23 NTU with the action of 3.74 g L−1 magnetite seeds using applied magnetic field of 1000 G.The coupling has significantly reduced the production of waste sludge as well.Kim et al. 
tested the combined effect of magnetic separation and chemical coagulation on purification of CMP wastewater.The researchers employed magnetite and ferric chloride which displayed relatively better performance, reaching 0.94 NTU.These processes have shown promise in certain applications, however there are significant challenges that prevent their widespread adaptation.Electrode-aided processes have the problem of reduced treatment efficiency due to electrode blockage.These processes are also cost-inefficient due to high electrical energy demand.Furthermore, microfiltration and ultrafiltration have the issues related to organic and inorganic fouling/scaling resulting in membrane blockage.Reverse osmosis is a pressure-driven separation technique and has a relatively high electrical energy demand.Additionally, high-pressure differences across the membrane require high mechanical strength of the membrane and induces biofouling.Moreover, treatment of large volume of CMP wastewater using magnetic seeding aggregation becomes unnecessarily expensive due to high cost of needed magnetic seeds.Thus, these practices are unreliable, energy inefficient, involve chemical treatments and are expensive.Considering these limitations, membrane processes are judged to hold the most promise assuming that the following aspects can be addressed satisfactorily: reasonable pretreatment requirements; low fouling propensity; low chemical and electricity demands; and cost efficiency.Therefore, this study introduces membrane distillation as a promising method to treat CMP wastewater especially for removal of silica, alumina and copper.Membrane distillation is a thermally driven separation process utilizing a microporous hydrophobic membrane that only allows volatiles to permeate through the membrane.The main driving force is a vapor pressure gradient across the membrane, which is developed due to temperature differences involved in the process .The term MD originates from conventional distillation process modified with membrane separation .The involved phase separation in MD process is based on the vapor-liquid equilibrium where latent heat of evaporation drives the change in phase from liquid to vapor .The water transport through the membrane can be summarized in three steps: formation of a vapor gap at the hot feed solution–membrane interface; transport of the vapor phase through the microporous system; condensation of the vapor at the cold side of the membrane–permeate solution interface .As compared to other membrane technologies, MD operates at mild temperatures and atmospheric pressure.Moreover, it is relatively insensitive to pH and concentration fluctuations .Furthermore, previous record of MD’s successful applications for recovery of heavy metals , dehydration of organic compounds , concentration of acids , separation of pharmaceutical residues at very low to moderate concentrations , and wastewater treatment and water recovery provides a strong argument to consider MD as a potential technology that can be successfully employed for treating wastewater streams contaminated with heavy metals, organic compounds, acids and nano-scale oxides i.e., CMP wastewater.We believe that this is the first work that shows the potential of MD technology for treatment of CMP wastewater.The present study is dedicated to a performance analysis of membrane distillation for CMP wastewater treatment.In this regard, separation efficiency of major contaminants is considered as the key performance factor whereas, conductivity, pH, TOC, 
COD and TDS are also determined in order to satisfy the water quality standards for reuse of the treated water in industrial utilities processes.Moreover, energy and exergy analyses have also been performed for a complete technical evaluation.For all the experiments, a prototype air gap membrane distillation module supplied by Xzero AB has been employed, as presented in Fig. 1 .The design of the Xzero AGMD module is based on the HVR AGMD module with certain modifications.The comparison shows that both of the AGMD modules consist of single-cassette configurations employing two microporous hydrophobic polytetrafluoroethylene membranes.However, the way used for attaching the membranes to the polyethylene frame was thermal welding in case of HVR AGMD module and beam clamping in case of Xzero AGMD module.,Flow spreaders were added to the cassette for improved heat and mass transfer.The air gap between the membrane and condensation plates was also reduced.Furthermore, in order to solve corrosion issues and for providing an inert environment, the condensation plates were covered with polyvinylidene fluoride on the permeate side in order to ensure high permeate quality.The CMP wastewater is heated with a Teflon-coated immersion heater mounted in a 30 L PVDF feed storage tank.The heated water is then circulated towards AGMD module using an Iwaki MD-30 PVDF pump and controlled by an FIP FlowX3 paddlewheel flow sensor.The hot water is fed into the top of the MD module and the exiting feed recirculates back to the storage tank.Fresh water is cooled using a R1134a chiller integrated with a 80 L PP cold-water tank.Using an Iwaki PP-70 pump, the cold water is circulated through the cooling plates.The flow rate of the cold-water has been controlled and measured with similar type of flowmeter as mentioned earlier.Permeate is collected at the base of the MD module and is measured with a graduated cylinder and stopwatch.The temperatures of hot and cold streams are measured with temperature sensors.All sensors and alarms are controlled by a Crouzet logic unit.A handheld conductivity meter and temperature sensor are used for checking the permeate conditions.In this unit, Donaldson® PTFE membrane is used considering its attractive cost–performance comparison.The characteristics of the used membrane are mentioned in Table 1 and the process flow diagram of the Xzero membrane distillation purification system is shown in Fig. 2.A total of 100 L of CMP wastewater was collected in five 20 L samples from imec, Belgium during a ten day period.Samples 1, 2 and 3 were used to determine the separation efficiency of contaminants using the aforementioned Xzero AGMD module.Considering that concentration does not affect the parametric study significantly, the other two samples were considered to determine permeate yield and energy requirement.For CMP wastewater treatment tests, sample 1 was tested as MD feed without considering any pretreatment however, samples 2 and 3 were neutralized with 10 mL of 40% H2SO4 per 20 L of CMP wastewater samples prior to introduction into the MD modules.The samples S1, S2 and S3 were used in Test 1, Test 2 and Test 3, respectively.The nominal operating conditions for CMP wastewater treatment tests were as follows: MD feed inlet flow rate 7.2 L/min; cold-water inlet flow rate 8.3 L/min; MD feed inlet temperatures 85 °C for S1, 80 °C for S2 and 75 °C for S3; cold-water inlet temperatures 35 °C for S1, 30 °C for S2 and S3; and elapsed time of 3 h. 
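Since the driving force described above is the water vapor pressure difference set up by the feed and coolant temperatures, a rough sense of its magnitude at the nominal operating points can be obtained from a standard saturation-pressure correlation. The sketch below uses the Antoine equation for water with commonly quoted constants; it is an illustrative estimate only, not part of the reported procedure, and it ignores temperature polarization at the membrane interfaces.

```python
def water_vapor_pressure_pa(t_celsius):
    """Saturation vapor pressure of water from the Antoine equation.

    Constants are the commonly used set for roughly 1-100 degC;
    the result is converted from mmHg to Pa.
    """
    a, b, c = 8.07131, 1730.63, 233.426
    return 10 ** (a - b / (c + t_celsius)) * 133.322

# Nominal bulk inlet temperatures (degC) for the three trials described above
cases = {"S1": (85, 35), "S2": (80, 30), "S3": (75, 30)}
for name, (t_feed, t_cold) in cases.items():
    dp_kpa = (water_vapor_pressure_pa(t_feed) - water_vapor_pressure_pa(t_cold)) / 1000
    print(f"{name}: approximate bulk vapor pressure difference = {dp_kpa:.0f} kPa")
```

The actual driving force across the membrane is smaller than these bulk estimates because the interface temperatures lie between the bulk feed and coolant values.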
Fluid flow conditions within the module indicate an average main flow channel velocity in the range of 0.025–0.055 m/s, assuming a U-type flow pattern from inlet to outlet. This corresponds to a Reynolds number range of 800–2500. After achieving steady state, feed, retentate and distillate samples were taken every 30 min, and the subsequent physico-chemical analysis included determination of cation, anion and metal concentrations, conductivity, pH, TOC, COD, IC, and TDS. Several analytical methods were used to determine these water quality parameters. The pH was measured with an Orion Star Series meter with an Orion Ross half-cell electrode and Ross reference electrode. The conductivity was determined using an LF3000 with Pt-Cell. The TOC was measured with a Sievers 900 TOC analyzer and COD was determined with Hach reagents LCI500. The anions and cations were measured with ion chromatography after adequate dilution on a ThermoFisher ICS-5000 Capillary System. Metals were analyzed by ICP-OES from PerkinElmer. Typically, flow rates and temperatures are considered the critical variables that influence the transmembrane flux and thermal energy demand of an AGMD system. The considered levels of these parameters for the energy analysis were as follows: feed flow rates, feed temperatures, cold-water flow rates and cold-water temperatures. Each experiment was performed for 30 min after approaching steady-state conditions. The outlet temperatures of feed and cold-water, module surface temperature, permeate temperature and permeate flow rate were measured. Additionally, the rate of heat transfer to the cold-water and the heat transfer via convection and via permeate release were also determined using Eqs. –. The Xzero AGMD module performance was assessed mainly on the basis of the separation efficiency of the contaminants and the permeate water quality. Transmembrane flux, thermal energy demand, energy distribution and exergy efficiency of the system are also presented in this section. Table 2 presents the concentration of MD feed samples and the resulting permeate samples after 3 h of operation. The analysis results show that in all the MD feed samples ammonium, potassium and phosphate ions were present in high concentration, while, except for S1, the other two samples also had a higher concentration of sulfate ions. The reason is the addition of 10 mL of 40% sulfuric acid as a neutralization solvent in the pre-treatment process for S2 and S3. In the CMP wastewaters, the key contaminants were silicon, aluminum and copper, as expected. Other than these contaminants, phosphorus was also present in high concentration. The MD permeate analysis from all three runs with different compositions of CMP wastewater shows metal concentrations under the detection limit except for calcium, which was also reasonably low. The reduction of ammonium ion concentration in the permeate from S1 was not substantial due to the presence of highly volatile ammonia vapor in S1. However, the addition of sulfuric acid to the feed samples played an important role in reducing the volatility of ammonia gas, resulting in <0.05 ppm of ammonium ions in the permeate. Thus, the pretreatment of feed samples shows three times better rejection performance in the case of ammonium ions. For other contaminants, the outcomes show non-detectable concentrations of sodium, potassium, nitrate, chloride and fluoride ions in the permeate along with very low levels of phosphate. Moreover, in the case of sulfate ions, the MD shows remarkable
results in terms of separation efficiency, i.e., <0.1 ppm. Since the permeate was released from the MD system, the volume reduction of the initial feed samples led to an increase in retentate concentration. The concentration levels of the contaminants in MD retentate samples are summarized in Fig. 3. The outcomes are for the three tests of S1, S2 and S3, which were run for 3 h. The concentrations of ions and metals in the initial feed increased over time, as expected. However, chloride ions and nickel show a slightly different trend of concentration change compared to other contaminants. For both of them, the initial feed shows a higher concentration than the concentrated retentate. A possible explanation is that these contaminants were adsorbed on the membrane surface. Since the volatility of NH3 is highly dependent on pH, better MD performance was obtained. Furthermore, the TOC, TDS and COD were reduced by up to 96%, 99.8% and 97.8%, respectively. Considering sample S3, which was introduced into the MD setup at a relatively low temperature, the permeate water quality is also quite satisfactory. For instance, the conductivity was decreased by up to 98.8% and the TC reduction reached 82%. Moreover, TDS and COD were reduced by >99.9%. When comparing overall separation efficiency, MD shows very encouraging results for CMP wastewater treatment compared to other available methods. Table 4 shows the comparison of MD with potential technologies including electro-microfiltration and the combination of electro-dialysis and RO. Another related membrane-based technology, i.e., the integration of UF and RO, was found to have comparable performance: the reported conductivity was 5–6 µS/cm and TOC was 1.2–1.6 ppm. Fig. 4 shows the effect of varying feed flow rates and cold-water flow rates on transmembrane flux. In these experiments, the feed and cold-water temperatures were held constant, i.e., 80 °C and 25 °C, respectively. The feed flow rates ranged from 3.5 L/min to 7.2 L/min while the cold-water flow rate was held constant at 8.3 L/min in the first set of experiments. In the second set, the cold-water flow rates were varied from 3.5 to 8.3 L/min at a constant feed flow rate of 7.2 L/min. The reported transmembrane fluxes were measured when the MD system approached steady state, i.e., approximately after 60 minutes of operation. The results obtained from the first set of experiments show that with increasing feed flow rate, the transmembrane flux increases and presents a positive linear trend, because a higher bulk temperature is maintained along the feed flow path and because of a decrease in the boundary layer resistance. The transmembrane flux was observed to increase from 9.7 to 11.7 L/m2h while almost doubling the feed flow rate, in line with the results published by Baaklini. Moreover, it was observed from the second set of experiments that a reduction in cold-water flow rate gives a lower transmembrane flux at constant feed flow rate. The lower cold-water flow rate implies a lower heat-recovering capacity from the distillate water vapors. This leads to a lower extent of condensation happening in the air gap of the MD system. The same feed to cold-water temperature difference results in different values of vapor pressure difference, which directly affects the transmembrane flux. The transmembrane flux was at a maximum at a feed temperature of 85 °C and a cold-water temperature of 15 °C. These temperatures provide the greatest driving force as compared to
other scenarios. Since the CMP wastewater samples were quite dilute, fouling was not observed during the elapsed time, as expected. However, follow-on studies would be needed to investigate this phenomenon in more detail. Furthermore, the specific heat transfer flow rates to the cold-water and via permeate release and convection were also determined and are presented in Table 5 for a constant feed inlet temperature of 80 °C and cold-water temperatures varying between 15 °C and 35 °C. The calculations show that approximately 90% of the total specific thermal energy was transferred indirectly to the cold-water circulating through the cooling plates. The rest of the total specific thermal energy was accounted for by energy stored in the distillate, lost due to convection and lost through the pipe walls, valves and joints. Although the higher driving force at lower cold-water temperatures was associated with higher transmembrane flux, the permeate temperature was lower than when the cold-water temperature was higher, i.e., 35 °C. Therefore, with increasing cold-water temperature, the specific heat transfer flow rate via permeate release is relatively higher. Moreover, a similar trend can be observed for convective heat transfer, which indicates a higher specific heat transfer flow rate from the module surfaces at elevated cold-water temperatures. The specific heat transfer flow rate to the cold-water also increases at higher cold-water temperature due to the lower transmembrane flux. Along with the energy analysis, the exergy efficiency was also determined in this study. The considered operating conditions include an MD feed inlet flow rate of 7.2 L/min, a cold-water inlet flow rate of 8.3 L/min, an MD feed inlet temperature of 80 °C and a cold-water inlet temperature of 30 °C. Moreover, the chemical composition and concentration of sample S2 were considered for calculating the total exergy flow rates. The total exergy flow rates are shown in Table 6 for each component. Furthermore, it was found that the exergy efficiency of the whole unit was 19%, which is comparable to published results. Each component in the unit is typically accountable for a certain percentage of the total irreversibility produced. The results show that the recirculation tank is responsible for ~32% of total exergy destruction. Heat losses through the recirculation tank walls and evaporation through the tank cover openings are responsible for the exergy destruction in the hot recirculation tank. The cold-water tank's share was ~48% of total exergy destruction, which was comparatively higher since the cold-water tank was uncovered. The MD module accounted for ~20% of exergy destruction due to heat losses through the condensation walls and heat transfer through conduction, convection and permeate release. These results indicate the need for an optimized MD unit in terms of its membrane material, insulation and condensation plate design. Moreover, the performance of the recirculation tank and the cooling water tank can be improved using proper insulation in order to reduce evaporative and conductive losses. The study presents the potential of membrane distillation technology for treatment of chemical mechanical planarization wastewater from nano-electronics industries. A case study of imec, Belgium, was selected for this purpose and the Xzero MD prototype was used for the experimental studies. Considering the performance of the MD unit in terms of treated water quality, different parameters have been reported including the compositional analysis, concentration, conductivity, pH, TOC, TDS and
COD; for the technical assessment of the method, the transmembrane flux, specific heat demand, energy distribution and exergy efficiency were determined while varying different operating parameters. The outcomes show that high-quality permeate was recovered, with major contaminant concentrations below the detection limit, conductivity ~2.11 µS/cm, pH ~5.4, TOC ~1.13 ppm, IC ~0.24 ppm, TDS ~1.1 ppm and COD ~1.9 ppm, when applying neutralization prior to membrane distillation at an MD feed flow rate of 7.2 L/min and temperature of 80 °C and a cold-water flow rate of 8.3 L/min and temperature of 30 °C. From the parametric analysis, the maximum flux achieved was 14.8 L/m2h at a feed to cold-water temperature difference of 70 °C. The specific heat demand varied between 1390 and 2170 kWh/m3 depending on the feed temperature and the feed to cold-water temperature difference. Moreover, the estimated exergy efficiency of the Xzero AGMD prototype was ~19%.
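The specific heat demand figures quoted above follow from dividing the thermal power supplied to the feed loop by the permeate volume flow. The sketch below shows that arithmetic with illustrative inputs; the feed-side temperature drop and the permeate flow used here are assumed values, not measurements from the study, so the output only indicates the order of magnitude.

```python
RHO_WATER = 997.0   # kg/m3, approximate
CP_WATER = 4.18e3   # J/(kg*K), approximate

def specific_heat_demand_kwh_per_m3(feed_lpm, delta_t_feed_k, permeate_l_per_h):
    """Thermal power supplied to the feed loop divided by permeate production.

    feed_lpm         : feed circulation rate in L/min
    delta_t_feed_k   : assumed feed temperature drop across the module (K)
    permeate_l_per_h : permeate flow in L/h
    """
    m_dot = feed_lpm / 60.0 / 1000.0 * RHO_WATER           # kg/s of feed
    q_watt = m_dot * CP_WATER * delta_t_feed_k             # heat supplied, W
    permeate_m3_per_s = permeate_l_per_h / 1000.0 / 3600.0
    return q_watt / permeate_m3_per_s / 3.6e6              # kWh per m3 of permeate

# Illustrative only: 7.2 L/min feed, an assumed 7 K drop, ~2.3 L/h permeate
print(round(specific_heat_demand_kwh_per_m3(7.2, 7.0, 2.3)))
```

With these assumed inputs the result falls within the reported 1390–2170 kWh/m3 range, which is the intent of the illustration rather than a reproduction of the study's energy balance.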
Wastewater from chemical mechanical planarization (CMP) processes in nano-electronics industries must be treated properly in order to fulfil local and international environmental regulations. This study is focused on a performance assessment of membrane distillation (MD) technology for CMP wastewater treatment. A new prototype of air gap membrane distillation (AGMD) module was utilized, with feed water consisting of CMP wastewater collected from imec, Belgium. The module was tested at different operating conditions (temperatures, flow rates and filtration time) and responses in terms of separation efficiency, permeate water quality, transmembrane flux, specific heat demand and exergy efficiency were determined. High quality permeate was produced in all trials, i.e. conductivity ~2.11 µS/cm, pH ~5.4, TOC ~1.13 ppm, IC ~0.24 ppm, TDS ~1.18 ppm and COD ~ 1.9 ppm; for most of the contaminants the separation efficiency was >99%. These findings clearly show that the resulting MD permeate does not exceed environmental regulations for release to recipient, and the permeate can even be considered for reuse. Moreover, the determined specific heat demand at different operating conditions was varying between 1390 and 2170 kWh/m3 whereas; the achievable exergy efficiency was ~19%.
84
Vaccination program in a resource-limited setting: A case study in the Philippines
Human resources are an integral part of every healthcare system .However, in low- and middle-income countries, human resources for health are often limited, which impacts both the access to and quality of healthcare .The shortage of healthcare workers can be due to different factors such as low production capacity for HRH, brain drain of healthcare workers, inefficient use of human resources or imbalance in the composition of demographics .As the demand for healthcare services increases, this scarcity can result in instability of the healthcare system.In addition, introducing health interventions or technologies will always have an effect on the demand for human resources and this should be a serious concern for decision-makers, especially in LMICs, as training health workforce requires a considerable amount of time .Vaccination can be considered as a unique intervention in the context of human resources.Vaccination programs are a long-term investment for preventing specific diseases, and have a dynamic effect on the utilization of different types of HRH.Vaccination programs could increase the present need of particular types of human resources and reduce the future need of the health workforce in treating vaccine preventable diseases .Although there are few papers addressing the success of vaccination in terms of HRH requirements , no study has been conducted that compares the impact of vaccinations in terms of both HRH needed and reduced within a study.It would be appropriate for evidence-informed policy decisions to take into account both HRH required and saved due to vaccinations.Therefore, this study aims to take this challenge by estimating the HRH needed and reduced as a result of introducing the pneumococcal conjugate vaccine.This study is conducted as an additional analysis to the economic evaluation using the Philippines as a case study due to their existing economic model which compares various vaccination policy options .This study examines the impact of HRH by using the quantity, task, and productivity model.The QTP model is one of the approaches to determine HRH and was developed under the concept of functional job analysis whereby the skill requirements to complete a certain task are assessed .There are four main key features of this model: it includes a set of priority interventions, it estimates HRH by calculating the number of cases needed for a service, it identifies the tasks and estimates the time needed to deliver a service, and it includes the productivity by combining staff productivity and service productivity.This model has been developed for low-income countries that want to scale up their priority interventions.A study conducted by Kurowski et al. showed that the QTP method was robust in estimating the required human resources .Furthermore, this model is practical and feasible for application in the Philippines where data resources are restricted.The adapted version of the QTP model for estimating the HRH impact of introducing the PCV vaccination in the Philippines is shown in Supplement A. 
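The core arithmetic implied by those features can be stated compactly: the full-time equivalents (FTEs) of each cadre follow from multiplying the number of cases of each service by the minutes that cadre spends per case, then dividing by a worker's annual productive minutes. The Python sketch below illustrates this with hypothetical service volumes, time standards and a productivity factor; none of the numbers are the study's estimates.

```python
# Minimal sketch of the quantity-task-productivity (QTP) arithmetic.
# All service volumes, minutes-per-case and productivity figures below are
# hypothetical placeholders, not values from the study.

ANNUAL_PRODUCTIVE_MINUTES = 220 * 8 * 60 * 0.8  # workdays * hours * minutes * productivity factor

# Cases per year under a given policy option, by service
cases = {"vaccination": 2_200_000, "non_hospitalized_pneumonia": 150_000}

# Minutes each cadre spends per case of each service
minutes_per_case = {
    "vaccination": {"general_practitioner": 3, "nurse": 5, "midwife": 2},
    "non_hospitalized_pneumonia": {"general_practitioner": 10, "nurse": 8, "midwife": 5},
}

def fte_by_cadre(cases, minutes_per_case, productive_minutes):
    """Aggregate full-time equivalents (FTEs) needed per cadre across services."""
    totals = {}
    for service, n in cases.items():
        for cadre, minutes in minutes_per_case[service].items():
            totals[cadre] = totals.get(cadre, 0.0) + n * minutes / productive_minutes
    return totals

for cadre, fte in fte_by_cadre(cases, minutes_per_case, ANNUAL_PRODUCTIVE_MINUTES).items():
    print(f"{cadre}: {fte:.0f} FTE")
```

Running this kind of calculation for each policy scenario (with and without vaccination) and differencing the outputs yields the FTE increases and reductions of the sort reported later in the paper.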
Using this model, the HRH-related parameters were obtained from existing clinical practice guidelines wherein the applicability of each procedure in the local setting was verified by conducting interviews with healthcare providers.The adapted QTP model includes four major steps and is described in the following paragraphs.In the first step, several health services for the treatment of pneumococcal-related diseases were identified, including the number of cases that occurred in each scenario, i.e. with and without the vaccination program.The number of cases for each disease was estimated using the Markov model from a prior economic evaluation study that was conducted in the Philippines .One-year time horizon was employed in this study, while the prior economic evaluation used a Markov model with a lifetime horizon.The target population for universal coverage of the PCV vaccination was set at 2,200,000 eligible infants below the age of one year based on 2013 data obtained from the Philippine Statistics Authority .In the second step, the types of health workforces needed for each type of health service relating to vaccination, treatments of non-hospitalized pneumonia, and acute otitis media were obtained based on consensus among municipal health officers in the seminar for the primary care benefit package of the state insurance scheme; meanwhile, data on treatment of meningitis, sepsis/bacteraemia and hospitalized pneumonia in intensive care units and non-ICUs were derived from four medical specialists.In the data collection process, participants were given information on healthcare services as stated by the clinical practice guidelines from the Philippine Health Insurance Corporation ; Philippine Clinical Practice Guidelines on the Diagnosis, Empiric Management, and Prevention of Community-acquired Pneumonia in Immunocompetent Adults ; and Integrated Management of Childhood Illness .After that, they were asked to indicate the set of healthcare services they provide in their practice, followed by identifying the types of health workforce needed as well as the average amount of time for each health professional spent per treated patient or vaccinated child.Based on five available policy options, Table 1 illustrates six types of healthcare services that are related to the prevention or treatment of pneumococcal-related diseases: vaccination, meningitis treatment, sepsis/bacteraemia treatment, hospitalized pneumonia treatment, non-hospitalized pneumonia treatment, and acute otitis media treatment.Regardless of the vaccination program, the demand for acute otitis media treatment is the highest, followed by the pneumonia treatment.Implementing the PCV10 or PCV13 would reduce the number of the aforementioned vaccine-preventable conditions.The higher the vaccination coverage, the lower the number of patients treated.Table 2 presents the average amount of time that each type of health professional spends per treated patient or vaccinated child.The average time spent per case is the highest for the treatment of meningitis by paediatricians, which equals 746.50 min per case.The lowest average time spent per care is for midwives, which is only 5 min for non-hospitalized pneumonia and acute otitis media treatment, and for radiologists, which is 5 min for both hospitalized and non-hospitalized pneumonia treatment.Further, the number of healthcare professionals needed for each policy option was identified in Table 3.This table and Fig. 
2 show the increase and reduction of FTEs for each type of healthcare professionals required for the treatment of pneumococcal-related diseases resulting from the implementation of the PCV vaccination policy.It can be seen that the implementation of the PCV vaccination significantly increases the number of general practitioners, nurses, and midwives required for the vaccination program.This is the first attempt to estimate the impact of HRH alongside a model-based economic evaluation study, which can be eventually applied to other studies, especially those that inform resource allocation in developing settings where not only financial resources but HRH are also constrained.This study is different from economic evaluations, which focus only on comparisons between costs and outcomes.Although economic evaluation guidelines recommend reporting costs and resources used separately , only few papers do this and even fewer papers report resources used by giving detailed information of health workforce.This HRH study is a complementary analysis and the results are reported as HRH saved and needed.The results showed that the number of FTEs for GPs, nurses, and midwives increases significantly if the universal vaccination coverage policy is implemented.Nevertheless, the vaccination program can avert HRH requirement for specialized healthcare professionals who require longer term training and are more restricted compared to GPs, nurses, and midwives in the Philippines .Moreover, the salary of GPs, nurses, and midwives are likely to be lower than those specialized healthcare professionals.An associated cost-effectiveness analysis found that the universal coverage for PCV13 has the lowest ICER compared to no vaccination amongst the four vaccination strategies .Although this policy needs an additional 380 FTEs for general practitioners, 602 FTEs for nurses, and 205 FTEs for midwives, it can reduce the number of FTEs for medical social workers, paediatricians, infectious disease specialists, neurologists, anaesthesiologists, radiologists, ultrasonologists, medical technologists, radiologic technologists and pharmacists by 7, 17.9, 9.7, 0.4, 0.1, 0.7, 0.1, 12.3, 2, and 9.7, respectively, when compared to the no vaccination policy.From the analysis can also be observed that HRH requirements for both PCV10 and PCV13 were very similar, meaning that choosing PCV13 over PCV10 would not have differed in terms of HRH impact.This type of information should be presented to decision-makers so that they can make an appropriate and feasible policy choice, taking into account not only cost-effectiveness evidence and budget impact but also impact on HRH, which cannot be increased within a short period of time.Neglecting information on HRH when introducing any large health programs, including vaccination, can put the policy at high risk of failure due to the overburden of existing healthcare workers.If the estimated HRH required seems infeasible for implementing a universal vaccination program, a proper plan for task shifting of vaccination activities – i.e. 
physical examinations that are being carried out by GPs - to nurses, midwives, and pharmacists should be investigated.Meanwhile, FTEs of specialized healthcare professionals freed up by the vaccination program can be used for other health policies.In addition, it is interesting to note that only a relatively small reduction of FTEs among specialized healthcare workers from the vaccination program was observed in this study.An explanation could be that in the Philippines, there is a severely limited number of these specialists and they are working under high workloads, resulting in limited time spent for each patient treated.The prevention of pneumococcal-related diseases from the vaccination means that they can spend a longer time providing better care for patients.Although this study did not investigate whether the current time allocated to the treatment of pneumococcal-related diseases was appropriate, the current estimates indicate that the quality of care may be less than optimal due to shortages of key staff.This study has some limitations.Firstly, the profile and magnitude of health professionals required for each service are specific to the context of the Philippines; therefore, the use of this study’s results for other settings needs to be performed with caution.Secondly, the study focuses on the impact of HRH at the national level.However, the aforementioned HRH may not be distributed equally across geographical locations, especially between urban and rural areas .It would be necessary to estimate the HRH impact for each province when introducing the vaccination policy in order to ensure that HRH planning is adequate across sub-national levels.Thirdly, this study only considered a one-year time horizon despite the fact that previous evidence suggested that the vaccine may be able to prevent pneumococcal-related diseases for up to 5 years after vaccination , which may result in an underestimation of HRH freed up by the vaccine program.However, this study already included herd protection for universal vaccination coverage but not for 25 percent coverage.Lastly, this study employed an expert opinion approach for data collection on the number of minutes used by health professionals in treating each child and this can be problematic in terms of robustness.Although experts were requested to discuss about the estimates among their peers in order to ensure that the provided numbers were feasible, this might not be an ideal approach.With this regard, it is recommended for future study to conduct a prospective observational data collection.The benefit of collecting primary data from observation is that a probabilistic sensitivity analysis can be performed to assess the uncertainty on parameters.There are two interesting points for further study: 1) to explore the effects of adherence to clinical practice guidelines for physicians on estimating HRH in comparison to real practice, and 2) to consider whether there is a need for discounting if the HRH impact is estimated beyond a one-year time horizon.For example, when implementing the human papilloma virus vaccination program at present, the effects on HRH will occur in the next twenty or thirty years.In such case, it is questionable whether the effects on HRH in the future should be discounted similarly to the standard practice in economic evaluations that discounts further costs and outcomes .Furthermore, the future impact of technology and innovation will have unknown implications on the HRH required in terms of health management.Any 
adjustments for predicted HRH in the future may introduce considerable uncertainty and may limit future HRH planning. This study examines an approach for estimating HRH impact alongside economic evaluation studies on PCV vaccination policy in the Philippines. It illustrates the importance of HRH impact estimation for informing policy decisions in resource-limited settings, ensuring that comprehensive evidence is used to formulate policy choices for decision-makers. The study reports the reduction in specialized healthcare professionals required for the treatment of pneumococcal-related diseases vis-à-vis the increase in HRH requirements among GPs, nurses, and midwives as a result of implementing a PCV vaccination program. The authors conclude that the HRH impact should be estimated to inform priority setting for a vaccination program.
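On the open question raised above of whether HRH effects occurring years after vaccination should be discounted in the same way as future costs and outcomes, the sketch below shows what such discounting would look like; the 3% rate, five-year horizon and FTE stream are purely illustrative assumptions.

```python
def discounted_fte(fte_by_year, rate=0.03):
    """Present value of a stream of future FTE impacts, discounted like costs.

    fte_by_year: FTE saved (or required) in year 1, 2, ... after vaccination.
    """
    return sum(fte / (1 + rate) ** t for t, fte in enumerate(fte_by_year, start=1))

# Illustrative only: 12 FTEs of specialist time saved per year for five years
stream = [12.0] * 5
print(f"Undiscounted: {sum(stream):.1f} FTE-years; "
      f"discounted at 3%: {discounted_fte(stream):.1f} FTE-years")
```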
Objective Implementing national-level vaccination programs involves long-term investment, which can be a significant financial burden, particularly in resource-limited settings. Although many studies have assessed the economic impacts of providing vaccinations, evidence on the positive and negative implications of human resources for health (HRH) is still lacking. Therefore, this study aims to estimate the HRH impact of introducing pneumococcal conjugate vaccine (PCV) using a model-based economic evaluation. Methods This study adapted a Markov model from a prior study that was conducted in the Philippines for assessing the cost-effectiveness of 10-valent and 13-valent PCV compared to no vaccination. The Markov model was used for estimating the number of cases of pneumococcal-related diseases, categorized by policy options. HRH-related parameters were obtained from document reviews and interviews using the quantity, task, and productivity model (QTP model). Results The number of full-time equivalent (FTE) of general practitioners, nurses, and midwives increases significantly if the universal vaccine coverage policy is implemented. A universal coverage of PCV13 - which is considered to be the best value for money compared to other vaccination strategies - requires an additional 380 FTEs for general practitioners, 602 FTEs for nurses, and 205 FTEs for midwives; it can reduce the number of FTEs for medical social workers, paediatricians, infectious disease specialists, neurologists, anaesthesiologists, radiologists, ultrasonologists, medical technologists, radiologic technologists, and pharmacists by 7, 17.9, 9.7, 0.4, 0.1, 0.7, 0.1, 12.3, 2, and 9.7, respectively, when compared to the no vaccination policy. Conclusion This is the first attempt to estimate the impact of HRH alongside a model-based economic evaluation study, which can be eventually applied to other vaccine studies, especially those which inform resource allocation in developing settings where not only financial resources but also HRH are constrained.
85
Effects of agricultural mechanization on economies of scope in crop production in Nigeria
Agricultural mechanization is an integral part of agricultural development, since it is commonly characterized by scale-effects that allow for specialization.Empirical evidence is, however, scarce on how mechanization affects other aspects of agricultural production like economies of scope.EOS, the cost advantage of producing the aggregate outputs in an integrated firm rather than individual outputs by specialized firms, is a key economic characteristic of agricultural production systems.EOS is particularly relevant for farm-level crop diversification.Assessing the effects of technologies like mechanization on EOS is important because the positive attributes of on-farm crop diversity have been increasingly recognized.Crop diversification is associated with improved productivity and higher food production, risk mitigation, and often facilitates in-situ resource conservation for genetic diversity.In low-income countries, high on-farm crop diversity often contributes to greater dietary diversity, because of market imperfections in food crop trade.However, reducing the costs of such crop diversification remains challenging.We fill this important knowledge gap on the role of mechanization adoption on EOS using Nigeria as a case.Nigeria is particularly appropriate in assessing this issue because its crop production systems are diverse regarding both farm size and number of cultivated crops.For a variety of crops including rice, grains like maize, and legumes like cowpea, Nigeria is the largest producer in SSA.In such a setting, EOS can potentially complement heterogeneity in production environments within multiple crop systems.Furthermore, agricultural mechanization, like animal traction in Nigeria has spread considerably, but only relatively recently.Consequently, its effect on production characteristics, including EOS, may be more pronounced now than was the case in the past.We focus on the EOS between two crop groupings: 1) rice and other crops; and, 2) legumes/seeds1, non-rice grains, and other non-rice crops.These specific crop sets are chosen because the production of these crops accounts for a significant share of cultivated area in Nigeria,2 and intensive tillage is often introduced first for these crops in Africa.3,Furthermore, for rice and some non-rice grains like maize, monoculture has often been associated with erosion of crop diversity or reduced yield around the world.Similarly, legumes / seeds are some of the crops whose complementarity with other crops has been recognized in the literature.For example, nitrogen fixation can be enabled when legumes are grown before other crops like maize are planted, while these legumes can also be grown immediately after the grain harvest, using the residual soil moisture.Likewise, as is partly shown in this paper, rice production systems tend to be distinct from other crops, and, thus, likely have unique implications for EOS and mechanization adoption.We assess whether mechanization is associated with greater or less EOS in production of primary crops in the Nigerian context.We use the Living-Standard Measurement Study – Integrated Survey on Agriculture panel dataset for Nigeria.Appropriately, these data contain plot-specific input usage and output volumes.As is described more in detail, these sets of information enable us to estimate the effects of mechanization on EOS in both primal- and dual-models.Further, while our analyses incorporate both long-run models and short-run models, estimation of EOS relies on the panel data in both models, 
since it helps to mitigate various endogeneity issues.We directly contribute to several strands of literature.We contribute to studies assessing the impact of agricultural mechanization on production characteristics, and those assessing the linkages between mechanization and cropping systems.We also contribute to the literature on production economics, including studies assessing the EOS by firms or organizations of distinct types, by extending their methodologies to the case of African agricultural production.Methodologically, our study also contributes to the literature on impact evaluations by expanding the inverse-probability-weighting regression adjustment method to assess the impact of agricultural mechanization on EOS.Our results indicate that mechanization raises EOS between rice and non-rice crops that are distinct from each other in production environments, while it reduces EOS for crops that are commonly grown on land with similar agroecological conditions, such as non-rice grains or legumes/seeds, relative to other non-rice crops.The results hold for both primal- and dual-models, and we robustly reject opposite hypotheses.Our paper proceeds in the following way.Section 2 discusses the empirical methodologies.Section 3 presents the data and descriptive statistics.Section 4 discusses the results, while section 5 concludes.Furthermore, Appendix A conceptually describes the potential linkages between mechanization and EOS, Appendix B describes more in detail the variables in the empirical models, Appendix C summarizes some descriptive statistics, while Appendix D presents expanded empirical results.We assess the impact of mechanization on EOS in both LR and SR settings using primal- and dual-models.Importantly, the LR framework corresponds to a cross-sectional model in which unobserved household fixed effects are time-variant, while the SR version corresponds to a standard panel model which assumes that unobserved household fixed effects are time-invariant.4,Estimating EOS in the LR setting is important because it is generally considered a long-term concept as are other properties of production relations like economies of scale.Estimations of EOS utilize the panel nature of our data in both the LR and SR models.However, our LR models treat the panel data as sub-periods within a single cross-sectional period, while treating household mechanization adoption status as time-invariant during the full period.Our LR-model is similar to the standard cross-sectional model except that EOS is estimated from disaggregated subperiod panel data.The LR model is estimated through the Inverse-Probability Weighting Regression Adjustment method.IPWRA has been increasingly used in the literature to assess the effects of technology adoption on various outcomes, including those of mechanical technologies.IPWRA is suitable when the outcomes of interest are parameters, rather than single values.IPWRA involves estimating the probability that the farm household adopts mechanical technologies, computing the inverse of the estimated probability as a weight, and applying this weight to each observation prior to implementing the main regressions of interest."In contrast, the SR models are estimated without the IPWRA, because the potential endogeneity of household's mechanization adoption can be mitigated by conditioning parameter estimates on unobserved household fixed effects.We then apply wi for all the subsequent LR-models.Specifically, all subsequent LR-regressions are run using wi as sample weights, so 
that results reflect more the observations with greater wi.Observations with greater wi have characteristics that indicate greater probability of adopting mechanization even though they actually do not adopt mechanization, and thus carry more important information about the impacts of mechanization adoptions.The literature in production economics proposes both primal- and dual-models to empirically estimate EOS in multiple-output production systems.We use both models to show the robustness of our analytical results, in addition to estimating both their LR and SR versions.Importantly, in the primal-model, x’s are combined inputs that are used for both crop groups, and not disaggregated inputs for specific crop groups.This contrasts with the dual-model described below, which requires cost information for each crop group.For the LR-primal-model, we assess the impact of mechanization adoption on EOS by estimating and separately among mechanization adopters and nonadopters using IPW-adjusted samples, and then comparing the signs and statistical significance of αjk between the adopters and nonadopters.Since EOS is calculated based on the IPW sample, a simple comparison between mechanized and nonmechanized farm households suffices to attribute any differences to the adoption of mechanization.Note that, the SR model assumes that Mit is exogenous to x0, it after household fixed-effects θi is controlled for.Therefore, the SR model above is estimated without applying IPW.The dual-model is estimated only for EOS between rice and non-rice crops because it requires crop-specific cost information.In our data, such information is available only for rice in a sufficiently large sample size.EOSDual < 0 and EOSDual > 0 indicate economies and diseconomies of scope, respectively.6,As in the LR-primal-model, the dual-models through are estimated separately for mechanization adopters and non-adopter samples, using wi as weights.We then compare EOS between the two samples.In both the primal- and dual-models, we limit our focus on detecting the statistical significance of coefficients related to EOS, rather than the magnitudes of these effects.Although the interpretation of magnitudes is possible in the dual-model), the ability of a dual-model in recovering the magnitudes has recently been questioned in the literature.Due to space limitation, Appendix B describes the construction of variables y, x and C, as well as the composition of variables that comprise Z. 
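The inverse-probability-weighting step described above can be sketched with a short example. This is a minimal illustration, assuming a household-level pandas DataFrame with a binary mechanization indicator; the column names and the simple weighted-regression call are hypothetical stand-ins for the paper's actual IPWRA procedure and translog input-distance and cost functions.

# Minimal IPW illustration (not the paper's full IPWRA/translog specification).
import numpy as np
import statsmodels.api as sm


def ipw_weights(df, adopt_col, covariates):
    """Logit propensity of mechanization adoption, then inverse-probability weights."""
    X = sm.add_constant(df[covariates])
    p = sm.Logit(df[adopt_col], X).fit(disp=0).predict(X)
    # ATE-type weights: adopters get 1/p, non-adopters get 1/(1 - p).
    return np.where(df[adopt_col] == 1, 1.0 / p, 1.0 / (1.0 - p))


def weighted_fit(df, y_col, x_cols, weights):
    """Weighted least squares as a stand-in for the outcome equations."""
    X = sm.add_constant(df[x_cols])
    return sm.WLS(df[y_col], X, weights=weights).fit()


# Usage with a hypothetical DataFrame `df` (column names are placeholders):
# w = ipw_weights(df, "mechanized", ["farm_size", "wage", "hist_rainfall_mean"])
# adopters = df["mechanized"] == 1
# fit_adopt = weighted_fit(df[adopters], "ln_distance",
#                          ["ln_y1", "ln_y2", "ln_y1_x_ln_y2"], w[adopters])
# fit_nonadopt = weighted_fit(df[~adopters], "ln_distance",
#                             ["ln_y1", "ln_y2", "ln_y1_x_ln_y2"], w[~adopters])
# The EOS comparison then rests on the sign and significance of the interaction term.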
Appendix B also describes the construction of weather proxies, including historical reference periods, measures of deviations of weather conditions in survey years with respect to these historical patterns, and time frames.In Appendix B Table B.2, we show that contemporary weather proxies are relevant factors in production functions relative to the historical weather proxies.Based on these results, we use contemporary weather proxies in equations estimating EOS, while including historical weather proxies as determinants of mechanization adoption.Our analyses also consider several measurements of the variable M, to capture different dimensions of mechanization “adoption”.Specifically, in addition to our primary measurement of M, we consider distinctions between tractors and animal tractions, time-variant adoption patterns, and extent of adoption regarding area/area shares mechanized.7,Table 1 summarizes corresponding models and exact definitions of variable M in each model, for both SR- and LR-models.Definitions in SR-models are straightforward, and Mit in eq. can be either binary or continuous as described in the table.Definitions of M in LR-models are made in ways that maximize the mechanization adopter samples in each model.For example, in LR-models, for model A2 and A3, we treat households as mechanization adopter as long as they used tractors or animal tractions in at least one of the three survey years.Doing so increases the sample of mechanization adopters and increased sample sizes of these relatively minor groups seem to provide more stable results.Similarly, for models B1-B3 and C1-C3 which capture mechanization adoption intensity, we still include a majority of mechanization adopters, so that sufficient sample sizes are retained among mechanization adopters.Focusing on fewer, more intensive mechanization adopters, simply lead to statistically insignificant estimates.We interpret these robustness patterns in the results section.Our primary data are three survey waves from the Nigeria LSMS-ISA dataset, collected jointly by the National Bureau of Statistics and the World Bank.For the surveys, 5000 of the same households were interviewed in 2010/11, 2012/13, 2015/16.The sample selected for the first wave is nationally representative.The panel sample was selected through stratified random sampling methods in 2010/11.For a total of 500 enumeration areas, 10 randomly selected households were interviewed.LSMS-ISA data are then complemented by various agroclimatic data.Historical average and standard deviations of rainfall and temperatures are from CRU and National Oceanic and Atmospheric Administration/OAR/ESRL PSD, respectively.The Euclidean distance to the nearest major river from a corresponding EA is calculated from Lehner et al.The Euclidean distance to the nearest major agricultural research station is from Takeshima and Nasir.The data on various aspects of soil are taken from soil mapping data at 1-km resolution, 2013; Hengl et al., 2014).Spatial data of pasture area is obtained from Ramankutty et al.Based on LSMS-ISA data, we define non-rice grains as the combination of sorghum, maize, millet, acha, and wheat.Legumes/seeds are defined as the combination of cowpea, groundnut, bambara nut, sesame, pigeonpea, pumpkin seed, soybeans, zobo seed, and agbono.8,For crops other than rice, we focus on a group of crops, rather than individual crops.This is because in Nigeria, most non-rice crops are grown in mixed-cropped plots, rather than mono-cropped plots, but information needed for 
estimating EOS is available only at plot levels.9,All monetary values are expressed as real values through deflation by the local average prices of the main staple crops, which are rice and gari, as calculated by the prices reported by the community survey data included in the LSMS-ISA dataset.In the dual-model, households growing rice through mixed-cropping with other crops on the same plots are also excluded.This is because information of production costs is only available at the plot level rather than the crop levels.Additionally, crop-level production costs Cj for rice only can be computed if rice is the sole crop grown on the plot.This reduces the sample size by about 30%.Furthermore, about 10% of farmers report missing values of either farm size or soil quality of the plots.Since farm size is one of the critical variables, these observations are also excluded in the analysis.In primal-models, while our primary sample is a total of 8186 household-wave observations, the sample for each model includes households that grow each primary crop group.Because of this, sample sizes vary across equations focusing on different primary crop group.Altogether, in EOS estimation, the dual-models use 592 household-wave observations as primary samples, while the primal-models use 760, 3530 and 2615 household-wave observations for rice, non-rice grains, and legumes/seeds producers as the primary samples.Table C.1 in Appendix C provides indicators of plot heterogeneity of production associated with rice, non-rice grains, and legumes/seeds, with respect to other crops grown by the same households.Other crops are also grown in only 30% of rice-grown plots, indicating that rice is predominantly sole-cropped.This is consistent with the hypothesis that the characteristics of plots suitable for rice production are quite different from plots suitable for other crops.In contrast, among plots where non-rice grains or legumes/seeds are grown, 70 to 80% of them are also cultivated with other crops.Similarly, 33% of rice growers report growing other crops on plots with different soil types than are found on their rice plots.This proportion is only 10 to 12% for non-rice grains or legumes/seeds.Similar differences also hold for plot slope characteristics.Table C.2 summarizes the calculated household-level production values and production costs of rice, non-rice grains, and legumes/seeds, used for both primal- and dual-models.Typically, our samples consist of smallholders whose production values are equivalent to around 1000 kg worth of staple crops.Importantly, resource uses for rice are generally relatively small compared to combined resource uses for both rice and non-rice, with the median values that are worth 129 versus 1091 kg of staple crops, respectively.This suggests that resource uses for rice can be significantly affected by resource uses for non-rice crops among households who grow both rice and non-rice crops.Table 2 summarizes the distribution of samples with different transition patterns of mechanization adoption status during the three survey waves.Among households who adopted mechanization in at least one wave, a relatively small share did so in all three periods.Many are partial adopters switching between adopters and nonadopters status.These patterns suggest that, in addition to LR-models, the SR-models are important in assessing the effects of time-variant adoption on EOS, as is done in this paper.The results of primary interests are the effects of mechanization adoptions on EOS.We first briefly 
describe the factors associated with mechanization adoption, and then discuss the results on EOS."Various key factors are associated with farm households' mechanization adoption decisions.Mechanization is negatively associated with the opportunity costs like the price of beef, since more draft work often negatively affects the live weight of draft animals.The greater availability of pasture per livestock may indicate longer fallow periods and lower levels of farming system intensification associated with lower mechanization.A greater livestock holding is positively associated with mechanization."This could be due to the limited renting of animals for drafting because of owners' fears over animal maintenance.The probability of mechanization is also positively associated with greater farm size, higher wages, and the proximity to the nearest ARS.The positive association with greater distance from the nearest administrative center may partly reflect the lower population density and greater land endowment in remote areas, which enables larger-scale production with mechanization.Similar patterns are observed in countries like Ghana.Higher prices of some substituting inputs like chemical fertilizer, are negatively associated with mechanization.This suggests that mechanization adoption may also accompany greater intensification in agricultural production, including greater overall chemical fertilizer use.Higher average and lower variations in past rainfall, which may be conducive for greater agricultural production, are positively associated with mechanization adoption."Historical temperatures' effects are the opposite.Intensive tillage is often used for raising soil temperatures, which generally facilitates plant growth.Therefore, higher temperature may discourage intensive tillage.Opposite signs with greater rainfall risk and temperature risks may reflect the complex roles of weather risks on mechanization.Weather risks may lower demand for intensive production methods like mechanization, but they may also lower the costs of mechanization if greater weather risks stimulate investments in certain assets like livestock which is often used as both an insurance and a draft power source, or tractors that can be also used for the non-farm sector that may be less vulnerable to weather risks, although these aspects warrant more rigorous assessments in future studies.In contrast, as expected, weather conditions in the survey-year growing season are not associated with mechanization adoption, as the adoption decisions are made at the land preparation stage, which precedes the realization of these weather outcomes.Greater soil bulk density is also positively associated with mechanization adoption, as such soils tends to require greater farm power.Other local level soil characteristics are also important predictors of mechanization adoption, affirming the general importance of their effects on tillage.Greater household-level diversity in soil types is also associated with mechanization adoption, although their combined effects are somewhat unclear.This may be because mechanization adoption may exploit both scale effects from homogeneity and scope effects from heterogeneity, respectively, of production environments.Table 4 presents how the IPW process improves the balancing properties regarding means and standardized differences between mechanized and nonmechanized samples.11,Statistical significance indicates the differences in the mean values of each variable.A comparison of raw sample and IPW sample 
indicates that the IPW process significantly reduces the differences in sample averages between the two groups of farmers.Furthermore, the standardized differences in each variable are significantly reduced in the IPW sample to the order of 0.1 or less in absolute values, which suggests satisfactory sample balance properties.The estimated EOS parameters and the effects of mechanization are summarized in Tables 5 and 6 and Table 7.Recall that the presence of EOS is indicated in opposite ways in the primal- and dual-models.The primal-model results in Tables 5 and 6 suggest that mechanization adoption significantly increases the EOS between rice and non-rice crop production.Specifically, while non-mechanized farmers generally exhibit no EOS, mechanized farmers do.In other words, while there is no economic advantage to diversify production between rice or non-rice crops under non-mechanized farming, there are advantages in diversifying under mechanized conditions.On the other hand, mechanization adoption generally seems to lead to diseconomies of scope between non-rice grains or legumes/seeds and other non-rice crops.The results of dual-models are also largely consistent with those from the primal-models.The estimates are significantly negative for mechanized farmers, which indicates that costs for producing both rice and non-rice crops are lower than costs from specializing in one of these crop groups.12,Additionally, they are not statistically significant for non-mechanized farmers.Mechanization raises EOS between rice and non-rice crops.We also estimate the dual-model applying a Box-Cox transformation.This was done because the cost and production figures used in eqs. and are fairly skewed, and, thus, the results may be partly driven by such skewness.We find that, when using λ = 0.5 so that the cost and production figures are transformed into their square roots, respectively, the results still hold.The findings are generally robust across various measurements of mechanization adoption as in Table 1, and whether we treat mechanization adoption as time-invariant or time-variant.While statistical significance does not always hold, none of the results indicate statistically significant effects in the opposite direction for the same sample under different specifications.Our results, therefore, are particularly robust against the alternative hypotheses that mechanization reduces EOS between rice and non-rice, or increases EOS between non-rice grains, legumes/seeds and other non-rice crops.As is described in Appendix B, our main results above hold for weather proxies based on historical patterns of growing-season rainfall and temperature since 1980 up to each survey year, i.e., Z-scores of these weather outcomes in the survey years with respect to historical distributions.Table 8 summarizes the same set of estimated coefficients and their statistical significance when we instead use weather proxies based on post-1990 period, percentiles instead of Z-scores and all-year weather instead of growing season weather.Our results are also robust when using these alternative sets of weather proxies.We further assess whether these differences in the effects of mechanization on EOS for rice and on EOS for non-rice grains or legumes/seeds lead to significant differences in actual cropping patterns.Mechanization adoption generally induces joint production of rice and other crops, consistent with the above findings that mechanization raises EOS between rice and other crops.Similarly, mechanization adoption 
generally discourages joint production of non-rice grains or legumes/seeds and other non-rice crops, compared to non-mechanized conditions.These patterns are fairly consistent with the hypotheses that, between legumes/seeds, non-rice grains, and other non-rice crops, mechanization may lead to more specialization due to the decline in EOS.The findings on non-rice grains and legumes/seeds are consistent with the studies generally suggesting the associations between mechanization and greater crop specialization."In contrast, the findings on rice are more unique, as few studies, to the authors' knowledge, have so far found the evidence of the positive effects of mechanization on EOS in a cropping system.However, the findings on rice are somewhat consistent with studies suggesting that commercialization is not necessarily associated with diseconomies of scope, and diversified production can still achieve high technical efficiency.Similarly, the findings on rice are consistent with Nguyen, who finds in Vietnam, where land preparation has already been highly mechanized, that rice still exhibits EOS with non-rice crops.Each of the estimated eqs., and focuses on samples that grow certain crop-groups."Thus, our estimates may be potentially biased if farmers' self-selection of crop choices are ignored.We offer some indications, albeit weak, that such self-selection may not seriously bias our results.First, biases, if they exist, may be such that our results provide more conservative estimates.As are shown in Tables D.8, D.9, mechanization adoption is not negatively associated with the decisions to jointly grow rice and non-rice crops.EOS is likely to be higher for farmers with higher probability of growing rice.The average EOS is likely to decline if more farmers grow rice.Therefore, EOS estimated among mechanized farmers are likely to be a lower bound, compared to the EOS estimates among non-mechanized farmers.A similar argument holds for legumes/seeds and non-rice grains.Additionally, in the dual-models, we employ a bivariate probit in place of standard probit to jointly estimate the probability of adopting mechanization and rice production.13,Their results are shown in additional columns of Table 7, under “Bivariate probit IPW model”.They are qualitatively similar to the main results.That is, the EOS between rice and other crops is more statistically significant and stronger among mechanized farmers, than among nonmechanized farmers.It is, however, important to note that, fully accounting for the self-selection of crop choices, including, for example, potential sequences between crop-choice and mechanization decisions, is still challenging in our analyses, as the lack of suitable instrumental variables and limited sample sizes may lead to significant loss of efficiency in estimates.Therefore, our results should still be interpreted with some caution, and future studies should aim to provide more precise evidence with suitable data.Our primary results are on the estimates of EOS and the effects of mechanization adoptions on EOS.Interpretations of individual coefficients in Tables D.3 through D.7 are of secondary importance.In primal-models, positive coefficients indicate positive associations with the input distance function, or greater savings in inputs used given the output level, and, thus, higher efficiency.In contrast, positive coefficients in dual-models in Tables D.5 and D.7 indicate positive associations on the production cost functions, and thus lower efficiency.To retain consistency 
across models, the same set of covariates are included for all equations."Many of these covariates are statistically significantly associated with the input distance functions or cost functions, suggesting that our analyses effectively separate out their effects on EOS, and, thus, more correctly identify mechanization's contributions on EOS.Our analyses using panel data on farm households and information on crop-specific production costs suggest that in Nigeria, mechanization generally raises EOS between rice and other crops, while it lowers EOS between non-rice grains or legumes/seeds and other non-rice crops.Mechanization may raise EOS with crops that are more distinct in their preferred production environments, such as rice, while it may reduce EOS among crops that can be grown under similar agroecological conditions such as non-rice grains or legumes/seeds and other non-rice crops.These results offer important empirical insights into the agricultural systems in Nigeria.When mechanization is used for non-rice grains or legume/seeds systems, the production systems are likely to favor specialization which may be further accelerated by sharpening of comparative advantages among producers and growing local or regional trade.The production systems for these crops may also become more prone to risks in response to market or climatic uncertainty, compared to more diversified, mixed cropping systems.The efficiency of the system may also become more determined by the realization of economies of scale through appropriate technologies and knowledge.In addition, as specialization progresses in the production of non-rice crops, production and market supply may become more concentrated into larger but fewer producers.When mechanization is used for rice, on the other hand, it induces diversification between rice and other particular non-rice crops.The efficiency of rice-based systems is likely to depend more on key knowledge, crop husbandry practices, or inputs that are more versatile and can be applied commonly to both rice and non-rice crops."Furthermore, our findings suggest that farmers' rice production decisions continue to depend on decisions related to non-rice crop production on separate plots and through decisions on the use of resources other than land, such as labor and external inputs.Even with mechanization, rice production may remain atomistic, characterized by small production by many farmers – instead of large production by a few farmers.
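The covariate-balance check reported in Table 4, in which standardized differences shrink to roughly 0.1 or less in absolute value after weighting, can be illustrated with the sketch below. The DataFrame, column names, and weights are hypothetical; this is not the authors' code.

# Illustrative standardized-difference diagnostic for IPW balance (Table 4-style).
import numpy as np


def standardized_difference(x1, x0, w1=None, w0=None):
    """(Weighted) mean difference scaled by the pooled standard deviation."""
    m1, m0 = np.average(x1, weights=w1), np.average(x0, weights=w0)
    v1 = np.average((np.asarray(x1) - m1) ** 2, weights=w1)
    v0 = np.average((np.asarray(x0) - m0) ** 2, weights=w0)
    return (m1 - m0) / np.sqrt((v1 + v0) / 2.0)


# Usage with a hypothetical DataFrame `df` and IPW weights `w`:
# adopt = df["mechanized"] == 1
# for col in ["farm_size", "wage", "hist_rainfall_mean"]:
#     raw = standardized_difference(df.loc[adopt, col], df.loc[~adopt, col])
#     ipw = standardized_difference(df.loc[adopt, col], df.loc[~adopt, col],
#                                   w[adopt.values], w[~adopt.values])
#     print(col, round(raw, 2), round(ipw, 2))  # |ipw| <= 0.1 suggests adequate balance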
Agricultural mechanization has often been associated with scale-effects and increased specialization. Such characterizations, however, fail to explain how mechanization may grow in Africa where production environments are heterogeneous even within a farm household, and crop diversification may help in mitigating risks. Using panel data from farm households and crop-specific production costs in Nigeria, we estimate how the adoptions of animal traction or tractors affect the economies of scope (EOS) for rice, non-rice grains, and legumes/seeds, which are the crop groups that are most widely grown with animal traction or tractors in Nigeria, with respect to other non-rice crops. The inverse-probability-weighting method is used to address the potential endogeneity of mechanization adoption and is combined with primal- and dual-models of EOS estimation. The results show that the adoption of these mechanization technologies is associated with greater EOS between rice and non-rice crops but lower EOS among non-rice crops (i.e., between non-rice grains, legumes/seeds, and other non-rice crops). Mechanical technologies may raise EOS between crops that are grown in more heterogeneous environments, even though it may lower EOS between crops that are grown under relatively similar agroecological conditions. To the best of our knowledge, this is the first paper that shows the effects of mechanical technologies on EOS in agriculture in developing countries.
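For reference, the economies-of-scope concept estimated above is conventionally measured as the proportional cost saving from joint relative to specialized production; the two-output textbook form is shown below as an illustration only. Note that the dual-model measure described in the text (EOSDual) is defined with the opposite sign convention, so that EOSDual < 0 corresponds to S > 0 here.

S(y_1, y_2) \;=\; \frac{C(y_1, 0) + C(0, y_2) - C(y_1, y_2)}{C(y_1, y_2)}, \qquad S > 0 \ \Leftrightarrow\ \text{economies of scope}

where C(\cdot) is the cost function and y_1, y_2 are the two output groups (for example, rice and non-rice crops).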
86
Cross-sectional variations of white and grey matter in older hypertensive patients with subjective memory complaints
Subjective cognitive impairment is common in the elderly, and may serve as a symptomatic indicator of a precursor stage of Alzheimer's dementia, even if subtle cognitive decline is difficult to detect on standardized cognitive testing.While this condition is not considered to be a definite neurodegenerative process such as mild cognitive impairment or AD, it may precede a further cognitive decline and the development of dementia.In addition, impaired cognitive performance has been associated with cardiovascular risk factors such as hypertension, in keeping with our recent observation that brain remodeling with age is linked to the level of central pulse pressure.Thus, older hypertensive patients with SCI may constitute a particularly high-risk group for subsequent dementia and may therefore benefit from dedicated modalities of medical management and of early diagnosis.Recent advances in MRI and PET imaging modalities can detect early changes in brain structure and/or metabolism, before the stage of dementia.Among these, diffusion tensor imaging, through the mean apparent diffusion coefficient, may be particularly useful for the early diagnosis of neurodegenerative disorders.This DTI-derived parameter provides a measurement of diffusion rate and its global value is more closely linked to the neurodegenerative process than local values.White matter ADCmean is indeed commonly increased in neurodegenerative diseases owing to the loss of axonal myelin and the disruption of cell membranes.18F-Fluorodeoxyglucose positron emission tomography is also a useful imaging method in this setting, owing to its ability to quantify neuronal activity through the glycolytic metabolism of the brain grey matter."18F-FDG PET is moreover increasingly used for the early diagnosis of dementia and predementia states and more precisely, for detecting the degenerative component of these diseases, in accordance with the recommendations of the international Alzheimer's Association.18F-FDG PET abnormalities, which are documented in AD or MCI patients, were recently shown to correlate with the microstructural white matter changes observed with DTI.To date, however, it is not known whether these interrelated white and grey matter changes are also present in patients with only subjective memory complaints, before the stage of any objective cognitive impairment.In light of the above, this dual DTI and 18F-FDG PET study aimed to determine whether cross-sectional variations within the white and grey brain matters are also associated before the stage of any objective cognitive impairment, in a high-risk population of older hypertensive patients with only subjective memory complaints.This ancillary PET/MRI study was extracted from the ADELAHYDE longitudinal single-center study, which aimed at identifying factors associated with cognitive decline and white matter diseases in older hypertensive patients with subjective memory complaints.Inclusion and exclusion criteria have already been detailed elsewhere.A total of 131 patients participated to the ADELAHYDE-2 study and results of the present study were extracted from the second control visit which comprised a medical examination with measurement of central blood pressure, various neuropsychological tests and a brain MRI.Among these 131 patients, 71 accepted to undergo an additional investigation by brain 18F-FDG-PET agreement no 2010-A01399-30), although the study population was finally restricted to 60 patients, 11 being excluded for technical issues at MRI or for a significant 
cognitive impairment and a high probability of MCI or AD based on neuropsychological tests).All investigations were planned on the same day, except for the brain MRI, which was performed in the following 3 months.Peripheral brachial BP was measured in the supine position with an oscillometric semiautomatic device after a minimum 10-min rest period.Systolic, diastolic, pulse and mean BPs were recorded three to four times and averaged for subsequent analyses.Central BP was determined in 56 patients by the transcutaneous analysis of the carotid pulse wave with an applanation tonometer.Carotid pressures were deemed as a close surrogate of central pressures and calibrated with the diastolic and mean brachial BP values.Neuropsychological assessment was comprised of: 1) a Mini-Mental State Examination test for global cognition, 2) the Free and Cued recall tests a French equivalent of the Grober-Buschke test for the capacities of encoding and consolidation, as well as for the efficiency of the recovery mechanisms, 3) a Benton Visual Retention Test for visuospatial capacities, 4) the Verbal Fluency Test for executive function and long-term verbal memory, and 5) the Trail Making Tests for visual attention and task switching.All data were acquired on a 1.5-T magnet with an 8-element receive head coil.The protocol involved the recording of a 3-dimensional T1-weighted sequence with the following parameters: slice thickness 1.4 mm; TE/TR/TI = 5/12/350 ms, field of view 240 mm, and Fluid-Attenuated Inversion Recovery images, with the following parameters: slice thickness 5 mm, TE/TR/TI = 158/10000/2300 ms, field of view 240 mm.White matter hyperintensities of presumed vascular origin were assessed on the FLAIR images by a blinded experimented radiologist using the Fazekas score, corresponding to the sum of periventricular and deep white matter hyperintensity ratings.A DTI axial acquisition was also performed with the following parameters: 15 non-collinear gradient directions with b = 1000 s/mm2, one b = 0 reference image, contiguous slices of 5 mm thickness; TE/TR = 72–100/9.600 ms, field of view 240 mm covering the entire brain and cerebellum.An automated parcellation of the subcortical white matter was obtained on the 3D T1-weighted images from each patient and by using the “-recon-all” processing pipeline of the FreeSurfer software version 5.2.This parcellation was thereafter applied to the DTI images through a co-registration and a transformation in the 3D-T1 space of the DTI images.Finally, ADCmean values were obtained from the total white matter and from the different lobes using a subcortical white matter parcellation atlas “wmparc”.The 18F-FDG-PET images were recorded on a Biograph™ 6 hybrid PET/Computed Tomography system.Patients were fasted for at least 6 h prior to the injection of 4 to 5 MBq/kg of 18F-FDG and subsequently placed in a quiet environment with eyes closed.Fifty minutes later, a 3-dimensional Computed Tomography of the brain was recorded and immediately followed by a 3D PET brain recording over a 15 min period.Images were reconstructed with an iterative 3-dimensional Ordered Subset Expected Minimization method, corrected for attenuation and diffusion, and displayed through 2.7 × 2.7 × 2.7 mm3 voxels.A whole-brain statistical analysis was performed at the voxel level using the SPM8 software.18F-FDG PET images were spatially normalized onto an adaptive template derived from the MR and 18F-FDG PET images of our subjects, as previously reported.After normalization to the adaptive 
template, 18F-FDG PET images were smoothed with a Gaussian filter and normalized through intensity ratios relative to mean cerebellar activity.Thereafter, the PET images were corrected for partial volume effect using the grey matter volume segmented from co-registered MR images for each patient.The SPM linear regression models, used for correlating 18F-FDG metabolism from the brain grey matter voxels with the ADCmean values from overall white matter as well as for each lobe, were obtained: 1) at a threshold of p < 0.005, 2) with a correction for cluster volume and using the expected voxels per cluster provided by SPM, in order to avoid type II errors as recommended and 3) by using age and gender as covariates.The anatomical localizations of significant clusters were identified using the MNI atlas.Finally, mean relative values of the cerebral metabolic rate of glucose of the combination of clusters interrelated with ADCmean of overall white matter were extracted for each patient.Quantitative variables are expressed as means ± standard deviations, and categorical variables as percentages.Student-t-tests were performed for the unpaired 2-group comparison of quantitative variables.Pearson coefficients were used to assess the correlation between ADCmean, CMRGlc, cardiovascular parameters and/or neuropsychological test scores.A p < 0.05 was determined as significant.Statistical analyses were performed with SPSS® 20.0 software.The statistical analyses performed with the SPM software have already been detailed above.The study population involved 60 patients and all were treated with at least one hypertensive medication, in accordance with the inclusion criterion.Seventeen had an uncontrolled hypertension, as defined for older subjects by a brachial systolic BP higher than 150 mm Hg.On FLAIR-MRI images, the severity of white matter hyperintensities of presumed vascular origin was absent to mild in 26 patients, moderate in 18 and severe in 16.The main recorded quantitative data are summarized in Table 1 for the overall population along with a comparison between men and women.No difference was documented between men and women except for certain neuropsychological tests which were less well performed by men.The ADCmean of the overall white matter was variable between patients, ranging from 0.82 to 1.01.10− 3 mm2 sec− 1, and as detailed in Fig. 1A and Table 2, this ADCmean was strongly and inversely related to the CMRGlc of areas extending over 23.3 cm3 and involving internal temporal areas, posterior associative junctions, posterior cingulum and insulo-opercular areas, independently of the additional influences of age and gender.The strength of the link between the ADCmean of the overall white matter and the global CMRGlc from these interrelated grey matter areas is also displayed in Fig. 
1B.The Fazekas score was not correlated with the global CMRGlc from these interrelated grey matter areas although it was correlated with the ADCmean of the overall white matter.Further SPM analyses, obtained with the white matter of each individual lobe as opposed to the overall white matter, are provided in Table 2.Highly significant relationships were documented for the white matter of occipital and temporal lobes and to a lesser extent, of the parietal lobe, with the selection of grey matter sites being very close to those obtained with the ADCmean of the overall white matter.By contrast, much poorer relationships were documented with the white matter of the frontal lobe.As detailed in Table 3, the ADCmean values of the overall white matter were significantly correlated with older age, with a deterioration of both Gröber and Buschke and Trail Making tests, as well as with higher peripheral systolic BP and higher central BP parameters.Otherwise, no significant correlation was noted with Fazekas score.These relationships remained significant when ADCmean values were replaced by the CMRGlc of the interrelated areas, except that the relationship with the MMSE test became significant and that the central systolic BP became the sole significant BP parameter.Correlations between global ADCmean, CMRGlc of interrelated areas and the Gröber and Buschke free recall test, the Trail Making test and central blood pressure are illustrated in Fig. 
2.Detailed correlations between: 1) ADCmean and CMRGlc from the different lobes and 2) clinical and neuropsychological scores and frontal, parietal, temporal and occipital white matter are provided in a supplementary file.This dual DTI/18F-FDG-PET study shows that cross-sectional variations in the structure of the overall white matter are linked to the metabolism of Alzheimer-like cortical areas in a high-risk population of older hypertensive patients with only subjective memory complaints and thus, before the stage of any objective cognitive impairment.The clinical significance of these interrelated variations, as well as a possible contributing pathogenic role of hypertension, is strengthened by further observed relationships with neuropsychological tests and with central BP.Multimodal DTI and FDG-PET imaging have already been reported in MCI or Alzheimer dementia patients.The place of these two imaging modalities in the assessment of neurodegenerative disorders remains nevertheless debated, with the most efficient imaging method being DTI for certain authors on the one hand, and 18F-FDG-PET for others due to a closer link with cognitive impairment.However, these two imaging methods do not provide the exact same information and, furthermore, our study shows that this interrelationship is already present at a very early stage of the development of cognitive impairment.Thus, it is likely that a better understanding of this interrelationship could constitute a key point for enhancing our knowledge on the development of neurodegenerative diseases.The interrelationship between grey and white matter injuries is generally explained by the fact that any neuronal loss within the cortical areas commonly leads to a degeneration of the corresponding fiber tracts within the white matter.However, this cause-effect relationship can also likely occur in the other direction, notably with white matter vascular lesions documented at MRI which have been suggested to lead to a significant decrease in the metabolism of the frontal and temporal grey matter.The determination of ADCmean allows an assessment of the overall white matter volume without the need of a prior hypothesis on specific diseased sites, contrary to the determination of fractional anisotropy variations, which is mostly based on a region-of-interest approach, especially for the posterior cingulate and hippocampus.Using ADCmean, stronger influences on the CMRGlc of grey matter were documented herein for the white matter variations neighboring the grey-matter damages, i.e. 
occurring within the temporal and occipital lobes and to a lesser extent, within the parietal lobes.Interestingly, the white matter lesions component of the global ADCmean was not related to the CMRGlc of these interrelated grey matter areas nor with neurocognitive tests or blood pressure parameters.This likely suggests a predominant impact of white matter microstructural changes over macrostructural vascular lesions on the interrelationships with the metabolism of grey matter Alzheimer areas.A striking observation was that these interrelated grey matter areas mostly corresponded to typical Alzheimer's dementia hypometabolism patterns, in particular the internal temporal areas, the posterior associative junctions and the cingulum posterior.This finding would suggest that these interrelated white and grey matter variations could potentially constitute a very early stage of a cognitive neurodegenerative process, this consideration being furthermore strengthened by the clearly observed relationships with the results of the neuropsychological tests.Indeed, the ADCmean of the white matter, as well as the CMRGlc of the interrelated grey matter areas, were significantly correlated with the results of the Gröber and Buschke tests, yielding evidence of a link with memory functions.It should be pointed out that such correlations have previously been documented between these tests and equivalent imaging parameters, but only in populations involving patients with abnormal tests and suffering from Alzheimer's disease or MCI.In the present study, these correlations were also observed for variations in the results of the Gröber and Buschke tests lying within the normal range, thereby constituting a highly original finding.The interrelated white and grey matter variations were additionally linked to a decrease in executive function, as assessed by the Trail Making tests.This latter observation is however not surprising since such functional decrease has already been documented in patients with low CMRGlc within temporal and parietal areas, as well as in those with white matter lesions, and presumably linked to an impaired connectivity with the frontal grey matter areas.Lastly, while the exact mechanism of these interrelated white and grey matter variations remains to be established, the observed relationships with BP level nevertheless strengthen the hypothesis of a contributing pathogenic role of hypertension.The central systolic BP level was indeed found herein to be a strong correlate for both the white matter ADCmean and the 
CMRGlc of the interrelated grey matter areas.This finding is in keeping with the previous observations that central BP is a strong predictor of brain remodeling in the elderly as well as a sensitive indicator of cognitive performance not predicted by brachial pressures.In this setting, central BP has the advantage of being a more accurate reflection of the BP level found in cerebral arteries, comparatively to brachial BP which is dependent on a highly variable amplification of systolic BP from central to brachial arteries.It remains to be determined whether this strong association with central BP is also documented in other populations of older subjects and especially those with no history of hypertension.Although a strong association with hypertension is well documented for grey matter hypometabolism as well as for future development of vascular dementia, it must be recognized that this association is less well established for neurodegenerative diseases."However, based on epidemiological, clinical and neuroimaging studies, certain authors have supported the hypothesis that Alzheimer's disease could primarily have a vascular-related mechanism.Furthermore, a longitudinal study found an association between antihypertensive treatments and a decrease in the rate of cognitive decline in two populations of hypertensive patients, one of which was treated with antihypertensive therapy."The present observations lend further support to the role of hypertension on not only the occurrence of white matter lesions but also on the decrease in the metabolism of certain grey matter areas, namely those evolving in parallel with white matter variations and occurring in regions corresponding to the current pattern of Alzheimer's disease.From a more practical viewpoint, these data suggest that hypertensive subjects with normalized central systolic BP may be at lower risk of further deteriorations not only of white matter but also of the grey matter areas involved in cognitive diseases, hence further supporting the interest of drugs aimed at lowering central pulse pressure.The principal limitation of our study is its cross-sectional nature without any longitudinal follow-up.Thus, the impact of our imaging findings on the conversion into MCI or AD is presently unknown.Further longitudinal clinical trials conducted in populations at risk of cognitive decline and with sufficiently long follow-up periods are hence warranted.In addition, further comparisons with elderly patients, who are free of any hypertension and/or memory complaints, could be useful to accurately establish the interrelationships between hypertension, memory and results from PET/MRI imaging.In conclusion, this dual DTI and 18F-FDG-PET study shows that cross-sectional variations in overall white matter structure are linked to the metabolism of Alzheimer-like cortical areas in older hypertensive patients, before the stage of objective cognitive impairment.The clinical significance of these variations is strongly supported by the concurrent observation of relationships with the results of cognitive tests, while the presence of further relationships with central BP strengthens the hypothesis of a contributing pathogenic role of hypertension.The following are the supplementary data related to this article.Three-dimensional volume rendering images representing the grey matter areas for which the metabolic rate of glucose was significantly and negatively correlated with the mean apparent diffusion coefficient of the frontal, parietal, temporal and 
occipital white matter, using age and gender as covariates.Pearson coefficients for the correlations between 1) the ADCmean and CMRGlc values from individual brain lobes and 2) clinical and BP variables as well as neuropsychological test scores.Supplementary data to this article can be found online at https://doi.org/10.1016/j.nicl.2017.12.024.
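As an illustration of the regional ADCmean extraction described in the methods (a FreeSurfer parcellation co-registered to the diffusion images, then averaging the ADC map within white-matter labels), a minimal sketch is given below. The file names, lobe groupings, and label IDs are placeholders, and this is not the authors' pipeline.

# Illustrative regional ADCmean extraction from a co-registered wmparc parcellation.
import numpy as np
import nibabel as nib

adc = nib.load("adc_in_t1_space.nii.gz").get_fdata()        # ADC map (mm2/s)
wmparc = nib.load("wmparc_in_t1_space.nii.gz").get_fdata()  # FreeSurfer wmparc labels

# Hypothetical grouping of wmparc label IDs into lobes (IDs are placeholders).
LOBE_LABELS = {
    "frontal":   [3003, 4003],
    "temporal":  [3009, 4009],
    "parietal":  [3029, 4029],
    "occipital": [3011, 4011],
}

regional_adc = {}
for lobe, labels in LOBE_LABELS.items():
    mask = np.isin(wmparc, labels)
    regional_adc[lobe] = float(adc[mask].mean())

# Overall white-matter ADCmean across all listed labels.
all_labels = [lab for labs in LOBE_LABELS.values() for lab in labs]
regional_adc["overall"] = float(adc[np.isin(wmparc, all_labels)].mean())
print(regional_adc)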
Mild cognitive impairment and Alzheimer's dementia involve a grey matter disease, quantifiable by 18F-Fluorodeoxyglucose positron emission tomography (FDG-PET), but also white matter damage, evidenced by diffusion tensor magnetic resonance imaging (DTI), which may play an additional pathogenic role. This study aimed to determine whether such DTI and PET variations are also interrelated in a high-risk population of older hypertensive patients with only subjective memory complaints (SMC). Sixty older hypertensive patients (75 ± 5 years) with SMC were referred to DTI and FDG-PET brain imaging, executive and memory tests, as well as peripheral and central blood pressure (BP) measurements. Mean apparent diffusion coefficient (ADCmean) was determined in overall white matter and correlated with the grey matter distribution of the metabolic rate of glucose (CMRGlc) using whole-brain voxel-based analyses of FDG-PET images. ADCmean was variable between individuals, ranging from 0.82 to 1.01.10− 3 mm2 sec− 1, and mainly in relation with CMRGlc of areas involved in Alzheimer's disease such as internal temporal areas, posterior associative junctions, posterior cingulum but also insulo-opercular areas (global correlation coefficient: − 0.577, p < 0.001). Both the ADCmean and CMRGlc of the interrelated grey matter areas were additionally and concordantly linked to the results of executive and memory tests and to systolic central BP (all p < 0.05). Altogether, our findings show that cross-sectional variations in overall white brain matter are linked to the metabolism of Alzheimer-like cortical areas and to cognitive performance in older hypertensive patients with only subjective memory complaints. Additional relationships with central BP strengthen the hypothesis of a contributing pathogenic role of hypertension.
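A simplified region-of-interest stand-in for the association analysis (relating overall white-matter ADCmean to the mean CMRGlc of the interrelated grey-matter clusters while controlling for age and gender) could look like the sketch below. The data generated here are synthetic placeholders used only to make the snippet runnable; the whole-brain, voxel-wise SPM analysis in the study is considerably more involved.

# Illustrative partial correlation after removing linear effects of age and gender.
import numpy as np
from scipy import stats


def residualize(y, covariates):
    """Residuals of y after least-squares regression on covariates (with intercept)."""
    X = np.column_stack([np.ones(len(y)), covariates])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta


rng = np.random.default_rng(0)            # synthetic data, for demonstration only
n = 60
age = rng.normal(75, 5, n)
sex = rng.integers(0, 2, n).astype(float)
adc_mean = rng.normal(0.9e-3, 0.04e-3, n)                            # placeholder ADCmean values
cmrglc = 1.0 - 200.0 * (adc_mean - 0.9e-3) + rng.normal(0, 0.02, n)  # arbitrary negative link

covariates = np.column_stack([age, sex])
r, p = stats.pearsonr(residualize(adc_mean, covariates), residualize(cmrglc, covariates))
print(f"partial r = {r:.2f}, p = {p:.3g}")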
87
Challenges in integrative approaches to modelling the marine ecosystems of the North Atlantic: Physics to fish and coasts to ocean
The North Atlantic Ocean and its contiguous shelf seas provide a diverse range of goods and services to mankind.However, global climate change will lead to substantial changes in the physical conditions of the basin.At the same time, combinations of direct anthropogenic drivers impact at both an organismal and population level, thereby influencing the biogeochemical cycles of carbon and nutrients on a regional and basin wide scale.The coupling between the climate, marine ecosystems and the human impacts on these ecosystems is a key facet of the Earth System, of which our understanding is only beginning to scratch the surface.This coupling relates to two overarching scientific issues of immense societal concern:the role of the oceans in mitigating the effects of anthropogenic CO2 emissions,the impacts of climate and fishing pressure on ecosystem structure and function, and the consequences for biodiversity and fisheries production.BASIN is a joint EU/North American research initiative with the goal of elucidating the mechanisms underlying observed changes in North Atlantic ecosystems and their services, and EURO-BASIN is a programme to implement this, funded under the European Commission’s 7th Framework Programme.Much can be learned on these issues through an extensive observational and experimental effort, however, a crucial challenge for BASIN is to develop the predictive capability necessary to understand the space and time variation of broadly distributed and dominant members of the North Atlantic plankton and fish communities, the relevant biogeochemical processes, as well as feedbacks between and within these components and climate.It is only through the development and application of integrative modelling that these questions can be explored together and under possible future conditions, potentially far removed from any conditions in the observational base.In this paper, we explore the fundamental challenges of an integrative approach to modelling the marine ecosystem in the North Atlantic and its adjacent shelf seas, with a focus on these overarching issues.To illustrate this, we draw on examples from the Integrative Modelling Work Package in the EURO-BASIN programme, where state of the art models of physical, lower and higher trophic level processes are deployed.In the remainder of this introduction we set the scene by considering how these two overarching issues give rise to key science objectives in this region.While the open-ocean and shelf seas biological carbon pumps are well established, the dynamics of these processes and their vulnerability to future change are far from certain.This is particularly the case in the context of changing marine management strategies and physical, ecosystem and biogeochemical responses to climate change and variability.The recent identification of the ‘non-steady-state’ nature of the ocean carbon pump and its response to climate raises concerns over its ability to continue to mitigate increasing atmospheric CO2 levels.Alongside the carbon cycle context, the structure and function of the ecosystem itself and how this responds to changing external conditions such as climate and fishing pressure is of particular importance as it relates to the economic and food security aspects of the exploitation of living marine resources, and also the societal drive to ensure a healthy marine environment.In Europe this is encapsulated in the Marine Strategy Framework Directive and the descriptors of Good Environmental Status therein.1,Fig. 
Fig. 1a shows a schematic contrasting the shelf sea and open-ocean biological carbon pumps. In both cases the driver is the same, photosynthesis. However, the pathways of the fixed carbon to the point where it is isolated from atmospheric exchange on centennial time scales are very different. In the open ocean the respiration that occurs as material sinks is a critical control, whereas in shelf seas the on/off-shelf transport is an important additional factor. In shelf seas much of the sinking carbon enters the benthos, but it is still largely respired, and its long-term fate largely depends on the relation between lateral transport and the exposure to atmospheric exchange through vertical mixing. In both cases top-down control has the potential to alter these pathways. This simple conceptual model belies the underlying complexity of the ecosystem, whereby individual organisms compete for resources at trophic levels from primary producers to top predators, leading to intricate ecological interactions. While this ecology has long been studied in the context of living marine resources, its relationship to the carbon cycle is far from clear. The North Atlantic is important and unique in several respects. It is a key component in the climate system due to the substantial poleward heat flux in its surface waters and the formation of intermediate/deep water masses in its northern regions that help drive the Thermohaline Circulation. This region accounts for 23% of the global marine sequestration of anthropogenic CO2 despite having only 15% of the area. This arises because of the deep winter mixing forming intermediate and mode water masses, combined with a lower Revelle factor than in other mid- to high latitude regions. There is exceptionally high primary production in the sub-polar gyre region owing, among other factors, to significantly deeper winter mixed layers than in other ocean basins. The ocean basin is bounded by shelf and marginal seas that support substantial economic activity and are themselves bounded by the populous countries of Europe and Africa on the eastern side and the Americas on the west. Hence, the impacts of large coastal cities and resource exploitation are acutely felt in this region, potentially mitigated by recent legislative action. In contrast, the less developed countries of West Africa rely on artisanal fisheries as an important protein source and so are highly vulnerable to changes in fish production in this upwelling region. The particular questions within the BASIN programme that we aim to make progress towards answering are: (i) What defines the biogeographic regions of the North Atlantic, how might these change, and in what way and on what time scales might the ecosystem respond to these changes? (ii) What is the impact of top-down control on the carbon cycle and phytoplankton community structure, and how does this vary temporally and spatially, and under future climate and fisheries management scenarios? (iii) What are the pathways and ultimate fate of carbon sequestered by biological production, and how might these change? (iv) How do climate change and variability impact ecosystem productivity, structure and function? Answering these questions requires a truly integrated modelling approach that spans from fisheries to plankton, and from the shelf seas to the open ocean. However, to achieve this we must not only make significant advances in modelling individual systems, but also break down barriers in traditional scientific approaches, for example between modelling biogeochemical systems and modelling ecological systems, and between modelling the open ocean and the coastal ocean.
There are, of course, sound scientific reasons why different approaches are taken for each of these, so full harmonisation is neither possible nor desirable; but to move towards the goal of an integrative system we must find the common ground and exploit the potential linkages. Modelling approaches are context dependent; at each stage there are several complementary ways to explore the system, differing in how the system is represented, in the time and space scales considered, and in the capability to address the particular questions at hand. Each will be a compromise in some sense, but will also have particular advantages. Hence an integrative modelling approach needs to embrace this diversity: rather than providing a single mechanistic connection between drivers, impact and response, each component provides complementary evidence towards our understanding of the system’s behaviour. Practical considerations inevitably limit the approach to a few discrete choices. Within EURO-BASIN, we consider three configurations of a common physical model; three biogeochemistry/lower trophic level models; a regional scale Individual Based Model (IBM) for the zooplankton species Calanus spp. coupled to a small pelagic fish population model; a spatially explicit size-based model of open-ocean ecosystems, which aims to represent the joint effects of environmental variability and fishing on the structure and dynamics of pelagic ecosystems; and a spatially explicit population dynamics model predicting the effects of environment and fishing on key pelagic species, including a functional representation of Mid-Trophic Level groups that are forage species of large oceanic predators. We also consider a convective scale phytoplankton IBM. The particular combinations we consider here are listed in Table 1. Specific issues we address in this paper are: (i) ocean physics in the open ocean and shelf seas, and the coupling between the two; (ii) biogeochemistry and lower trophic level ecosystems; (iii) higher trophic levels, including populations or functional groups of Mid-Trophic Level species and top predators, and the coupling between these; and (iv) experiment design for climate change impact simulations. Finally we conclude by exploring how this approach can specifically address the questions identified above. The modelling of marine ecosystems is intimately linked to modelling marine hydrodynamics. The often quoted remark by Doney, that “biogeochemical models are only as good as the physical circulation framework in which they are set”, implies that we must consider which aspects of the physics are important controls of the ecosystem, how well these are modelled and how this might be improved. When considering lower trophic levels and biogeochemistry, there are essentially three paradigms that mediate the biophysical interactions. First is the physiological response of the organism to the environmental conditions. Second, mixing and transport processes control both the phytoplankton’s exposure to light, hence triggering blooms, and the resupply of nutrients to euphotic waters. These generally act on seasonal or shorter time scales and are predominantly vertical processes, but it is appropriate to include mesoscale eddy and cross-frontal transport processes here. Finally, the basin scale transport sets the overall elemental budgets, e.g.
of carbon and nutrients; a simple view of this is provided by the LOICZ methodology of fluxes into and out of a well-mixed box. The modelling of higher trophic levels is considered in more detail in Section ‘Higher trophic levels modelling: state of the art, challenges and gaps’; however, it is worth briefly identifying some key aspects of the biophysical interactions applicable to that case. As soon as we are concerned with species, rather than ‘functional groups’, the issue of habitat arises, and whether or not it is suitable for a particular species across its life stages, depending on the behaviour of a population, the time/space scale of change in the habitat and its ability to acclimate and eventually evolve to accommodate this change. This introduces other facets to the biophysical interaction that are not so important for biogeochemical/LTL considerations, namely the ‘bioclimate envelope’ of the habitat and the connectivity and transport between regions of different habitats; i.e. what is the acceptable physical environment for a species, and can an individual successfully move between regions with these characteristics as it changes life stage, given that these regions are themselves changing, generally on longer timescales? This puts more detailed requirements on aspects of the physics to be modelled and understood, which are not necessarily required for modelling LTLs. Examples include the timing of stratification and spring blooms, which determine prey availability, and the details of currents that move larvae from spawning grounds. While basin-scale oceanography and its climate variability drive the population dynamics of pelagic species, the mesoscale activity is also of interest, to investigate in detail the behaviour of animals and to address key mechanisms that need to be included in the new generation of population dynamics models. Various sources of biological data exist today that can be confronted with these multiple spatial and temporal scales. Generally, the biophysical interactions put specific requirements on a hydrodynamic model used to simulate ecosystem processes, which in turn impose limits on the accuracy of the ecosystem model. Ecosystem processes are often non-linearly dependent on material fluxes that are not constrained by external feedbacks, and so may be more sensitive to internal model dynamics than the aspects of the physics often used for model validation. The classic example is sea surface temperature (SST) and diapycnal mixing. While SST is an important parameter for coupled ocean–atmosphere modelling, successfully reproducing the field is not a particularly good guide to whether the mixed layer dynamics are well modelled, since the sensible heat flux will compensate for errors in this. In contrast, accurately modelling mixed layer properties is a necessary condition for a well modelled phytoplankton seasonal cycle; i.e.
success in modelling the ecosystem should be used as a guide to improving the physical model. Horizontal resolution is crucial, and central to this is whether motions at the first baroclinic Rossby radius are permitted. This allows a class of phenomena that are either absent or poorly represented in coarser resolution models to be simulated, specifically coastal upwelling, mesoscale eddies and internal tides, all of which have important consequences for the modelled ecosystem. The scale for many important processes is the first internal Rossby radius of deformation (R1). The eddy scale is known to vary linearly with R1 from both empirical altimeter-based studies, Lo ∼ 1.7R1 + 86 km, and theoretical and laboratory studies such as those of Griffiths and Linden. Similarly, the lateral scale of the upwelling velocity is also ∼R1; this can be shown analytically for the case of a vertical wall, but R1 decreases rapidly at the shelf edge, so resolving the deep ocean value should be seen as a lower bound. Internal tides have a wavelength ∼R1f/ω, and so show a similar pattern to the Rossby radius, but without the strong increase towards the equator. Internal tides and upwelling require several grid cells per Rossby radius, whereas mesoscale eddies can be permitted at lower resolution owing to the multiplier in their scaling. However, upwelling will still occur in models that do not resolve this scale, but it will not be well represented; internal tides and eddies will simply be absent. The ORCA series of global NEMO model configurations includes 1/12°, 1/4° and 1° versions, with typical grid sizes in the North Atlantic of, respectively, 6 km, 18 km, and 72 km. From Fig. 2, the 1/12° configuration can be characterised as being eddy resolving in the subtropical gyre, comfortably eddy permitting in the subpolar gyre and Nordic seas, but eddy excluding on-shelf. The 1/4° model reduces this ratio by a factor of 3, so it is eddy permitting in the sub-tropical gyre, marginally eddy permitting in the sub-polar gyre, and otherwise eddy excluding. Alongside the dynamical scales, the resolution of geographic scales is important in determining the location of the currents and the between-basin transport. To illustrate the importance of horizontal resolution, results are presented for three models with comparable physics but different horizontal resolution in the ORCA series of NEMO models, along with climatological observations, for surface current speed, mixed layer depth, and sea surface temperature.
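To make these scalings concrete, the following minimal Python sketch estimates R1 from an assumed first-mode gravity wave speed (using the standard approximation R1 ≈ c1/|f|) and classifies a nominal grid spacing against it; the wave speeds, latitude and classification thresholds are illustrative assumptions, not values taken from the ORCA configurations.

```python
import math

def rossby_radius_km(c1_ms, lat_deg):
    """First baroclinic Rossby radius R1 = c1/|f| in km, given the first-mode
    gravity wave speed c1 (m/s) and the latitude."""
    f = 2.0 * 7.2921e-5 * math.sin(math.radians(lat_deg))
    return c1_ms / abs(f) / 1000.0

def classify_grid(dx_km, r1_km, cells_per_radius=4.0):
    """Crude classification of a grid spacing dx against R1: several cells per
    R1 to 'resolve', at least one cell per R1 to 'permit'. The thresholds are
    illustrative assumptions, not the definitions used for the ORCA models."""
    if dx_km <= r1_km / cells_per_radius:
        return "eddy resolving"
    if dx_km <= r1_km:
        return "eddy permitting"
    return "eddy excluding"

# Nominal North Atlantic grid sizes quoted in the text (1, 1/4, 1/12 degree)
# against assumed deep-ocean (c1 ~ 3 m/s) and on-shelf (c1 ~ 0.5 m/s) cases at 55N.
for dx in (72.0, 18.0, 6.0):
    for label, c1 in (("open ocean", 3.0), ("on-shelf", 0.5)):
        r1 = rossby_radius_km(c1, 55.0)
        eddy_scale = 1.7 * r1 + 86.0   # empirical eddy scale quoted above
        print(f"dx={dx:5.1f} km  {label:10s} R1={r1:5.1f} km  "
              f"Lo~{eddy_scale:6.1f} km  -> {classify_grid(dx, r1)}")
```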
Ecosystem models are, to some extent, tuned to a particular representation of the physical environment, i.e. the time/space scales and process representation. Ideally this would be the best physical representation available, but inevitably practical considerations limit this, and ecosystem models tend to be developed and tuned at the coarser end of this scale. This potentially leads to error compensation and over-tuning of the ecosystem model to compensate for inadequate physics. Hence a detailed analysis is required of how different aspects of the physics are modelled and how these constrain the ecosystem. Of the many currents forming the gyre circulations in the North Atlantic, the Gulf Stream and its extension into the North Atlantic Current and Azores Current is the most prominent. The currents on the eastern side are weaker, but nonetheless form important components of the circulation. The Gulf Stream path has particular importance for the surface fluxes; for example Eden and Oschlies, in studying OCMIP-2 model biases, found that these biases “lead to a large range of simulated total air–sea carbon flux patterns and in consequence a large uncertainty in simulated oceanic uptake of anthropogenic CO2”. A central issue in modelling the circulation of the North Atlantic is to achieve an accurately located Gulf Stream separation at Cape Hatteras, and subsequent current pathways, particularly the Northern Excursion. This has been the subject of substantial effort, and current thinking is that many factors, including coastline, bathymetry, barotropic–baroclinic coupling with the deep western boundary current, and mesoscale eddies, control this circulation. Similarly, many modelling factors play a role in producing a realistic Gulf Stream separation. There is great sensitivity to subgrid scale parameterisations, boundary conditions and the choice of dissipation operators. Bryan et al. suggest the Gulf Stream is greatly improved as the horizontal resolution is reduced below 10 km, thus resolving the first baroclinic Rossby radius and also more accurately representing the bathymetry and coastline. This is clearly seen in Fig. 3 in terms of the location of the surface maximum and in Fig. 5 in terms of the location of the temperature front. As far as numerical solution methods are concerned, Barnier et al. found in a 1/4° study that, by implementing partial cells for the geopotential vertical coordinates, and an energy- and enstrophy-conserving scheme for solving the momentum equation, they were able to improve the flow patterns in the North Atlantic. But given all these factors, the key determinant in accurately representing the circulation is model resolution. For example, Fig. 3 shows the non-eddy-permitting model not only underestimates the strength of the Gulf Stream currents by ∼4-fold, it also separates from the coast too far north and is too zonal in direction. The 1/4° ORCA substantially improves the speed, but it is only at 1/12° that its path is accurately modelled. While progress has been made through subgrid scale mixing (see below) and topographic representation, these approaches are far from the accuracy achieved by refined resolution, and also miss many of the more nuanced processes such as the non-local effects of eddies. Some caution is needed, as increased eddy activity in a model can also result in spurious enhanced diapycnal mixing. The position of the large scale currents also impacts on the relevant water mass formation, the overturning circulation and hence the solubility carbon pump. The model intercomparison study by Treguier et al.
suggests the meridional overturning is primarily influenced by deep overflows, while the horizontal circulation of the gyre is influenced by both deep overflows and deep convection. They suggest that differences in deep convection patterns in the Labrador Sea are related to differences in the barotropic transport at Cape Farewell. Aside from the Gulf Stream and sub-polar gyre, an important feature of the circulation on the western side of the North Atlantic is the coastal current from the northern Labrador shelf to Cape Hatteras, formed by freshwater from a combination of ice melt and riverine sources. While there is considerable freshwater loss to the open ocean along this path, there is also evidence of some continuity of flow. In contrast, many of the shelf seas on the eastern side of the basin lack such a strong advective component, the Norwegian coastal current being a notable exception. Generally, coastal currents carry terrestrial influence far from their source and are an important inter-basin transport mechanism, e.g. linking the Baltic, via the North Sea and Norwegian Sea, with the Barents Sea in the Arctic. Their accurate representation, particularly of the lateral transport by eddies, requires the resolution of the on-shelf Rossby radius and so challenges many model systems. The North Atlantic Drift joins the eastern boundary slope current in the Faroe–Shetland Channel, another region of strong eddy activity. The stratified ocean is naturally full of eddies arising from baroclinic instability and the inverse energy cascade. The North Atlantic is a region of intense eddy activity, and the growth of satellite-based Earth Observation, particularly altimetry but also SST and ocean colour, over the last decades has led to a substantial improvement in understanding of the eddy field in the North Atlantic. Bryan and Smith clearly demonstrate the importance of resolution in accurately reproducing this eddy field using models of 0.4°, 0.2° and 0.1° resolution. However, the role of subgrid scale parameterisations and numerical methods is more subtle. There is a growing appreciation of the importance of the eddy field in determining the physical oceanographic properties of the basin, both the mean and fluctuating components, at the surface and at depth. A correct eddy field is crucial in setting key features such as the Gulf Stream separation, northward penetration, formation of the Azores Current, the subpolar front and the general gyre circulation. Eddies play a particularly important role in mixing, for example determining mixing and stratification in the Labrador Sea through baroclinic, baroclinic–barotropic and convective eddies, and in setting the flow of energy between the density field and the mean circulation. Models with resolution that permits or resolves motions at the Rossby radius have the potential for a realistic eddy field and represent a ‘threshold to be crossed’ in ocean modelling capability, which has now been crossed in many dynamical studies. However, as is discussed further below, ocean models used for biogeochemical studies, and especially those used as the ocean components of an Earth System Model, have not generally crossed this threshold, despite the well-established link between mesoscale eddies and oceanic production. The computational constraints are simply too great, since the CPU costs increase roughly as the cube of the resolution refinement factor and the storage costs as its square. Hence, the subgrid scale parameterisation of mesoscale eddies represents an important area of research, and the North Atlantic has provided the natural laboratory for this.
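As a rough illustration of why this threshold is so hard to cross, the sketch below applies the cost scalings just quoted to the nominal 1°, 1/4° and 1/12° grid sizes; the absolute numbers are arbitrary, only the ratios are meaningful, and time-stepping and I/O details that modify the exponents in practice are ignored.

```python
# Relative cost of refining a baseline ~72 km (1 degree) configuration,
# assuming CPU cost ~ refinement**3 (two horizontal dimensions plus a
# proportionally shorter time step) and storage ~ refinement**2.
baseline_dx = 72.0  # km, nominal 1 degree grid in the North Atlantic

for name, dx in (("1 degree", 72.0), ("1/4 degree", 18.0), ("1/12 degree", 6.0)):
    refinement = baseline_dx / dx
    cpu = refinement ** 3
    storage = refinement ** 2
    print(f"{name:12s} refinement x{refinement:4.1f}  "
          f"CPU x{cpu:7.0f}  storage x{storage:6.0f}")
```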
Of particular note are the parameterisations of Gent and McWilliams and of Fox-Kemper et al., which attempt to account for the mean transport component of the eddy flux and the up-gradient eddy transport. The use of GM has greatly improved the physical simulations of non-eddy-resolving models, but many problems remain, notably in the Gulf Stream separation and the northward Gulf Stream excursion. The impact of this on the modelled biogeography and biogeochemical processes in the North Atlantic has yet to be established, and this is an important consideration in EURO-BASIN. In the case of eddy permitting models, subgrid scale parameterisation focuses on the submesoscale and is largely an element of model stabilisation and tuning, with the aim being to achieve both accurate statistics in the eddy field and well represented mean properties. Models tend to employ combinations of Laplacian and biharmonic operators; however, a well justified parameterisation based on submesoscale physics is currently lacking. Subgrid scale parameterisation is a particular issue in coupled ocean–shelf models, since the dominant scales change dramatically at the shelf edge, to the extent that a model may change from being eddy permitting in the open ocean to non-eddy permitting on-shelf. This has two specific implications: the interpretation of results in the two regimes needs to take this into account, and the physical interpretation of ‘sub-grid scale’ changes, and so should the parameterisation (e.g. whether a simple depth-dependent horizontal eddy diffusivity/viscosity is appropriate). However, as noted by Holt and James, the treatment of horizontal diffusion is “one of the least well-established areas of shelf-sea modelling and has received scant attention compared with the extensive literature on vertical turbulent transport”. More than in any other ocean region, the North Atlantic is characterised by its diverse range of mixing regimes, which largely set the scene for its biophysical interaction, and so need to be carefully considered in any model. The energetic mixing/vertical transport processes include tides, wind mixing, mesoscale eddies, deep winter convection, and coastal upwelling. The North Atlantic is a region of exceptionally energetic tides, and these are amplified on the continental shelves of the North, Celtic and Irish Seas and in the Bay of Fundy and Hudson Strait to give the largest tidal amplitudes globally. Shelf seas, e.g. the North Sea and Georges Bank, show patterns of well mixed and seasonally stratified waters set by the criterion of Simpson and Hunter.
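A minimal sketch of how such a tidal-mixing classification might be computed is given below, using the stratification parameter log10(h/u³), with h the water depth and u a tidal current amplitude; the critical value separating mixed from seasonally stratified water is an assumed, regionally tunable constant rather than a definitive threshold, and the example sites and values are invented for illustration.

```python
import math

def simpson_hunter(depth_m, tidal_speed_ms):
    """Stratification parameter S = log10(h / u**3), with h in m and u in m/s
    (units inside the logarithm are ignored, as is conventional)."""
    return math.log10(depth_m / tidal_speed_ms ** 3)

def regime(depth_m, tidal_speed_ms, s_crit=2.7):
    """Classify a water column as tidally 'mixed' or 'seasonally stratified'.
    s_crit is an illustrative threshold and would be tuned regionally."""
    s = simpson_hunter(depth_m, tidal_speed_ms)
    return ("mixed" if s < s_crit else "seasonally stratified"), s

# Hypothetical examples: a shallow, tidally energetic site versus a deeper,
# weaker-tide site.
for name, h, u in (("shallow, strong tides", 30.0, 1.0),
                   ("deeper, weak tides", 120.0, 0.3)):
    label, s = regime(h, u)
    print(f"{name:22s} S={s:4.2f} -> {label}")
```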
This in turn sets the benthic/pelagic recycling characteristics of these seas and the balance between light and nutrient limitation. Modelling tides at a basin and shelf scale is comparatively straightforward given their approximation to coastal trapped waves under linear conditions, and basin scale tides are well established from inverse models derived from satellite altimetry. Tides, and other high frequency barotropic waves, are generally not included in global and basin scale models, but their inclusion, directly or at least through a parameterisation, is a prerequisite for a model that aims to simulate both the open ocean and shelf sea regimes. In a model with a fixed vertical grid, including tides would be expected to result in spurious diapycnal mixing, and hence a deterioration of water mass properties. Time varying vertical coordinates and a re-mapping vertical advection approach may address this, and this approach has recently been incorporated into the NEMO model. A primary consideration in tidal modelling is that the benthic boundary layer is well resolved. In mid- and high latitude regions the cyclonic component of the boundary layer is very thin. Along with the need to resolve sharp pycnoclines, this is one motivation for the use of terrain following coordinate models in tidally active shelf seas, such as those bordering the North Atlantic. Difficulties tend to arise where the boundary layer meets stratification, and accurately modelling the resulting sporadic diapycnal mixing is problematic. The northern North Atlantic is an exceptionally windy region, comparable to the northern North Pacific and Southern Ocean in annual mean wind stress. This leads to exceptionally deep mixed layers, which can be particularly challenging to model. While monthly mean wind stresses can provide a reasonable representation of the seasonal evolution of the mixed layer depth, it is well known that accurate representation of the mixed layer dynamics requires high frequency atmospheric forcing, ideally resolving the inertial period; otherwise wind stresses can be significantly underestimated and phenomena such as inertial shear spiking are not represented. Vertical mixing models fall into three categories: mixed-layer parameterisations; one-equation turbulence models, with a single equation for turbulent kinetic energy (TKE) and a prescribed mixing length; and second-moment models, with a second dynamic equation for some combination of TKE and mixing length. A particular feature of the North Atlantic is the deep convection in northern regions. In the first two of these categories this is treated by an iterative ‘convective adjustment’ process. While this is reasonably successful at modelling the mixed layer depths, the actual turbulence levels occurring with the convection are not necessarily appropriate, particularly at the surface where mixing lengths are limited by the ‘law of the wall’. A second-moment model does not have this limitation, and so is likely to better represent critical turbulence levels, although it still only includes local down-gradient turbulent transport. All three classes of turbulence models have varying success in modelling mixed layer depth, and given its biological importance, significant effort goes into tuning the model to better represent this property. This is a case where the more empirical models have an advantage; the models based on turbulence theory have stronger constraints on acceptable parameter values, for example the closure model of Canuto et al. has ‘no adjustable parameters’.
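To make the iterative ‘convective adjustment’ process mentioned above concrete, the sketch below shows its simplest form: statically unstable adjacent cells are homogenised until the column is stable. It is a schematic of the idea only (density handled directly, equal layer thicknesses assumed), not the scheme used in any particular closure or in NEMO.

```python
def convective_adjustment(density, max_iter=100):
    """Iteratively mix vertically adjacent cells whenever density decreases
    downwards (statically unstable), until the column is stable.
    density[0] is the surface cell; equal layer thicknesses are assumed."""
    rho = list(density)
    for _ in range(max_iter):
        adjusted = False
        for k in range(len(rho) - 1):
            if rho[k] > rho[k + 1]:            # heavier water above lighter
                mixed = 0.5 * (rho[k] + rho[k + 1])
                rho[k] = rho[k + 1] = mixed    # homogenise the pair
                adjusted = True
        if not adjusted:
            break
    return rho

# Surface cooling has made the top cells denser than the water just below.
profile = [1027.2, 1027.1, 1026.9, 1027.0, 1027.3]
print(convective_adjustment(profile))
```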
While this is theoretically pleasing, it is problematic in practice and tends to lead to add-ons such as mixing length limiters, arising because of the essentially non-isotropic nature of stratified turbulence. The local nature of the underlying formulation is also an issue; transport of turbulent properties is only treated as a simple vertical diffusion. While the non-local issue could be addressed with a representation of transport processes such as Langmuir cells and convection, care is needed owing to a more pressing issue, namely numerical diffusion. Advection schemes that are non-dispersive are generally diffusive. This gives rise to spurious numerical vertical mixing that can exceed realistic levels of physical mixing; the last thing many ocean models need is more vertical mixing. Hence, alongside the extensive observational effort in the North Atlantic to improve the parameterisations of mixed layer properties, e.g. in the UK OSMOSIS project, considerable modelling effort is required to minimise numerical diffusion so as to accommodate this improved knowledge, for example building on the methods of Colella and Woodward and of Prather. While the underlying processes determining the mixed layer depths are essentially vertical, they are modified by horizontal transport to the extent that the mixed layer depths are strongly sensitive to horizontal resolution. There is a clear improvement between the 1/4° ORCA and the 1° ORCA, accepting an anomalous mixed layer in the Labrador Sea in the latter. The picture is further improved in the 1/12° ORCA model. The challenge of modelling ocean–shelf coupling lies in the superposition of first-order changes in water depth and a range of locally specific dynamical processes. From an ecosystem point of view, coastal upwelling is the most prominent process in terms of ocean–shelf coupling. While the most productive eastern margin upwelling systems globally are not in this region, the West African and Iberian upwelling systems make an important contribution to the basin-wide production. Again this is primarily an issue of scale. The primary upwelling circulation requires the first Rossby radius to be resolved, whereas the complex secondary circulation, filaments and eddy effects require significantly finer resolution. Internal tides provide an important mechanism for enhanced mixing at the shelf edge, which has been particularly difficult to include in coupled ocean–shelf models. The difficulty arises because of spurious diapycnal mixing at the steep topography. Other specific numerical issues for terrain following coordinate models are the horizontal pressure gradient and horizontal diffusion calculations at the juxtaposition of sloping coordinates, topography and stratification. The relative strength of ocean–shelf exchange, riverine and atmospheric inputs sets the elemental inventory on-shelf. These are augmented by biogeochemical processes such as denitrification and nitrogen fixation. The adjustment time of shelf seas to oceanic conditions depends on this ocean–shelf exchange and ranges from days on narrow upwelling shelves to many years on shelves with limited exchange and weak circulation. Holt et al., in a Northeast Atlantic model simulation, find reasonable agreement with the steady state ‘LOICZ’ approach for nitrate. However, the assumption of a well mixed basin behind this is called into question when salinity is considered: the observed ocean–shelf salinity difference underestimates the ocean–shelf exchange by a factor of 4 compared with the values given by Huthnance et al., indicating that much of the transport occurs without significant lateral mixing with fresher coastal water.
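The kind of steady-state box calculation referred to here can be made explicit with a Knudsen-type volume and salt budget for a well-mixed shelf box exchanging with the open ocean. The sketch below uses made-up fluxes and salinities and is not the calculation of Holt et al. or Huthnance et al.; it simply illustrates how a small salinity contrast implies a large exchange, and hence why the well-mixed assumption matters.

```python
def knudsen_exchange(river_flux_sv, s_ocean, s_shelf):
    """Steady-state Knudsen relations for a well-mixed shelf box:
      volume:  q_out = q_in + R
      salt:    q_in * s_ocean = q_out * s_shelf
    Returns the implied ocean->shelf and shelf->ocean volume fluxes (Sv)."""
    if s_ocean <= s_shelf:
        raise ValueError("requires the shelf box to be fresher than the ocean")
    q_in = river_flux_sv * s_shelf / (s_ocean - s_shelf)
    q_out = river_flux_sv * s_ocean / (s_ocean - s_shelf)
    return q_in, q_out

# Illustrative numbers only: 0.01 Sv of river input and a 0.2 salinity
# contrast between the shelf box and the adjacent open ocean.
q_in, q_out = knudsen_exchange(0.01, s_ocean=35.3, s_shelf=35.1)
print(f"ocean->shelf {q_in:.2f} Sv, shelf->ocean {q_out:.2f} Sv")
```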
Open-ocean and coastal-ocean hydrodynamic models have had somewhat different evolutionary histories, resulting from the different scales and classes of problems in the two regimes. Coastal-ocean models have focused on the requirement for vertical coordinate systems that resolve the benthic boundary layer, a free surface calculation that can accommodate large amplitude waves, turbulence models capable of simulating multiple boundary layers, and the need for accurate open boundary conditions. Notable examples in the North Atlantic context are the ∼1.8 km POLCOMS European shelf model of Holt and Proctor and the multiscale FVCOM model developed for US GLOBEC. In contrast, open-ocean models have focused on the need to preserve water masses during long integrations, the representation of mesoscale eddies, and horizontal coordinate systems on the sphere. These include both regional models, such as those used in the DYNAMO project and the NATL12 North Atlantic model, and global models where the focus of analysis has been the North Atlantic. The choice of horizontal and vertical resolution remains a key determinant of model quality and also of computational and data handling costs. At the basin-wide scale a clear improvement in eddy kinetic energy and Gulf Stream path has been demonstrated as grids are refined. However, the models we consider here are far from convergence, i.e. from reaching the aspirational condition of computational fluid dynamics that the solution is no longer dependent on grid resolution or subgrid scale parameterisation. Those studies that have hinted at convergence have a substantially finer resolution than considered here. In the shelf sea context, a systematic comparison of 9 models covering the North Sea with common forcing does not show a clear improvement with resolution when compared with temperature and salinity observations from the ICES database. The introduction of stochastic properties into the model and the nature of the data mean that increasing resolution does not necessarily improve such model-data comparisons. Whether it leads to a ‘better’ model therefore depends on the questions being asked of the model, and requires a more detailed investigation. Global and basin scale models are now routinely run at resolutions similar to historical shelf sea models, and so are capable of representing on-shelf processes given appropriate process formulations. Similarly, larger area shelf sea models are now run nested within global models to investigate ocean–shelf coupling and basin scale response; and indeed their inadequacies in deep ocean regions are becoming more apparent. Hence, it is now appropriate to look to a unified ocean–shelf modelling system and to blur the distinction between the two. The scientific benefits of this are to remove the uncertainties associated with open boundaries and to allow two-way exchange of information and material. The NEMO model system provides the opportunity for such an approach, owing to its recent developments for shelf sea applications. The practical benefits come through working with a common code structure, traceability between open-ocean and shelf sea model characteristics, and the exchange of ideas between the two scientific communities. These benefits are inevitably offset by the challenges of unified modelling of two distinct
marine environments that largely led to the distinct evolution of ocean and shelf sea modelling in the first place. Simply having the modelling capability in place in a single system is not sufficient to address the ocean–shelf coupling issue. Moreover, computational issues still lend significant benefits to small area regional models, where these are sufficient for the problem at hand. We work with three configurations at two scales: 1/4° global and North Atlantic, and 1/12° northern North Atlantic. The results above show significant improvement as resolution is increased, and the focus of much of the modelling in EURO-BASIN will be a common 1/4° North Atlantic configuration based on NATL025, i.e. with significantly improved physics over the 1° model. However, while this configuration approaches the ‘resolution threshold’ identified above, it does not cross it. Hence, novel physical model development in EURO-BASIN focuses largely on the development of a 1/12° Northern North Atlantic Model (NNAM) building on the ORCA083 NEMO configuration. This model will be coupled to the European Regional Seas Ecosystem Model (ERSEM) and used to explore the effects of crossing this threshold on the biogeochemical processes and biogeography of the North Atlantic at basin scales and with realistic forcing. Our starting point for NNAM is an extraction from the global model spanning the North Atlantic from 25°N to 70°N, chosen to encompass the sub-polar gyre and a large part of the sub-tropical gyre. In particular, the Gulf Stream initiation provides a well posed south-western boundary condition. This model is initially configured in an identical fashion to ORCA083, apart from the use of lateral boundary conditions. Data for these are taken from the ORCA083 model. We then incrementally incorporate features appropriate to the improved representation of coastal seas, which are now described. The representation of the vertical dimension is a contentious issue across all of ocean modelling and one we specifically consider in EURO-BASIN, particularly in relation to ocean–shelf coupling. Geopotential coordinates are the mainstay of open-ocean models, but the refinement of these through partial steps and shaved cells, to better represent the bathymetry and barotropic modes, is an important development. In EURO-BASIN we exploit the generalised vertical coordinate system in NEMO to explore the use of hybrid terrain-following–geopotential coordinates, to gain the advantages of both in a basin scale model spanning the deep ocean to the coast. Tidal dynamics, both from gravitational forcing and open boundary conditions, will be implemented following the NW European shelf application of NEMO, along with the Generic Length Scale (GLS) turbulence model with the parameters suggested by Holt and Umlauf. The ERSEM ecosystem model will be forced by river and atmospheric nutrient inputs and an inherent optical property specification following Wakelin et al. This will realise a fine resolution hybrid ocean–shelf model of the northern North Atlantic, clearly traceable to state of the art ocean and shelf sea models. This will allow us to explore the impact of the many resolution dependent issues on the ecosystem.
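As an illustration of what such a hybrid vertical grid involves, the sketch below builds a single column of layer interfaces that are terrain following (sigma-like) in shallow water and revert to fixed geopotential levels below an assumed transition depth; it is a schematic of the general idea only, with arbitrary layer counts and an arbitrary transition depth, and does not reproduce the specific coordinate options available in NEMO.

```python
def hybrid_interfaces(depth_m, n_levels=10, transition_m=200.0):
    """Interface depths for one water column: uniform terrain-following
    (sigma) layers where the water is shallower than 'transition_m';
    otherwise sigma layers down to the transition depth and uniform
    geopotential layers below it. Purely illustrative."""
    if depth_m <= transition_m:
        return [depth_m * k / n_levels for k in range(n_levels + 1)]
    n_sigma = n_levels // 2
    upper = [transition_m * k / n_sigma for k in range(n_sigma)]
    lower = [transition_m + (depth_m - transition_m) * k / (n_levels - n_sigma)
             for k in range(n_levels - n_sigma + 1)]
    return upper + lower

for h in (50.0, 500.0, 4000.0):   # shelf, slope and open-ocean columns
    zw = hybrid_interfaces(h)
    print(f"H={h:6.0f} m:", [round(z, 1) for z in zw])
```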
Fig. 6 shows an early stage in this process – a section across the eastern North Atlantic at 51°N for the global and regional 1/12° models, differing only in that the latter uses the GLS turbulence model and is forced by boundary conditions from the former. This demonstrates an improved thermocline depth and thickness using the GLS model and corresponding parameters. A further detailed investigation is required of the implications of this scheme, particularly in the context of deep winter mixing and seasonality in tidally mixed waters, and whether it degrades other aspects of the solution. The overarching concept of BASIN requires the investigation of the biogeochemistry of both shelf seas and the open ocean, along with the connections between them, at the scale of the whole North Atlantic. Alongside temperature and currents, primary production at the base of the food web, zooplankton as a food source for fish, and dissolved oxygen concentration are important properties that need to be realistically simulated to drive higher trophic level models. A key question is how climate variability and change, and their consequences, will influence the seasonal cycle of primary productivity, O2, trophic interactions, and fluxes of carbon to the benthos and the deep ocean. Representing biogeochemistry and ecosystems in ocean general circulation models and shelf sea models remains an ongoing challenge given the complexity and diversity seen in marine systems. Nowhere is this more the case than in the North Atlantic, with its seasonal mid- to high latitude regimes characterised by ‘boom and bust’ spring bloom dynamics, and oligotrophic subtropical gyres dominated by microbes. The basin is surrounded by diverse marginal regions and shelf seas. These include eastern boundary upwelling regions, regions strongly influenced by western boundary current intensification, broad tidally active shelves, polar seas where seasonal ice cover dominates the biogeochemical cycles, and regions dominated by riverine inputs and coastal currents, where terrestrial inputs of nutrients and CDOM play an important role. Historically, in a similar fashion to the physical modelling community, the open ocean biogeochemical and shelf sea ecosystem modelling communities have developed independently, focused around different goals, but are now starting to converge. Driven initially by the international JGOFS program and more recently by the climate change agenda, open ocean modelling has primarily focused on biophysical interactions and the quantification of the biological carbon pump. At the same time, shelf sea modellers were developing models with an initial focus on nutrient cycling and eutrophication in the coastal zone. Alongside this, the European Regional Seas Ecosystem Model was being developed as, what in today’s jargon is termed, an ‘end to end’ model for the North Sea, originally representing a foodweb that included plankton, benthic fauna and fish. Underlying all these models is a commonality of approach, in that all the biological components have been aggregated and abstracted into functional groups, which represent the ecosystem in terms of pools of elemental mass rather than individual organisms or species. Marine ecosystems are complex, non-linearly connected systems with emergent behaviour that is not simply a function of their physical environment. Hence, an ecosystem model should ideally have sufficient ecological flexibility to allow this behaviour to manifest. In all the models considered here the trophic connections are fixed and the
interactions are defined with fixed but uncertain parameters, which are strongly dependent on the definition of the functional groups. The models produce trophic interactions that adapt to their physical environment by channelling mass through different components of the model ecosystem, but are limited by the inability of a fixed foodweb to self-organise. The first attempt to meet the challenge of modelling basin-scale ecosystem dynamics in the North Atlantic using an explicit ecosystem model in combination with a GCM was carried out 20 years ago by Sarmiento et al. Using the NPZD model of Fasham et al. coupled to a 2° resolution GCM, comparison of predicted phytoplankton with satellite-derived chlorophyll showed “excellent agreement … in terms of basin scale pattern”. The results highlighted how physical forcing drives spatial patterns in marine ecosystems, as had been previously demonstrated in regional modelling studies. This early work supports the paradigm of biophysical interaction through physical controls of nutrient resupply, in this case by seasonally varying mixing and upwelling. Nevertheless, there were problems, including the timing and magnitude of the spring bloom in northern latitudes, and phytoplankton concentrations an order of magnitude too low in the subtropical gyre and too high in the equatorial upwelling region. The authors attributed most of these model-data mismatches to problems associated with the physics of the GCM, and hence the focus in Section ‘State of the art and challenges for physical models of biophysical interaction in the North Atlantic’. The importance of the ecosystem representation was, however, also acknowledged. In an accompanying paper, in which a detailed analysis of the GCM results for Bermuda station “S” was carried out, Fasham et al. noted the critical importance of the zooplankton in understanding ecosystem dynamics and the need for observational data to underpin the associated parameterisations. All of these issues still persist with today’s models, in spite of higher resolution physics, more complex foodweb descriptions and improved parameterisations based on a better understanding of the underlying processes. We explore below how biogeochemical modelling of the North Atlantic has progressed since this pioneering work, and what the new challenges are, given the need for an integrated approach that permits prediction of both lower trophic levels and associated biogeochemistry, and transfer to higher trophic levels such as fish. Despite increases in computing power during the last 20 years, most basin- or global-scale GCMs that incorporate biogeochemistry are still run at a resolution of ∼1°; this is particularly apparent in the array of Earth System Models used in the CMIP5 process. Most regional shelf sea applications are run at scales of order 1/10°, i.e.
an equivalent physical representation to 1° when comparing water depths of 4000 m and 40 m, given that the Rossby radii crudely scale with ∼H^0.5 (and √(4000/40) ≈ 10). Hence, many of the problems whereby biogeochemical predictions are compromised by model physics remain, notably excess chlorophyll in equatorial upwelling areas, too low production in the oligotrophic gyres and, in the shelf seas, the timing and depth of stratification. While the paradigm that stratification controls nutrient supply, and hence phytoplankton production, generally holds on seasonal timescales, it breaks down on interannual timescales, in that there is ‘at most a weak correlative relationship’ between interannual variability in upper ocean stratification and primary production in the subtropical gyre of the North Atlantic. It is not sufficient to just consider the barrier preventing nutrient resupply; the processes driving this must also be considered, namely the wind and buoyancy driven mixing and lateral transport. Given the importance placed on mesoscale features in the physics of the North Atlantic, one obvious solution is to increase the grid resolution. The importance of mesoscale physics in controlling new production and associated biogeochemistry is well known. Oschlies and Garçon used a 1/3° North Atlantic GCM in combination with an NPZD model and found that, despite representing eddy-induced enhancement, primary production remained too low in the subtropical gyre. It is possible to go yet further, as it is known that submesoscale vertical motions can have profound effects on the structure and function of plankton ecosystems. Increasing resolution to represent submesoscale physics, Lévy et al. used a 1/54° circulation model to study gyre circulation in a closed rectangular section of the North Atlantic. A strongly turbulent eddy field emerged that significantly affected the overall circulation pattern. Furthermore, Lévy et al.
show that locally increased phytoplankton growth induced by vertical sub-mesoscale dynamics can be compensated by large scale effects on the thermocline and nutricline depths resulting from non-linear scale interactions. In this case the phytoplankton production is in fact decreased in the sub-polar and sub-tropical gyres. Shelf sea simulations that permit eddies are rare, and those that have been conducted tend to be of limited area and duration. While mesoscale eddies are commonly found in shelf seas, their role and prevalence are less clear in these regions than in the open ocean, particularly away from fronts. This arises from a limited observational base, particularly as remotely sensed methods are less effective in this case. Again, computational restrictions prevent the routine use of eddy permitting/resolving resolutions, and we must turn to subgrid scale parameterisations, for example of submesoscale physics, in an attempt to represent these processes in both the open ocean and shelf sea contexts. Beyond improved resolution and eddy processes, it is also necessary to realistically parameterise vertical mixing and the associated boundary layer dynamics. For example, the timing and amplitude of the spring phytoplankton bloom, which is such a characteristic feature of the northern North Atlantic, is sensitive to wind in the late winter/early spring. The largest blooms are seen under conditions of decreased storm intensity, which give rise to an early stratification of the water column and favourable light and nutrient conditions for phytoplankton growth. Accurate representation of synoptic scale atmospheric variability is required in order to simulate short-term variability in the physics, which may help not only in predicting bloom dynamics but also other features, such as realistic levels of primary production in the subtropical gyres and the timing of the spring bloom in shelf seas. Alongside the forcing, the vertical mixing processes themselves must be accurately modelled, for example to accurately simulate production in the ‘deep chlorophyll maximum’. While there has been substantial progress in turbulence modelling, accurately modelling mixing in strongly stratified conditions remains a challenge owing to its episodic and non-local nature. A process that is particularly difficult to parameterise, and yet critical in the northern North Atlantic, is deep convection. Deep convection shows strong interannual variability. It has been suggested that deep convection can sustain a viable phytoplankton population within the convective mixed layer during winter, a supposition that is supported by model studies and observations. Even though the water column within the deep mixed layer is generally homogeneous, the variable nature of deep convection can introduce heterogeneity on shorter timescales. While the retraction of the mixed layer between two periods of deep convective mixing may take days, primary production can react much more quickly and lead to small localised blooms in the absence of stratification, prior to the deep mixing re-homogenising the water column. Process studies using a 2D non-hydrostatic convection model coupled to a simple phytoplankton IBM have indicated that low concentrations of viable phytoplankton can indeed be sustained in a convective regime with local short-lived growth events. These process studies further indicated that, while the reduction in mixing depth towards spring leads to the expected increase in surface phytoplankton concentration, the mixed layer integrated biomass does not increase, as the higher concentration is compensated by the reduction in volume.
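The compensation between concentration and mixed-layer volume described here is simple bookkeeping, as the sketch below illustrates with arbitrary numbers: shoaling the mixing depth raises the mean concentration while leaving the column-integrated biomass unchanged (any net growth aside).

```python
# A column-integrated biomass B spread over a mixed layer of thickness h
# gives a mean concentration C = B / h: shoaling raises C, not the integral.
B = 40.0                          # mg chl m-2, arbitrary illustration value
for h in (400.0, 200.0, 50.0):    # mixing depth shoaling towards spring (m)
    print(f"h={h:5.0f} m -> C = {B / h:5.2f} mg chl m-3 (integral {B} unchanged)")
```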
A comparable picture was found by Backhaus et al. at station M, who measured winter chlorophyll in 1999 of the same order of magnitude as that of the spring bloom in 1997. These findings point towards a potentially underestimated pool in the carbon budget that, being driven by submesoscale phenomena, is not well represented in basin-scale ecosystem models. To capture the winter phytoplankton dynamics and to improve the prediction of spring bloom onset, process-based rather than state-based parameterisations could provide a way forward. In this context, net surface heat flux, commonly used to estimate conditions of deep convection, has consequently been proposed as a better indicator of phytoplankton growth conditions than the mixed layer depth. Sensitivity of ecosystem dynamics to model physics may be particularly acute for complex models, e.g. those that incorporate multiple plankton functional types (PFTs). Sinha et al. implemented one such model, PlankTOM5.2, separately into two 1° global GCMs, with identical ecosystem parameterisations and forcing in each case. Although globally integrated bulk properties, such as primary production and chlorophyll biomass, were similar, the predicted distributions of individual PFTs varied markedly between the two simulations. Regarding the North Atlantic, relatively high mixing in one GCM led to dominance by diatoms, whereas a mixed phytoplankton community prevailed in the other GCM. The results highlighted that complicated models have more degrees of freedom, and so a greater variety of responses to environmental conditions. A particular challenge then is how to assess the skill of the biogeochemical model independently of the physics. It is quite possible that inadequate physics is masking the skill of the biogeochemical models. One way forward is the retrospective analysis of large data sets to determine robust relationships between biogeochemical or ecological parameters, for example the robust empirical relationships between chlorophyll concentration and phytoplankton size classes. Phytoplankton lie at the heart of the marine biogeochemical system and of the challenge of modelling such systems; they drive the transformation of C, N, P, Si and Fe from inorganic to organic forms, resulting in the decoupling of the carbon and nutrient cycles via heterotrophic biological activity and remineralisation processes. Changes in phytoplankton community composition alter the carbon pathways through the food web. The community structure also dictates the magnitude of the vertical flux of organic material to the mesopelagic and benthos, and its structure and stoichiometric composition. Consequently, the inclusion of multiple phytoplankton PFTs such as diatoms, coccolithophores and picoplankton is an obvious choice for modelling the diversity associated with the North Atlantic ecosystem. Splitting phytoplankton between diatoms and non-diatoms is a common strategy. Diatoms dominate the spring bloom at northerly latitudes in the North Atlantic and can lead to substantial particle export that is transferred efficiently through the mesopelagic zone. This phytoplankton group also provides food for mesozooplankton, which are in turn linked to higher trophic levels such as fish. Fortunately for modellers, diatoms are the one phytoplankton type which is relatively straightforward to parameterise in models because, uniquely, they utilise silicate for growth.
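A minimal sketch of why silicate makes diatoms comparatively tractable: growth limitation can be written as the minimum of Monod terms over the required nutrients, with silicate entering only for the diatom group. The growth rates and half-saturation constants below are arbitrary illustrative values, not parameters of any of the models discussed.

```python
def monod(conc, half_sat):
    """Michaelis-Menten / Monod limitation factor in [0, 1]."""
    return conc / (conc + half_sat)

def growth_rate(mu_max, nutrients, half_sats):
    """Liebig-style growth: maximum rate times the most limiting nutrient."""
    limitation = min(monod(nutrients[n], half_sats[n]) for n in half_sats)
    return mu_max * limitation

# Illustrative values (rates in d-1, concentrations in mmol m-3).
nutrients = {"NO3": 4.0, "Si": 0.3}
diatom = growth_rate(1.2, nutrients, {"NO3": 0.5, "Si": 1.0})
small_phyto = growth_rate(0.8, nutrients, {"NO3": 0.2})
print(f"diatom growth      {diatom:.2f} d-1 (silicate limited)")
print(f"non-diatom growth  {small_phyto:.2f} d-1")
```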
The characteristic spring diatom bloom in the North Atlantic has therefore been, by and large, successfully reproduced in biogeochemical GCMs and shelf sea models. The details of amplitude, timing and duration remain problematic to model, as they are sensitive to the detailed juxtaposition of mixing and light. However, matters are considerably less straightforward when it comes to accurately simulating other phytoplankton groups. A case in point is the coccolithophores. Blooms of Emiliania huxleyi occur seasonally in the northern North Atlantic, appearing as milky seas on satellite images of ocean colour. These organisms flourish during high turbulence in the early stages of the spring succession, as well as during the stratified conditions that follow the spring bloom. Blooms of calcifying plankton can have a significant impact on Total Alkalinity and air–sea fluxes. Using a parameterisation in which coccolithophores compete effectively at low nutrients, Le Quere et al. predicted coccolithophore blooms too far south in the North Atlantic. They concluded that an improved theoretical understanding is needed of the biogeochemical processes driving the growth and fate of PFTs in the ocean. Gregg and Casey used a global GCM to successfully reproduce coccolithophore distributions in the North Atlantic, although not in the North Pacific, where coccolithophores competed successfully with other phytoplankton when both nutrient and light levels were low. They concluded that “divergence among models and satellites is common for such an emerging field of research”. The coccolithophore example is illustrative of an ongoing tension in ecological modelling, namely the a priori requirement to increase complexity in order to achieve realism versus the need to acknowledge the unwelcome ramifications of complexity, which can impact the predictive skill of models. Difficulties include poorly understood ecology, lack of data, aggregating diversity within functional groups into meaningful state variables and constants, and sensitivity of output to the parameterisations in question and their physical and chemical environment. The computational cost of increasing biological complexity generally varies linearly with the number of state variables, compared with the cubic increase associated with refining resolution. Hence this is a secondary consideration compared to whether there is a demonstrable improvement in predictive skill, and also whether the overhead in making scientific interpretations of more complex models is acceptable. An increase in complexity would generally be considered worthwhile if accompanied by a demonstrable and unambiguous improvement in model skill. However, such demonstrations are elusive and there is, as yet, no consensus as to how many PFTs are required to represent key processes. Hence, flexibility in approach is needed in order to select appropriate levels of complexity, depending on the question, geographical area, and research agenda. This suggests that the construction of model frameworks in which models of different complexity can be compared in a traceable fashion is highly desirable. Zooplankton play a pivotal role in the marine pelagic ecosystem, yet representing them in 3-D biogeochemical models remains a major challenge. The most obvious division to make is between micro- and mesozooplankton, both groups being important in the North Atlantic. Microzooplankton may be responsible for consuming as much as half of the primary production in areas of the northern North Atlantic, such as the Irminger Sea and UK coastal waters, and should therefore “be carefully parameterised in models of this region”.
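As an example of the kind of grazing formulation at stake, the sketch below contrasts Holling type II and type III functional responses for a single grazer–prey pair; the parameter values are arbitrary and purely illustrative, and neither form is claimed to be the one used in the models cited. The type III form suppresses grazing at low prey concentrations, creating an implicit prey refuge, which is one reason the choice of functional response strongly affects modelled bloom timing.

```python
def holling_type2(prey, g_max, k):
    """Hyperbolic (type II) grazing rate: g_max * P / (k + P)."""
    return g_max * prey / (k + prey)

def holling_type3(prey, g_max, k):
    """Sigmoidal (type III) response: g_max * P**2 / (k**2 + P**2),
    which suppresses grazing at low prey concentrations."""
    return g_max * prey**2 / (k**2 + prey**2)

# Arbitrary illustration values: g_max in d-1, k and prey in mmol N m-3.
for prey in (0.05, 0.2, 1.0, 5.0):
    g2 = holling_type2(prey, g_max=1.0, k=0.5)
    g3 = holling_type3(prey, g_max=1.0, k=0.5)
    print(f"P={prey:4.2f}: type II {g2:4.2f} d-1, type III {g3:4.2f} d-1")
```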
Mesozooplankton, and especially copepods of the genus Calanus, are central to the food web dynamics of the North Atlantic, impacting on both the biological carbon pump and the transfer to higher trophic levels. Given the enormous disparity between micro- and mesozooplankton in terms of rates of feeding, growth and reproduction, as well as in life history strategies, it is highly questionable whether, as in many NPZD models, they can be meaningfully aggregated into a single zooplankton state variable. Many aspects of the parameterisation of zooplankton in biogeochemical models are in need of attention, including functional response formulations to describe grazing, stoichiometric aspects of nutrition and trophic transfer, mortality terms, and vertical migration with its potential impact on carbon export. One aspect of the zooplankton parameterisation that is of particular relevance to the North Atlantic is the formulation of nutrient excretion. When specified as a linear function of zooplankton biomass, this may lead to unrealistically low rates of nutrient remineralisation via grazers. This problem is felt most acutely in the oligotrophic gyres in GCMs and, in conjunction with issues related to model physics, leads to extremely low predicted primary production in these areas. Significant improvement in the prediction of primary production can be made if excretion is instead described as a function of intake, rather than biomass. However, partitioning the excretion between DOM and POM remains a challenge. All in all, modelling zooplankton represents a major challenge for the future, especially in end to end models where these organisms are important both as consumers of primary production and as prey for higher trophic level organisms. While single life-stage models of zooplankton are probably adequate for biogeochemical cycling, this is not generally the case when coupling to higher trophic levels. In that case, consideration of multiple life stages is needed, and this is increasingly studied in detail using individual based models, as is discussed below. One of the biggest challenges is the representation of the remineralisation processes in biogeochemical models: specifically, the microbial loop including dissolved organic matter, the remineralisation of export in the deep ocean, and benthic biogeochemistry in the shelf seas. The production and remineralisation of particulate export in the deep ocean is discussed in detail in , so it is not discussed here. The microbial loop is particularly important, especially in oligotrophic gyres and seasonally stratified shelf seas. It encompasses a range of largely bacterially driven processes, leading to the remineralisation of dissolved and particulate organic matter, supplying nutrients to the euphotic zone to drive regenerated primary production. The dissolved component is by far the largest pool of organic matter in the sea. In the past DOM has been regarded as a large inert reservoir of carbon, which does not have a strong effect on the export flux of carbon and which, below the ocean’s mixed layer, is excluded from the present day carbon cycle. However, from the first fieldwork in the JGOFS program onwards, studies have revealed that DOM is an active and highly dynamic component of carbon biogeochemical cycles and plays important roles in marine ecosystems; its contribution to the total export towards the deep ocean can reach 20%. However, modelling DOM has always been problematic because of the many processes associated with its production and fate, as well as the fact that it has varying composition and lability.
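One common simplification, of the kind surveyed next, is to collapse this variable lability into one or two pools with fixed first-order decay timescales; the sketch below time-steps a single semi-labile DOM pool fed by a prescribed production term, with all rates chosen arbitrarily for illustration and no correspondence to any cited model.

```python
def step_dom(dom, production, remin_rate, dt=1.0):
    """One Euler step (dt in days) of a semi-labile DOM pool (mmol C m-3):
    d(DOM)/dt = production - remin_rate * DOM.
    'remin_rate' encodes an assumed lability (1/timescale)."""
    return dom + dt * (production - remin_rate * dom)

dom = 5.0                     # initial semi-labile DOC, arbitrary
production = 0.2              # supply from excretion/lysis/sloppy feeding
remin_rate = 1.0 / 90.0       # assumed ~90 day lability timescale
for day in range(0, 361, 90):
    print(f"day {day:3d}: DOM = {dom:5.2f} mmol C m-3")
    for _ in range(90):
        dom = step_dom(dom, production, remin_rate)
```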
with its production and fate, as well as the fact that it has varying composition and lability. Currently there are three main types of representation of microbial loop processes in models. The simplest is the implicit remineralisation form, whereby POM is directly remineralised to bioavailable nutrients at a prescribed rate. The semi-implicit form includes those models that represent both DOM and POM, but with bacteria implicit in the DOM pool. For example, PISCES considers semi-labile DOM and particles of two size classes. This model provides multiple pathways, and hence timescales, for nutrient regeneration. Finally, in the fully explicit form, bacteria are described along with POM and DOM and are allowed to compete with phytoplankton for nutrients. The choice of microbial loop representation is a function of the questions being asked of the models. Both MEDUSA and PISCES were designed to quantify the global ocean carbon cycle in both the global ocean and an earth systems modelling context, and thus require a relatively simple, computationally cheap representation. On the other hand, if we wish to explore the ecological and biogeochemical consequences of microbial processes then we need to explicitly resolve bacteria in the model. Several modelling studies have suggested that the inclusion of DON cycling can have important implications for the regulation of nutrient cycling. Salihoglu et al. showed that a missing bacterial component in the model can result in an important discrepancy between model and observations, specifically the simulated DON pool being too high during the period following the spring bloom, mainly due to the conversion of particulate organic matter to DON. Even the models that include bacterial compartments predict a strong annual DON cycle. This suggests that the remineralisation or the uptake kinetics of DON are not correctly represented and need to be re-evaluated as more observations become available. Benthic processes and the resulting benthic–pelagic fluxes are highly significant in shelf seas. Modelling studies have calculated that benthic–pelagic fluxes of nitrogen and phosphorus contribute 33% and 35% respectively to the total nutrient budget on the northwest European Shelf, and these estimates compare well with observations. Many physical processes influence benthic–pelagic exchange. Particulate material, settling from the water column, can accumulate in an unconsolidated fluff layer, which is easily remobilised by bottom currents. Dissolved material is exchanged by diffusive processes in cohesive and non-cohesive sediments, whereas both dissolved and particulate material is exchanged by advective transport within non-cohesive sediments. All these processes are spatially dependent on sediment type and hydrodynamics, and affect the biogeochemical functioning of the benthic system. The extent to which they influence shelf-wide nutrient and carbon budgets is largely unknown. Currently there are two main approaches to modelling benthic processes. The first is a simple first-order remineralisation of the detritus reaching the seabed to define a benthic nutrient flux. The second involves explicit models of benthic biota and benthic nutrient cycling, which have been developed for temperate European coastal waters. This has led to the development of coupled benthic–pelagic models, whereby the role of benthic nutrient cycling in controlling pelagic ecosystem dynamics can be explored.
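As a point of reference for the first of these two approaches, the sketch below shows what such a first-order benthic closure can look like in practice: detritus reaching the seabed accumulates in a single benthic pool and is returned to the bottom water as dissolved nutrient at a prescribed rate. The state variable names, units and the rate constant are illustrative assumptions, not the parameterisation of any particular model cited here.

def benthic_pool_step(B, F_det, k_remin=0.01, dt=1.0):
    """Advance a benthic detrital pool by one time step.

    B       : benthic detrital nitrogen pool (mmol N m-2), illustrative units
    F_det   : detritus flux reaching the seabed (mmol N m-2 d-1)
    k_remin : assumed first-order remineralisation rate (d-1)
    dt      : time step (d)
    Returns the updated pool and the benthic-pelagic nutrient flux (mmol N m-2 d-1).
    """
    F_nut = k_remin * B                 # nutrient returned to the bottom water layer
    B_new = B + (F_det - F_nut) * dt    # pool grows by deposition, shrinks by remineralisation
    return B_new, F_nut

# With a constant settling flux the pool equilibrates where the return flux equals deposition.
B = 0.0
for day in range(2000):
    B, F_nut = benthic_pool_step(B, F_det=0.5)

In the second, explicit approach the single pool and constant rate would be replaced by resolved benthic biota and nutrient cycling.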
From a modeller's perspective the benthic system is severely under-sampled and the benthic models suffer from a basic lack of information on even the seasonal cycles of the ecology and biogeochemical processes. While this knowledge gap is beginning to be addressed, it remains the major limitation to benthic model evaluation and future development. Modelling biogeochemical cycling in the ocean is a complex business and a number of other factors are important. The use of multiple currencies, and the associated stoichiometry, is an ongoing topic for model development. Most biological models used in GCMs include a single macronutrient, usually N or P, the exception being the ERSEM family of models, which has multiple currencies and variable carbon and nutrient stoichiometry. The case for including both arises when modelling either shelf seas or regions where there are anoxic zones, with associated denitrification; but the latter are not generally observed in the open ocean of the North Atlantic. Nevertheless it may be the case that, unlike in the South Atlantic, the North Atlantic subtropical gyre is depleted in phosphate, possibly as a result of nitrogen fixation enhanced by iron deposition in Saharan dust. Utilisation of dissolved organic phosphate then becomes an important source of nutrients for primary production in this area. Aeolian dust fluxes have increased during the latter half of the 20th century and models predict that this trend may continue in future. The resulting stimulation of primary production may enhance the biological pump in iron-fertilised regions. However, matters are complicated in that, in order to reproduce biogeochemical feedbacks associated with aeolian dust, models should incorporate the contrasting effects of dust on different microbial groups, as well as the associated competitive interactions with phytoplankton. Plankton are typically represented in models as belonging to one of two discrete trophic categories: autotrophic phytoplankton or heterotrophic zooplankton. However, the mixotrophs that are found in all aquatic environments, and play an important role in determining ecological and biogeochemical dynamics, are generally disregarded in ecosystem models. Zubkov and co-workers found that the photosynthetic phytoplankton accounting for more than 80% of the total chlorophyll in regions of the North Atlantic were also responsible for 40–95% of the total grazing upon bacteria. These results may have profound implications for our understanding of carbon and nutrient cycling in the North Atlantic and provide a major challenge for future model development. An ever-present concern of ecosystem studies is the availability of an appropriate observation base. However, in addition to scientific cruises and moorings, the last two decades have seen the emergence of new techniques, such as ocean colour satellite sensors and ARGO floats, which provide continuous monitoring of key biogeochemical variables and thus open up the possibility of an assimilative approach to ecosystem modelling. Finally, we should take note of a comment made by the great marine ecosystem modelling pioneer Gordon Riley 60 years ago, that a “thorough knowledge of the physiology and ecology of particular species and ecological groups” is a prerequisite for effective ecosystem models. Although our understanding of the competitive interactions of PFTs, as mediated by environment, is improving, the extent to which we are in a position to formulate parameterisations for reliable prediction based on this knowledge remains an open question. A fundamental challenge, arising from the issues discussed above, is to find the appropriate level of complexity that
will enable ecosystem models to have optimal skill in simulating and predicting biogeochemical fluxes, and also providing appropriate and accurate fields for coupling to HTL models.The ideal level of ecosystem complexity to study ocean biogeochemical processes is an ongoing debate, and as a result many contrasting models are used in the North Atlantic.These models differ not only in their structure, but also in their formulation and the parameterisation of key processes, such as phytoplankton growth, trophic transfer and export of organic matter to the deep ocean.Although diversity in approach can be desirable, a coordinated strategy for comparing models of different complexity should help improve the models, help identify key uncertainties, and ensure compatibility with parallel efforts.To try and untangle these problems, a traceable hierarchy of models is a useful concept to consider and this is the approach we adopt in EURO-BASIN.We use NEMO as the general circulation model, with common forcing to harmonise the physical environment for the various ecosystem models and so facilitate the analysis and inter-comparison.Following this approach we will make an ensemble of simulations using a range of simple and more complex ecosystem models.This will allow us to build up a multi-model, multi-scenario ‘super-ensemble’.To describe the planktonic ecosystem we have chosen to compare intermediate complexity with a more complex plankton functional type model.PISCES considers two phytoplankton and two zooplankton, with an explicit semi labile DOM and two particle sizes.Using N as the main currency, as well as P, Si and Fe, it also simulates the C and O cycles.The meso pelagic model takes into account particle dynamics between the two sizes, and exchanges between particles, DOM and inorganic pools.MEDUSA is a modestly complex ecosystem model, it includes two phytoplankton, two zooplankton and three nutrients, and is specifically designed for open ocean applications.ERSEM was developed as a generic lower-trophic level/biogeochemical cycling model.ERSEM is an intermediate/high complexity model originally designed for simulating shelf seas biogeochemistry and ecosystem function.ERSEM simultaneously describes pelagic and benthic ecosystems in terms of phytoplankton, bacteria, zooplankton, zoobenthos, and the biogeochemical cycling of C, N, P, Si.By running these different models in the same physical environment we can begin to quantify structural and parameter uncertainty.This diversity of models is required for two reasons.First they extend the range of scenarios and therefore give a constraint on the combined parameter and structural uncertainty.Second, and perhaps more importantly as we are still learning how to model these processes, they inform future model development through the comparison of approaches with an in-depth analysis of the biogeochemical fluxes involved and through validation against available in situ and remote sensing data.Here, we illustrate the approach using existing model simulations and compare results from three global applications of these three LTL models.Each exists within a similar, but not identical physical framework, so we limit our discussion here to a qualitative assessment.Fig. 
9 shows a meridional surface chlorophyll transect of the North Atlantic for all three models and SeaWifs ocean colour based chlorophyll.In all cases, between 25°N and 50°N the models reproduce the spatial trends and concentrations of chlorophyll quite well, but underestimate the chlorophyll concentrations south of 25°N.The largest differences between the models occur north of 50°N; an explanation for this has yet to be established.Fig. 10 shows a comparison of annual mean surface chlorophyll and phytoplankton community structure for the three models in terms of diatoms and non-diatoms for the period 1998–2004 for the three models.In addition we also show the equivalent satellite phytoplankton community structure data product derived from SeaWifs.All the models produce the general observed north–south trend in chlorophyll concentration and diatom distribution, with both chlorophyll and diatoms dominating in the north of the domain.This suggests to a first order the emergent property of this simple community structure functions well in all three models.However, the modelled diatom fraction appears overestimated in all three models compared with the satellite product.The question remains whether or not these discrepancies are a function of the physical model, the biogeochemical models or some combination of both, alongside observational uncertainty.The impact of the coarse scale physics is apparent in all the simulations, an aspect that will be specifically addressed in EURO-BASIN.The satellite chlorophyll clearly shows that the high chlorophyll concentrations in the North Atlantic lie to the north of the Gulf Stream.In the models the high chlorophyll extends further south, showing a much more diffuse boundary with the sub-tropical gyre, which in turn is too far south in all the models.This is most likely due to the poor representation of mesoscale physics on the northern boundary of the gyre and highlights a major challenge: that of disentangling the performance of the biogeochemical model from that of the physics.It may be in many cases that the performance of the biogeochemical models is masked by that of the physics.There is a need for metrics that assess the fidelity of the biogeochemical processes independently of the physics, which points to the role of meta-analysis to define robust testable global relationships between biogeochemical variables.To illustrate this point we draw on a meta-analysis of over 3000 observations of collocated HPLC chlorophyll and accessory pigment data, which shows that there is a robust empirical relationship between chlorophyll concentration and the fraction of diatoms in the community.Diatoms dominate at chlorophyll concentrations above 1 mg chl m−3.Fig. 
11 shows density plots illustrating the relationship between chlorophyll and the % diatom fraction for all three models, and SeaWifs as a reference.In all cases the models capture the observed response of increasing diatom fraction with increasing chlorophyll concentration; however MEDUSA and PISCES systematically over-estimate the diatom fraction.The crucial point is not the performance of the respective models per se, but the fact that we can see a general response of the plankton models that is independent of the hydrodynamic model.Alongside models focusing on biogeochemistry and LTLs, such as those considered above, are models that aim to capture other aspects of the ecosystem in some detail.Examples include models that represent foodwebs, species behaviour and interaction, and the structure and function of the whole ecosystem.As with physical models, the different characteristics and questions relevant to open ocean and coastal ecosystems have led to a diversity of modelling approaches that is still growing rapidly.Moreover, due to the societal and economical value of many exploited living marine resources, a substantial effort has been devoted over the last decades to the development of specific population models for the management of fisheries.In the open ocean, the focus is on large pelagic and highly migrant species, like tunas and billfishes, which feed opportunistically on a large range of micronektonic forage species.In shelf seas, exploited species include bottom, demersal and small to medium size pelagic species.These feed on benthic organisms as well as zooplankton.Past food web studies have tended to treat the upper and lower trophic levels separately; the use of detailed simulations of physical dynamics requires some limitation on biology.This led de Young et al. 
to propose that “rather than model the entire ecosystem we should focus on key target species and develop species-centric models”.The focus of benthos and the upper trophic level studies is often on predatory interactions based on fish diet data.Linear, steady-state, food-web models have been used to represent these complex interactions.This trophic–centric approach does not include the dynamics of individual species and neglects the physical processes.Steele and Gifford argue that these two sets of simplifying assumptions are complementary and answer different questions about the dynamics of individual populations and the productivity of ecosystems.Recently, in response to the desire to move towards an ecosystem-based approach to marine management, end-to-end models representing the entire trophic structure and physical components of the ecosystem at a fine spatial scale have been developed.One approach is to combine aggregated versions of existing food web models of the upper trophic levels, with NPZD formulations of the microbial web, and with simplified representations of the main physical forcing.The critical issue is whether the use of functionally defined groups or guilds, rather than species, as variables, can achieve portability, while retaining adequate realism.The small pelagic species group in particular is strongly dependent on the abundance of a few copepod species that dominate the mesozooplankton in the North Atlantic Basin.This motivates the development of specific models to study the complex life histories of these zooplankton species.Copepods have several developmental stages from eggs through nauplii and copepodites to adults, as well as a diapauses stage, in deep water over winter.Marked differences exist between species.For example, copepods that inhabit the North Pacific are relatively large and have a single generation per year, as compared to the smaller copepods in the North Atlantic, which undergo several generations per year.A complicating factor in the North Atlantic is that there are two dominant species: Calanus finmarchicus and Calanus helgolandicus, with distinct niches.The former is adapted to the colder temperatures of the northwest North Atlantic, in contrast to Calanus helgolandicus which prefers warmer temperatures and dominate further south and east.Changes in temperature, for example due to climate change and variability, could therefore significantly impact on the distribution of these two species, with potential impacts on the recruitment of Atlantic cod.A number of copepod population models have been developed that target the distributions and production of key species.For example, Carlotti and Radach studied the seasonal dynamics of Calanus finmarchicus in the North Sea using a one-dimensional water column model.Heath et al. 
used a Lagrangian 1D approach, using output from a 3-D hydrodynamic model, to study the dynamics of Calanus in the Fair Isle channel.Three-dimensional approaches have also been adopted, for example, Bryant et al.’s study of the seasonal dynamics of Calanus finmarchicus in the northern North Sea and Stegert et al.’s study of the population dynamics of Pseudocalanus elongatus in the German Bight.Regarding the North Atlantic, a major modelling study was undertaken by Spiers et al., examining the distribution and demography of Calanus finmarchicus.The model followed progression from eggs through six naupliar stages, five copepodite stages and adults.An interesting aspect of the study is that it explored the mechanisms controlling diapause, suggesting that irradiance may be an important queue for both the onset of, and awakening from, diapause.However, the application of population-based models, which represent life history in terms of age and developmental stage of body weight, within biogeochemical models is problematic.There are substantial technical challenges and computational requirements associated with highly resolved population models in 3-D.At a more fundamental level, a significant challenge in modelling species such as Calanus finmarchicus is that many aspects of its biology are poorly understood.The mechanisms involved with diapause provide one good example.Individual Based Models keep track of each individual in a population, in a primarily Lagrangian framework.In these models individuals can be characterised by state variables such as weight, age and length, and they may also allow behavioural strategies to be implemented in a spatial context.This allows the properties of a population to be described by the properties of its constituent individuals.Model validations against data can be done at the individual level; matching the observational approach.Moreover, models based on individuals benefit from having the same basic unit as natural selection.This makes IBMs appealing for addressing behavioural and life history tradeoffs and therefore for studying higher trophic levels, which can have a great behavioural repertoire, in particular in relation to motility.Consequently individual based modelling is used extensively for modelling higher trophic levels in EURO-BASIN.There have been several applications of IBMs to zooplankton in the North Atlantic.Early studies focused on simulating drift trajectories of individual plankton and their growth, survival and reproduction.Models have subsequently been fitted with adaptive traits in order to investigate the consequences for adaptation and population dynamics of different levels of environmental forcing.More recently there have been applications using super-individuals that allow entire populations of zooplankton to be simulated with an individual based representation.For basin or global scale modelling, an exhaustive representation of all mid-trophic level species is unrealistic and unnecessary.It is more appropriate to consider a hybrid approach combining functional groups of forage species and specific detailed population submodels for a few species of interest.On the top of this the approach should also consider the large oceanic predator species, fisheries and associated fishing mortality.Ideally, in such an integrated approach, each functional group would include specific population model representations, either based on Lagrangian or Eulerian approaches.While this vision may appear ambitious and technically challenging, the level 
of computation can be drastically reduced for these specific population submodels, using a 2D or layer-based 3D approach, and degrading the spatial resolution of the physical model.Key components of this integrated approach for MTL modelling already exist or are the subject of ongoing developments.Moreover, there are examples of modelling approaches of MTL functional groups that have been developed to link lower biogeochemical models to population dynamics of large oceanic predators that can be drawn upon.One such approach proposes a representation of basin-scale spatiotemporal dynamics of six functional groups of MTLs, here applied to the North Atlantic.The definition of these groups is based on the occurrence or absence of diel migration between the surface, subsurface and deep layers.Their dynamics are driven by temperature, currents, primary production and euphotic depth simulated by a coupled physical–biogeochemical model.The vertical structure is currently a simplified 3-layer ocean, and to obtain the biomass during the day and night in each layer, the components are summed according to their day and night position.Recruitment, ageing, mortality and passive transport by horizontal currents are modelled within an Eulerian framework, taking into account the vertical migration of organisms.The temporal dynamics are based on a relationship linking temperature and the time of development of MTL organisms, using macroecological principles that define the energy transfer through the biomass size spectrum.Since the dynamics are represented by this well established relationship, there are only six parameters in the model that need to be estimated.The first defines the total energy transfer between primary production and all the MTL groups, while the others are relative coefficients, redistributing this energy through the different components.A notable advantage of this simplified approach is that it facilitates the optimisation of parameters through the assimilation of acoustic data.In particular, the matrix of size distribution coefficients can be straightforwardly estimated using relative day and night values of acoustic backscatter, integrated in each of the three vertical layers of the model.This facilitates the use of different un-standardised acoustic profiles in constraining the model.Models simulating the drift of fish eggs and larvae using Lagrangian approaches have become commonplace in the last few decades, but there are still rather few comparable models for adult fish.The added complexity of addressing the greater behavioural repertoire of adult fish adds challenges to the modelling.With regards to the North Atlantic, models have been developed for the Barents Sea capelin, where the focus has been on simulating the movement from first principles; relatively few IBMs focus on simulating the entire life cycle of fish stocks.Initial attempts were made in this to study the Barents Sea capelin, which illustrated the flexibility of the individual based approach in coupling movement, behaviour with growth, survival and eventually recruitment under different climate scenarios.The distribution of micronekton is a prerequisite for modelling the spatial dynamics of their predators, i.e., the large pelagic species such as tuna and swordfish.The Spatial Ecosystem and Population Dynamics Model uses this distribution to simulate the full life cycle of the large pelagic species from eggs to oldest adults.The SEAPODYM model includes: a definition of spawning, local movements as the responses to 
habitat quality and basin-scale seasonal migrations, accessibility of forage for fish within different vertical layers, and predation and senescence mortality and their change due to environmental conditions. Data assimilation techniques, based on an adjoint method and a maximum likelihood approach, are implemented to assist the parameterisation using historical fishing data. In the North Atlantic basin, albacore tuna has been one of the most exploited pelagic species, and shows a major and steady declining trend during the last 40 years. It is unclear whether this decline is due to overfishing, a shift of fisheries to other target species or changes in environmental conditions. The preliminary application of the SEAPODYM model to this species suggests that the environment has been a strong driver of the observed trend of the last decades. In particular, the model predicts changes in the biomass of micronekton in the tropical region that are linked to changes in temperature predicted by the ocean GCM; this still needs to be validated with observations. The example of Atlantic albacore tuna suggests a combined effect of fishing and bottom-up forcing; these are usually thought to be the main forcings in open-ocean systems. Top-down effects, or trophic cascades, have as yet only been detected in the ecosystems of some shelf and enclosed seas, for example, the Black Sea, the Baltic Sea and parts of the shelf seas of the Northwest Atlantic. But there are now strong indications of top-down control from planktivorous fish on zooplankton in the Norwegian Sea. This suggests that top-down control can be important for basin-scale ocean areas as well. Trophic cascades occur when the abundance of a top predator is decreased, releasing the trophic level below from predation. The released trophic level reacts by an increase in abundance, which imposes an increased predation pressure on the next lower trophic level, and so on. The occurrence of trophic cascades is dependent on temperature and diversity. Frank et al. stated that cold and species-poor areas such as the North Atlantic might readily succumb to structuring by top-down control and recover slowly. In contrast, warmer areas with more species might oscillate between top-down and bottom-up control, depending on exploitation rates and, possibly, changing temperature regimes. Nevertheless, the heavily exploited North Sea does not seem to show any sign of a trophic cascade. Different approaches are necessary to investigate and model the two-way coupling between lower and upper trophic levels within their physical and chemical environment. As noted above, the shelf seas of the northern Atlantic Basin are dominated by small pelagic species, for which the coupling should occur at the zooplankton level that provides the bulk of prey biomass to small pelagics. Then, sensitivity analyses simulating changes in the fishing mortality of these commercial species can help explore the top-down effect of these changes. However, there is often a group of a few species that share the same ecosystem, with their abundance fluctuating according to their own dynamics and in response to environmental variability and top-down factors. Thus, multi-species models of small pelagic populations appear to be necessary to achieve a minimum degree of realism. For the basin-scale pelagic system, where exploited species are at a higher trophic level, a first necessary step would be to shift the closure term in the LTL model to the next trophic level, i.e. to MTLs. These new functional groups can be coupled to zooplankton and POC model variables directly through predation and mortality rates. However, since this parameterisation is very challenging, an alternative would be to use the spatio-temporal dynamics of MTL groups, as already simulated above, to introduce relative variability around the average parameters of zooplankton mortality and POC production that are already estimated in current biogeochemical models. For example, a high biomass of MTL would be translated into an increase in the average mortality coefficient of zooplankton, within a range that guarantees the numerical stability of the simulation.
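A minimal sketch of this relative-variability idea is given below: the quadratic closure mortality of the zooplankton is scaled by the local MTL biomass relative to its climatological mean, with the scaling clipped to a bounded range for numerical stability. The function name, parameter values and units are illustrative assumptions rather than the parameterisation of any specific model discussed here.

def zoo_closure_mortality(Z, B_mtl, B_mtl_mean, m0=0.05, scale_min=0.5, scale_max=2.0):
    """Zooplankton loss to implicit higher trophic levels (mmol N m-3 d-1).

    Z          : zooplankton biomass (mmol N m-3)
    B_mtl      : local, instantaneous mid-trophic-level biomass from the MTL model
    B_mtl_mean : climatological mean MTL biomass used as the reference
    m0         : average quadratic mortality coefficient ((mmol N m-3)-1 d-1)
    """
    scale = B_mtl / B_mtl_mean if B_mtl_mean > 0 else 1.0
    scale = min(max(scale, scale_min), scale_max)   # bound the relative variability
    return m0 * scale * Z ** 2

A fully coupled alternative would replace this scaled closure with an explicit predation term computed by the MTL model itself, as noted above.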
From this extension of ocean ecosystem models to MTL functional groups, a first expected result would be a better representation of zooplankton grazing, integrating spatial and temporal shifts in grazing pressure due to the dynamics of the MTL organisms themselves. In addition, a better dynamical representation of detritus uptake and release by meso- and bathypelagic organisms might be expected. Beyond this, spatial population dynamics models of large marine predator species and their fisheries would need to be coupled to the MTL components through their predation on these groups. Here also the parameterisation of predation rates is challenging, especially if not all the predator species are included in the model. However, as with the coupling between MTL and LTL, a similar alternative could be to work, at least in a first instance, in terms of relative variability, which does not prevent the exploration of the propagation of the top-down signal due to fishing pressure down to the lowest trophic level. Modelling the top-down effects of fishing on oceanic mid-trophic and lower trophic levels requires not only the two-way coupling of these different components of the ecosystem, but first and foremost a correct quantitative estimate of the biomass and spatial dynamics of higher trophic levels under the influence of both environmental variability and fishing impacts. Unfortunately, despite a large effort to develop quantitative approaches for stock assessment over the past 50 years, a large uncertainty remains for many exploited stocks concerning their total biomass and their spatio-temporal dynamics. There is still a long way to go to reconcile the recent progress achieved in physical and biogeochemical/LTL oceanography on the one hand with marine ecology, which focuses on spatial and population dynamics and on quantitative estimates of changes in abundance over time, on the other. The EURO-BASIN project is a strong pluridisciplinary effort towards this goal. Below we summarise the key higher trophic level models applied in the EURO-BASIN project. NORWECOM: The Norwegian Ecological model system (NORWECOM) was originally a biogeochemical model system with two functional groups: diatoms and flagellates. This model has recently been coupled to an IBM for the copepod Calanus finmarchicus and to the planktivorous fish stocks Norwegian spring-spawning herring, blue whiting and mackerel. These developments are part of an ongoing plan to develop this into NORWECOM.E2E, a full end-to-end model system. This model system has recently been applied to simulate the interactions between fish stocks in the Norwegian Sea and their utilisation of common zooplankton resources. Within EURO-BASIN, NORWECOM will be used to address the trophic couplings in the Norwegian Sea and the Calanus component will be integrated with NEMO and ERSEM to study Calanus dynamics within its
entire distributional range.APECOSM The Apex Predators ECOSystem Model is a spatially explicit size based model of open ocean ecosystems, based on a Dynamic Energy Budget approach.It is two-way coupled to the PISCES ecosystem model which in turn is coupled to the 1/4° NEMO North Atlantic physical model.APECOSM’s philosophy is to specify a very generic and robust structure of marine ecosystems from which particular regional ecosystem organisation emerges due to interactions with the environment.It relies on a very few general rules from which the structure of the model and the parameterisations are derived mechanistically.APECOSM represents the flow of energy through the ecosystem with a size-resolved structure horizontally and with time.The uptake and use of energy for growth, maintenance and reproduction by the organisms are modelled according to the DEB theory and the size-structured nature of predation is explicit.Distinction between the epipelagic community, the mesopelagic community and the migratory community that experiences nyctemeral vertical movements and hence transfers energy between the two other communities is also expressed; their habitat depends mainly on the light profile.Thus, size and spatiotemporal co-occurrence of organisms structure trophic interactions.SEAPODYM-MTL Spatial Ecosystem and Population Dynamics Model-Mid-Trophic Levels.As already described above in more detail, this is a three-layer bulk biomass functional type pelagic-ecosystem model combining energetic and functional approaches based on the vertical behaviour of organisms and following a temperature-linked time development relationship.How these models are brought together with the physical and LTL models is summarised in Section ‘Concluding remarks: integrating the EURO-BASIN models’.In order to define the envelope of response to climate change of marine ecosystem function, we must establish a range of scenarios that encompass possible future conditions that are scientifically and societally plausible.Coupled atmosphere–ocean general circulation models provide the best available source of information for this purpose on a global scale, but this information is generally on too coarse a grid scale to be relevant for many regional scale studies, and so limits the application of the models.Moreover, even on a basin scale, mesoscale activity makes up a crucial component of the dynamics of the North Atlantic, and hence potential changes to its physics; this activity is absent in the majority of the ocean components of the current generation of AO–GCMs.Similarly shelf sea processes are not generally represented.Hence, a downscaling procedure is required: the AO–GCM is used to provide boundary conditions for EURO-BASIN models of finer resolution and more appropriate process representation.Alongside the choice of AO–GCM forcing are two important considerations: the emissions scenario and the forecast horizon.The emissions scenarios prescribe the atmospheric concentrations of radiatively active constituents, which in turn determine the radiative forcing of the AO–GCM.These are either derived from a socio-economic ‘story-line’ or prescribed to specific values.The forecast horizon dictates how far into the future the model simulations will be conducted.The crucial issue here in climate change studies is whether a significant signal can be detected against the background of natural variability.This is a crucial factor for the North Atlantic, where this variability is exceptionally large.The uncertainty in future 
projections can then be thought of as a combination of three factors: scenario uncertainty, model uncertainty and internal variability. This is well illustrated, in the global context, by the work of Hawkins and Sutton, which shows how model and “internal variability” uncertainty decrease with lead time while scenario uncertainty increases, and that in moving from a global to a regional scale the model and internal variability uncertainty can substantially increase. They also show that the European region has particularly strong internal variability.
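A minimal sketch of how such a three-way partition can be estimated from a multi-model, multi-scenario ensemble is given below. It follows the spirit of the Hawkins and Sutton decomposition but uses deliberately crude variance estimators; the array layout, the assumption that the averaging window is short enough for the within-window trend to be neglected, and all variable names are illustrative.

import numpy as np

def partition_uncertainty(proj):
    """Crudely partition projection spread into scenario, model and internal components.

    proj : array of shape (n_scenario, n_model, n_year) holding annual means of the
           projected quantity over a fixed, reasonably short lead-time window.
    Returns variances attributable to scenario choice, model formulation and
    internal (year-to-year) variability.
    """
    proj = np.asarray(proj, dtype=float)
    means = proj.mean(axis=2)                  # forced response per (scenario, model)
    var_internal = proj.var(axis=2).mean()     # spread of individual years about that response
    var_model = means.var(axis=1).mean()       # model spread, averaged over scenarios
    var_scenario = means.mean(axis=1).var()    # scenario spread of the multi-model means
    return var_scenario, var_model, var_internal

Comparing the three components as a function of lead time and region reproduces the qualitative behaviour described above.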
When we move to the climate impacts arena we add further aspects of uncertainty arising from, and propagating through, the downscaled models. In practice, a limited number of simulations can at best span only aspects of this uncertainty. Such an approach is nevertheless an important first step and allows us to explore the system’s response to the range of different drivers both qualitatively and quantitatively. However, the usefulness of the results as ‘forecasts of future conditions’ is questionable, as discussed by Skogen et al. The opening question for explorations of climate change impacts tends to be ‘how might anthropogenic climate change impact this process in the future?’. An issue that immediately arises is that the forecast horizon required for the answer to be relevant to policy decisions being considered now is generally much shorter than that required to give a clear answer; i.e. the policy-relevant time scales more closely match those of the natural variability than the longer term trends. For example, the planning cycle for the MSFD is 6 years, so only a projection many such cycles ahead will give a clear climate change signal against the background of natural variability. This is especially the case in regions, such as the North Atlantic, where natural variability arising from the position of the storm track and atmospheric processes such as blocking is so important. Moreover, processes that are themselves non-linearly dependent on this natural variability, such as aspects of ecosystem function, are likely to exacerbate this issue through an exaggerated sensitivity to the details of the variability. This mismatch between the time scales on which we can make clear statements on climate change and the time scales over which decisions need to be made is a grand challenge in climate change impacts work. A possible mitigating effect is that ecosystems can act as integrators of their environmental conditions and so improve signal-to-noise ratios over their forcing, allowing the detection of weaker climate change signals. Hence it is more appropriate to re-frame the question so that climate change and variability are on a more equal footing, and ask: ‘what is the range of possible impacts on this process, given present day statistics of variability and how they might change into the future?’. An appropriate forecast horizon for EURO-BASIN is out to 2040, since this is most relevant for the issues of ecosystem function and their relation to fisheries and climate change mitigation policy. On this basis it is appropriate to use transient simulations here, which run continuously from the present to the future, rather than the ‘time-slice’ approach that is common in many downscaling-type simulations. The forcing we consider must, therefore, treat the atmospheric dynamics and consequent natural variability as accurately as possible, and the analysis needs to explicitly capture the modes of response of the system. For example, inadequacies in the representation of the North Atlantic storm tracks in the AR4 class of models have previously been identified, and whether this is rectified in the CMIP5 models needs to be critically examined. Such biases can have serious consequences when exploring the impact of climate change on the higher trophic levels of the ecosystem. A particular consideration for this study is that the phase of the variability in AOGCM-forced simulations is not constrained by observations, so the longer period modes (such as the Atlantic Meridional Mode; see Grossmann and Klotzbach, 2009) almost certainly will not be in the appropriate phase for a 2040 projection, and the forecast horizon is not sufficient for these to average out in the statistics. The decadal climate prediction models used in CMIP5, whereby the climate model is initialised from present day observations, have the potential to address this. Recent investigations of the ensemble of these models suggest that they have some skill in retaining the AMDO, with correlations at around the 90% significance level out to 9 years lead time, but beyond this scenario forcing becomes increasingly important. For EURO-BASIN, we adopt two approaches. First, we take the conventional approach and conduct a series of simulations forced by a small number of free-running CMIP5 AOGCM simulations, accepting that the phase of variability will not be coincident with reality; the simulations will be long enough to average out some of this. The second approach also uses the CMIP5 outputs, but aims to correct the biases by perturbing a reanalysis-based hindcast forcing set. The DFS5 atmospheric data are decomposed into realistic weather regimes, and analogues of these are defined in the AOGCM simulations of the present-day period. The evolution of these analogues is then statistically followed in the future-scenario IPCC simulations, and forcing data for the future simulations are constructed from these time evolutions, using the realistic weather regimes previously defined. Hence, the realism of the spatial structure of the future forcing is maintained and the evolution of the future forcing is given by statistics from the IPCC runs. Moreover, there is continuity and consistency between the hindcast and forecast forcing. EURO-BASIN is focused on creating predictive understanding of key species and the emergent ecosystem and biogeochemical features of the North Atlantic basin in order to further the ability to understand, predict and contribute to the development and implementation of the ecosystem approach to resource management. In order to link ecosystems and key species to carbon fluxes, EURO-BASIN follows a trophic cascade framework, quantifying the flow of mass and elements between key species and groups, along with a size spectrum approach to establish and quantify the links between these trophic levels and assess the implications of changes in the players on the flux of carbon. To deliver this we draw on the state of the art in numerical modelling of the North Atlantic: high resolution ocean physics, biogeochemical models of differing complexity, and a range of approaches to modelling mid and higher trophic levels are employed.
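The size spectrum approach referred to above is commonly summarised by the slope of a normalised biomass size spectrum, which provides one quantitative link between trophic levels. The short sketch below shows one way such a slope might be estimated from binned biomass data; the binning scheme, units and regression choice are illustrative assumptions rather than the specific method used in EURO-BASIN.

import numpy as np

def size_spectrum_slope(mass_bounds, biomass):
    """Estimate the slope of a normalised biomass size spectrum.

    mass_bounds : array of n+1 body-mass bin edges (e.g. g wet weight)
    biomass     : array of n total biomass values per bin (e.g. g m-2)
    The normalised spectrum divides the biomass in each bin by the bin width; the slope
    is the linear regression of log10(normalised biomass) on log10(bin centre mass).
    """
    mass_bounds = np.asarray(mass_bounds, dtype=float)
    biomass = np.asarray(biomass, dtype=float)
    widths = np.diff(mass_bounds)
    centres = np.sqrt(mass_bounds[:-1] * mass_bounds[1:])   # geometric bin centres
    slope, _intercept = np.polyfit(np.log10(centres), np.log10(biomass / widths), 1)
    return slope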
Fig. 14 illustrates how the various modelling tools for assessing ecosystem characteristics discussed in this paper relate to each other and to the stressors influencing the trophic cascade from primary producers to top predators. How this will proceed in practice in EURO-BASIN can be summarised as follows. Physics–biogeochemistry coupler: the three biogeochemical models have been coupled with NEMO, and there are three configurations of NEMO in use in EURO-BASIN: a 1/4° N Atlantic Basin configuration (ERSEM, PISCES), a 1/4° Global Ocean configuration (MEDUSA) and a 1/12° NN Atlantic model (ERSEM). The 1/4° domains are used for the regional hindcast, climate-forced and re-analysis-forced simulations, climate-scenario-forced simulations, top-down control perturbation experiments and a fully coupled end-to-end ecosystem model. The 1/12° model is for use in assessing the sensitivity of ecosystem response to key processes relating to mesoscale physics, shelf seas physics and spatial scale. MTL model coupling (1-way): the suite of MTL models will be coupled off-line to the ensemble averages of the planktonic ecosystem states from the LTL reanalysis and future climate simulations. ERSEM–IBM coupler: 2-way coupling of ERSEM with the Calanus IBM. PISCES–APECOSM coupler: 2-way coupling of PISCES with APECOSM. Parameterisation of convection (Convection IBM): the Convection IBM model is being developed to explore the impact of deep convection on phytoplankton growth; the goal is to inform the parameterisation of these processes in the Eulerian frameworks of the biogeochemical models. Parameterisations of C export: an analysis of existing algorithms for particle flux, based on historic observations and fieldwork, is being undertaken; based on the recommendations from this work, parameterisations of particle flux will be amended and tested in the LTL models as appropriate. Habitats and estimates of top-down control: to assess the sensitivity of biogeochemical cycles to changes in grazing pressure, we will draw on information on habitats and predation rates from other components in EURO-BASIN to design sensitivity experiments. Specifically, the development of habitat models will provide information for the validation of modelled biogeography, and estimates of herring, blue whiting and mackerel predation on LTL will help parameterise sensitivity experiments on top-down control of biogeochemical cycles. Model outputs to drive economic and management models: the integrative modelling will provide model outputs to facilitate other activities in EURO-BASIN, specifically MTL biomass estimates to drive tuna models; LTL biomass estimates to drive herring, blue whiting and mackerel models; primary production to drive bioclimatic envelope models of fish; carbon budgets to estimate the economic value of the N Atlantic C pump; hydrodynamic and biogeochemical information to drive the models underpinning the comparative analysis of foodweb structure; LTL biomass estimates for the integrative analysis of past and future ecosystem change using artificial neural networks; and habitat information for advancing fisheries management. Hence, these tools will be used both singly and in combination to assess the emergent properties of the ecosystems, to create metrics for the prediction of future states and to contribute to the assessment and implementation of an ecosystem approach for the management of exploited resources. Full details of the on-going Basin-scale Integrative Modelling work in EURO-BASIN and the results as they emerge can be found at http://www.EURO-BASIN.eu/.
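As a purely schematic illustration of the one-way MTL coupling listed above, the sketch below builds an offline forcing set by ensemble-averaging LTL fields before handing them to an MTL model; the dictionary layout, field names and function interface are hypothetical and are not the actual EURO-BASIN coupling code.

import numpy as np

def offline_mtl_forcing(ltl_ensemble):
    """Build one-way forcing for an MTL model from an ensemble of LTL simulations.

    ltl_ensemble : dict mapping field name (e.g. 'primary_production', 'temperature',
                   'euphotic_depth') to arrays of shape (n_member, n_time, n_y, n_x).
    Returns the ensemble mean of each field; these means are then read by the MTL
    model as external forcing, with no feedback to the LTL model (1-way coupling).
    """
    return {name: np.asarray(arr).mean(axis=0) for name, arr in ltl_ensemble.items()}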
It has long been recognised that there are strong interactions and feedbacks between climate, upper ocean biogeochemistry and marine food webs, and also that food web structure and phytoplankton community distribution are important determinants of variability in carbon production and export from the euphotic zone. Numerical models provide a vital tool to explore these interactions, given their capability to investigate multiple connected components of the system and the sensitivity to multiple drivers, including potential future conditions. A major driver for ecosystem model development is the demand for quantitative tools to support ecosystem-based management initiatives. The purpose of this paper is to review approaches to the modelling of marine ecosystems with a focus on the North Atlantic Ocean and its adjacent shelf seas, and to highlight the challenges they face and suggest ways forward. We consider the state of the art in simulating oceans and shelf sea physics, planktonic and higher trophic level ecosystems, and look towards building an integrative approach with these existing tools. We note how the different approaches have evolved historically and that many of the previous obstacles to harmonisation may no longer be present. We illustrate this with examples from the on-going and planned modelling effort in the Integrative Modelling Work Package of the EURO-BASIN programme.
88
Characterization of cracks formed in large flat-on-flat fretting contact
Micrometer-level relative movement between contacts under normal loading may lead to fretting fatigue and severe damage in machine components. Cyclic fretting movement causes surface degradation and wear and promotes crack nucleation. Cracks may nucleate even at relatively low nominal cyclic stress levels. It is difficult to observe cracks that propagate inside contacts, which makes fretting an especially dangerous damage mechanism. Although many variables affect fretting, slip, coefficient of friction and normal load are typically considered the most important. The magnitude of slip in fretting is typically from a few micrometers up to some hundreds of micrometers. In gross sliding, the entire nominal contact area is slipping, whereas in partial slip certain areas are stuck while the rest of the contact is slipping. As discussed widely in the literature, gross sliding typically results in surface damage and wear. Such damage and wear are minimized in a completely stuck contact without slip between the surfaces. Further, from a cracking point of view, the partial or mixed slip regime is typically regarded as the most dangerous, as cracks are known to form readily. The COF can reach high values in fretting experiments in gross sliding conditions. Cyclic slip wears off oxide and contamination films, leading to an initial increase in COF. After the initial increase, the COF typically stabilizes at values in the range of 0.8–0.9; though some materials, such as QT steel, produce a ‘friction peak’ where the COF can reach values up to around 1.5, followed by a decrease and stabilization at values in the range of 0.8–0.9, similar to most materials. In addition, the COF can have a distribution along the contact interface. A high COF can be a prerequisite for crack nucleation since it is needed to cause the high contact shear stresses that promote crack nucleation. Fretting is known to have a major role in nucleating cracks. It tends to nucleate multiple cracks, which may coalesce, propagate or arrest. Apart from the influence of contact stresses, crack propagation is made possible by bulk loading. Although the principles of fracture mechanics have been employed to study long crack propagation, the fretting crack nucleation mechanism is still not fully understood. Fretting contact induced stresses affect the surface and its vicinity, and this is typically the place for crack nucleation. Severe plastic deformation can exist in fretting contacts and at the crack nucleation site. Contact loading may orientate grains, decrease their size and flatten them. It is usually reported that most of the fretting fatigue life is spent in crack propagation. Cross-section samples have revealed cracking within only a few hundreds or thousands of fretting cycles, even without bulk loading. Typically, cracks nucleate at surface points where stresses are highest, such as at the edges of the contact, close to the stick-slip boundary or at adhesive cold weld junctions. Fretting-induced cracks typically propagate close to the contact interface at some oblique angle, such as 30 or 45 degrees, due to the shear loading induced by fretting. Then, outside the influence of contact loads, the crack turns to a direction corresponding to mode I stress intensity. Fretting fatigue tests with quenched and tempered steel have been done using a complete contact fretting test device and also a bolted joint setup. A marked decrease in fatigue life has been observed due to fretting. Adhesive material transfer spots including visible cracks on the contact surface
have been observed in large flat-on-flat contacts , indicating that material transfer spots carry normal and tangential forces which will introduce stress localizations.These spots have been related to non-Coulomb friction , where friction force increases during a loading cycle as the reversing point is approached, resulting in a hook-shaped form in measured hysteresis loops.Fretting fatigue studies relating to quenched and tempered steels are somewhat scarce in the literature because fretting fatigue is mostly studied using aluminum and titanium alloys.However, quenched and tempered steels are widely used in fatigue prone machine parts in mechanical engineering.Further, mostly used Hertzian type fretting contacts with some geometric form leads to specific distributions of contact tractions and slip.In addition, the contact size is typically relatively small, leading to fretting fatigue “size effect” and also normal pressure may be high.In Hertzian-type contact the stresses are sufficiently high that fatigue failure may be predicted using multiaxial failure criteria .However, in certain types of flat-on-flat contacts , where such macroscopic geometry-induced localization of the stresses does not occur, the initiation of cracks and fretting damage has been shown to occur even when these fatigue criteria do not predict failure.However, in practice fretting may prevail in large flat-on-flat contact under a modest normal pressure.An annular flat-on-flat surface produces quite even contact tractions and sliding distribution, as shown in , with no geometrical edges in the sliding direction and makes it possible to use relatively low normal pressures.The material alterations in the specimens analyzed here have been studied previously using cross-sections and some of the test results have been published, mainly in relation to frictional behavior and surface degradation .Microstructure at the adhesion spots was significantly plastically deformed .Three degradation layers were found at the adhesion spots and their immediate vicinity; the general deformation layer, the tribologically transformed structure and the third body layer.The very hard TTS itself contains numerous cracks, as it has been under excessive plastic deformation.These cracks are oriented at an angle of approximately 45 degrees to the contact surface.The objective of this study was to characterize fretting cracks formed in a wide variety of operating and running conditions in the large-scale flat-on-flat surfaces using quenched and tempered steel specimens.All experimental fretting tests were carried out using an annular flat-on-flat fretting test device, which is described comprehensively by Hintikka et al. .Two axisymmetric fretting specimens are under normal load with one specimen rotated in oscillating manner, while the second is fixed, leading to slip and fretting between the specimens.The specimens clamped together create a large annular flat-on-flat contact with no edges in the sliding direction and have a nominal contact area of 314 mm2.Fig. 
1 presents the test specimen and cross-sections of the two specimens and their holders, contact surface description and normal pressure distribution.The annular flat-on-flat contact has somewhat linear normal pressure distribution which deviates radially a maximum of about 18% from the average normal pressure value, being highest at the inner annulus.The normal load, as well as the rotation can be adjusted continuously.These are also measured together with the frictional torque during the entire experiment.From these measurements, the COF is determined in gross sliding conditions.Even though rotation is measured some distance away from the contact, rotation and sliding amplitude at the contact is obtained and presented by ruling out the specimen elastic deformation.The rotation is displacement-controlled by an actuator with feedback from the measured signal.The material used in the tests was quenched and tempered steel EN 10083-1-34CrNiMo6+QT having a totally tempered martensitic microstructure.Table 1 shows the chemical composition of the used steel.The composition was measured with energy dispersive spectrometry of a scanning electron microscope, which does not allow accurate quantification of carbon.For this reason, the amount of carbon is not presented in Table 1.The yield strength of the material is 994 MPa and the ultimate tensile strength 1075 MPa.A plain fatigue limit of 517 MPa has been measured for the same steel .The surface roughness of the ground specimens varied between 0.20 and 0.32 µm.Before fretting testing, the specimens were cleaned in an ultrasonic device with solvent.Most of the analyzed tests were carried out in gross sliding regime having sliding amplitudes of some dozens of micrometers.In gross sliding condition, the contact area is experiencing sliding in its entirety.The running condition is determined using ideal contact conditions.Tests below fully developed friction load were also carried out.In these tests, only limited amount of friction was utilized.In addition, a few short length tests in gross sliding regime were made.In all tests, the nominal normal pressure was between 10 and 50 MPa and the sliding amplitude from close to zero up to 65 µm.The normal load remained constant in the individual tests.The normal pressure distribution was checked and adjusted before each test using a pressure sensitive film.Elastic deformations of the specimen and test device have been removed in these reported sliding amplitude values.The fretting loading frequency was 40 Hz.The rotation amplitude was ramped up during the first 400 loading cycles and correspondingly ramped down during 100 loading cycles at the end of every test.The main test duration was three million loading cycles but short length tests in gross sliding regime with 100, 1000 and 10,000 loading cycles were also carried out.The temperature and humidity in the lab were 19–25 °C and 14–24% in the short length tests and in the other tests between 25 and 30 °C and 22–44%, respectively.The test matrix is shown in Table 2, presenting the section where the tests in question can be found, as well as the series name, the amount of loading cycles NLC, the nominal normal pressure p and the sliding amplitude ua.The characterization methods are described in more detail by Nurmi et al. .In short, the specimens for cross-sections were cleaned after fretting testing with acid detergent.At this point Leica MZ75 optical microscope was used to image the contact surface.Fig. 
2 shows a fretting contact surface and the location of a cross section that was made.The cross-section samples were made in parallel with the sliding direction from the visually determined most severely degraded scars, i.e., adhesion spots, as shown in Fig. 2.It was assumed that the longest cracks would be found here.The cutting was performed approximately at the centerline of an adhesion spot.One cross-section covered roughly 13% of the circumference of the specimen.After grinding, polishing and etching, Leica DM 2500 M optical microscope and Philips XL 30 scanning electron microscope were used to document the cross-sections.Crack lengths were determined from microscope images using ImageJ software.Fretting damage on the contact surfaces was observed in every test.The most severe fretting scars in terms of damaged area were observed in the full length gross sliding tests, where in some tests the whole specimen nominal area was damaged.In addition, the longer the test duration, the larger the area of fretting damage was.Fig. 3 shows the effect of different operating parameters on surface damage.The total amount of loading cycles was 3 × 106 in all cases presented in Fig. 3.The higher the sliding amplitude, the larger the area of fretting damage.Normal pressure has a similar effect.The higher the normal pressure, the larger is the area of fretting damage.Most of the tests produced millimeter-scale adhesion spots, as shown in the surfaces in Fig. 3.Adhesion spots were already observed with a normal pressure of 10 MPa.Inspection of surfaces revealed protrusions and dents, which are evidence of adhesive material transfer .During the initial stage of the tests, adhesive wear and material transfer dominate, while debris creation changes the major wear behavior from adhesive to abrasive wear.Visible cracks on the contact surface were found in many gross sliding samples by using optical microscopy.In the full length gross sliding tests with the amount of loading cycles of 3 × 106, the sliding amplitude ranged between 5 and 65 μm and the nominal normal pressure between 10 and 50 MPa.In the results presented here, the COF is calculated from the measured maximum frictional torque amplitude during a loading cycle and the normal pressure distribution , representing the maximum COF during one loading cycle.In all gross sliding tests, the COF peaked at the beginning of the tests with the maximum COF of about 1.4 and the stabilized, steady state COF after decrease was about 0.8.Overall, clear and extensive cracking was observed from the cross-sections.Under the most degraded areas, adhesion spots, the longest cracks appeared as pairs, as shown in Fig. 4.This can be explained by the cyclic and reversing loading of the fretting contact.The crack pair is typical for all gross sliding tests analyzed, but also for some tests where sliding amplitude was much less.The focus of this study was these longest cracks created around adhesion spots, as seen in Fig. 4.The cracking of the TTS-layer has been studied earlier .Major cracks nucleate at the surface, most likely at the edges of the adhesion spot, where local stress is expected to be high.However, no correlation could be made between visually measured adhesion spot size on the contact surface and crack dimensions.The obvious explanation is that the fretting scar is still evolving after the crack pair has been formed.Moreover, the determination of fretting scar features from the degraded contact surface is no easy task.Fig. 
5 shows a more detailed view of a major crack in another specimen with sliding amplitude 35 µm and normal pressure 10 MPa.Cracks propagate at an oblique angle to the contact surface towards the base material.This is typical behavior and often found in the literature but in those cases cracks form often at the edges of the contact, whereas here a crack pair forms at nominally flat surfaces inside the contact.As shown in the Fig. 5, the crack changes its orientation during propagation.Close to the contact surface, the angle to the contact surface is quite small but after a few dozen micrometers, the angle gets bigger and stays quite constant thereafter.In the full length gross sliding specimens, multiple smaller and arrested cracks of some dozens of micrometers in length having a slight angle were observed close to the contact surface.These might contribute to delamination and the creation of fretting debris.Most of the cracked area is also martensite.Significant plastic deformation can be observed at the adhesion spot between the crack pair.It seems that the crack size is limited to the size of the fretting induced plasticity region.Hardness is increased here by 50–70% compared to the base material and EBSD results also reveal severe plasticity .From EBSD images it was observed that grains had flattened and those near the cracks had orientated in the same direction as the cracks.Plastic deformation was less severe near the crack ends and in the area between two crack ends.Crack lengths and depths were measured from the microscopic images of cross-sections.The longest crack lengths with various sliding amplitudes and normal pressures are shown in Fig. 6.Crack lengths clearly increase as the sliding amplitude is increased, regardless of the normal pressure, up to the sliding amplitude of 35 μm.At higher sliding amplitudes, 10 MPa and 30 MPa values show much smaller crack lengths, but the rest of the points still support the increasing trend.In addition, the average value taking into account all normal pressures still suggests the increasing trend up to the sliding amplitude of 50 μm.The relation between normal pressure and crack length is less obvious, but the average of each normal pressure value suggests that crack length is increased with the normal pressure.However, these can be within statistical scatter due to the low number of tests.With the lowest values of sliding amplitude, the crack lengths measured are some dozens of micrometers, which is within the size scale of the material grain size.The longest crack lengths were well over a millimeter, which are notably over the grain size.Thus, the principles of fracture mechanics may be applied.Fig. 7 shows the correlation between crack length and crack depth.Correspondingly to crack length, the crack depth is the deepest measured within one cross-section.The depths varied from a few micrometers up to a little over half a millimeter.The crack length and crack depth have a linear correlation.The average angle for crack propagation to the contact surface is 26.2 degrees determined from these results.This angle is approximately the same regardless of test parameters.The distance between crack pairs at the nucleation point on the surface was measured from the cross-sections and is presented in Fig. 
8. In some cases severe plastic deformation and wear debris at the crack nucleation site make it difficult to determine the crack nucleation point. The wider the distance between a crack pair is, the longer the cracks are. Crack depth also correlates linearly with the distance between cracks. Crack length, depth and distance between a crack pair thus have a linear correlation with each other. Even if fretting test parameters and running conditions change, the relative crack geometry remains relatively constant. It seems that cracks nucleate at the edges of an adhesion spot, in the same manner as at the contact edges often reported in the literature. However, in this case adhesion spots and cracking occur between nominally flat surfaces without any macroscopic geometrical shape. As noted earlier, a gross sliding regime is typically considered a regime producing surface damage and wear rather than cracking. However, these results revealed that significant cracking also occurred in nominally gross sliding samples. One test specimen from a full length gross sliding test, having a sliding amplitude of 50 µm and a normal pressure of 30 MPa, was prepared for fracture surface inspection. As specimens do not crack completely in the test device, the fracture surface needs to be opened. First, a thin 1.5 mm section was cut from a chosen sample having an adhesion spot as big as possible. The section was torn open at the point of the adhesion spot, weakened by the crack. The fracture surface, together with part of the contact surface, is shown in Fig. 9; the sliding direction and the crack growth direction are marked. Magnified images are shown on the right, with red boxes indicating their locations. The cracks approximately normal to the sliding direction may have been formed in the opening process. The fracture surface close to the nucleation area does not clearly indicate fatigue, ductile or brittle fracture behavior. However, fatigue striations corresponding to cyclic fatigue crack growth were found after some propagation, as shown in the magnified images. Interestingly, the crack growth rate measured from the striations is about 0.45 µm/cycle, roughly corresponding to the value determined in the short length gross sliding tests. Similar results have also been observed in the literature, where fatigue striations were observed at some distance from the actual nucleation site. Short length gross sliding tests were made to study the early formation of fretting damage, crack formation and the initial phases of crack propagation in more detail. Test durations were 100, 1000 and 10,000 loading cycles. The sliding amplitude was 35 μm and the normal pressure 30 MPa. Fig. 10 shows the COF curves during these tests and test lengths. The ramping cycles are marked with a solid gray area. The COF increases markedly during the initial cycles, reaching a peak value of about 1.4, before it starts to decrease and stabilize. Approximately identical friction behavior is observed between short and full length gross sliding tests during the corresponding loading cycles, so their comparison is relevant. Already in the 100 loading cycle case, the COF had started to decrease after peaking. Thus, in all the tests reported here, the peak in friction had already occurred. Fig.
11 shows fretting scars and cracks at selected locations.It can be clearly observed how the area of fretting damage on the surface increases during loading cycles.Adhesion spots form after only a small number of loading cycles.Noticeable tensile force was needed to pull specimens apart in some short length tests, which is a clear indication of adhesion and cold welding between the specimens.Already in the 100 loading cycle case plastic deformation and cracks were observed.The peculiar shape of the contact surface in the 100 loading cycle case may be due to the pulling-induced tension after the test.However, these cracks in short length tests were in average smaller than in full length tests.As cracks had already nucleated after a relatively small amount of loading cycles, it seems that cracks are caused by heavy overstressing .A clear crack pair having individual lengths of hundreds of micrometers was formed within 1000 loading cycle case, and the crack dimensions were already at the same level as in the full length tests.Crack nucleation and propagation has therefore been rapid.Taking into account the ramping cycles of a test, the average crack growth rate can be almost 0.5 µm/cycle.In addition, the plasticity region resembles the corresponding regime in the full duration tests.These tests were performed with load levels below fully developed friction , i.e., with limited utilization of friction, resulting in sliding amplitudes in the range of only a few micrometers.As COF is the coefficient of friction in gross sliding condition where all surface points are sliding, it cannot be used in stick and partial slip conditions.Therefore, torque ratio was used to analyze frictional properties instead of COF.TR is the ratio between tangential traction amplitude and normal traction, which corresponds to COF in gross sliding conditions and is also valid in partial slip conditions.Fig. 12 shows the TR curves of these tests.TRM is the maximum TR value observed during the test.Tests having TRM values 0.28, 0.35, 0.43, 0.51, 0.75 and 0.93 were analyzed.The normal pressure applied was 30 MPa.The maximum of average sliding amplitude uave during the tests is shown.The average sliding amplitude is determined from rotation amplitude and average radius of the specimen.The rotation amplitude at the zero torque is used to remove elastic deformations .The average sliding amplitudes varied from close to zero up to 3.5 µm.For comparison, one gross sliding curve of a test having maximum and stabilized TR values of about 1.4 and 0.8, respectively, is presented.Fig. 13 shows fretting scars and SEM images from cross-sections of tests having TRM values of 0.35, 0.75 and 0.93.Overall, these tests led to significantly less severe fretting damage and surface wear compared to the gross sliding tests, especially when TR was small.The higher the TR value and sliding amplitude, the more severe the fretting scar was.With TRM = 0.93, the fretting scar resembles the surface damage seen in gross sliding tests, though less severe.With small TR values and sliding amplitudes, crack pairs did not form, as shown in Fig. 13.Identification of micrometer-level cracks is challenging, and the cracks can be mistaken for material defects.These small cracks were not within the scope of this study.When TRM value was 0.93, in addition to the increased level of fretting-induced damage, a clear crack pair was formed, Fig. 
13. In addition, crack length increases, reaching lengths of dozens of micrometers. However, the cracks are still much shorter than the cracks in the gross sliding tests. According to the results, adequate utilization of friction and sufficiently large slippage are essential for cracks to form. The results show the strong tendency of fretting to create cracks. Cross-sections were made focusing on the most severe looking fretting scar. Multiple small cracks with dimensions of the material grain size were observed close to the contact surface, which is typical in fretting, but mainly the observations were characteristic of two major cracks around the adhesion spot. The analysis of fretted surfaces by imaging may reveal cracks, but a more precise way to find cracks is to use cross-section analysis. In almost every analyzed test, cracks of at least dozens of micrometers were observed, the biggest cracks being over a millimeter in length. The size of cracks increased linearly up to the sliding amplitude value of 35 µm, but after that, in some tests the size decreased as the sliding amplitude was increased. It may be that higher slip leads to increased wear, which may play a role since embryonic cracks can be worn off before propagating further, or higher slip may affect the contact adhesion response by means of shearing and/or fatigue of asperity tip junctions. According to the few short length tests, cracks form very early in the loading history, thus representing low cycle fatigue conditions rather than high cycle conditions. A broader study of this appears to be an important further topic, focusing in particular on the relation between the observed frictional behavior and crack formation. The focus of this study was the formation of fretting-induced cracks, whose formation and propagation are promoted by the contact stresses. Rather than the presence of a pre-existing flaw, cracking was due to damage accumulation caused by the high local stresses induced by fretting. In a fretting contact, the stress state and slip should affect the nucleation of cracks, although the subsequent growth in the propagation phase is determined by the conditions at the crack tip. In the majority of the measured cracks the lengths were multiple times longer than the grain size of the material, so their behavior can be described by the principles of fracture mechanics. Severe plastic deformation and cracking were observed already with a nominal normal pressure of 10 MPa. Therefore, very high local loading conditions must exist at the adhesion spots that cannot be predicted using nominal loadings on the nominally flat-on-flat contact surfaces without considering the microscale grain structure. The formation and localization of adhesion spots may be affected by minor deviations in the manufacturing of flat surfaces or by surface topography. The nominal stresses due to the frictional torque in the contact are some dozens of megapascals and are thus very low compared to the fatigue strength of the used steel, so cracks leading to complete specimen fracture were not expected. Obviously, if an adequate cyclic bulk stress is applied, these cracks are expected to propagate. Fig. 14 shows a schematic presentation of the evolution of the annular flat-on-flat fretting contact using quenched and tempered steel. Initially the Q/P ratio, i.e.
COF in gross sliding, increases due to the evolution of adhesion leading to material transfer. Cracking may already be present at this point. At its peak, the frictional behavior is highly non-Coulomb in nature due to tangential fretting scar interactions, as shown by the hook-shaped fretting loop, which represents the energy dissipated by friction. Already at this point excessive plastic deformation and relatively large cracks were observed, so crack nucleation has been relatively rapid. A tribologically transformed structure and third body layer have been observed to develop clearly within about 10,000 loading cycles. The COF value decreases markedly from the peak value and stabilizes after some thousands or tens of thousands of loading cycles. Near-Coulomb conditions are then revealed, as shown by the rectangular-shaped fretting loop. Fretting wear produces oxidized debris that can be partly entrapped within the contact and partly ejected from it. According to Hintikka et al., the transition from partial slip to gross slip in this annular type of flat-on-flat contact occurs with a COF of 1.0 at an average sliding amplitude of 0.5 µm. Experimental tests start by ramping up the rotation. Thus, at least at the very beginning of a test the contact is in partial slip conditions, but it quickly moves to gross sliding when enough rotation is applied. In the majority of the tests presented here, the gross sliding condition prevails. These ideal conditions were assumed in this study. However, it is clear that the adhesion spots do not follow these ideal conditions. As a result, although the contact is seemingly in the gross sliding regime, it is likely that these local contact areas are stuck at least momentarily during the loading history, and therefore a local partial slip condition exists. It may also be possible that a localized high COF in adhesion spots results in the type of crack pair shown here. Cracks may nucleate at the boundary between the resulting stick and sliding regions, as often reported in the literature for partial slip conditions. However, it is emphasized that in those cases a Hertzian contact is often used, whereas here a crack pair forms at nominally flat surfaces inside the contact. Regardless, the true behavior at the adhesion spots remains unknown, and the determination of the local conditions at the adhesion spots for evaluating cracking behavior is important and warrants further study. Fretting-induced crack formation was studied in a large flat-on-flat contact by making cross-section samples from fretting scars. The material used was self-mated quenched and tempered steel 34CrNiMo6. Test specimens with different sliding amplitudes, nominal normal pressures, test lengths and loading conditions were analyzed. The following conclusions can be made: Annular flat-on-flat contact creates local adhesion spots which revealed significant fretting-induced cracking and plastic deformation, especially in tests with sufficiently high sliding amplitude. The formation of cracks can be explained by local stress concentrations around the adhesion spots. Even though the contact is nominally in gross sliding conditions, localized stick areas likely exist at least at some moments in the loading history. Typically, two major cracks with lengths of at least hundreds of micrometers were formed at an adhesion spot. The cracks nucleated on the surface and grew towards each other at an average crack angle to the contact surface of about 26 degrees, regardless of fretting test parameters. Smaller arrested cracks were also observed close to the
contact surface. A linear correlation existed between crack length and the distance between the nucleation points of a crack pair. Clear cracking was observed when the maximum value of the ratio between the tangential traction amplitude and the normal traction during testing was larger than 0.8. Cracks seemed to form in the initial stages, as the thousand-cycle test showed crack lengths similar to those in the full length tests of three million cycles. Cross-sections made from fretting scars are an efficient and precise way to study cracking and material degradation. Visual inspection of the fretting-damaged surface using optical microscopy may find cracks but does so less comprehensively. Thus, the suggested method is to use cross-section samples.
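The crack-geometry relations summarised in these conclusions (a roughly constant propagation angle of about 26° to the contact surface and a linear correlation between crack length and depth) lend themselves to a simple depth estimate from a crack length measured in a cross-section. The sketch below is illustrative only: it assumes that the measured crack length runs along the crack trace, so that depth ≈ length × sin(angle) (tan(angle) would apply instead if the length were the horizontal projection), and the example value is hypothetical rather than taken from the test data.

```python
import math

def crack_depth_from_length(crack_length_um, angle_deg=26.2, length_along_crack=True):
    """Estimate crack depth from a cross-section crack length.

    Assumes the crack propagates at a roughly constant angle to the
    contact surface (about 26 degrees on average in these tests). If
    `length_along_crack` is True, the length is taken along the crack
    trace (depth = L * sin(angle)); otherwise it is treated as the
    horizontal projection (depth = L * tan(angle)).
    """
    angle_rad = math.radians(angle_deg)
    factor = math.sin(angle_rad) if length_along_crack else math.tan(angle_rad)
    return crack_length_um * factor

# Hypothetical example: a crack measured as 1200 um along its trace
print(f"Estimated depth: {crack_depth_from_length(1200.0):.0f} um")
```

For cracks just over a millimetre long this gives depths of roughly half a millimetre, which is consistent with the maximum crack depths reported above.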
Fretting fatigue may lead to severe damage in machines. Adhesive material transfer spots in millimeter scale have previously been observed on fretted surfaces, which have been related to cracking. In this study, fretting-induced cracks formed in a large annular flat-on-flat contact are characterized. Optical and scanning electron microscopy of the fretting scar cross-section samples of self-mated quenched and tempered steel specimens revealed severe cracking and deformed microstructure. Two major cracks typically formed around an adhesion spot, which propagated at an oblique angle, regardless of the test parameters used. Millimeter-scale cracks were observed already within a few thousand loading cycles.
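The COF and torque-ratio values quoted for this annular contact are derived from the measured frictional torque amplitude and the applied normal load, as described in the test-setup section above. A minimal sketch of that conversion is given below. It assumes a uniform normal pressure over the annulus (the measured distribution in fact deviates by up to about 18% from the mean), and the inner and outer radii are hypothetical example values chosen only so that the annulus has roughly the nominal 314 mm² contact area; the torque value is likewise illustrative.

```python
import math

def effective_friction_radius(r_in, r_out):
    """Effective friction radius of an annular flat contact with uniform pressure."""
    return (2.0 / 3.0) * (r_out**3 - r_in**3) / (r_out**2 - r_in**2)

def torque_ratio(torque_amplitude, normal_force, r_in, r_out):
    """Ratio of tangential traction amplitude to normal traction (TR).

    In gross sliding this equals the coefficient of friction; in partial
    slip it remains defined, which is why TR is used for those tests.
    """
    return torque_amplitude / (normal_force * effective_friction_radius(r_in, r_out))

# Hypothetical example values (not from the paper): radii giving ~314 mm^2,
# a 30 MPa nominal pressure and an assumed measured torque amplitude.
r_in, r_out = 0.010, 0.01414                 # m
area = math.pi * (r_out**2 - r_in**2)        # ~3.14e-4 m^2
normal_force = 30e6 * area                   # N
torque_amplitude = 100.0                     # N m
print(f"TR = {torque_ratio(torque_amplitude, normal_force, r_in, r_out):.2f}")
```

With these assumed inputs the ratio evaluates to roughly 0.87, i.e. in the same range as the stabilized gross sliding COF of about 0.8 reported in the text.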
89
Attention deficit hyperactivity disorder symptoms as antecedents of later psychotic outcomes in 22q11.2 deletion syndrome
22q11.2 Deletion Syndrome is diagnosed typically on the basis of its clinical presentation together with laboratory evidence of a deletion or Copy Number Variant at band q11.2 on chromosome 22.Approximately 1 in 4000 individuals are affected by 22q11.2DS, rendering this the most common chromosomal microdeletion syndrome.While the microdeletions range in size from 0.7 to 3 million base pairs, the majority of patients have a 3 MB deletion.The physical, cognitive and psychiatric manifestations associated with 22q11.2DS are variable and involve multiple systems including immune, cardiac, palatal, gastrointestinal and endocrine deficits, regardless of deletion size.An association between the 22q11.2 deletion and psychosis has long been recognised.Approximately 1 in 4 individuals with 22q11.2DS develop schizophrenia) and around 1 in 100 individuals with schizophrenia have been found to carry the 22q11.2 deletion.This means that those with 22q11.2DS are at substantially elevated risk of developing schizophrenia spectrum disorders with an onset typically occurring after mid-late adolescence.Numerous studies of individuals with 22q11.2DS have observed a range of psychiatric and cognitive problems in childhood and adolescence; that is, prior to the typical age of onset for psychosis.These include anxiety disorders, Autism Spectrum Disorder and intellectual disability.Attention Deficit Hyperactivity Disorder is one of the most prevalent psychiatric disorders in childhood occurring in around 40% of individuals with 22q11.2DS.Although the psychosis phenotype in 22q11.2DS is largely similar to individuals without the deletion), the ADHD phenotype differs.Those with 22q11.2DS show more pronounced inattention symptoms than individuals with ADHD from clinically ascertained and general population samples).Attentional impairments are a central characteristic of schizophrenia and ADHD.Also, inattention symptoms have been shown to be antecedents of psychosis in studies of childhood-onset schizophrenia) as well as in studies of individuals with prodromal clinical psychosis and of those with subclinical psychotic symptoms).Prior cross-sectional investigation also suggests that ADHD inattention symptoms are associated with subthreshold psychosis in 22q11.2DS.To address the hypothesis that ADHD is an antecedent of psychosis in children and adolescents with 22q11.2 DS, we employed the first multisite and largest longitudinal study of 22q11.2DS to date to investigate this question.Here, in a sample that was first assessed at age 18 years or younger, we investigate whether childhood ADHD diagnosis and inattention symptoms are early indicators of psychosis.We also assess whether changes in inattention symptom levels and ADHD diagnosis are associated with later psychosis.The International 22q11.2 Deletion Syndrome Brain Behavior Consortium was established in 2013 with the aim of harmonizing existing well-characterized cohorts of participants with 22q11.2DS with both phenotypic and genotypic data available.For the current study, participants were recruited from 6 IBBC sites.All participants had 22q11.2 microdeletion that was confirmed via the IBBC quality control procedures and heat-map data from microarrays).Participants were included if they underwent a comprehensive structured psychiatric assessment using a validated instrument that would provide information on ADHD diagnosis/symptoms and psychotic symptoms, if longitudinal data were available and if their age at the first-time point was ≤18 years old.This study 
focuses on the development of psychotic symptoms; therefore, to adopt a clearer design, those who reported any subclinical psychotic symptoms at T1 were excluded from the main analyses but included in sensitivity analyses.The study was approved by the appropriate local ethics committees and institutional review boards.Each participant and his or her caregiver, when appropriate, provided informed written consent/assent to participate prior to recruitment.Assessments were conducted using well-validated structured diagnostic instruments.ADHD symptoms and diagnoses compatible with DSM-IV-TR diagnostic criteria were obtained using standard approaches for this age group -i.e., parent reported interviews.Psychotic symptoms were assessed using self- and parent-reports.If either participants or their parents reported psychotic symptoms, then these were counted as present.We only included the positive symptoms of psychosis in our analyses.Phenomena were not coded as psychotic symptoms if they were attributed to hypnagogic and hypnopompic states, fever or substance use.Due to the different assessment methods and range of questions asked between the different sites, presence of any subclinical/clinical psychotic symptom was coded as 1 vs. 0, instead of using a continuous scale.The ratings of psychotic disorders were harmonized across the sites as part of the IBBC initiative.Data on ADHD symptoms were also harmonized, with sites completing information on a specific list of symptoms.A total inattention symptom score was obtained by summarizing the number of inattention symptoms that were reported as present.If at least one missing value was present, the total inattention symptom score was reported as missing, hence the different numbers reported for ADHD diagnosis and inattention symptoms.The primary predictor variables were inattention symptoms and ADHD diagnosis.The outcome variables were 1.psychotic symptoms and 2.psychotic disorder.Psychotic disorder included schizophrenia, schizophreniform disorder, schizoaffective disorder, psychotic disorder NOS, delusional disorder, and brief psychotic disorder with at-risk status as defined by the DSM-5.For the purpose of sensitivity analyses, hyperactive-impulsiveness and total ADHD symptom scores were also considered as predictor variables.Standardized IQ scores were available across the sites using age-appropriate Wechsler scales and were examined as confounders.Taking into account that intellectual disability is frequently present in 22q11.2DS, some sites assessed the presence/absence of ADHD symptoms taking into account whether the individual with 22q11.2DS had intellectual disability.To account for this, we included ‘site assessment differences’ as a covariate variable.We also included age at baseline and sex as covariates.Logistic regressions were conducted to examine whether T1 inattention symptoms and ADHD diagnosis predicted T2 outcome status.We repeated the analyses by also including age, sex, IQ and the variable ‘site assessment differences’ as covariates.We also examined whether longitudinal change in inattention symptoms and ADHD diagnosis were associated with psychotic symptoms and/or any psychotic disorder at T2.Change in inattention symptoms in relation to psychotic symptoms and psychotic disorder was examined using principal components analysis following previously reported methods.The PCA included inattention symptoms at T1 and inattention symptoms at T2.Two factors were identified from the PCA, one corresponding to the average of ADHD 
symptoms at T1 and T2 and the other one representing change over time.Logistic regression analyses were used to examine the associations between change over time and psychotic symptoms/psychotic spectrum disorders at T2 after adjusting for average ADHD symptoms as well as sex, IQ, age and site assessment differences.The advantage of the PCA method is that the two factors are uncorrelated in the regression model.To examine change of ADHD diagnosis between T1 and T2 we constructed a categorical variable.Logistic regression analyses were used where the categorical variable was the predictor and psychotic symptoms/psychotic disorder were the outcome variables while adjusting for sex, IQ, age and site assessment differences."Where psychotic disorder was used as an outcome, due to the small sample size of cases with psychotic disorder, the maximum likelihood estimates tended to infinity, and in this case we used Firth's method instead.As a sensitivity analysis, we repeated these analyses by also examining hyperactive-impulsiveness symptoms and total ADHD symptoms.We also examined whether there were differences in inattention symptoms and ADHD diagnosis in individuals who at T1 reported psychotic symptoms compared with individuals that did not report psychotic symptoms at T1.The sample started with a total of 323 individuals aged 11.8 years at the first assessment and 15.1 years at the second assessment.Excluding 73 participants who reported psychotic symptoms at T1, our final sample included 250 individuals with complete data on psychotic symptoms and ADHD diagnosis and 188 individuals with complete data on psychotic symptoms and inattention symptoms.The mean age at assessment for individuals at T1 was 11.2 years and at T2 was 14.3 years.The mean follow-up time across sites was 3.19 years.Of those with psychotic symptoms at T2, 71% also had an ADHD diagnosis at T2 and of those with psychotic disorder at T2, 63% also had an ADHD diagnosis.ADHD inattention symptoms and ADHD diagnosis at T1 were associated with development of psychotic symptoms at T2.There was no evidence for associations between inattention symptoms at T1 and psychotic disorder at T2.ADHD diagnosis at T1 was associated with psychotic disorder at T2.Sensitivity analyses revealed no association between T1 hyperactive-impulsiveness symptoms at T1 and psychotic symptoms at T2 or with psychotic disorder at T2.There was weak evidence for associations between total ADHD symptoms and development of T2 psychotic symptoms but no evidence for associations with psychotic disorder at T2.As a further sensitivity analysis, we also compared individuals with and without psychotic symptoms at T1.We found significant age and IQ differences, with the group reporting psychotic symptoms at T1 being older than the group without psychotic symptoms at T1 and of lower mean IQ score.There were no significant mean differences in terms of inattention symptoms and ADHD diagnosis.Table S4 shows the summary statistics for change in inattention symptom levels and ADHD diagnosis over time in relation to psychotic symptoms at T2.There was no evidence that change over time in inattention symptom levels or ADHD diagnosis was associated with psychotic symptoms or psychotic disorder at T2.Results were the same for our sensitivity analyses with hyperactive-impulsive and total ADHD symptoms.In the largest longitudinal study examining the presence of ADHD symptoms and diagnosis in individuals with 22q11.2DS to date, we observed that inattention symptom levels and ADHD 
diagnosis predicted the development of psychotic symptoms and were weakly associated with psychotic disorder.Moreover, we found that the presence of inattention symptoms at any time point rather than the change in inattention symptom levels over time was associated with psychotic symptoms.The evidence was weaker for the outcome of psychotic disorder, but this is likely due to low power, since in this young sample only 6% were diagnosed with psychotic disorder.These longitudinal findings are in accordance with our previous study that found cross-sectional associations between inattention symptoms and psychotic symptoms in individuals with 22q11.2DS.Previous population-based) and clinical studies also have observed that childhood inattention symptoms are an antecedent to psychosis.There are a number of potential explanations for our findings.One is that the 22q11.2 deletion increases risk for inattention symptoms and ADHD, which in turn increase risk for psychotic outcomes.Another is that inattention symptoms or an ADHD diagnosis in the context of 22q11.2DS are prodromal or premorbid forms of schizophrenia rather than ADHD per se.For instance, Fletcher and Frith suggested that psychotic symptoms are the result of an abnormal formation of beliefs about the world.In this way, individuals with inattention symptoms might direct their attention to less relevant or too many environmental cues and in turn might perceive and interpret environmental stimuli as more unusual and salient, which would predispose them to having psychotic symptoms.Therefore, these inattention symptoms might be indicators of abnormal probabilistic learning that has been observed in schizophrenia).However, we cannot exclude the possibility that the associations between inattention symptoms and psychotic outcomes also reflect shared genetic variance, especially given evidence for genetic overlap between ADHD and schizophrenia.On the other hand, we did not observe cross-sectional associations between inattention symptoms/ADHD diagnosis and psychotic symptoms at T1.ADHD diagnosis was equally comorbid in those with and without psychotic symptoms and the mean levels of inattention symptoms were similar between the two groups.A previous study that examined the dimensional structure of a wide spectrum of psychopathology in 22q11.2DS has found evidence of a general psychopathology factor in addition to more specific factors."Therefore, one potential explanation of our findings is that at an earlier age, ADHD symptoms and psychotic symptoms also indicate the individual's general propensity for psychopathology and as individuals approach the age of onset for risk of psychosis, certain symptoms become more specific to psychosis.Finally, taking into account that psychiatric conditions can co-occur, it may not be ADHD per se, or any other psychiatric disorder, but rather the severity of presentation and/or the cumulative contributions of increasing psychiatric conditions/severity that increases risk for psychotic outcomes.We did not observe association between hyperactive-impulsive symptoms and psychotic outcomes, which accords with cross-sectional studies of 22q11.2DS and population-based studies.As has been previously suggested, this could be due to differences in the way dopaminergic function acts between schizophrenia and ADHD, with dopamine hypo-activity being more likely linked to the hyperactivity-impulsiveness aspects of ADHD and dopamine hyperactivity to schizophrenia.Our study is the first to show that inattention symptom 
levels and ADHD diagnosis in those with 22q11.2DS are associated with later emerging psychotic outcomes.If inattention and ADHD are risk factors for future psychosis, then effective treatment is a priority for reducing the risk of psychosis in this high-risk group.However, if ADHD is a prodromal feature of psychosis in this group, then taking into account that stimulant medication is often prescribed for ADHD, future studies are needed to examine potential effects of such treatment in individuals with 22q11.2DS.A randomized controlled trial in thirty-four children with 22q11.2DS and ADHD indicated that methylphenidate can be safe and effective after a 6-month treatment and led to a 40% reduction of ADHD symptoms as reported by parents.Interestingly, all subjects had at least one side effect and approximately 40% exhibited depressive-like symptoms after treatment.Although informative, the sample size as well as the follow-up time were limited in this study.Further studies are warranted to examine the effect of stimulants and other ADHD-treatments in 22q11.2DS.Although the link between ADHD and psychosis is not adequately studied, our findings support those from previous patient registry and high-risk studies in populations without the 22q11.2 deletion that have observed associations between ADHD, early attentional impairments and later psychosis, as well as comorbidity between ADHD and psychosis.Taking into account that 22q11.2DS is a rare, large effect size mutation that serves as a powerful model for examining early antecedents of psychosis, our findings further point to the possibility that some patients with ADHD might present with psychotic symptoms at follow-up.The findings also highlight the need for further studies in order to better understand the relationship between ADHD and psychosis.Although this study benefitted from recruitment of a number of sites of the 22q11.2 IBBC, resulting in a relatively large sample, the study may have been underpowered for some analyses.Moreover, the mean age at follow up was 14.3 years and therefore the individuals with 22q11.2DS had not yet passed the peak age of onset for schizophrenia.Therefore, the associations that we report are likely to represent an underestimate.Another limitation is that we could not consider the impact of medication, since at the time of this analysis medication information was not consistently reported from all sites.However, failing to adjust for medication is more likely to have attenuated the magnitude of association between ADHD symptoms and psychotic outcomes.Although the assessments were conducted by experienced clinicians and psychologists, we cannot exclude the possibility of diverse diagnostic practices across sites that might have influenced our findings.However, we attempted to account for differences between centres in our analyses and did not find that site assessment differences explained our findings.Ascertainment bias is also likely, considering that genetic testing was conducted on the basis of a phenotype that was sufficient to warrant genetic testing.Finally, taking into account that comorbidity is common in 22q11.2DS, it could be that other disorders, in addition to ADHD, might be longitudinally associated with psychotic outcomes in 22q11.2DS.However, this question was outside the remits of this study.Also, it could be argued that ADHD is more easily amenable to symptom reduction by treatment than other potential clinical risk factors for psychosis in 22q11.2DS.Interestingly, a recent study on 89 
children with 22q11.2DS did not find longitudinal associations between autism spectrum disorders and psychosis.Our study is the first to examine the longitudinal associations between ADHD symptoms and psychotic outcomes in 22q11.2DS.Our findings that inattention symptoms and ADHD diagnosis were associated with subsequent psychotic symptoms and psychotic disorder in 22q11.2DS have important clinical implications.Future studies examining the effects of ADHD medication in individuals with 22q11.2DS are warranted.Study concept and design: Maria Niarchou, Anita Thapar.Acquisition of data: Maria Niarchou, Marianne B.M. van den Bree, Samuel J.R.A. Chawner, Ania M. Fiksinski, Jacob A.S. Vorstman, Maude Schneider, Stephan Eliez, Marco Armando, Maria Pontillo, Donna M. McDonald-Mcginn, Beverly S. Emanuel, Elaine H. Zackai, Carrie Bearden, Vandana Shashi, Stephen Hooper, Michael J. Owen, Raquel E. Gur.Analysis of data: Maria Niarchou, Naomi Wray, Marianne B.M. van den Bree, Anita Thapar.Interpretation of data: Maria Niarchou, Naomi Wray, Marianne B.M. van den Bree, Anita Thapar, Michael J. Owen, Raquel E. Gur.Critical revision of the manuscript for important intellectual content: All authors.This study was funded by the National Institute of Mental Health grants USA U01MH101719, U01MH0101720, Wellcome Trust Fellowship, R01 MH085953, U54 EB020403, Swiss National Science Foundation to SE, National Center of Competence in Research “Synapsy-The Synaptic Bases of Mental Diseases” to S.E.The funding sources had no participation in any of the aspects of this study.
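As a concrete illustration of the change-over-time analysis described in the methods above (a principal components analysis of the T1 and T2 inattention scores yielding uncorrelated "average" and "change" components, followed by logistic regression on the T2 outcome with covariate adjustment), a minimal sketch is given below. This is not the authors' analysis code: the column names, the numeric coding of the covariates and the use of pandas, scikit-learn and statsmodels are assumptions made purely for illustration.

```python
import pandas as pd
import statsmodels.api as sm
from sklearn.decomposition import PCA

def change_score_analysis(df: pd.DataFrame):
    """PCA-based change-score analysis (sketch).

    Expects one row per participant with assumed columns: inatt_t1, inatt_t2,
    psych_sympt_t2 (0/1), age_t1, sex, iq and site_assessment, all coded
    numerically.
    """
    # PCA on the two inattention scores: with positively correlated scores the
    # first component tracks the average level across T1/T2 and the second
    # tracks change over time, and the two are uncorrelated by construction.
    pca = PCA(n_components=2)
    scores = pca.fit_transform(df[["inatt_t1", "inatt_t2"]])
    df = df.assign(inatt_avg=scores[:, 0], inatt_change=scores[:, 1])

    # Logistic regression of the T2 outcome on the change component, adjusting
    # for the average component and the covariates.
    predictors = ["inatt_change", "inatt_avg", "age_t1", "sex", "iq", "site_assessment"]
    X = sm.add_constant(df[predictors])
    return sm.Logit(df["psych_sympt_t2"], X).fit(disp=False)

# Example usage:
# result = change_score_analysis(df)
# print(result.summary())
# For the rarer psychotic-disorder outcome the paper uses Firth's penalized
# likelihood instead of ordinary maximum likelihood, which is not shown here.
```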
Individuals with 22q11.2 Deletion Syndrome (22q11.2DS) are at substantially heightened risk for psychosis. Thus, prevention and early intervention strategies that target the antecedents of psychosis in this high-risk group are a clinical priority. Attention Deficit Hyperactivity Disorder (ADHD) is one the most prevalent psychiatric disorders in children with 22q11.2DS, particularly the inattentive subtype. The aim of this study was to test the hypothesis that ADHD inattention symptoms predict later psychotic symptoms and/or psychotic disorder in those with 22q11.2DS. 250 children and adolescents with 22q11.2DS without psychotic symptoms at baseline took part in a longitudinal study. Assessments were performed using well-validated structured diagnostic instruments at two time points (T1 (mean age = 11.2, SD = 3.1) and T2 (mean age = 14.3, SD = 3.6)). Inattention symptoms at T1 were associated with development of psychotic symptoms at T2 (OR:1.2, p = 0.01) but weak associations were found with development of psychotic disorder (OR:1.2, p = 0.15). ADHD diagnosis at T1 was strongly associated with development of psychotic symptoms at T2 (OR:4.5, p < 0.001) and psychotic disorder (OR:5.9, p = 0.02). Our findings that inattention symptoms and the diagnosis of ADHD are associated with subsequent psychotic outcomes in 22q11.2DS have important clinical implications. Future studies examining the effects of stimulant and other ADHD treatments on individuals with 22q11.2DS are warranted.
90
Surface-based 3D measurements of small aeolian bedforms on Mars and implications for estimating ExoMars rover traversability hazards
The surface of Mars hosts various types of aeolian bedforms, from small wind-ripples of centimetre-scale wavelength, through larger decametre-scale “Transverse Aeolian Ridges” to kilometre-scale dunes.To date, all mobile Mars surface-missions have encountered recent aeolian bedforms of one kind or another, despite being located in very different ancient environments: the Sojourner rover explored a megaflood outwash plain, the Mars Exploration Rovers “Spirit” and the ongoing “Opportunity” investigated the interior of Gusev Crater and the sedimentary Meridiani plains respectively, and the Mars Science Laboratory “Curiosity” rover is studying fluviolacustrine and other sediments within Gale Crater.Hereafter, when we refer to aeolian bedforms and deposits, we refer to recent bedforms consisting of loose sediments, rather than lithified or indurated bedforms, or bedforms preserved in outcrop.Aeolian deposits consisting of loose unconsolidated material can constitute hazards to surface mobility of rovers: sinkage into the aeolian material and enhanced slippage can hamper traction and hence prevent forward progress, forcing the rover to backtrack or, in the worst case, leading to permanent entrapment and end of mission.Being able to estimate the depth of loose aeolian material before a rover drives over them is therefore clearly of great advantage.Although measurement of bedform heights can be performed in situ, this provides no scope for forward planning, nor for assessing the traversability of a candidate landing location prior to final site selection.What is needed is a way to estimate aeolian hazard severity in a given area using remote sensing data alone.The aim of this paper is to find a way to estimate the heights of aeolian bedforms that are too small to be measured using HiRISE DEMs, in order to increase our knowledge of the hazards they pose to rovers.In 2020, the European Space Agency, in partnership with the Russian Roscosmos, will launch the ExoMars rover to Mars.The rover has the explicit goal of looking for signs of past life.The ExoMars rover will be equipped with a drill capable of collecting material both from outcrops and the subsurface, with a maximum reach of 2 m.This subsurface sampling capability will provide the best chance yet to gain access to well preserved chemical biosignatures for analysis.However, drilling on a planetary surface is difficult, time-consuming and not without risk.Hence, selecting scientifically interesting drilling sites, and being able to reach them, is vital for the mission; the ExoMars mission was conceived as a mobile platform to ensure that the drill can be deployed at the best possible locations.The rover has a mass of 310 kg and is expected to travel a few km during its seven-month primary mission."The rover's locomotion system is based on a passive 3-bogie system with deformable wheels.Lander accommodation constraints have imposed the use of relatively small wheels.In order to reduce the traction performance disadvantages of small wheels, flexible wheels have been adopted.However, the average wheel ground pressure is still ∼10 kPa.This is a concern for traversing unconsolidated terrains.To mitigate this risk the ExoMars team is considering the use of ‘wheel walking’, a coordinated roto-translational wheel gait in which the wheels are raised and lowered in sequence, that can improve dynamic stability and provide better traction for negotiating loose soils.The plan would be to engage wheel walking in case a certain predetermined wheel slip ratio 
limit is exceeded.In other words, wheel walking would be considered an “emergency” means to negotiate a challenging situation, after which the rover would revert to “normal” rotational driving motion.A key requirement of the locomotion system is the ability to traverse aeolian bedforms without becoming stuck, or, if bedforms are too large, steep, or high to traverse, to have the flexibility to plan a route around them.While larger bedforms such as TARs and dunes will simply be avoided as far as possible, smaller aeolian features such as meter-scale ripple-like bedforms identified in the MER Opportunity site in Meridiani Terra, provide a traversability hazard that is likely to be encountered, but the degree of severity of which is hard to assess from orbit.These sub TAR-scale bedforms are similar in many ways to terrestrial “megaripples”: they are linear sandy deposits that are tens of cm to several metres across and are often armoured with coarser granules or coarse sand-grade material in a monolayer on top of the sandy material that composes the greater volume of the bedform.While smaller examples were safely crossed by MER Opportunity, larger examples resulted in excessive wheel slippage, and could have led to a mission-ending situation.Even MSL, the most capable Mars rover currently operating, has found it hard to traverse aeolian features that appeared to be megaripples, sinking into one example at ‘Moosilauke Valley’ by about 30% of its 50 cm wheel diameter, and with slippage reaching ∼77%.Hence, understanding whether the majority of the aeolian bedforms are, or are not, traversable at a given landing site is essential, both in the first instance for landing site selection, and ultimately for efficient rover surface operations.Although a variety of material properties–including notably grain size and degree of armouring by coarse grains–alter the traversability characteristics of bedforms, a knowledge of the size and shape of aeolian bedforms is a primary question for any given rover site.Considerable effort has been made in modelling the ability of rovers to traverse loose sand and aeolian bedforms, but it is difficult to assess what scale of bedforms will be a hazard without understanding the shape of bedforms, which is hard to measure until the rover is in situ.Although remote sensing studies of Mars are able to detect and measure metre-scale landforms on the basis of 25 cm/pixel HiRISE data and, using stereo imaging-derived elevation models, to determine their heights to a precision of about 30 cm, this is still not precise enough to understand the detailed shape of bedforms that, while small, might still form hazards to rovers.In addition, local areas as textureless as dunes or sand sheets, or which contain repeated, similar morphologies such as TARs, are challenging for the stereo matching process, so the quality of DEMs can be poor for such terrains.Any knowledge of the scale of features that are traversable is particularly important when attempting to cross bedforms that are longer than the rover wheel-base; that is, when all six of the rover wheels are on the bedform.If the bedforms are set on top of bedrock, the height of the ridge crest provides a maximum depth to which the rover wheels can sink.In general, therefore, taller bedforms are a more serious concern.At time of writing, the landing site for ExoMars rover has yet to be determined."The mission's landing location will be chosen from two final candidates: Mawrth Vallis and Oxia Planum.Both sites contain aeolian 
bedforms such as TARs, but preliminary studies by our team have found little evidence for discrete, large dunes, although some dark sand-sheets are present. What is clear, however, is that the Oxia Planum site in particular contains zones with a very high density of very small aeolian bedforms, smaller than the size range generally defined for TARs, and morphologically similar to the plains ripples or smaller TARs seen in the MER Opportunity site. These small features can only be seen in HiRISE images viewed at full resolution. We have not thoroughly searched for these meter-scale, TAR-like bedforms at the Mawrth Vallis site, but preliminary observations show that they are present here too. Although the plan-view shape of meter-scale aeolian bedforms can be measured in HiRISE images, their height cannot: they are generally lower relief than the precision of HiRISE-produced digital elevation models, and are also too small for other methods used to estimate the height of aeolian bedforms on Mars to be applicable. Hence, it is not possible to determine the extent of the hazard from orbital remote sensing directly. However, we can instead examine a different dataset of morphologically similar ripples from the MER Opportunity rover traverse and use measurements of height vs. bedform length from these as an analogue dataset. This will not only help us to determine whether such features are likely to be hazardous to ExoMars, but could also provide information about their origin by comparison with similar terrestrial data. Note that in this study we refer to bedform length as the cross-bedform distance parallel to the bedform-forming wind; for further explanation, see Fig. 3 in Balme et al. In this study, we present new observations of meter-scale, ripple-like aeolian bedforms gathered during the Opportunity rover traverse, and present data for their height and length. The key result of our study is that an approximation of the height of mini-TARs can be obtained by measuring their lengths in plan-view using high resolution remote sensing data. We also present example bedform length data from two of the proposed ExoMars landing sites: Oxia Planum and Aram Dorsum. These data are included to provide an illustration of how the height-length data can be applied to the question of rover traversability. We do not aim to investigate every aspect of aeolian hazard to rover traversability, only the most generic measures that can perhaps be obtained from orbit, namely bedform height and planimetric size. Finally, having obtained height/length data for mini-TARs in the MER Opportunity site, we compare them to previous measurements of TARs. The results indicate that they are part of the same population of aeolian bedforms. At the time of writing, the Opportunity rover is still functioning on the surface of Mars, having travelled >45 km and been active for more than 4850 sols. For much of the first 30 km of its voyage, Opportunity moved across flat plains with metre-scale ripple-like aeolian bedforms superposed upon them. Opportunity acquired numerous stereo imaging data of its surroundings using both its scientific Pancam camera system and the navigational NavCam system. Using these data and the newly developed PRo3D™ software, we are able to produce 3D models of many areas of the surface at sufficiently high resolution to reliably establish the heights and lengths of many aeolian bedforms. The data used span the period from sol 550 to 2658. The Planetary Robotics 3D viewer was developed as part of the EU-FP7 PRoViDE project to
visualise stereo-imagery collected by rovers on the martian surface.Mosaics taken from the left and right eye of the camera systems are reconstructed using a Semi-Global Matching technique for the Pancam and Navcam, and a Hierarchical Feature Vector Matching technique for the Mastcam data, using the PRoViP tool, developed by Joanneum Research.These are then globally oriented using SPICE kernels and Planetary Data Systems labels and converted to Ordered Point Clouds for visualisation and analysis in PRo3D, directly in the IAU Mars-centred coordinate frame.PRo3D allows for measurement and interpretation of the dimensions and geometries of features in the landscape, using simple point, line, polyline, polygon, and best-fit plane features, from which relevant attributes can be extracted.For a detailed summary of the PRo3D software and its application to geological analysis of martian rover-derived stereo-imagery, refer to Barnes et al.Whilst there are some inherent spatial measurement errors in the photogrammetric reconstructions, due to matching artefacts, camera calibration, and temperature variations, the current version of PRo3D does not incorporate these values specifically into the outputted measurements.There is however, a quantifiable error in pixel range determination, the MER Pancam having 5.7 mm of range error at 5 m distance, 23 mm at 10 m distance and 92.4 mm at 20 m distance.The lateral error is smaller, especially at longer range.These discrepancies can result in distortion of the 3D surface, but are overall rather small.We therefore do not include this error in our PRo3D measurements of aeolian bedform dimensions.Forthcoming versions of PRo3D are embedding the expected metrology error as known from image geometry and scene distance into the measurement tool directly, such that every measurement will have an associated measurement error attached.Physical calibration of PRo3D has not been done with the MER camera system, but is being performed for the ExoMars PanCam system.Nevertheless, some calibration can be done in-situ on Mars.To provide general calibration data for PRo3D, and to verify that the tool gives accurate measurements, we used PRo3D to measure the spacing between MER rover tracks and the diameters of holes drilled by the MER Rock Abrasion Tool.The RAT holes are 4.5 cm wide, and the lateral spacing between the rover wheels is 1.06 m.We made eight measurements of five RAT holes, on an outcrop with a slope of >20° at a range of only a few meters from the rover.We obtained a mean RAT hole diameter of 4.8 cm, with a standard deviation of 0.03 cm.We measured rover wheel track separation at two sites, in each case using right-edge to right-edge measurement of only very well defined tracks from the rear wheels.This was done to avoid uncertainty caused by estimating the centre of a track, or using tracks overprinted by the rear wheels.At the first site, where the tracks were very well defined, and in the 3–5 m range from the rover, ten measurements yielded a mean spacing of 1.061 m with a standard deviation of 0.004 m. 
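A minimal sketch of this kind of calibration check (repeated PRo3D measurements of a feature of known size, summarised as a mean, sample standard deviation and percentage error) is shown below before turning to the second site. The helper function and variable names are illustrative assumptions, and the individual PRo3D readings are not reproduced here; only the known right-edge-to-right-edge wheel spacing of 1.06 m quoted above is reused.

```python
import statistics

def calibration_check(measurements, known_value):
    """Summarise repeated measurements of a feature of known dimension.

    Returns the mean, the sample standard deviation and the percentage
    error of the mean relative to the known value.
    """
    mean = statistics.mean(measurements)
    stdev = statistics.stdev(measurements)  # sample standard deviation
    pct_error = 100.0 * (mean - known_value) / known_value
    return mean, stdev, pct_error

# Usage with a list of track-spacing readings in metres (not reproduced here):
# mean, sd, err = calibration_check(track_spacing_readings_m, known_value=1.06)
# print(f"mean = {mean:.3f} m, sd = {sd:.3f} m, error = {err:.2f}%")
```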
For the second site, where the tracks were slightly less well defined, and in the 2.5–7.5 m range from the rover, 15 measurements were made, giving a mean spacing of 1.066 m, with a standard deviation of 0.007 m.We conclude that PRo3D measurements of metre-scale objects in the 2.5–7.5 m range are accurate to within 1% of their true value, suggesting that measurement error is more likely to derive from manual digitization error, rather than inherent errors in the measurement tool.Finally, the good agreement of the RAT hole PRo3D measurements with their true size, even on steep slopes, provides reassurance that the vertical scaling is correct.Our measurements were made with the aim of generating a dataset that could be compared with orbital plan-view HiRISE measurements of bedform length.Hence, the approach was developed to generate a single representative value of height and length for individual bedforms such that they could be compared with orbital data.To minimise possible errors inherited from the 3D-model, we measured only bedforms that were close to the rover when observed.Some bedforms could not be well-resolved by the stereo matching used to produce the PRo3D dataset, leading to gaps in the 3D mesh, and in some PRo3D scenes only part of a given bedform was imaged.This meant that there were sometimes no bedforms that could be sampled in a given scene.The sampling strategy was therefore to measure all bedforms within ∼7.5 m of the rover that had near-continuous 3D model coverage in PRo3D, and for which reliable digitization seemed possible.For each candidate bedform analysed with the PRo3D software, the bedform ridge crest was first identified.Then, using a plan-view viewing angle in PRo3D, line objects were constructed extending perpendicularly from the centre of the ridge crest to the margin of the bedform.Each measurement line was refined using the full range of 3D viewing angles.We generally digitized to the edge of the sediment-covered area for isolated bedforms, using both contact with bedrock and visible breaks in slope to determine the edge of the bedform.For bedforms that coalesced with one another horizontally, we often used an oblique view to determine where the bedform slope ended by examining the cross sectional shape, or the lowest point between bedforms.Line construction was done for both sides of the bedform, starting at the same point on the ridge and ensuring that both lines were parallel to one another.The bedform length was found by adding the horizontal length of these two lines, and the height by averaging the difference in vertical heights along the lengths of each line – thus accounting for a gently sloping substrate.Five such measurements of h and l were made for each bedform and combined to give mean values for height and length, as well as a sample standard deviation.Each of the five measurements was made slightly apart from the others to provide an estimate of variability: ΔH and ΔL.This was done due to the difficulties in generating a representative measure of height and length of a bedform from a single measurement, and the inherent possibility of a single measurement having a higher possibility of digitization or 3D model error, and the converse problem: trying to generate representative a simple height and length measurement from a complex 3D model in a timely fashion.The five measurement method was used as a compromise between the two.Hence, we were able to identify bedforms that had consistently measureable heights and lengths by their small relative 
In addition, each bedform was classified into one of three classes: Type 1, sharp-crested, ripple-like features that show clear zones of substrate or bedrock between the bedforms; Type 2, sharp-crested, ripple-like features that are coalesced, such that no substrate or bedrock can be seen between them; and Type 3, uncommon, ripple-like bedforms with a more rounded crest shape. Examples of these classes are shown in Fig. 6. As a further check of the accuracy of the PRo3D measurements, and to test whether digitizing these features in HiRISE images would provide plausible data for bedform length, we plotted the length of each bedform as measured in PRo3D against the length of the same bedform measured in HiRISE images. To do this we created a Geographic Information System (GIS) project including a shapefile describing the MER Opportunity rover path from the Opportunity Rover Analyst's Notebook and the HiRISE images that covered the path. We then used this GIS to identify individual bedforms in the HiRISE images that matched those measured using PRo3D. Finally, we digitized each bedform in the GIS to obtain an equivalent length measurement to that made in PRo3D. To illustrate the frequency distribution of bedform lengths, and the possible effects this may have on rover traversability, we also constructed five circular study areas in the Oxia Planum candidate ExoMars landing site. The aeolian bedforms present in these study areas are shown in Fig. 7 at a scale of about 1:2500, which is about a factor of 2–3 lower than full-resolution HiRISE images. Fig. 7a–d shows individual study areas and Fig. 7f shows the local context for this region.
Nearly all the impact craters in Fig. 7e contain TARs, so one study area was specifically chosen to illustrate the distribution of TAR length in these areas, whereas the others focus on the smaller mini-TAR bedforms. The eastern part of the candidate landing site contains many such bedforms, over a much wider region than the topographically confined areas where TARs are found, so we chose four other sites in areas with varying densities of these mini-TARs to investigate the distribution and variability in the length of this type of bedform. Bedforms were digitized in ArcGIS® software using a simple line drawn perpendicular to the bedform ridge crest across the longest part of the bedform, in a similar way to that done for the comparison with PRo3D described above. Most of the bedforms were digitized at full HiRISE resolution, but often we had to “zoom in” to a scale of ∼1:500 to properly digitize the smaller features. Only one measurement was taken per bedform, but where bedforms appeared to comprise multiple, merged aeolian forms, one measurement was taken for each arcuate component of the compound form. A total of 119 bedforms were measured in the initial dataset. Some of the bedforms analysed had significant variability in height, but variability in measured length was much smaller. The standard deviation of the five length measurements taken for each bedform was less than 10% of the mean length, L, in more than 90% of cases, but about 30% of the height measurements had standard deviations of more than 50% of the mean height, H. We therefore provide two datasets: a raw dataset and a “filtered” dataset in which measurements with large standard deviations were excluded. Fig. 8a shows the mean height H of the measured bedforms plotted as a function of their lengths L for all 119 measurements in the initial dataset. The vertical error bars show the standard deviation on the height, based on five measurements; horizontal error bars are not shown, being very small. A simple, unweighted linear regression is provided, together with 95% prediction limits for the data, based on that regression. In Fig. 8b, the same plot is provided for the filtered dataset, in which only those bedforms for which ΔH/H was less than 0.5 were used.
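The regression and prediction limits in Fig. 8 follow the standard formulae for simple linear regression. A minimal sketch using SciPy is given below; this is an equivalent calculation rather than the original analysis code.

```python
import numpy as np
from scipy import stats

def height_length_fit(L, H, conf=0.95):
    """Unweighted linear fit of bedform height on length, with prediction
    limits for a new observation (a sketch of the Fig. 8-style analysis)."""
    L, H = np.asarray(L, float), np.asarray(H, float)
    n = L.size
    slope, intercept, r, p, se = stats.linregress(L, H)
    resid = H - (slope * L + intercept)
    s = np.sqrt(np.sum(resid**2) / (n - 2))          # residual standard error
    t = stats.t.ppf(0.5 + conf / 2, df=n - 2)

    def prediction_limits(L_new):
        L_new = np.asarray(L_new, float)
        H_hat = slope * L_new + intercept
        half = t * s * np.sqrt(1 + 1/n + (L_new - L.mean())**2 /
                               np.sum((L - L.mean())**2))
        return H_hat, H_hat - half, H_hat + half     # best estimate and limits

    return slope, intercept, prediction_limits
```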
Fig. 9 shows both the initial and filtered datasets split by bedform class. Other than class 2 bedforms appearing to be slightly larger and taller, the data for the three classes plot on the same trend, and thus seem to represent the same population of features. The lengths of bedforms measured in PRo3D were compared with the same measurements obtained using HiRISE remote sensing data. The error on each PRo3D length was taken to be the standard deviation of the five measurements used to obtain the length, L, of that bedform, and the error on each GIS-based measurement was estimated to be equal to 1 pixel. These data plot on a straight line of gradient 1 with little deviation. This provides confidence both that the PRo3D measurements are accurate and, because there is no noticeable change in how well the data fit the trend line for the smallest features, that measuring bedform length in HiRISE images is an acceptable sampling method, at least for bedforms > 1 m in length. If this were not the case, and the HiRISE data had substantial measurement errors for smaller bedforms compared to the more precise PRo3D measurements, it might be expected that there would be considerable scatter in the shorter bedform-length region of the plot, but this is not observed. Two additional parameters were extracted from this dataset: flank slope and asymmetry. Flank slope is defined simply as tan⁻¹(2H/L), i.e. the arctangent of the bedform height over its half-length, calculated in degrees and shown in Table 1. This measure obviously does not constrain the full range of slopes seen at individual bedforms, but instead provides a gross estimate of the types of slope a rover might encounter if trying to traverse across and through the loose material. Asymmetry was measured by making use of the fact that the length of each bedform was constructed from two lines, each measured from the same point on the bedform crest, but in opposite directions. Hence, these two measurements are “half-lengths”, such that the asymmetry value is simply defined as the magnitude of the ratio of the longer half-length to the shorter. Symmetrical bedforms will have an asymmetry of 1, with larger values showing increased asymmetry. An asymmetry value of 2 indicates a bedform that has one side twice as long as the other. If the bedforms are generally symmetrical, the asymmetry values will be strongly clustered around a value of 1. A population of asymmetrical bedforms will show a broader distribution with many higher values. As it is, the data are indeed strongly clustered around 1, with more than 50% of all bedforms having an asymmetry value of < 1.15. Asymmetrical bedforms are, in fact, rare, with less than 5% of the bedforms having an asymmetry of more than 1.5. This result remains true even when considering the smaller morphology-specific subsets of the data on their own. Flank slopes for classes 1 and 2 are generally 6–7°, with the class 3 bedforms being slightly shallower at 5.2°. However, only seven measurements were made of class 3 bedforms, and the mean slope is little more than one standard deviation away from that of the whole population, so we do not consider this to be a significant result. Five study areas in the Oxia Planum candidate ExoMars rover landing site were examined. The size-frequency distributions and a comparative summary plot for these data are shown in Fig. 11. As the distributions of the measured length data are not normal, the mean and standard deviation are not used to summarise the populations. Instead, the data in Fig. 11f are shown as comparative box plots.
A summary of the collected data is shown in Table 2. As can be seen in Fig. 11, there is a clear difference between the length distribution of the TAR-like bedforms and the mini-TARs. Although there are many small bedforms in area Oxia1, there are also a few tens of bedforms with lengths greater than 10 m. Bedforms of this scale are not found in the other areas; most of the bedforms found in those areas are less than 5 m in length. The data shown in Fig. 8 provide an approximation for the expected height of a martian mini-TAR-style bedform as a function of its length: the mini-TARs are generally about 15 times longer than they are high. Interestingly, the length/height trend of our data matches the length-height relationships for larger TARs. For example, when the data from Fig. 8b are plotted against the TAR data from Hugenholtz et al., the linear regression line passes through the approximate centre of the distribution of the TAR trend. This observation is consistent with the interpretation that mini-TARs are simply small TARs, as there is no evidence that they plot on a different trend. Another line of evidence to support the interpretation that mini-TARs are simply small TARs is that the mini-TARs generally have high plan-view symmetry. This is consistent with observations of TARs, which also have highly symmetric profiles, although it should be noted that these two datasets were acquired in slightly different ways: Shockey and Zimbelman relied on topographic profiles obtained from photoclinometry, rather than on the combination of photogrammetric 3D models and overlain imagery used here. Finally, the conclusion that the bedforms studied here are simply small TARs is reinforced by their morphology, the mini-TARs being almost identical in form to “simple” TARs as described by Balme et al., but two to three times smaller. Some TAR studies yield slightly divergent comparisons: for example, Shockey and Zimbelman also measured the heights and lengths of many TARs using more than 60 topographic profiles. They found TAR length/height ratios of 3.4–125 with a mean of 8.3, compared to our study result of ∼15. Similarly, although our length/height trend is visually a good match for the Hugenholtz et al. dataset, they find that, on average, TARs in their study are slightly steeper: ∼13 times longer than they are tall, compared to ∼15 times in our study. This could be due to the effects of slightly larger TARs, which appear to be steeper in their dataset, decreasing the mean. Hence, while our data appear to be consistent with an interpretation of mini-TARs being TARs, there is a possibility that the TAR length/height ratio is not scale-independent, and so smaller bedforms might represent a slightly different population. It has been postulated, for example, that TARs of different scales may form in different ways: smaller ones as megaripples, larger examples as small reversing dunes, although Hugenholtz et al. find little evidence for this in the population they studied. It should be noted that the TARs measured by Shockey and Zimbelman and by Hugenholtz et al. have mean lengths and heights that are nearly an order of magnitude larger than the mini-TAR bedforms examined in this study, and that the relative measurement error in vertical height will always be fairly large when using from-orbit photoclinometry or photogrammetry to determine the height of bedforms that are only 5–10 times higher than the pixel size of the imaging data from which the topographic data are generated.
Hugenholtz et al. present several lines of evidence to show that simple-morphology TARs formed in a similar way to megaripples on Earth, and we also find little evidence to show that mini-TARs are not simply small TARs. The small TAR-like bedforms examined here are also similar to terrestrial megaripples in their length/height ratios. Examples of terrestrial megaripple length/height ratios include ∼4–20 and ∼12.5–50. The mini-TARs are also similar to terrestrial megaripples in that they have low plan-view asymmetry. Our study adds further support to the idea that TARs form in a similar way to megaripples on Earth, which would confirm the suggestion that neither the reduced gravity nor the reduced atmospheric pressure should greatly alter the cross-sectional shape of aeolian bedforms on Mars compared to Earth. In addition to comparisons with orbital data, the mini-TAR-style bedforms can also be compared to other surface-based observations of similar features. For example, the ripples analysed by Lapotre et al. have mean wavelengths of 2.1–3.6 m, similar to those analysed here, but occur in a different setting. Unfortunately, height information is not available, so these are less useful for direct comparison with our data. Bedforms described as “megaripples” were traversed by the MSL rover. One particular example is a large bedform located at the mouth of a shallow valley. The valley was referred to as “Moon Light Valley” and the bedform as the “Dingo Gap megaripple”. The Dingo Gap megaripple is described as being ∼1 m high and having a wavelength of ∼7 m, although the rover elevation plot of the traverse across the bedform seems to suggest that the length of the feature is more like 12 m and its height about 0.6 m. Also, these elevation data neglect sinkage of the rover into the dune, which would tend to reduce the measured height. The length/height ratios from these two estimates of height and length give a range of 7–20, comparable to our mini-TAR data. Again, though, it should be noted that the setting of the Dingo Gap megaripple is dissimilar to the flatter, ‘plains’ setting of the MER Opportunity traverse: it appears perched on a saddle-like area at the mouth of the valley. The megaripples that MSL encountered within Moon Light Valley, slightly farther along the rover traverse than Dingo Gap, are arguably more similar to the MER Opportunity examples. Although they are confined within a valley, they are within a locally flat-lying area. These are described as having heights of 0.1–0.15 m with wavelengths of 2–3 m. Assuming that the wavelength is equivalent to the length for these ripples, the length/height ratios are ∼20, so they plot within the bounds of our dataset in Fig. 8.
Megaripple fields reported in two other parts of the MSL traverse are described by Arvidson et al. One ripple at Moosilauke Valley is described as being ∼0.4 m high, with a wavelength of ∼6 m. This is equivalent to a length/height ratio of 15, very close to the data we have collected from the MER Opportunity traverse. Other ripples in these two areas are described as having heights of 0.15–0.2 m with wavelengths of 2–10 m, and heights of 0.15–0.2 m with wavelengths of 2–3 m. Images of the Hidden Valley megaripples show that these ripples are ‘saturated’; i.e., the length measurement is approximately equivalent to the wavelength. The length/height data for these are also consistent with our data. The Moosilauke examples appear to be more widely spaced, with some bedrock between the bedforms. Hence wavelength is a different measurement to length and we cannot determine the length/height ratio. Nevertheless, these examples are of similar height to those seen in our study, and are quantitatively of similar length. To summarise, we suggest that there is no clear morphometric distinction between the population of bedforms we have called mini-TARs and that of TARs in general, but that there may be a gradational change in steepness from mini-TARs and moderately sized TARs to larger TARs, which appear steeper. It is also possible that the population of even smaller bedforms seen at Meridiani Planum, denoted “plains ripples”, is essentially the smallest part of the population of TARs in general. We find good agreement between the heights and lengths of the bedforms we measured along the MER Opportunity traverse and published data for ripple heights and lengths seen at the MSL Curiosity site. An important task for future work would be to use PRo3D to measure similar features at the MSL site to test whether small aeolian bedforms there really do have similar shapes to those measured here. As discussed above, aeolian bedforms can constitute a formidable hazard or barrier to rover locomotion. The measured height of an aeolian bedform can provide an estimate of its traversability: low bedforms with shallow slopes are less dangerous to traverse than taller, steeper ones. From the comparison with terrestrial and other martian megaripples above, and knowing the approximate height of similar aeolian bedforms predicted to be a hazard for a rover's safe operation, we could use such data to infer what lengths of aeolian bedforms, as measured in high-resolution plan-view remote-sensing images, are likely to correspond to heights of bedforms that pose a significant risk to a rover. Conversely, we could also use these data to infer what maximum lengths of bedforms in a given area are likely not to pose a risk to a rover, which could then be used strategically to help select landing sites in the first place, or to aid in long-term planning for rover operations. Although measurement of bedform heights can be more easily and accurately performed using in situ stereo observations, as demonstrated here, such a technique provides no forward planning capability. The measurements we have made, though, can be used to infer the heights of features seen on the surface from orbital data, and hence can help provide information for landing site selection, or for strategic mission operations.
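Given the fitted height-length relationship and its scatter, the "traversability length" corresponding to a rover height limit can be obtained by inverting the regression. The sketch below assumes the scatter about the trend is approximately normal, and it uses illustrative placeholder values (a 1:15 trend and ∼3 cm of scatter) rather than the published fit coefficients.

```python
import numpy as np
from scipy import stats

def traversability_lengths(h_max, slope, intercept, sigma,
                           probs=(0.05, 0.5, 0.95)):
    """For a rover height limit h_max (m), return the bedform lengths (m) at
    which the probability of the height exceeding h_max equals each value in
    `probs`. `slope`, `intercept` and `sigma` describe the height-length
    regression and its scatter; treating sigma as constant is a simplification
    of the full prediction-limit calculation."""
    out = {}
    for p in probs:
        # P(H > h_max | L) = p  =>  slope*L + intercept = h_max - sigma*z(1-p)
        z = stats.norm.ppf(1 - p)
        out[p] = (h_max - sigma * z - intercept) / slope
    return out

# Illustrative only: placeholder coefficients, not the published fit.
print(traversability_lengths(h_max=0.25, slope=1/15.0, intercept=0.0, sigma=0.03))
```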
For example, in Fig. 13, two hypothetical bedform heights are shown, 15 cm and 25 cm, along with the bedform length that each is equivalent to in measurements from MER Opportunity data. Here, we can see that if the threshold for a bedform being “too high” for a rover to cross is 25 cm, then the average length that matches this height is just under 4 m. This means that, assuming the bedforms at a given site belong to a similar shape population to those seen along the MER Opportunity traverse, a bedform recorded from orbit as having this length has a 50% chance of being higher than the traversability threshold. The green shaded area in Fig. 13 provides the 95% prediction limits of the data. Thus, again assuming the bedforms are similar to those seen along the MER Opportunity traverse, bedforms longer than about 4.75 m have a 95% chance of being too high for the rover to cross, while bedforms with lengths less than about 3 m have a 95% chance of being successfully negotiated. However, if the rover were only capable of crossing bedforms of 15 cm height, then ripples with lengths of just ∼2.5 m would have a 50% chance of being uncrossable, and those longer than about 3 m would have a 95% chance of being too high to traverse safely. Bedforms recorded from orbit as having a length of ∼1.75 m, however, would have only a 5% chance of being too high for the rover to traverse. Hence, based on our study, we can suggest that the “95% traversability bedform length” is ∼1.75 m for a rover that can cross bedforms up to 15 cm in height, but improves to ∼3 m for a rover that can cross 25-cm-high bedforms. In practice, we note that the material properties of the bedform will also be important, and add an element of uncertainty to this prediction. However, if laboratory-based simulations can mimic the grain size, material properties and morphology of martian aeolian bedforms, then the results of experiments used to determine the height of bedform that a rover can cross will be directly applicable to this approach. The small aeolian bedforms in Oxia Planum that we measured are similar in morphology and size to the plains-ripples/mini-TARs observed by MER Opportunity. The variations in length of these bedforms are similar to those seen in Meridiani Planum, with the exception of the TARs in area Oxia1. These data allow us to test the traversability of the various study areas in Oxia Planum, based upon a hypothetical rover bedform-traversability limit and the assumption that the Meridiani Planum bedforms are similar in shape to those at Oxia Planum. If we again assess the same two idealized cases, a rover capable of traversing 15-cm-high bedforms and a rover capable of traversing 25-cm-high bedforms, we can use the data presented in Fig. 13 to assess the likelihood that a rover has, for example, a 95% chance of traversing the bedforms in a given region. This is illustrated in Fig. 14, which compares the length data from Fig. 11 with the equivalent traversable length criteria from Fig. 13.
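Comparing a study area against such a criterion then reduces to counting the fraction of its digitized bedforms that fall below the relevant length threshold. A minimal sketch, with the area names and input arrays as placeholders, might look like this:

```python
import numpy as np

def percent_traversable(lengths_by_area, length_limit_m):
    """Percentage of digitized bedforms in each study area whose length is
    below the '95% traversability bedform length' for a given rover."""
    return {area: 100.0 * float(np.mean(np.asarray(lengths) < length_limit_m))
            for area, lengths in lengths_by_area.items()}

# e.g. ~3 m for a 25 cm height limit, or ~1.75 m for a 15 cm limit (see text):
# percent_traversable({"Oxia2": oxia2_lengths, "Oxia3": oxia3_lengths}, 3.0)
```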
Fig. 14 shows that if the rover can traverse 25-cm-high bedforms safely, then almost all the bedforms in the Oxia2, Oxia3 and Oxia4 areas are smaller than the “95% traversability bedform length”. This suggests that these would be relatively safe places for the rover to manoeuvre. On the other hand, if the rover could only traverse 15-cm-high bedforms safely, then the majority of the bedforms that it would encounter in these regions would be longer than the “95% traversability bedform length”, and so this would be a region where the rover would encounter many un-crossable aeolian bedforms. This study provides a framework for a method to determine the degree of hazard posed by aeolian bedforms for regions of Mars using only remote sensing data. Of course, there are some assumptions inherent to the method, including: (1) that the reference population of bedforms is representative of bedforms of this size and morphology on Mars in general. The similarity in shape and scale of the Meridiani bedforms to other 1–5 m long aeolian bedforms seen on Earth and Mars suggests this may not be a bad assumption, but clearly more measurements are needed. This could be done for bedforms at the MSL and MER Spirit sites and can perhaps be augmented in the future from ExoMars rover measurements. Such measurements could be used to test whether differences in bedform morphology and local bedrock geology between study areas affect the height-length relationship of aeolian bedforms. (2) That there is a quantifiable scatter in the height-length distribution that allows the probability of a bedform having a certain height to be predicted from its length. Such a relationship is shown in Fig. 8, but the reliability of this relationship could be improved by adding more data. As for point (1), this could be done by adding in observations from other martian rover and lander sites. (3) That there is a well-constrained set of laboratory or field investigation data that describes how successfully a given rover can cross aeolian bedforms, and that this can be simplified into a single “crossable height”. Such experiments should be performed to mimic the grain sizes, slopes and heights of martian bedforms as closely as possible. (4) Material properties such as grain size and armouring by coarse-grained deposits on the upper surfaces or in interdune areas are also likely to affect traversability, and the results of ongoing rover trials will help reduce the uncertainty created by this. Hence, bedform height/slope alone cannot be the sole focus of traversability studies. The illustrative remote sensing data we collected provide some predictions about the probable size distribution of aeolian bedforms in the Oxia Planum region, but require the results of rover testing to be able to constrain the likely traversability of a given area. Alternatively, this same method could be used to present rover builders or testers with data that could feed into the design of their vehicles, or the implementation of driving techniques. As with all population studies, the approach can be improved with a larger dataset, and these data should be collected from more diverse landing sites, or from diverse locations within a rover traverse. The very first measurements conducted by the ExoMars team at a dedicated rover locomotion test facility at RUAG show that, on the type of soils a vehicle may expect to encounter when traversing aeolian bedforms on Mars, the ExoMars rover is likely to experience excessive wheel slippage, leading to significant sinkage and slow locomotion progress on 8° slopes. However, when
engaging the wheel walking gait on the same terrain, the rover can safely negotiate 23° slopes in steady state —that is, irrespective of the slope length— with no appreciable wheel sinkage.These test results confirm that wheel walking can be an important asset for improving slope traversability and mission safety in general.However, as discussed before, wheel walking—if implemented—would be an “emergency” mode to be commanded from the ground, since progress is slower than with “normal” locomotion under moderate slip rate.The method described here provides a useful means to estimate under what circumstances and how often a vehicle can expect to experience locomotion difficulties based on the presence of aeolian bedforms that could be risky, or mission-resource expensive to negotiate.This would be the case when dealing with an extensive field of such bedforms, where the ripples would have to be navigated one after the other over considerable distances; for example, if the mission landed in the middle of a region of TARs.Small, meter-scale aeolian bedforms observed by the MER Opportunity rover in the Meridiani Planum region of Mars have lengths parallel to the bedform-forming winds that are ∼15 times their crest-ridge heights.They are generally symmetrical in terms of flank lengths and have gross flank slopes of approximately 6–7°.These measurements are generally similar to those made for megaripples on Earth, and are within the distributions of the same measurements made for TARs on Mars.The data are in agreement with the hypothesis that these martian bedforms formed by the same processes as terrestrial megaripples.We conclude that these smaller bedforms are therefore likely to be small TARs, and part of the same continuum of aeolian bedforms.The measurement of bedform length, made either from the surface or in remote sensing data, can therefore be used as a proxy for bedform height.Assuming that all morphologically similar bedforms of similar size on Mars follow this distribution, our results provide a means of assessing the height of small aeolian bedforms on Mars from plan-view orbital data alone.As the traversability of aeolian bedforms depends partly upon their crest ridge heights the measurement of bedform length can provide a first order approximation of the traversability of aeolian bedforms.Further, the distribution of measured bedform heights for a given measured bedform length can be used to constrain the likely range of bedform heights at a given site.Thus, if the maximum height of aeolian bedforms that a rover can traverse is known, this can be converted into an equivalent length of bedform, and hence be used to derive the probability that a rover can cross the bedforms seen from orbit at a given location.Aeolian bedforms similar in scale and form to those seen in Meridiani Planum occur in abundance on the plains regions of ExoMars Oxia Planum candidate landing site.Slightly larger aeolian bedforms, TARs, are also present at this site, but are mainly confined to topographic depressions such as impact craters.The TARs here have lengths of up to 15 m, whereas the bedforms measured in the other sites generally have lengths less than 4 m.The other candidate site, Mawrth Vallis, also contains extensive aeolian bedforms of similar size.Combining these measurements with the length-height distribution measured from Meridiani Planum, and assuming that this also applies to other regions of Mars, allows a prediction of whether the bedforms in Oxia Planum will be traversable by the ExoMars 
rover. If the ExoMars rover can safely traverse aeolian bedforms with 25-cm-high crest-ridges, then our measurements suggest that most of the bedforms found on the flat plains of Oxia Planum will be lower than this threshold height. Conversely, if the ExoMars rover will only be able to cross aeolian bedforms with 15-cm-high ridge crests, then most of the bedforms will have ridge crests higher than this. If wheel walking is implemented in the ExoMars rover design, this technique will considerably boost the rover's capacity for negotiating unconsolidated terrains and challenging slopes, but at a cost in time and energy. Our method could be adapted to provide an indication of under what circumstances, and how often, wheel walking would be likely to need to be engaged at a given candidate site. Although this study involved relatively few bedforms, the approach provides a template for how one aspect of rover traversability of aeolian bedforms could be assessed from orbital data. This could be important both for assessing future landing sites and for strategic planning for active rover missions. The technique could be improved by collecting more data on aeolian bedform length-height relationships from as many in-situ observations as possible, across a diversity of landing sites on Mars. As a starting point, the study could be broadened to use data from the MSL and MER Spirit missions, during which other examples of aeolian bedforms were observed.
Recent aeolian bedforms comprising loose sand are common on the martian surface and provide a mobility hazard to Mars rovers. The ExoMars rover will launch in 2020 to one of two candidate sites: Mawrth Vallis or Oxia Planum. Both sites contain numerous aeolian bedforms with simple ripple-like morphologies. The larger examples are ‘Transverse Aeolian Ridges’ (TARs), which stereo imaging analyses have shown to be a few metres high and up to a few tens of metres across. Where they occur, TARs therefore present a serious, but recognized and avoidable, rover mobility hazard. There also exists a population of smaller bedforms of similar morphology, but it is unknown whether these bedforms will be traversable by the ExoMars rover. We informally refer to these bedforms as “mini-TARs”, as they are about an order of magnitude smaller than most TARs observed to date. They are more abundant than TARs in the Oxia Planum site, and can be pervasive in areas. The aim of this paper is to estimate the heights of these features, which are too small to be measured using High Resolution Imaging Science Experiment (HiRISE) Digital Elevation Models (DEMs), from orbital data alone. Thereby, we aim to increase our knowledge of the hazards in the proposed ExoMars landing sites. We propose a methodology to infer the height of these mini-TARs based on comparisons with similar features observed by previous Mars rovers. We use rover-based stereo imaging from the NASA Mars Exploration Rover (MER) Opportunity and PRo3D software, a 3D visualisation and analysis tool, to measure the size and height of mini-TARs in the Meridiani Planum region of Mars. These are good analogues for the smaller bedforms at the ExoMars rover candidate landing sites. We show that bedform height scales linearly with length (as measured across the bedform, perpendicular to the crest ridge) with a ratio of about 1:15. We also measured the lengths of many of the smaller aeolian bedforms in the ExoMars rover Oxia Planum candidate landing site, and find that they are similar to those of the Meridiani Planum mini-TARs. Assuming that the Oxia Planum bedforms have the same length/height ratio as the MER Opportunity mini-TARs, we combine these data to provide a probabilistic method of inferring the heights of bedforms at the Oxia Planum site. These data can then be used to explore the likely traversability of this site. For example, our method suggests that most of the bedforms studied in Oxia Planum have ridge crests higher than 15 cm, but lower than 25 cm. Hence, if the tallest bedforms the ExoMars rover will be able to safely cross are only 15 cm high, then the Oxia Planum sites studied here contain mostly impassable bedforms. However, if the rover can safely traverse 25 cm high bedforms, then most bedforms here will be smaller than this threshold. As an additional outcome, our results show that the mini-TARs have length/height ratios similar to TARs in general. Hence, these bedforms could probably be classified simply as “small TARs”, rather than forming a discrete population or sub-type of aeolian bedforms.
91
Next generation in vitro liver model design: Combining a permeable polystyrene membrane with a transdifferentiated cell line
The use of membranes as cell scaffolds is of key interest in the development of in vitro drug screening assays.Cells cultured in membrane bioreactors experience a more in vivo-like environment than those in traditional two-dimensional cell culture .Culturing cells under physiologically relevant conditions can create more realistic and accurate metabolic responses to drug testing .By culturing cells on one side of a membrane, with bulk fluid flow on the opposite side, mass transfer rates become independent of the shear forces experienced by the cells .At the same time, fluid flow simultaneously allows for a constant and uniform supply of fresh media to the cells and offers efficient removal of waste metabolites and other extracellular products.The use of membranes in hollow fibre bioreactors also allows for simulation of specific organ functions: for example, human liver and kidney HFB models have been demonstrated .To exploit HFBs for cell culture applications, there is an intrinsic dependence on the consistency and quality of the membrane scaffolds themselves.Given the large number of possible bioartificial models which could be recreated in HFBs, reliance on a commercially available supply of hollow fibres does not currently offer the degree of refinement required in terms of material stiffness, pore size, porosity and surface chemical properties.Both biodegradable and non-biodegradable polymers have distinct and complementary properties when used as tissue culture scaffolds.In regenerative medicine, biodegradable scaffolds allow for the culture of cells as the eventual degradation of the scaffold into non-toxic constituents leaves behind the tissue engineered construct.However, non-biodegradable polymers are more appropriate for long-term cell expansion and bioartificial organs, where constant environmental support for an indeterminate amount of time is key.Traditionally, adherent tissue culture flasks are made from polystyrene, a durable, inexpensive non-biodegradable polymer that is established as a mechanically stable and biocompatible scaffold material .However, other polystyrene substrates are used extremely rarely, probably as a result of their hydrophobic and non-porous nature.There are limited examples of porous polystyrene membranes reported in the literature and to the best of the authors’ knowledge, no reports of polystyrene hollow fibre membranes suitable for cell culture at all.The existing reports either require the use of high pressures and supercritical fluids for membrane fabrication, or produce membranes with dimensions unsuitable for cell culture .Membrane production by phase inversion relies on polymer precipitation upon immersion in a non-solvent.The consequent de-mixing of the solvent-polymer dispersion into the non-solvent creates the characteristic porous network of a polymeric membrane .To enhance the formation of pores, incorporation of pore-forming agents into the solvent-polymer solution is a well-documented strategy .The chosen porogen normally has limited solubility in the solvent-polymer solution, but is readily soluble in the non-solvent, enabling removal by dissolution upon phase inversion.Typical porogens are inert and readily soluble in water, and include polyvinylpyrrolidone and polyethylene glycols.Salt crystals are often used as porogens in tissue engineering scaffolds, most typically when the aim is to create large pores, in the order of 200 µm, to enable cells to infiltrate and migrate into the scaffolds .The previously successful use of sodium 
chloride as a porogen in tissue engineering gives confidence in its use in our application – the production of a microporous polystyrene membrane using a salt porogen.Here we produce microcrystalline sodium chloride, using a method developed by Marshall , which to date has not been used as a porogen for polystyrene.We describe the manufacture of polystyrene flat sheet and hollow fibre membranes and analyse their physical properties.To demonstrate biocompatibility of the resulting membranes, we compare the viability and transdifferentiation potential of the pancreatic AR42J-B13 cell line on flat sheet porous polystyrene membranes, flat sheet non-porous polystyrene membranes, and traditional tissue culture polystyrene.B13 cells can be induced to convert from pancreatic to hepatocyte-like cells following culture with the synthetic glucocorticoid dexamethasone.The phenotype of the cells can be enhanced by co-culture with Dex and oncostatin M .The use of transdifferentiated B13 cells as a liver model has advantages over using hepatoma cancer cell lines or primary hepatocytes.Primary hepatocytes are difficult to obtain, cannot be expanded in vitro, and dedifferentiate rapidly in suspension .Meanwhile, hepatoma cell lines such as HepG2 cells can be sub-cultured successfully but show extremely low drug metabolism activity .However, B13 cells readily proliferate in vitro, and following transdifferentiation into HLCs they function at a level similar to freshly isolated rat hepatocytes .Recently published work also suggests B13 culture can be adapted to serum-free conditions, removing barriers to their clinical use .In this work we describe the culture of transdifferentiated B13 cells on our novel porous polystyrene membranes – a combination likely to help generate better in vitro liver models by creating more in vivo-like culture environments using physiologically relevant cells at high densities.Sodium chloride crystals were prepared as detailed by Marshall .Briefly, a saturated solution of sodium chloride was prepared and a 5% additional volume of reverse osmosis water was added.Four 25 mL aliquots of this solution were frozen in dry ice, then broken apart and shaken vigorously in 2 L pure ethanol at − 20 °C.Once the frozen salt was completely melted, the precipitate was collected by vacuum filtration, prior to lyophilisation.This product is referred to in this paper as microcrystalline sodium chloride.For comparison, dry sodium chloride crystals were thoroughly ground in a pestle and mortar.Sodium chloride crystal face lengths were assessed by analysing light micrographs, obtained using an IX51 microscope.Face lengths were measured using Cell^P software.Casting solutions were formulated using various mass ratios of polystyrene and microcrystalline sodium chloride, as detailed in Table 1.The casting solutions were prepared by first dispersing the appropriate mass of microcrystalline sodium chloride crystals in 20 g of n-methyl-2-pyrrolidone.5 g of polystyrene was then added to the mixture and agitated until fully incorporated.Flat sheet membranes were produced by immersion precipitation, using RO water as the non-solvent.Casting solutions were spread on 100 × 200 mm glass panes using rollers, then fully immersed in RO water and left to soak to allow for membrane precipitation.The spacing between roller and glass was fixed using 340 µm wire.The water was changed twice a day for 3 days, for solvent and salt removal.The membranes were air-dried and stored in a desiccator prior to use.Hollow fibres 
were prepared by a wet spinning technique detailed elsewhere. The casting solution-containing tank was well mixed prior to spinning to ensure uniform salt dispersion. RO water was used as the non-solvent, and the resultant fibres underwent the same washing and drying regime as the flat sheet membranes. To hydrophilise the membranes for cell culture applications, samples were exposed to oxygen plasma under a vacuum. Samples were placed in a capacitively-coupled plasma chamber and treated with oxygen plasma at a power of 25 W for 30 s with a flow rate of approximately 40 cm³/minute. Rat pancreatic AR42J-B13 cells are a sub-clone of the parent line AR42J. Cells were maintained in standard culture conditions, in complete medium consisting of Dulbecco's Minimum Essential medium, 10% foetal bovine serum, 1% L-glutamine and 1% penicillin-streptomycin, as previously described. Medium was changed every 2 days thereafter. For transdifferentiation to hepatocyte-like cells, B13 cells were seeded at a density of 10,000 cells/cm² and initially cultured under maintenance conditions for 24 h after seeding. After this time period, to induce transdifferentiation the maintenance culture medium was additionally supplemented for 14 days with 1 μM dexamethasone and 10 ng/mL Oncostatin-M. The supplemented medium was replaced every 2 days. A custom flat sheet membrane bioreactor was used to fix oxygen plasma treated membranes into place for cell culture applications. The bioreactor was designed to clamp membranes between two plates: one solid and one perforated with wells to allow for cell culture on the membrane surface. Construction of the flat sheet membrane bioreactor was performed in a Class II safety cabinet, and the module components and silicone gaskets were sterilised by autoclaving prior to assembly. PX membranes, initially treated with 70% ethanol, were sealed into the custom 24-well plate polycarbonate modules, between silicone gaskets, leaving a membrane surface area of 1.9 cm² per well exposed for cell culture. Membranes were allowed to fully dry, and were then sterilised by immersion in 1% antibiotic-antimycotic solution in phosphate buffered saline (PBS) at 4 °C for 24 h, and were subsequently rinsed 3 times in PBS prior to cell seeding. In order to gauge the effect that microcrystalline sodium chloride had on membrane mechanical integrity, flat sheet membranes were destructively tested on an Instron 5965 universal testing machine fitted with a 1 kN load cell. Dumbbell-shaped sections of the membranes were cut with a width of 4 mm and a narrow section length of 30 mm. The thickness of each sample was measured using a micrometer. The wider ends of the dumbbell shapes were clamped, and extension tests were performed at a rate of 0.5 mm s⁻¹ until sample failure. Ultimate tensile strength was calculated by dividing the maximum force at break by the sample cross-sectional area. Extension at break was gauged by finding the difference between the zero point and the distance moved at sample failure. The apparent Young's modulus was calculated from the gradient of the stress-strain curve.
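These three quantities can be recovered from the raw force-extension records as sketched below. The array layout, gauge-length handling and elastic-region fraction are assumptions made for illustration; in practice the calculations were performed from the Instron output.

```python
import numpy as np

def tensile_properties(force_N, extension_mm, width_mm, thickness_mm,
                       gauge_length_mm=30.0, elastic_fraction=0.1):
    """Ultimate tensile strength (MPa), extension at break (mm) and apparent
    Young's modulus (MPa) from a force-extension curve on a dumbbell sample."""
    force = np.asarray(force_N, float)
    ext = np.asarray(extension_mm, float)
    area_mm2 = width_mm * thickness_mm            # cross-section of narrow region
    stress_MPa = force / area_mm2                 # N/mm^2 == MPa
    strain = ext / gauge_length_mm
    uts = stress_MPa.max()                        # max force / area
    extension_at_break = ext[-1] - ext[0]         # distance moved at failure
    # fit the initial, nominally elastic part of the curve for the modulus
    n = max(2, int(elastic_fraction * len(strain)))
    modulus_MPa = np.polyfit(strain[:n], stress_MPa[:n], 1)[0]
    return uts, extension_at_break, modulus_MPa
```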
Contact angles of flat sheet membranes were measured on an OCA15 goniometer. 1 µL of RO water was placed on the surface of each membrane at room temperature. Contact angles were measured using integrated droplet image detection software and calculated based on the Laplace-Young equation. Approximately 10 mg of dry PX sample was placed in an open crucible in a TGA instrument, which was heated to 700 °C in an atmosphere of dry air, at a flow rate of 20 mL/minute. The heating rate was 10 K/minute. The mass signal was corrected to remove the contribution of the buoyancy effect, by subtracting the data from an identical run with the sample holder left empty. Two replicates were analysed for each membrane. Scanning electron microscopy (SEM) was performed on membrane samples and sodium chloride samples. Skin-layer top-down membrane samples and sodium chloride samples were prepared by lyophilising and gold sputter-coating. Membrane cross-sections were prepared by immersing the samples in liquid nitrogen, fracturing across the structure, and lyophilising, followed by gold sputter-coating. All samples were imaged at 10 kV using an SEM 6480LV microscope. SEM images of membrane surfaces were used to gauge the size distributions of surface pores. Images were converted to binary and analysed using ImageJ software. Measurements of area, perimeter and Feret's diameter were used to calculate the circularity of the pores. The product of circularity and Feret's diameter was used to give geometric pore diameters. Histograms were produced from the data and fitted using a 3-parameter Gaussian distribution. For each membrane formulation, measurements were taken from 8 representative images from 4 separate membranes.
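The post-processing of the ImageJ particle measurements amounts to the following, assuming the exported columns are pore area, perimeter and Feret's diameter; the circularity definition shown (4πA/P²) is the standard ImageJ one, and the Gaussian fit is an equivalent open-source calculation rather than the original analysis script.

```python
import numpy as np
from scipy.optimize import curve_fit

def geometric_diameters(area, perimeter, feret):
    """Geometric pore diameter = circularity x Feret diameter, using the
    usual ImageJ circularity definition 4*pi*area / perimeter**2."""
    area, perimeter, feret = (np.asarray(a, float) for a in (area, perimeter, feret))
    circularity = 4.0 * np.pi * area / perimeter**2
    return circularity * feret

def fit_gaussian(diameters, bins=20):
    """Three-parameter Gaussian fit to the pore-diameter histogram."""
    counts, edges = np.histogram(diameters, bins=bins)
    centres = 0.5 * (edges[:-1] + edges[1:])
    gauss = lambda x, a, mu, sigma: a * np.exp(-(x - mu)**2 / (2 * sigma**2))
    p0 = [counts.max(), np.mean(diameters), np.std(diameters)]
    popt, _ = curve_fit(gauss, centres, counts, p0=p0)
    return popt    # amplitude, mean diameter, spread
```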
Hollow fibre permeability was tested using nitrogen gas permeation at room temperature. Hollow fibre membranes were fixed into stainless steel modules by sealing the extracapillary space between the fibres and module walls at the inlet and sealing the fibre lumens at the outlet. This enabled a flow of gas into the lumen. Transmembrane pressure (TMP) was increased from atmospheric levels up to approximately 1.0 bar. Gas flow rates into the module were measured using a rotameter. Measurements were recorded at arbitrary TMPs for at least 4 comparable modules. Mean pore size was calculated using a previously developed method and correlation. The permeability of PX0 and PX40 hollow fibre membranes was also tested using water permeation at room temperature. For each experiment, a bundle of three fibres was fixed inside a glass module which allowed for two directions of outflow: through the fibre lumen and through the fibre walls. A clamp was placed over the retentate line, and the permeate flow output was measured by mass. The fibres were pre-wetted with 70% ethanol, which was then washed out with distilled water before recording data. The system was filled with distilled water and a degree of permeate flow was induced, to ensure the system was stabilised prior to recording data. Distilled water was pumped through the system as the clamp was closed in increments, and pressure and permeation were recorded. Cell viability was visualised using the LIVE/DEAD Viability/Cytotoxicity Kit. Cells were seeded at a density of 20,000 cells/cm², then incubated for 48 h at 37 °C in 5% CO₂. After incubation, the cells were washed gently in PBS before adding 1 μM calcein AM and 1 μM ethidium homodimer-1 in PBS, and incubated for 30 min. Fluorescence was visualised on an inverted microscope. The number of cells fluorescent with either calcein AM or ethidium homodimer-1 was counted for 6 independent fields of view (FOV) per replicate and normalised against the total number of cells in the FOV. A mean and standard error of the percentage of live and dead cells for each culture substrate were calculated from 3 independent experiments. B13 cells were cultured as previously described in Section 2.4 and cultures were maintained under both standard and transdifferentiation conditions, for 4 or 14 days respectively. After this time, cells were immunostained as previously described. Briefly, samples were washed in PBS and fixed with 4% paraformaldehyde in PBS for 20 min at room temperature. The cells were permeabilised with 0.1% Triton X-100 in PBS for 20 min and blocked in 2% blocking buffer for 30 min before incubation with primary antibodies overnight at 4 °C, followed by secondary antibodies, and subsequent staining with 2-(4-amidinophenyl)-6-indolecarbamidine dihydrochloride diluted 1:1000 in PBS. Antibodies were diluted as follows: rabbit anti-amylase 1:100, mouse anti-glutamine synthetase 1:300, rabbit anti-carbamoylphosphate synthetase-1 1:300 and rabbit anti-transferrin 1:100. Anti-mouse and anti-rabbit Alexa Fluor 488 conjugated antibodies, and anti-rabbit Alexa Fluor 594 conjugated antibodies, were used at a 1:500 dilution. B13 cells were cultured as previously described in Section 2.4. On day 13 of transdifferentiation treatment, the culture medium was changed to serum-free medium, supplemented with Dex and OSM. On day 14, the serum-free medium was removed from the cells and assayed for secreted albumin from the transdifferentiated HLCs, using a rat albumin enzyme-linked immunosorbent assay according to the manufacturer's instructions. Albumin secretion data were normalised by total protein. To quantify this, cells were washed with PBS, then lysed with RIPA Lysis and Extraction Buffer containing a 1:100 dilution of protease inhibitor cocktail. Total protein in the subsequent cell lysate was quantified using the Pierce BCA assay kit. Data are quoted as mean ± standard deviation unless otherwise stated. Statistical analysis was performed using one-way analysis of variance with a post-hoc Holm-Sidak test, using SigmaPlot 12.3, unless otherwise stated. A value of p < 0.05 was considered statistically significant.
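An open-source equivalent of this statistical workflow is sketched below using SciPy and statsmodels. Note that SigmaPlot's Holm-Sidak post-hoc procedure is based on the ANOVA error term, so the independent pairwise t-tests shown here are an approximation rather than a reproduction of that test.

```python
import itertools
from scipy import stats
from statsmodels.stats.multitest import multipletests

def anova_holm_sidak(groups, alpha=0.05):
    """One-way ANOVA followed by Holm-Sidak-corrected pairwise comparisons.
    `groups` maps substrate name -> list of replicate values."""
    f, p = stats.f_oneway(*groups.values())
    pairs = list(itertools.combinations(groups, 2))
    raw_p = [stats.ttest_ind(groups[a], groups[b]).pvalue for a, b in pairs]
    reject, p_adj, _, _ = multipletests(raw_p, alpha=alpha, method='holm-sidak')
    return f, p, dict(zip(pairs, zip(p_adj, reject)))

# usage sketch with hypothetical replicate values:
# anova_holm_sidak({"TCPS": [95, 97, 96], "PX0": [94, 96, 95], "PX40": [96, 95, 97]})
```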
Microcrystalline sodium chloride samples were prepared as described, and analysis of the size data shows the reproducibility of the method across different batches. The mean nominal face length in different batches ranged from 4.5 ± 1.2 µm to a maximum of 4.9 ± 1.4 µm, with an overall mean of 4.7 ± 1.3 µm. A representative image of the type used for sizing the crystals is shown in Fig. S1a. To confirm the absence of salt in the final membrane products, the samples were analysed by TGA. Between 300 °C and 400 °C, 95% of the mass of the membranes is lost, with the remaining 5% lost by 500 °C. This is in line with literature reports suggesting polystyrene is entirely burnt by 500 °C. The remaining mass of the samples is low and of the order of the detection limit of the equipment. It is extremely unlikely that any salt remains in the membranes. In addition, there is no difference between the TGA curves of PX0 and PX40 flat sheet membranes, nor between the TGA curves of HF membranes produced using different quantities of salt. The thicknesses of the membranes were measured as described in Section 2.6.1. Membranes were of consistent thickness, with a mean of 249 ± 20 µm. There was no significant difference in the thicknesses of membranes cast with different proportions of salt content. The membranes were cast at a thickness of 340 µm, indicating that a one-dimensional contraction of around 27% occurred during precipitation and drying. Ultimate tensile strength was calculated for each membrane type. The strongest material was the membrane containing no salt, which corresponds to the expectation that the membrane with no salt would be the least porous. Though the differences in ultimate tensile strength were only significant for PX40 and PX60 membranes with respect to the control PX0, the data show a trend of decreased strength with increased salt incorporation. ANOVA analysis revealed an overall trend significance of p < 0.001. The extension at break of the membranes also decreased with increased salt incorporation. The overall trend significance of p < 0.05 was lower than for the ultimate tensile strength, and discrete comparisons revealed no significance. The brittleness of the polymer was such that no sample yielding was apparent in the force curves generated. Finally, Young's moduli were calculated for the membranes, based on the elastic region of the stress-strain curves. The Young's modulus values for PX20, PX40 and PX60 were all significantly lower than the PX0 value. Young's modulus appeared to decrease with increased salt concentration, suggesting that the higher the salt concentration, the less stiff the resultant membrane. This again supports the assumption that increasing the salt in the casting solution resulted in more porous membranes and hence decreased mechanical integrity. Membrane hydrophobicity, before and after oxygen plasma treatment, was determined from the contact angle measurements performed on flat sheet membranes. The membranes cast from salt-containing solutions had significantly lower contact angles than the PX0 control membranes before oxygen plasma treatment. Following treatment, there was a significant reduction in the contact angles measured for all of the membranes, indicating a decrease in surface hydrophobicity. We determined the morphology of the flat sheet membranes by SEM. For PX0, the structure shows a clear narrowing of pores from bottom to top, and a distinct thin top layer is apparent. This skin layer, when viewed from above, has no pores. The sub-structure shows some uniformity in macrovoid width, with the bottom-most cavities having a maximum diameter of approximately 25 µm. For the membranes cast from salt-containing solutions, there is more distinction between the top skin layer and the porous structure below. There are larger macrovoids in the sub-structure, and the skin layer has been narrowed as a result, resulting in pores on the skin surface. There is increased pore interconnectivity, with PX40 showing particularly elongated pores spanning the whole membrane section, compared to the well-defined dual-layer structure shown in PX0. Analysis of the top-down SEM images revealed surface pore size distributions with distinct similarities for the membranes cast from salt-containing solutions. The PX0 membrane showed no evidence of surface pores in the SEM images analysed. Normal distribution fits of the data show that the pore size distribution is consistent for PX10, PX20, PX40 and PX60, with peaks around the 2 µm diameter range, and the different membranes all overlap in their pore sizes. The slightly smaller pore size measured in the PX10 membranes could be due to the lower proportion of salt in the casting solution, and thus a smaller outflow of saline solution escaping through the membrane skin. This would be consistent with a liquid-liquid de-mixing hypothesis, as opposed to the salt templating hypothesis, but further investigation would be required to confirm this possibility. This is considered further in the discussion. Surface pore density was calculated by SEM image analysis. The number of surface pores was clearly seen to increase with salt proportion. As the pore size remains consistent across the different membranes while pore frequency increases, the use of microcrystalline sodium chloride as a pore-forming agent at different concentrations provides a simple method to tailor membrane porosity while decoupling porosity from pore size. The morphologies of the PX10, PX20, PX40 and PX60 hollow fibres were examined using SEM. The lumens are central in the fibres, and as the polymer is in contact with the non-solvent on both the inner and outer surfaces during production, the macrovoid/skin structure observed for the flat membranes is mirrored across the fibre, with a skin layer on both surfaces. Pore connectivity is observed throughout the cross-section, and increased salt concentration appears to result in a less ordered structure overall.
This may be due to the process of formation being more stochastic in nature, driven by salt dissolution as well as the resultant saline outflow from the polymer. While fibres can be made using PX0, they are not analysed here due to their lack of porosity, which makes them unsuitable for use as membranes in a permeation-based system. Dimensions of the fibres displayed consistent uniformity. However, caution must be exercised, as small differences in lumen diameter result in sizeable differences in lumen surface area. This parameter is critical to the measurement of flux in pressure-driven membrane filtration, and for modelling properties such as solute diffusion through the structure. Nitrogen gas permeation of the hollow fibres was measured in order to quantify differences in fibre permeability. The flux of nitrogen gas through the fibres was measured at various pressures. The fibres produced from the different casting solutions gave distinguishable responses to this test and were tested to failure. The PX10 membrane was able to withstand transmembrane pressures of 0.9 bar. The maximum tolerated pressure decreased with increasing salt content, in line with the mechanical integrity data obtained from the flat sheet membranes. Linear regression analysis of the best-fit lines through each data set in Fig. 9 gives a measure of overall fibre specific permeability. In line with the expectation that increased salt content in the casting solution leads to a more porous membrane, the permeability of the fibres increases in line with salt content. It is possible that the value recorded for PX40 is an underestimation of the true permeability, as a result of greater variance in the fibre dimensions. The value for PX40, at 2.9 × 10⁵ L m⁻² h⁻¹ bar⁻¹, might be expected to be higher, midway between the values for PX20 and PX60. The mean pore size values show that while there is a direct correlation between increased salt content and mean pore size, the dimensions are considerably smaller than those measured in the top layer of the skin. This suggests that the mean pore size results from a combination of the thermodynamic interactions of the ternary system and the dissolution of the salt crystals. This is further supported by the fact that the mean pore sizes are four times smaller than the salt crystal size.
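The specific permeability values quoted above are simply the slopes of flux against transmembrane pressure. A sketch of that calculation is given below, assuming the measured flow rates have already been converted to L h⁻¹ and that the lumen surface area of the potted fibres is known; the function and argument names are illustrative.

```python
import numpy as np

def specific_permeability(flow_L_per_h, tmp_bar, lumen_area_m2):
    """Fibre specific permeability (L m^-2 h^-1 bar^-1): the slope of the
    best-fit line of flux against transmembrane pressure."""
    flux = np.asarray(flow_L_per_h, float) / lumen_area_m2   # L m^-2 h^-1
    tmp = np.asarray(tmp_bar, float)
    slope, _intercept = np.polyfit(tmp, flux, 1)
    return float(slope)
```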
11.All membrane surfaces showed the presence of attached cells with very high levels of viability, indicated by the presence of calcein AM staining and the absence of ethidium homodimer-1 staining.No significant differences in viability between the B13 cells on the different culture substrates were observed.Cells do not adhere to untreated PX or polystyrene surfaces due to their hydrophobicity, so only oxygen plasma treated membranes can be examined for cell viability and transdifferentiation.The ability of the B13 cell line to transdifferentiate to HLCs on PX membranes was assessed, using glass coverslips as a control substrate.Immunofluorescent staining showed that untreated B13 cells maintained expression of amylase, a pancreatic marker, but that B13 cells treated with Dex and OSM lost amylase expression and gained expression of the hepatic marker transferrin, the periportal hepatic marker carbamoylphosphate synthetase and the perivenous hepatic marker glutamine synthetase on all culture substrates.Treated cells also displayed an enlarged, flattened morphology, and populations of mono-nucleate and bi-nucleate cells, indicative of HLCs.This is consistent with previous observations of treated B13 cells on glass coverslips.No TFN, GS or CPS-1 expression was observed in the control untreated samples.Dex and OSM treated B13 cells were also shown to secrete albumin on both PX membranes and TCPS.Levels of secreted albumin were highest on PX40 membranes, followed by PX0.However, the differences between the substrates were not statistically significant.This study has demonstrated that it is possible to tailor the porosity of polystyrene membranes, both as flat sheets and hollow fibres, by varying the concentration of microcrystalline sodium chloride in the polymer casting solution.The varied mechanical properties of the membranes can be attributed to the reduced material-to-pore ratio with increased salt content.The pores within the membranes show similar arrangements to those in other phase inversion-cast membranes described elsewhere.The structural differences between PX0 membranes and those produced from the salt-containing solutions could either be a result of undissolved salt crystals bridging layers during the precipitation process, or of salt dissolution producing a saline solution which de-mixes in the membrane at a different rate to pure water.It is uncertain which of these mechanisms, or in what ratio, drives the pore structure formation, though a contribution by one or both leads to a disruption of the more organised, stratified layers identified in the salt-free membranes.In the templating hypothesis, pores are created as a result of polymer precipitation around porogen ‘templates’.It follows that the size of the pores should therefore be related to the size of the porogen.However, the surface pores observed in Fig. 
5 are much smaller than the ~ 5 µm microcrystalline sodium chloride.This could be due to salt crystals merely protruding into, or out of, the polymer surface, rather than occupying it completely and thus leading to smaller pores.It is also possible that the salt begins to dissolve before the polystyrene is fully coagulated, and therefore the observed pores are smaller than the measured size of the dry salt crystals.On the other hand, the difference in porosity between the different membranes may be due to the different de-mixing mechanisms of water-NMP compared to saline-NMP.The energy change caused by the dissolution of sodium chloride in water is also likely to have an effect on the de-mixing of the solvents.For a clearer understanding of this system it would be necessary to investigate the specific thermodynamics of this process.While determining the absolute molar ratios of the respective solvent components within a dynamic system is difficult and cannot be easily measured, it may be possible to elucidate a trend between varied concentrations of the system components.Within the membrane sub-layers, the observed macrovoids are much larger than the microcrystalline sodium chloride porogen, and the templating hypothesis is unlikely to dominate the structure formation here.The macrovoids also increase in size with increasing concentration of salt in the casting solution.The membranes themselves are formed by phase inversion, which has previously been suggested to be caused by one of two mechanisms: instantaneous liquid-liquid de-mixing in the immersed dissolved polymer, or delayed liquid-liquid de-mixing in the solubilised polymer, whereby film properties are not affected by phase separation.The transition between these thermodynamically-dictated states is a factor in the occurrence of macrovoids.One factor contributing to the enhancement of macrovoids is the specific pairing of solvent and non-solvent, with high mixing affinity leading to greater macrovoid formation.This can also be achieved with the inclusion of solvent in the coagulation bath.In short, shifting the ternary system to a state of instantaneous de-mixing contributes to macrovoid formation.While macrovoids in membrane structures can sometimes be seen as unfavourable, as they may result in mechanical weaknesses in high pressure operating systems, for the in vitro liver model applications described here the membranes would be kept in low shear, low pressure environments.In these environments, an open, macrovoid structure is desirable to maximise perfusion across the membrane.Viability staining of B13 cells on TCPS, PX0 and PX40 showed attachment to all biomaterial surfaces after 48 h, demonstrating excellent viability and very low numbers of dead cells.Oxygen plasma treatment of the polystyrene membranes significantly decreased the water contact angle measurements, indicating an increase in hydrophilicity and therefore allowing good cell attachment.Treatment of PX membranes with the antibiotic-antimycotic solution previously recommended for sterilising PLGA membranes prior to culture is suitable for sterilisation, as no infections were detected over the 14 day culture period.Treatment of the B13 cells with Dex and OSM on PX membranes over 14 days induced transdifferentiation towards a hepatic phenotype.There was a distinct loss of the pancreatic phenotype shown through loss of expression of the pancreatic marker amylase, replicating the response observed on glass.Furthermore, expression of the hepatic 
markers TFN, CPS-1 and GS were found to be induced in the Dex and OSM treated cultures, and not the untreated samples on all culture substrates.This is a significant observation as it shows that the loss of pancreatic phenotype coincides with induction of hepatic markers, as previously described in the literature ; and secondly, the culturing of B13 cells on PX membranes in complete B13 culture medium alone does not induce transdifferentiation of B13 cells to HLCs.Transdifferentiated HLCs cultured on PX membranes were also able to demonstrate functional capability by secreting serum albumin into the culture medium.The amount secreted was slightly higher from cells cultured on PX membranes than on TCPS controls, but this difference was not significant.Overall it was shown that PX membranes supported B13 attachment, viability and function at levels equivalent or greater than glass and TCPS controls, suggesting that these materials are ideally suited for use in cell culture applications – specifically for the generation of bioartificial liver devices based on membrane bioreactors.Indeed, PX40 hollow fibres have already been applied in such a system .The fibres could be of interest for incorporation into commercial HF systems such as FiberCell, Terumo or Cellab, and in theory, any HF application where cells are currently cultured on standard tissue culture polystyrene.This work describes for the first time the use of microcrystalline sodium chloride as a porogen in the development of a porous polystyrene membrane.Porous membrane formation was achieved under mild and economic conditions, resulting in a cost-efficient process.Varying the concentration of the porogen in the casting solution allowed control over the final membrane porosity, with a higher concentration resulting in a more porous membrane.However, average pore size was not affected by the change in porogen concentration, nor were the dimensions of the resultant membranes.Oxygen plasma treated polystyrene flat sheet membranes have been shown to support cell attachment and viability comparably to TCPS.The ability of the B13 cell line to transdifferentiate to HLCs when cultured on the developed PX membranes has also been established.Further work is necessary to investigate B13 cell biological function and drug metabolism behaviour on PX hollow fibres, but the work presented here suggests the combination of B13 cells with PX membranes could be a valuable tool in the development of improved bioartificial liver models and devices.
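The fibre permeability estimate described above (nitrogen flux recorded at several transmembrane pressures, with the slope of the flux versus pressure regression reported as the specific permeability in L m−2 h−1 bar−1) can be illustrated with a short sketch. The flux values, fibre dimensions and helper names below are illustrative assumptions rather than the measured data reported here.

```python
# Minimal sketch of the fibre permeability calculation described above, assuming
# hypothetical measurements: nitrogen flow is normalised by the lumen surface area
# (the parameter flagged as critical above) to give a flux, and the slope of the
# flux-pressure regression is reported as the specific permeability.
import numpy as np

def nitrogen_flux(flow_L_per_h, lumen_diameter_m, fibre_length_m, n_fibres):
    """Flux (L m^-2 h^-1) = measured gas flow divided by total lumen surface area."""
    lumen_area_m2 = np.pi * lumen_diameter_m * fibre_length_m * n_fibres
    return flow_L_per_h / lumen_area_m2

# Illustrative flux readings for one fibre type at increasing transmembrane pressures.
pressure_bar = np.array([0.1, 0.2, 0.3, 0.4, 0.5])
flux_L_m2_h = np.array([0.3e5, 0.6e5, 0.9e5, 1.2e5, 1.5e5])

# Least-squares best-fit line through the data; the slope is the specific permeability.
slope, intercept = np.polyfit(pressure_bar, flux_L_m2_h, 1)
print(f"specific permeability ≈ {slope:.2e} L m^-2 h^-1 bar^-1")
```

Fitting one line per fibre type in this way would reproduce the comparison of specific permeabilities across PX10, PX20, PX40 and PX60 described above.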
Herein we describe the manufacture and characterisation of biocompatible, porous polystyrene membranes, suitable for cell culture. Though widely used in traditional cell culture, polystyrene has not been used as a hollow fibre membrane due to its hydrophobicity and non-porous structure. Here, we use microcrystalline sodium chloride (4.7 ± 1.3 µm) to control the porosity of polystyrene membranes and oxygen plasma surface treatment to reduce hydrophobicity. Increased porogen concentration correlates to increased surface pore density, macrovoid formation, gas permeability and mean pore size, but a decrease in mechanical strength. For tissue engineering applications, membranes spun from casting solutions containing 40% (w/w) sodium chloride represent a compromise between strength and permeability, having surface pore density of 208.2 ± 29.7 pores/mm2, mean surface pore size of 2.3 ± 0.7 µm, and Young's modulus of 115.0 ± 8.2 MPa. We demonstrate the biocompatibility of the material with an exciting cell line-media combination: transdifferentiation of the AR42J-B13 pancreatic cell line to hepatocyte-like cells. Treatment of AR42J-B13 with dexamethasone/oncostatin-M over 14 days induces transdifferentiation towards a hepatic phenotype. There was a distinct loss of the pancreatic phenotype, shown through loss of expression of the pancreatic marker amylase, and gain of the hepatic phenotype, shown through induction of expression of the hepatic markers transferrin, carbamoylphosphate synthetase and glutamine synthetase. The combination of this membrane fabrication method and demonstration of biocompatibility of the transdifferentiated hepatocytes provides a novel, superior, alternative design for in vitro liver models and bioartificial liver devices.
92
Miniaturized whole-cell bacterial bioreporter assay for identification of quorum sensing interfering compounds
Quorum sensing is a bacterial communication system that coordinates cooperative behaviors in bacteria in a population density-dependent manner by means of small chemical signals.QS has been shown to affect virulence factor production and biofilm formation in several bacterial species, including clinically relevant human pathogens.In contrast to conventional antibiotics, interference with QS is believed to put lower selective pressure on bacterial pathogens, reducing the chances of resistance development.Bacteria utilize a diverse set of QS systems.Whereas many QS signals are specific for a certain group or even species of bacteria, autoinducer-2 can be produced and detected by multiple bacterial species.Therefore, AI-2-mediated QS inhibitors potentially represent broad-spectrum antivirulence agents.AI-2 signal synthesis is catalyzed by the enzyme product of the luxS gene, which is widely distributed throughout the bacterial kingdom, including both Gram-positive and Gram-negative bacteria.In this reaction the AI-2 precursor, 4,5-dihydroxy-2,3-pentanedione (DPD), is formed from S-ribosyl-L-homocysteine.Whereas the luxS gene is highly conserved between different bacterial species, the AI-2 detection and signal transduction systems are more diverse.To date, three classes of AI-2 receptors have been described.The two best characterized are the members of the LuxP family, limited to Vibrio spp., and the LsrB family, found in many Gram-negative and Gram-positive bacteria.As some bacterial species lacking a known AI-2 receptor respond to the externally added signal, additional receptors must exist.The gene for the LsrB receptor is part of the AI-2-regulated lsr operon, encoding proteins involved in regulation of gene expression, as well as internalization, processing and degradation of AI-2 molecules.This system is more widespread than the LuxP system and is present in human pathogens such as Shigella dysenteriae, Shigella flexneri, Salmonella spp. and Escherichia coli.These species share a common mechanism of signal detection, which has been studied in detail in E. 
coli.AI-2 signal accumulation correlates with bacterial population increase, reaching the maximum level at the middle-late exponential phase.When a threshold concentration is reached, the signal triggers expression of the lsr operon.This results in accelerated expression of the Lsr transport system and a rapid decline of the AI-2 signal in the medium.Lately, significant amounts of effort have been directed towards the discovery of compounds interfering with the AI-2 QS pathway.The activity of compounds can be evaluated in cell-free systems.In addition, the effect on the expression of virulence factors, motility or biofilm production by the pathogen of interest is typically demonstrated.However, these methods are indirect, as virulent behaviors are in many cases not solely regulated by QS.To facilitate the drug discovery process, the use of reporter bacterial strains is highly beneficial, as it allows the detection of compounds showing activity through a specific, QS-mediated mechanism.As bioreporters are whole-cell systems, compounds with toxic properties or low cell permeability can be ruled out.A number of reporter strains have been established to identify novel molecules interfering with AI-2-mediated QS.Although the majority of these strains are designed to detect QS interference with the components of the LuxP system, there are a few using the clinically more relevant LsrB system.However, none of them has been used in a high-throughput screening format.Here we report the optimization and validation of a high-throughput whole-cell bioreporter assay for the identification of novel small molecules interfering with the AI-2 quorum sensing pathway.The E. coli LW7 pLW11 strain has the lacZ gene under the lsr promoter and therefore produces β-galactosidase in response to externally added DPD.The assay reveals agonistic or antagonistic activity.This strain has been previously utilized to measure the QS response of DPD analogs.However, the originally reported method requires a high amount of test compound.Moreover, the β-galactosidase expression was measured by the traditional Miller assay, which is a time-consuming multistep process.Here we adopt the simplified single-step detection procedure introduced by Schaefer et al. and scale the assay down to 96- and 384-well plate format.As emphasized in the review by Defoirdt et al., one of the main limitations of using bioreporter strains as instruments to detect QS interference is their inability to exclude compounds with an unspecific mode of action.To overcome this limitation, we incorporated into the assay a control strain where β-galactosidase is expressed in a QS-independent manner.The E. coli LW7 pLW11 bioreporter strain was kindly provided by Prof. William E. Bentley, University of Maryland, USA.The strain does not produce either its own AI-2 or β-galactosidase.The pLW11 plasmid is a pFZY1 derivative, containing the lacZ gene under the control of the quorum sensing-related lsrACDBFG promoter.The control strain E. coli pBAC-LacZ was a gift from Keith Joung.It is derived from E. coli DH5α by introducing a low copy number β-galactosidase plasmid under the lac promoter.These strains were cultured in Lysogeny Broth medium supplemented with ampicillin 100 μg/ml and kanamycin 50 μg/ml for E. coli LW7 pLW11, or with chloramphenicol 12.5 μg/ml for E. 
coli pBAC-LacZ.DPD analogs A1, A2, A3, A4, A5, A6 and A7 were synthesized according to a previously reported procedure.4-chloro-2-phenylamino-benzoic acid was purchased from Molport.The compounds were first dissolved in DMSO at 10–100 mM concentration and stored at −80 °C.DPD was purchased from Carbosynth, PopCulture™ reagent and rLysozyme™ from Millipore.Minimal essential medium, β-galactosidase, o-nitrophenyl-β-D-galactopyranoside and inorganic salts for buffer preparation were obtained from Sigma.The assay was performed in flat bottom clear polystyrene 96- and 384-well plates.Lysogeny Broth and Tryptic Soy Broth media were obtained from Becton Dickinson.Overnight liquid culture of E. coli pBAC-LacZ was centrifuged at 3500g for 10 min and resuspended in PBS.Bacteria were diluted to a concentration of 5 × 10⁸ cfu/ml and added to a 96-well plate, 100 μl/well.The plates were analyzed for β-galactosidase expression immediately or after one freeze-thawing cycle, following the procedure described below.Chicken egg white lysozyme or rLysozyme™ was added to the detection mix.The assay outline is represented in Fig. 1.Overnight culture of E. coli LW7 pLW11 was diluted 1:50 in fresh antibiotic-supplemented LB medium and incubated at +30 °C, 200 rpm for 4–5 h to medium/late logarithmic growth phase, as controlled by turbidity measurements using a DEN-1B densitometer.After centrifugation bacteria were diluted in the appropriate assay medium to the 2× final concentration.DPD was added to half of the suspension.Then DPD+ and DPD- suspensions were distributed into vials, 96-well plates or 384-well plates, depending on the assay format.Test compounds were prepared in the appropriate assay medium to the 2× final concentration and then added to the vials or well plates.Vials/plates were incubated for 2 h at 37 °C, with shaking at 200 rpm or 500 rpm.The samples from vials were then transferred into 96-well plates, 100 μl/well, for analysis.Other samples were analyzed in the same plate which was used for the assay.Absorbance at 600 nm was measured, and the plates were frozen at −20 °C overnight before the analysis.For unspecific β-galactosidase inhibitory activity measurement, the same procedure was used with the E. coli pBAC-LacZ strain.All the samples were tested in the absence of DPD.In the activity and native response calculations, s = compound-treated sample, c = DMSO control, MU = Miller units, max = the sample in the presence of DPD and min = the no-DPD sample.Bacterial QS is a potential target in antivirulence drug discovery that has been intensively investigated over the past decade.To facilitate the discovery of small molecule inhibitors of the AI-2 QS system we aimed to set up an E. coli-based bioreporter QS interference assay in HTS format.In the course of the optimization process we incorporated into the assay a β-galactosidase detection procedure compatible with polystyrene plates; selected optimal assay conditions to be used in 96- and 384-well plates; evaluated assay compatibility with different media; introduced a control strain for false-positive detection; and validated the assay performance using a set of known AI-2 QS inhibitors.Efficient lysis of bacterial cells is an essential prerequisite for adequate evaluation of reporter gene expression in both the bioreporter and control E. 
coli strains used in this study.To adapt the assay for multiwell plate format and to reduce the number of steps, we tested the β-galactosidase detection procedure reported by Schaefer et al.In this method all assay steps are conveniently performed in the same 96-well plate, and no sample transfer is required.The lysis step is combined with β-galactosidase detection and is achieved by a combination of PopCulture™ reagent and lysozyme added into the detection mix.However, in our hands this procedure was not sufficient for complete lysis of bacterial cells.Only an approximately 2-fold increase in β-galactosidase expression was observed with 0.5–1 μg/ml of chicken egg white lysozyme.Similarly, only a minor increase in signal was observed when rLysozyme was used.We also tested whether cell lysis is more efficient when treatment with lysozyme was performed before detection as a separate step, but no difference to the samples where the lysis and detection steps were combined was observed.In contrast, performing a single freeze-thawing cycle prior to the addition of the β-galactosidase detection mix increased the signal 5–6 times.The presence of lysozyme in the detection mix did not further improve the signal of the samples subjected to freeze/thawing.Therefore, a single freeze/thawing cycle in combination with the PopCulture™ reagent present in the detection mix resulted in the most efficient bacterial lysis and was used in further experiments.As the E. coli LW7 pLW11 bioreporter strain does not produce its own AI-2 signal, the expression of β-galactosidase is induced when external DPD is added.However, it must be noted that background β-galactosidase expression can be detected also in the absence of DPD.The fold increase in expression upon DPD addition is the native response to DPD.A higher native response results in increased sensitivity of the method.We furthermore questioned how the native response to DPD is affected by the number of bacteria/well and the DPD concentration.At all tested bacterial cell numbers the native response gradually increased with the DPD concentration.Bacterial concentrations of 2 × 10⁸ or 5 × 10⁸ cfu/ml demonstrated similar results, whereas at 0.5 × 10⁸ cfu/ml the DPD response was lower.The assay performance was monitored throughout the experiments by calculating the screening window coefficient.It is suggested that an assay is suitable for HTS if the Z' is >0.5.This criterion is met at DPD concentrations between 20 and 40 μM and bacterial numbers of 5 × 10⁸ cfu/ml.Although Z' increased with the DPD concentration, at 40 μM increased variation was observed between biological replicates.Therefore, a 20 μM DPD concentration and a bacterial concentration of 5 × 10⁸ cfu/ml were selected for the final protocol.We furthermore compared the performance of the assay in vials, 96-well plates and 384-well plates.In PBS, similar results were obtained for all assay formats, and Z' values obtained for multiwell plates were even higher in comparison to those in vials.When the assay was performed in LB, larger variation between repeats was observed for both types of multiwell plates in comparison to vials.However, in all cases the assay window was high enough to enable the assay. 
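The native response and screening window coefficient discussed above can be computed in a few lines. Because the article's display equations are not reproduced in this text, the sketch below uses conventional definitions (fold induction over the no-DPD background, percent inhibition relative to the DMSO control, and the standard Z'-factor) with purely illustrative replicate values; treat the exact normalisation as an assumption rather than the authors' published equations.

```python
# Hedged sketch of the assay read-outs: these follow standard definitions rather than
# the exact equations of the original article, and the replicate values are illustrative.
import numpy as np

def native_response(mu_with_dpd, mu_without_dpd):
    """Fold increase in beta-galactosidase expression (Miller units) upon DPD addition."""
    return np.mean(mu_with_dpd) / np.mean(mu_without_dpd)

def antagonistic_activity(mu_sample, mu_dmso_control):
    """Percent reduction of DPD-induced expression in a compound-treated sample (s)
    relative to the DMSO control (c); assumed normalisation."""
    return 100.0 * (1.0 - mu_sample / mu_dmso_control)

def z_prime(positive, negative):
    """Screening window coefficient: Z' = 1 - 3*(sd_pos + sd_neg)/|mean_pos - mean_neg|.
    An assay is commonly considered HTS-ready when Z' > 0.5."""
    pos, neg = np.asarray(positive, float), np.asarray(negative, float)
    return 1.0 - 3.0 * (pos.std(ddof=1) + neg.std(ddof=1)) / abs(pos.mean() - neg.mean())

# Illustrative Miller-unit replicates for wells with and without 20 uM DPD.
dpd_wells = [950.0, 1010.0, 980.0, 1005.0]
no_dpd_wells = [160.0, 175.0, 150.0, 170.0]
print("native response:", round(native_response(dpd_wells, no_dpd_wells), 1))
print("Z':", round(z_prime(dpd_wells, no_dpd_wells), 2))
```

With plate data organised this way, a library compound would be scored with antagonistic_activity() against the DPD-plus-DMSO control wells.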
"In the original method utilized by Sintim's group, the assay was performed in phosphate buffer.However, in the buffer cells are under nutrient-deprived conditions.Therefore, we investigated whether the assay can be performed in other media, under more physiological conditions."Our data demonstrate that for multiwell plates the response to DPD is smaller when the assay is performed in LB medium, when compared to PBS, and Z' values are not optimal: 0.3 in LB vs. 0.6 in PBS for 96-well plates and 0.3 vs. 0.9 for 384-well plates. "However, Z' > 0 indicate that the assay can be performed in LB medium as long as adequate number of replicates are used, which was further confirmed by our validation experiments.It must be noted, that the assay is not compatible with glucose-containing media, as glucose is known to negatively regulate lsr operon.Indeed, no increase in the β-galactosidase expression in response to DPD was observed in MEM cell culture medium, TSB and glucose-supplemented LB at concentrations 0.1 g/l and higher.Assay validation was performed with a set of DPD analogs with known QS interference properties.To exclude false-positives, the E. coli pBAC-LacZ control strain was included.In this strain, the expression of β-galactosidase is under the control of lac promoter.The strain is used to verify that the compounds do not interfere with β-galactosidase expression in QS-independent manner.4-chloro-2-phenylamino-benzoic acid was demonstrated to be a false-positive compound in one of the screening campaigns performed by our group.Therefore, it was added to this study as a representative example.All the analogs demonstrated AI-2 QS antagonistic activity.The lowest activity was demonstrated for compound A5.These results are in line with the data of Roy et al., 2010.None of the analogs inhibited β-galactosidase expression in the control strain, suggesting their specific action via QS-mediated mechanism.In the bioreporter strain, CBA showed strong antagonistic activity.However, it also strongly reduced β-galactosidase expression in the control strain, which proves the unspecific mechanism of action for this compound and shows it to be a false-positive.When added to the bioreporter strain in the absence of DPD, no increase in β-galactosidase expression was observed for any of the compounds with the exception of compound A4, demonstrating some agonistic activity.Similar results were obtained in LB and in PBS for all DPD analogs, but not for CBA.This compound inhibited β-galactosidase expression when the assay was performed in PBS, but in LB showed much lower activity in both bioreporter and control strains, demonstrating an example of the effect of the assay medium on the assay outcome.Comparison of the results obtained in 96- and 384-well plates in PBS as an assay medium shows the same activity profile for all tested compounds.The same result was obtained in LB medium.Here we report the optimization and validation of HTS-compatible bioreporter assay for screening of small molecule libraries for the interference with AI-2 quorum sensing pathway.The assay is based on E. coli strain containing a β-galactosidase reporter gene under lsr promoter, and therefore can be used for the detection of molecules targeting LsrB-type QS system, found in a number of clinically relevant bacterial pathogens, including E. 
coli, Shigella dysenteriae, Shigella flexneri and Salmonella spp.However, it must be noted that the results obtained in one bacterial species cannot always be extrapolated to other species with a QS system of the same type.For example, although the LsrB receptor and AI-2 processing proteins are homologous in E. coli and S. typhimurium, these organisms respond differently to most of the DPD analogs.The assay can be performed in either 96- or 384-well format, therefore enhancing the discovery of new antivirulence compounds.The methodology is compatible with PBS or LB medium, but not functional in glucose-containing media, due to repression of the lsr operon by glucose.The control E. coli strain is incorporated into the assay to verify that the compounds do not interfere with β-galactosidase expression in a QS-independent manner.
The continuing emergence and spread of antibiotic-resistant bacteria is worrisome and new strategies to curb bacterial infections are being sought. The interference of bacterial quorum sensing (QS) signaling has been suggested as a prospective antivirulence strategy. The AI-2 QS system is present in multiple bacterial species and has been shown to be correlated with pathogenicity. To facilitate the discovery of novel compounds interfering with AI-2 QS, we established a high-throughput setup of whole-cell bioreporter assay, which can be performed in either 96- or 384-well format. Agonistic or antagonistic activities of the test compounds against Escherichia coli LsrB-type AI-2 QS system are monitored by measuring the level of β-galactosidase expression. A control strain expressing β-galactosidase in quorum sensing-independent manner is included into the assay for false-positive detection.
93
Developing benders decomposition algorithm for a green supply chain network of mine industry: Case of Iranian mine industry
Over recent decades, with rapid industrialization and expansion in production, consumption and trading levels around the world, the necessity for supply chain management in several industries has increased.Manifold factors influence the necessity to utilize SCM, most importantly costs and environmental issues.Firms facing challenges related to costs, competition and environmental problems configure their supply chain design and logistics systems continuously.Depletion of natural resources, climatic problems, gas emissions, and natural and technical disruptions are the major environmental factors acting as an indispensable challenge for all industries, which can be addressed with green supply chain management.Green supply chain management is an important trend across most industrial activities that enables managers to address the adverse impacts of the traditional supply chain.GSCM is applied within two procedures: first, it is used to integrate environmental management principles; second, it applies a prevention or preservation policy to put an end to further depletion.Hervani et al. stated that GSCM minimizes all types of waste, such as emissions and chemical, hazardous and solid waste.Another advantage of GSCM is that this concept considers all phases of a good's life cycle, beginning from the extraction of raw materials, through the design, production and distribution phases, to the good's use by customers, and finally its disposal.The supply chain of mining includes several tightly related activities, such as operations, logistics and marketing functions, that need to be addressed appropriately to ensure high efficiency and profit.The mining industry is an important industry that can provide jobs for hundreds of people directly and indirectly.Mining activities mainly consist of extraction, processing, and transportation of minerals from mining sites to the marketplace.Over the years, arbitrary and unregulated mining activities have significantly contributed to environmental degradation.Consequently, a number of environmental issues such as disturbance of top soil, contamination of water bodies which stems from acid mine drainage, the release of cyanide and other toxic chemicals, atmospheric pollution, global warming with increasing emissions from greenhouse gases, visual intrusion, dust, vibrations, traffic, and noise may occur.Besides, low investment capacity, the use of traditional technologies and the poor working conditions of unskilled manpower lead to poor productivity and equipment maintenance, both of which can contribute greatly to several health and economic problems.Exploring the mining supply chain reveals more details about the different parts of its network.In recent years, the mining supply chain has advanced to consider not only the production process, but also the conditions influencing how material is delivered to markets.Following these improvements, the mining sector has focused more on concepts such as environmental management, sustainable development and corporate social responsibility.This paper aims to investigate a multi-objective green supply chain of the mining industry with fuzzy demand values.The objectives are to minimize costs and emissions, and to maximize customer satisfaction.The rest of the paper is organized as follows.In Section 2, we review the related studies in the literature.The problem description is presented in Section 3.Next, we propose a multi-objective mixed integer programming model.We devise Benders 
decomposition to solve the model.In Section 4, a case study of the Iranian mining industry is presented and evaluated with the model.Finally, we conclude in Section 5.The mining industry in Iran has been a significant economic activity for centuries and contributes greatly to economic growth annually.The government of Iran strongly supports activities related to the extraction of materials such as gold, metal, copper, bronze and so on.Mining companies in Iran are exclusively supported by the Ministry of Industry & Mining politically, technically and financially.Iran is a country rich in minerals, with manifold large mines dispersed all over the country.However, the processing and consuming centers for mineral materials have brought some challenges which have to be dealt with appropriately to ensure maximum efficiency and profit.Although some of these challenges are technical, a large proportion occur outside the mines, throughout their supply chain, and influence profit and environmental conditions.One of the important challenges has to do with the uncertainty in the demand and supply values.Uncertainties on both the supplier and demand sides are very common in the literature.Several optimization methods have been proposed to solve supply chain problems under uncertainty, such as mixed integer programming, stochastic programming, dynamic programming, exact methods and also meta-heuristics.Govindan et al. comprehensively reviewed and categorized the studies associated with supply chain network design under uncertainty.Investigating the studies in two steps, they first reviewed the research articles considering supply chain design and planning, and then went through the optimization tools used in the literature for networks under uncertainty, such as fuzzy sets.These challenges have notably affected the economic benefits of the mine industry, so the necessity of addressing the problem is clear.In this part, we categorize the literature on mining in Iran, GSCM in the mining industry and studies related to Benders decomposition.The mining sector in Iran plays an indisputable role in the growth of the economy and is one of the important infrastructures in the country.The dispersion of many diverse mineral resources such as iron, coal, oil, gas, chromite, copper, lead and zinc and manganese, and their sub-materials, has made this industry more reliable and profitable.Although Iran produces more than 68 types of materials and has a total of 84 billion tons of proven and potential mineral reserves, its mining industry is not technically developed and most of the processes are carried out traditionally.However, the Iranian mine industry has undergone a decline in growth and does not place among the top 10 countries concerning capacity and production in recent years.A possible answer to this can be found in the traditional mechanisms and the machinery that are used in Iranian mines.Increasing social and political pressure, sensitivity around end-of-life goods, transportation risks and environmental issues have led the field of supply chain management to focus more on improving supply chain performance with respect to the natural environment, changing the traditional supply chain strategy to a more environment-oriented strategy.Integrating the internal and external resources of organizations, green supply chain management has become of greater interest to industry and academia.There are a number of different definitions of GSCM in the literature.Zsidisin and Siferd, and Diabat and Govindan defined green supply 
chain as “the set of supply chain management policies held, actions taken and relationships formed in response to concerns related to the natural environment with regard to the design, acquisition, production, distribution, use, re-use and disposal of the firm's goods and services”.GSCM is also referred to as the sustainable supply chain.Amalnick and Saffar designed a green supply chain network considering environmental issues such as CO2 emissions.They proposed a fuzzy mathematical model aiming to minimize costs and environmental impacts.Recently, GSCM has received attention in several sectors such as manufacturing, electrical, automotive, mobile phone, etc.Addressing environmental issues related to mining industries in the literature and in industry shows growing attention to adopting cleaner production, environmental management and policies parallel to GSCM, adhering to governmental regulations, obtaining a social license to operate, attracting financial groups and increasing eco-efficiency.Ghose explored techno-economic and socioeconomic factors that inhibit environmental management practices in Indian small-scale mining industries.Ghose examined the schematics of environmental management plans adopted by small scale mines in India.Berkel introduced a framework for eco-efficiency in Australian mineral processing industries.Nikolaou and Evangelinos utilized SWOT analysis to analyze challenges in Greek mining industries to adopt environmental management practices.An examination of key megatrends and potential challenges of environmental sustainability in the mining industries of Australia, Catalonia and Tanzania has also been conducted.Muduli et al. examined the barriers to GSCM implementation using a graph theoretic and matrix approach.Sivakumar et al. used AHP, a decision-making method, and Taguchi loss functions for the evaluation and selection of green vendors.Govindan et al. investigated the influential strength of factors on the adoption of green supply chain management practices in Indian mine industries.It can be inferred from Table 1 that all the studies performed in the field of mine supply chain networks can be categorized by the following research areas: (i) modeling approaches: different types of models are applied to mine supply chain networks, most of them qualitative models such as environmental management systems, eco-efficiency, SWOT, interpretive structural modelling, Taguchi loss and DEMATEL, while mixed integer programming is presented for the first time for this problem; (ii) solution methods: exact solution and heuristic algorithms are normally applied to large supply chain networks; (iii) objective functions: the mathematical formulations developed for modeling supply chain networks are presented in the context of single-objective and multi-objective models; (iv) objective function types: the objective functions of mathematical models in supply chain networks mostly represent three common types, namely cost, CO2 reduction and customer satisfaction functions; (v) research scopes: most of the scopes considered in the literature for studying the mine supply chain were restricted to supplier and manufacturer, distributor, and environmental and ecological issues; and (vi) demand points: demand points in supply chain networks are usually of two types, crisp numbers and fuzzy numbers.Benders developed procedures for solving large-scale mixed-variable programming problems, especially those dealing with complicated variables.Rahmaniani et al. 
presented a survey of the Benders decomposition algorithm and its applications in optimization.Besides, the algorithm has been applied in a wide range of areas such as facility location, scheduling, mixed integer programming, distribution systems design, network design and optimization.Benders decomposition is widely used in supply chain management.Keyvanshokooh et al. proposed an accelerated stochastic Benders decomposition algorithm to solve a profit maximization model for a closed-loop supply chain network.They proposed a hybrid robust-stochastic programming approach which includes stochastic scenarios for transportation costs and polyhedral uncertainties for demands.Santoso et al. presented a solution method combining the sample average approximation scheme and an accelerated Benders decomposition algorithm to solve large-scale stochastic supply chain management problems.Pishvaee et al. used an accelerated Benders decomposition algorithm with three efficient acceleration mechanisms to solve a sustainable medical supply chain network under uncertainty.Zarrinpoor used accelerated Benders decomposition to solve a location-allocation model addressing a real-world health service network design problem and for the design of integrated water supply and wastewater collection systems.Uster et al. developed a dual solution approach to generate strong Benders cuts, introducing three new approaches for adding Benders cuts, to solve a multi-product closed-loop supply chain network.In comparison to the branch-and-cut algorithm, their solution method performs better in generating lower and upper bounds to find the optimal solution and in the amount of time needed to obtain it.Easwaran and Uster proposed a Benders decomposition algorithm including Benders cuts and tabu search heuristics to determine optimal locations of collection centers and remanufacturing facilities in a multi-product closed-loop supply chain of remanufacturing facilities, finite-capacity manufacturing, distribution and collection facilities that works with a number of retailers.Roni et al. presented a mixed integer programming model of the biomass co-firing supply chain for coal-fired power plants within a hub-and-spoke network.They used Benders decomposition to evaluate their model with numerical data.Shaw et al. 
considered Benders decomposition to solve a chance-constrained green supply chain model which addresses carbon emissions and carbon trading issues by taking into account uncertainties in the capacity of suppliers, plants and warehouses, and in demand.As we noted earlier, only a few studies have been done on modeling the mine green supply chain.Here, we design the mine green supply chain with its most important components, including suppliers, manufacturing centers, distribution centers and vehicles, while considering emissions.Then, we propose the mathematical formulations for the green supply chain of the mine industry.A Benders decomposition algorithm is developed to solve the model.For the first time in the literature, we also investigate our model for the case of the mine industry in Iran.In this section, a multi-objective mixed integer programming model is developed for a three-echelon Iranian mine supply chain including suppliers, manufacturers and distributors with multiple vehicle types.A schematic view of the model, including suppliers, manufacturers and distributors, is shown in Fig 1.Suppliers potentially provide the raw material.Once it is transported from supply centers to manufacturing centers, manufacturers can produce several new products.Products are transmitted to distributors, who send them directly to markets.Our proposed model aims to reduce costs, increase customer satisfaction and decrease vehicle emissions.The following are considered innovations of this paper, since they are not considered in the literature.We used fuzzy sets for demand values in order to consider the uncertainties and fluctuations in real-life problems.To make the problem closer to a real-life problem, specific types of vehicles with different capacities are considered.In reality, most of the time suppliers, manufacturing centers and distribution centers purchase or rent vehicles for the shipments.We consider this in our model because their costs are different and noticeably affect the final cost of the network.In order to reduce extra costs in terms of distance, we take into account the possibility of coalition between same-type centers.The innovations in this paper try to fill the gaps in the literature on supply chain networks of the mine industry.The model has been proposed based on the following assumptions: only the emission of the vehicle carrying loads on its ongoing route is considered; customers can demand and receive goods with no limitation; we did not consider an assignment problem for landfills; costs related to the carrying of goods are considered fixed across periods; suppliers have adequate raw material to deliver to manufacturers; inventory cost for suppliers is not considered in our model; inventory strategies differ in each manufacturing center; manufacturers, suppliers and distributors can utilize both purchased and rented vehicles simultaneously; and coalition of same-type centers, for reducing distance costs, is only considered for purchased vehicles carrying goods from suppliers to manufacturers.The index sets are: suppliers s = 1,…,S; manufacturers m = 1,…,M; distributors d = 1,…,D; vehicles v = 1,…,V; manufacturing locations i = 1,…,I; distribution locations j = 1,…,J; and time periods t = 1,…,T.Transportation costs, shortage costs, locating costs, purchase costs and production costs are minimized by the cost-related objective functions, while the remaining objective functions minimize the CO2 emissions of vehicles.Eq. ensures that the number of products sent to distribution centers is more than their demand.Eq. 
balances the inventory of raw materials at the supplier with the products sent from manufacturing centers to distribution centers.Eq. ensures the production time.Eq. ensures the production capacity of the product.Eq. shows the capacity limitation for raw materials in the manufacturing center.Eq. shows the capacity limitation for the products in the distribution center.Eq. represents the capacity limitation for vehicles transporting goods from the supplier to the manufacturing center.Eq. represents the capacity limitation for vehicles transporting goods from the manufacturing center to the distribution center.Eqs. and show the establishment of distribution centers.Eqs. and show the establishment of manufacturing centers.The available number of vehicles from the supplier to the manufacturing center is ensured in Eq.The available number of vehicles from the manufacturing center to the distribution center is ensured in Eq.Eq. ensures that suppliers send out raw materials.Eqs. and guarantee the usage of vehicles with respect to demand.Eq. assures that q1smtv, q2mdtv, q4mt, q5dtv, i1mt, i2dt, b2mt, b1dt are non-negative variables, and x1mi, x2dj, x3r, x4smtv, x5mdtv, x6dctv are binary variables.Eqs.- and-,Eq. shows the total objective function.The second objective function is restricted by ɛ, which is varied between ZZ2min and ZZ2max.In each iteration, the single-objective problem is solved for each value of ɛ.The series of solutions obtained forms the Pareto-optimal front of the multi-objective problem, shown in Fig 2.It is shown in Fig 2 that the solutions for the first and second objective functions are in conflict.In other words, an increase in one of the objective functions leads to a decrease in the other.The Benders decomposition algorithm, introduced by Benders, is one of the efficient and exact algorithms for solving large-scale MIP problems.Instead of solving the original large model, Benders decomposition reformulates the model by decomposing it into a pure integer program, namely the master problem, and a linear program called the sub-problem.Then the model is solved with a cutting plane method, using the solution of one in the other, until the optimal solution is achieved.The steps of the Benders decomposition algorithm are described in Fig 3.As illustrated in Fig. 
3, a series of projections, outer linearization and relaxations are the main components of Benders decomposition.In the first step, given the set of complicated variables, the primal model is projected onto the subspaces.Then, the dual model is formulated for the obtained result, where extreme rays and points define the feasibility cuts and optimality cuts for those variables.Next, by enumeration of all the extreme rays and points, a new equivalent model is formulated.Solving this model with a relaxation strategy considering feasibility cuts and optimality cuts leads to the MP and SP.Then, the problem is solved iteratively to reach the optimal solution.In other words, both the MP and the DSP are solved iteratively until they satisfy the convergence condition.The advantages of the Benders decomposition algorithm outweigh those of other algorithms, like meta-heuristics or heuristics, used for solving large-sized problems: it relies on strong algebraic concepts, the convergence of the algorithm and the achievement of the optimal solution are analytically proven, the decision maker can adjust the optimality gap precisely when needed, and other efficient solution methods can be employed while solving the decomposed problems within a BDA.Therefore, we implement Benders decomposition for the concerned model.The sub-problem, MP and dual sub-problem are formulated in this part.In this section, the proposed model and solution algorithm are applied to a real-life case of the IMSC and the corresponding results are presented and analyzed.Iran currently has ten active mines spread across different geographical parts of the country.Typically, one manufacturing center and one distribution center exist near each mine in Iran.The mines, manufacturing centers and distribution centers considered in this paper are illustrated in Fig 4.Transportation costs between the three parts of the IMSC are represented in Tables 2 and 3.The most important issue of the mine supply chain in Iran has to do with the policy taken by the private corporations extracting material from the mines.The main policy of these corporations is to plan their orders from manufacturing centers manually.Next, manufacturing centers do not consider costs related to transportation of material from the supplier.Their only criterion when choosing to buy material is the suppliers’ sale cost.They rarely consider criteria such as distance, vehicle type, vehicle emissions, etc.The aim of investigating the green supply chain of the mine industry in Iran is to reduce extra costs, increase the profit for suppliers, manufacturing centers and distribution centers, and decrease emissions from the vehicles that are used in transportation.Therefore, managers could make better decisions on the productivity of the mine industry in Iran.As stated in the problem description section, we consider demands as triangular fuzzy numbers.Q2 represents the deterministic demand value.For Q1 and Q3, we consider six demand-based scenarios in such a way that for Q1 we take demands as −15%, −20% and −40% of the deterministic demand, and for Q3 we choose the positive values +15%, +20% and +40% of the deterministic demand.Then, we solve the model under seven scenarios based on the fuzzy demand: once with the deterministic demand values and once for each of the six defined demand scenarios.Both R1 and R2, which respectively represent the number of distribution centers and the number of manufacturing centers to be established, are set to two.GAMS 24.9.1 optimization software is used to implement the model and the solution 
method, and all the other experiments are carried out on an Intel Core i7, 2.4 GHz computer with 8 GB RAM.The reason that the GAMS 24.9.1 optimization software and the CPLEX solver are used to solve the model has to do with the medium size of our linear programming model.In Table 4, the number of suppliers, manufacturing centers and distributors, the optimal objective value, the computational time using Benders decomposition and the computational time using the GAMS 24.9.1 software are presented.As shown in Table 4, the Benders decomposition solution approach effectively decreases the computational time of solving the case problem, by at most 74.4% in scenario 3.While the GAMS software solves the model in a reasonably short time, there may be an argument about the necessity of Benders decomposition in such situations.However, it is needless to say that Benders decomposition will be more valuable as the size of the problem grows in the future.Then, we employ five classes of problems in which the test instances (sets and parameters) are generated randomly.Generated values for the sets are shown in Table 5.Class 1 represents the original values for the case of Iran.As far as the results shown in Table 6 are concerned, it is visible that, unlike CPLEX, which solves the test instances in a very long time, the Benders decomposition algorithm not only solves all the test instances of different sizes to optimality with negligible gaps, but also solves them in a more reasonable amount of time.In addition, the CPLEX solver may solve the small-sized instances in good time, but its running time increases exponentially as the size of the problem increases.We carried out a series of sensitivity analyses to see how changes in parameters affect the objective function value.Fig 5 shows the results of the objective function with respect to changes in the unit purchasing cost of raw material from supplier s by manufacturing center m. 
By increasing the purchasing cost, the objective function value increases, too.Furthermore, the increase in the purchasing cost of raw material will also change the objective function value further, as the purchasing cost of the product from the manufacturing center by the distributor would probably increase, too.Manufacturing centers are of great importance in determining the price of a product, which is closely related to the production cost of that product.Hence, small changes in the production cost of a product will strongly influence the price of the final product.Here, we analyze our model with respect to an increase in the production cost of a product in the manufacturing center.The results in Fig 6 illustrate that an increase in the unit production cost of a product in the manufacturing center will worsen the objective function value.In this paper, we developed multi-objective mixed integer programming models in order to minimize the transportation costs, shortage costs, purchase costs, production costs and CO2 emissions of vehicles.In order to consider uncertainty in our model, demand values were modeled as triangular fuzzy numbers.Then we evaluated our models on a real-life case problem related to the Iranian mine industry.Providing a large employment opportunity and also producing a great mass of mineral products such as gold, metal and copper, mines are of great significance to the economy of Iran.Most of the manufacturing centers and distributors are established somewhat arbitrarily, based on their proximity to the mines, and other factors such as environmental, economic and geographical criteria are not taken into account.By taking these factors into account, the mine industry can easily avoid many unnecessary costs.Besides, mines are always under pressure from environmental activists, since the mine industry is one of the major causes of air pollution in Iran.For future studies, our model can be extended to take into account more environmental factors related to mining activities.In addition, a location-allocation problem can be formulated for choosing the optimal manufacturing and distribution center locations and the allocation of mines to those centers.Also, a robust model can be formulated by considering uncertainty in the flow among suppliers, manufacturing centers and distribution centers, or disruption in any echelon of the supply chain.
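The ɛ-constraint sweep described earlier, in which the emissions objective is bounded by ɛ, ɛ is varied between its minimum and maximum attainable values, and the single-objective model is re-solved at each step to trace the Pareto front, can be sketched with a deliberately small example. The toy linear program below is solved with the open-source PuLP package; it is not the mine supply chain MIP or its GAMS/CPLEX implementation, and all variable names and coefficients are illustrative assumptions.

```python
# Toy epsilon-constraint sweep (illustrative only): minimise a cost objective while
# the emissions objective is capped by epsilon, which is varied between its minimum
# attainable value and its value at the cost-optimal solution. Requires PuLP
# (pip install pulp); this is not the paper's mine supply chain model.
from pulp import LpProblem, LpMinimize, LpVariable, value, PULP_CBC_CMD

def solve(objective, eps=None):
    """Build and solve the toy model; returns (cost, emissions) at the optimum."""
    prob = LpProblem("toy_green_supply_chain", LpMinimize)
    x = LpVariable("x", lowBound=0, upBound=8)   # flow on a cheap but dirty route
    y = LpVariable("y", lowBound=0, upBound=8)   # flow on a costly but clean route
    cost, emissions = 2 * x + 3 * y, 4 * x + y
    prob += cost if objective == "cost" else emissions   # objective function
    prob += x + y >= 10                                   # demand must be covered
    if eps is not None:
        prob += emissions <= eps                          # epsilon constraint
    prob.solve(PULP_CBC_CMD(msg=False))
    return value(cost), value(emissions)

# Range of the second objective: its own minimum and its value when cost is minimised.
_, z2_min = solve("emissions")
_, z2_max = solve("cost")

# Sweep epsilon and collect the Pareto-optimal (cost, emissions) pairs.
steps = 6
pareto = [solve("cost", eps=z2_min + k * (z2_max - z2_min) / steps) for k in range(steps + 1)]
for c, e in pareto:
    print(f"cost = {c:6.2f}   emissions = {e:6.2f}")
```

Replacing the toy model with the full MIP, and solving each ɛ-restricted instance with Benders decomposition rather than a single monolithic solve, would reproduce the overall workflow described in the paper.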
This paper attempts to design a three-echelon supply chain network for mine industry considering suppliers, manufacturing centers and distributors with different kinds of vehicles. A multi-objective mixed integer programming model is proposed to minimize transportation costs, shortage costs, purchase costs, production costs and CO2 emissions of vehicles with fuzzy demand values. ε-constraint method is used in order to convert multiple objective functions to a single objective function. Benders decomposition algorithm is applied to solve the model under three demand-based scenarios. A case study is done on Iranian mine industry to present the significance and applicability of the proposed model in a real-life case problem as well as the efficiency of Benders decomposition.
94
Associations between abuse/neglect and ADHD from childhood to young adulthood: A prospective nationally-representative twin study
Childhood maltreatment including abuse and neglect can affect between 2.5% and 32% of children worldwide and is an important risk factor for the development of internalising and externalising psychopathology in adolescence and adulthood.Being a victim of maltreatment at a young age is related to symptoms of psychiatric disorders in later years, as well as to alcohol and cannabis abuse, antisocial behavior and conduct disorder.However, many challenges remain for establishing causal relationships between child maltreatment and mental health problems.We focused on clarifying the nature of the association between child maltreatment and attention deficit/hyperactivity disorder.ADHD is characterized by a persistent pattern of inattention and/or hyperactivity-impulsivity that interferes with functioning and development.It is one of the most common neurodevelopmental disorders in childhood, with an estimated prevalence of 3.4%.Childhood ADHD has been associated with poor functional outcomes and comorbid psychiatric disorders including oppositional defiant, conduct and learning disorders.ADHD is highly heritable in childhood with genetic factors explaining 60–90% of the variance.Once considered only a childhood disorder, ADHD is now recognized to persist and also emerge in adulthood.The estimated prevalence of adult ADHD ranges between 2.5% and 5%.Similar to children with ADHD, adults affected by ADHD experience poor functional outcomes.Comorbid disorders among adults with ADHD include anxiety disorders, depression, substance use disorders, antisocial and other personality disorders.Studies have indicated that the heritability in adulthood is lower than in childhood, accounting for approximately 30–41% of the variance of adult ADHD.Various forms of maltreatment have been associated with ADHD in child samples.Similar findings were observed in adult samples: associations between retrospective reports of child maltreatment and adult ADHD have been reported.Altogether, these studies indicate that maltreatment occurring prior to young adulthood is more common among people with ADHD compared to non-ADHD groups, and higher levels of ADHD symptoms are observed among individuals who were exposed to child maltreatment compared to non-exposed individuals.Yet, no study thus far has examined the association between ADHD and maltreatment in adolescent years separately from childhood.Adolescence is a time of major emotional, physical, social and neurodevelopmental change, suggesting that victimization during this period could have important implications for development.Moreover, as adolescents spend an increasing proportion of their time outside the home environment, they are likely to experience a greater variety of types of victimization which could be associated with their ADHD symptoms.Most importantly, however, the robustness of this association and the direction of the link between maltreatment and ADHD have yet to be tested.One study based on a large population-based sample of adult twins reported an association between child maltreatment and adult ADHD symptoms among monozygotic twin pairs discordant for maltreatment.The discordant MZ twin design tests whether twins exposed to maltreatment have more ADHD symptoms compared to their genetically-identical twin who was not exposed to maltreatment.Since the twins in this study grew up together, familial confounding factors were also controlled.Findings indicated that the association between ADHD and maltreatment within the MZ group was significant.Because 
of the stringent control for potential confounders, this study concluded that the association between child maltreatment and adult ADHD is partly causal.However, the validity of retrospective reports of childhood maltreatment has been questioned in light of possible misclassification and bias.In addition, it is necessary to consider temporal priority between the exposure and the outcome, requiring prospective population-based samples of children followed into adult years.This is required because ADHD can be the result of maltreatment in childhood but can also be an early risk factor for experiencing maltreatment and other forms of violence victimization.Behavioral characteristics associated with ADHD, including being impulsive, making careless mistakes and interrupting or intruding on others, may evoke negative responses from the environment and produce or increase conflicts.In the present study, we used prospectively-collected measures from a longitudinal cohort study of twins to examine the association between exposure to abuse/neglect in childhood and adolescence, with ADHD up to age 12 and at age 18.First, we examined the associations between abuse/neglect and ADHD diagnoses in childhood and in young adulthood separately.We tested the robustness of these associations by also analysing ADHD symptom scales and by controlling for potential confounders.We also explored the specificity of these associations by looking at bullying and domestic violence.We further examined whether the association was concentrated specifically among ADHD participants with comorbid conduct disorder.In addition, we investigated twins’ differences in abuse/neglect and ADHD to control for familial confounding.Second, we investigated the longitudinal associations between abuse/neglect and ADHD from childhood into young adulthood.Participants were members of the Environmental Risk Longitudinal Twin Study, which tracks the development of a birth cohort of 2232 British children.The sample was drawn from a larger birth register of twins born in England and Wales in 1994–1995.Full details about the sample are reported elsewhere.Briefly, the E-Risk sample was constructed in 1999–2000, when 1116 families with same-sex 5-year-old twins participated in home-visit assessments.This sample comprised 56% MZ and 44% dizygotic twin pairs; sex was evenly distributed within zygosity.Families were recruited to represent the UK population with newborns in the 1990s, on the basis of residential location throughout England and Wales and mother’s age.Teenaged mothers with twins were over-selected to replace high-risk families who were selectively lost to the register through non-response.Older mothers having twins via assisted reproduction were under-selected to avoid an excess of well-educated older mothers.At follow-up, the study sample represents the full range of socioeconomic conditions in the United Kingdom.Follow-up home visits were conducted when children were aged 7, 10, 12, and 18 years.Home visits at ages 5, 7, 10, and 12 included assessments with participants and their mothers.With parents’ permission, questionnaires were mailed to the children’s teachers, who returned questionnaires for 94% of children at age 5 years, 93% of those followed up at age 7 years, 90% at age 10 years, and 83% at age 12 years.The home visits at age 18 included interviews only with participants.There were no significant differences between those who did and did not take part at age 18 years in socioeconomic status when the cohort was initially 
defined, age-5 IQ scores, age-5 behavioral or emotional problems, or rates of childhood ADHD.At age 18 years, participants were asked to identify two individuals who know them well to act as co-informants; 99.3% of participants had co-informant data.The Joint South London and Maudsley and the Institute of Psychiatry Research Ethics Committee approved each phase of the study.Parents gave written informed consent and twins gave assent between ages 5 and 12 and then written informed consent at age 18.Analyses in this paper were restricted to 2040 individuals with ADHD information in childhood and in adulthood.The measurement of childhood victimization has been described previously.Briefly, exposure to several types of victimization was assessed repeatedly, using a standardized clinical interview protocol with mothers, when the children were 5, 7, 10, and 12 years of age and dossiers have been compiled for each child with cumulative information about exposure to physical and sexual abuse by an adult, emotional abuse and neglect, physical neglect, bullying by peers, and domestic violence.Exposure to each type of victimization was rated by coders as “0” not present; “1” probable harm, occasionally present, or evidence of only minor incidents; or “2” definite harm, frequently present, or evidence of severe incidents.Childhood abuse/neglect in this study included exposure to physical and sexual abuse by an adult, emotional abuse and neglect and physical neglect.In our study sample, 18.8% of the children experienced moderate abuse/neglect and 7.3% experienced severe abuse/neglect across childhood.A total of 36% have been exposed to occasional bullying by peers and 8.8% had been frequently bullied by peers.Finally, 28% were exposed to a single phase of domestic violence and 17.2% were exposed to repeated phases of domestic violence.Childhood poly-victimization dossiers have been compiled for each child with cumulative information about exposure to physical abuse, sexual abuse, emotional abuse and neglect, physical neglect, bullying by peers and domestic violence.All childhood victimization experiences were summed: 1490 children had experienced no severe victimization; 423 had 1; 127 had 2 or more severe victimization experiences by age 12.These measures have been described previously.In brief, at age 18, participants were interviewed about exposure to a range of adverse experiences between 12 and 18 years using the Juvenile Victimization Questionnaire 2nd revision, adapted as a clinical interview.The JVQ has good psychometric properties and was used in the U.K. 
National Society for the Prevention of Cruelty to Children national survey, thereby providing benchmark values for comparisons with our cohort.Our adapted JVQ comprised 45 questions covering different forms of victimization grouped into seven categories: crime victimization, peer/sibling victimization, Internet/mobile phone victimization, sexual victimization, family violence, maltreatment, and neglect.All information from the adapted JVQ-R2 interview was compiled into victimization dossiers.Using these dossiers, each of the victimization categories was rated by trained researchers.Ratings were made using a 6-point scale: 0 = not exposed, then 1–5 for increasing levels of severity.The ratings for each type of victimization were then grouped into three classes (0 – no exposure, 1 – some exposure, and 2 – severe exposure) because of small numbers at some of the rating points.For this study, abuse/neglect in adolescence included exposure to maltreatment, sexual victimization and neglect, to match the variable of childhood abuse/neglect.In our study sample, 16.9% of the participants had experienced moderate abuse/neglect in adolescence and 5.9% had experienced severe abuse/neglect in adolescence.A total of 42.4% had been exposed to some peer/sibling victimization and 15.3% had been exposed to severe peer/sibling victimization.Finally, 6.5% had been exposed to some family violence and 12% had been exposed to severe family violence.Adolescent poly-victimization was derived by summing all seven victimization experiences coded as severe: 1321 adolescents had experienced no severe victimization; 391 had 1; 325 had 2 or more severe victimization experiences.We ascertained ADHD diagnosis on the basis of mother and teacher reports of 18 symptoms of inattention and hyperactivity-impulsivity derived from DSM-IV diagnostic criteria and the Rutter Child Scales.Participants had to have 6 or more symptoms reported by mothers or teachers in the past 6 months, with the other informant endorsing at least 2 symptoms.We considered participants to have a diagnosis of childhood ADHD if they met criteria at age 5, 7, 10, or 12.In total, 247 participants met criteria for ADHD across childhood: 6.8% at age 5, 5.4% at age 7, 3.4% at age 10 and 3.4% at age 12 years.We ascertained ADHD at age 18 years based on private structured interviews with participants regarding 18 symptoms of inattention and hyperactivity-impulsivity according to DSM-5 criteria.Participants had to endorse 5 or more inattentive and/or 5 or more hyperactivity-impulsivity symptoms to receive an ADHD diagnosis; we also required that interference with the individual’s “life at home or with family and friends” and “life at school or work” was rated 3 or higher on a scale, thereby meeting criteria for impairment and pervasiveness.The DSM-5 requirement of symptom onset prior to age 12 was met if parents or teachers reported more than 2 ADHD symptoms at ages 5, 7, 10, or 12 years; a diagnosis of childhood ADHD was not required for young adult ADHD diagnosis.A total of 166 participants met criteria for ADHD at age 18, 52% of them male.Co-informants also rated participants on 8 ADHD symptoms, comprising 3 items relating to inattention and 5 items relating to hyperactivity/impulsivity.Participants’ parental socio-economic status was measured via a composite of parental income, education, and occupation when they were aged 5, and was categorized into tertiles.IQ at age 5 was measured using a short form of 
the Wechsler Preschool and Primary Scale of Intelligence—Revised.Using two subtests, children’s IQs were prorated following procedures described by Sattler.IQ at age 18 was measured using a short version of the Wechsler Adult Intelligence Scale–Fourth Edition.Using two subtests, young adults’ IQs were prorated according to the method recommended by Sattler.Mothers’ depression was assessed using a modified version of the Diagnostic Interview Schedule.We assessed lifetime depression according to DSM-IV criteria.We derived a diagnosis of children’s conduct disorder on the basis of mothers’ and teachers’ reports on 14 of 15 items from the DSM-IV criteria for conduct disorder.We considered participants to have a diagnosis of conduct disorder if they met five or more criteria at age 5, 7, 10, or 12.A total of 15.6% of the children in the study sample met criteria for conduct disorder across childhood.During the age-18 interview, we assessed participants’ mental health over the previous 12 months including depressive disorder, generalized anxiety disorder, post-traumatic stress disorder, alcohol dependence, cannabis dependence and conduct disorder according to DSM-IV criteria.Assessments were conducted in face-to-face interviews using the Diagnostic Interview Schedule.The assessment of conduct disorder was conducted as part of a computer-assisted module.A total of 38.8% of the young adults in this study sample experienced any of these mental health problems and a total of 14.9% had conduct disorder at age 18.To examine the associations between abuse/neglect and ADHD diagnoses in childhood and in young adulthood, we used logistic regressions.We tested the robustness of our findings in three different ways.First, we used linear regressions to examine group differences between participants who experienced abuse/neglect and participants who did not on the ADHD total symptom scale and on inattentive and hyperactive/impulsive symptom sub-scales separately.Second, we controlled for potential childhood confounders including sex, IQ, parental SES and mother’s depression in logistic regression models.Third, for adult ADHD, we used linear regressions and repeated the analyses using a measure of ADHD symptoms reported by co-informants.We also examined whether associations with ADHD extended to other forms of victimization, including bullying and domestic violence, and to a cumulative measure of victimization.We tested whether the association was concentrated among ADHD participants with comorbid conduct disorder, and, additionally in young adulthood, with other forms of psychopathology.We further controlled for familial confounders by examining correlations between twins’ difference scores on poly-victimization and on the ADHD total symptom scale.For these analyses, we used continuous variables of exposure to violence and ADHD symptoms to maximise variation in both measures.We conducted the analyses with DZ and MZ twins together, and then only with MZ twins to control for all genetic confounding.Regression analyses were conducted in Stata 14.1.No interactions were found between sex and abuse/neglect in relation to ADHD in either childhood or young adulthood; therefore, analyses were not stratified by gender.Participants in this study were pairs of same-sex twins, and each family contained data for two children, resulting in non-independent observations.To correct for this, we used tests based on the Huber-White or sandwich variance, which adjusts the estimated standard errors to account for the dependence in the data (an illustrative sketch of this clustering adjustment and of the twin-difference analysis is given at the end of this article).To 
examine the longitudinal associations between abuse/neglect and ADHD from childhood to young adulthood, we used structural equation modelling procedures of Mplus 7.11.We tested a full cross-lagged model with the autoregressive effects and both abuse/neglect and ADHD predicting each other at a later time point.This model accounted for the cross-sectional overlap and stability of variables.First, we conducted the analyses controlling for sex only.Second, we additionally controlled for age-5 IQ and parental SES.Third, we further controlled for conduct disorder in childhood.We accounted for non-independence of twin observations and non-normality of the data by using robust standard errors.Our findings indicate higher rates of children meeting diagnostic criteria for ADHD among those who were exposed to abuse/neglect compared to children who were not exposed to abuse/neglect.Furthermore, higher rates of abuse/neglect were found among children with ADHD compared to those without ADHD diagnosis.Children exposed to moderate abuse/neglect had higher odds of 2.02 for meeting diagnostic criteria for ADHD compared to children who were not exposed, while children exposed to severe abuse/neglect had higher odds of 2.78 for having ADHD.This association was robust to control for sex, age-5 IQ and parental SES, and became marginal when controlling for mother’s depression.However, the association remained significant when we merged the two groups of children who experienced moderate and severe abuse/neglect.We replicated this association using a total scale of ADHD symptoms.Group differences were similar when we separately examined inattentive and hyperactive/impulsive symptom sub-scales.The association between abuse/neglect and ADHD in childhood extended to other forms of childhood victimization: children who were frequently bullied or were exposed to repeated phases of domestic violence had greater odds for ADHD diagnosis.Furthermore, we found that childhood ADHD was associated with being exposed to poly-victimization: children who were exposed to more than one type of victimization had higher odds for having a diagnosis of ADHD.ADHD was highly comorbid with conduct disorder in our sample: 118 children with ADHD had comorbid conduct disorder.Prevalence of exposure to abuse/neglect among sub-groups of children with ADHD, with or without comorbid conduct disorder, is presented in Fig. 
S1.We found that the risk for being exposed to abuse/neglect was concentrated among children with ADHD and comorbid conduct disorder.Similar to childhood, we found an over-representation of young adults with ADHD among those who were exposed to abuse/neglect between 12 and 18 years, as well as an over-representation of those who experienced abuse/neglect among young adults with ADHD.Young adults who were exposed to moderate abuse/neglect during adolescence had higher odds of 2.76 for ADHD compared to those who were not victimized.In addition, young adults who were exposed to severe abuse/neglect in adolescence had higher odds of 3.86 for ADHD.The association between abuse/neglect and ADHD diagnosis was robust to control for confounders including sex, age-18 IQ and parental SES.We replicated the association between adult ADHD and abuse/neglect in adolescence using a total scale of ADHD symptoms.We observed similar group differences when examining separately sub-scales of inattentive and hyperactive/impulsive symptoms.Furthermore, the association between abuse/neglect and ADHD was not simply an artefact of using self-reports of ADHD in young adulthood: findings indicated that those who were exposed to moderate abuse/neglect had more ADHD symptoms according to co-informants’ reports compared to those who were not exposed.We observed a similar finding for those who experienced severe abuse/neglect.As in childhood, the association between abuse/neglect in adolescence and young adult ADHD extended to other forms of victimization: participants who reported being severely victimized by peers or being exposed to severe family violence in adolescence had increased odds of young adult ADHD.Furthermore, we found that adult ADHD was associated with being exposed to multiple types of victimization in adolescence: young adults who were exposed to more than one type of victimization in adolescence had higher odds of having ADHD.Similar to childhood, young adults with ADHD often had comorbid conduct disorder, but also other forms of psychopathology.Prevalence rates of exposure to abuse/neglect among sub-groups of young adults with a diagnosis of ADHD, with or without comorbidity, are presented in Fig. 
S1.We found that the odds of moderate and severe abuse/neglect in adolescence were elevated among adults with ADHD and comorbid conduct disorder, as were the odds among those with ADHD alone.We found similar elevated odds of abuse/neglect among the adults with ADHD and other forms of psychopathology, while the odds decreased but remained significant for those with ADHD only.We examined differences between twins on poly-victimization and ADHD symptoms.In childhood, we found a modest association between twins’ difference scores on poly-victimization and difference scores on the ADHD total symptom scale.This indicates that within a twin pair, the twin who had a higher poly-victimization score also had more ADHD symptoms.This association was no longer significant when repeated with MZ twins only, indicating that the association between poly-victimization and ADHD symptoms in childhood was accounted for by genetic factors.In young adulthood, we found a modest association between twins’ difference scores on poly-victimization and difference scores on ADHD symptoms.This association remained when repeated with MZ twins only, thus controlling for shared environment as well as genetic factors.This finding indicates that the association between poly-victimization and ADHD symptoms in young adulthood is partly environmentally-driven.Abuse/neglect in childhood was not associated with ADHD in young adulthood after taking sex into account.However, we found that childhood ADHD was associated with abuse/neglect in later years.This longitudinal association was robust to adjustment for the stability of being exposed to abuse/neglect up to age 18, for ADHD from childhood to young adulthood, and also for concurrent associations between abuse/neglect and ADHD in childhood and in young adulthood.When we controlled for age-5 IQ and parental SES, the association between childhood ADHD and abuse/neglect in adolescence remained significant.When we further controlled for conduct disorder in childhood, the association between childhood ADHD and later abuse/neglect was no longer significant.This finding indicates that the longitudinal association between ADHD and later abuse/neglect is specific to those participants with comorbid conduct disorder in childhood.Our study using data from a prospective cohort of twins provides three notable findings on the associations between abuse/neglect and ADHD.First, concurrent analyses showed that abuse/neglect was strongly and robustly associated with ADHD in childhood, but also in young adulthood, indicating that this known link is not limited to childhood years.These associations survived control for SES, IQ, shared environmental and genetic confounds and extended to other forms of victimization, but in childhood were concentrated among children with ADHD and comorbid conduct disorder.Second, longitudinal analyses indicated that childhood abuse/neglect did not predict later ADHD.This finding is contrary to previous studies using global retrospective measures of maltreatment up to young adulthood.Third, childhood ADHD was associated with later exposure to abuse/neglect when comorbid with conduct disorder.This indicates that disruptive behaviors, and not ADHD symptoms per se, have a long-term influence on the way the environment responds to individuals.Our findings shed new light on the longitudinal associations between ADHD and maltreatment, calling for replication of these findings.Children’s mental health symptomatology increases their risk of maltreatment, peer victimization and sexual 
victimization.Our findings are in line with previous studies showing that disruptive behaviors, including ADHD and conduct disorder, may increase future risk of exposure to abuse and neglect.Symptoms associated with ADHD and conduct disorder - including aggressiveness, impulsiveness and noncompliance - may pose caregiving challenges and make children vulnerable to various forms of victimization in childhood.Our findings extend others’ findings by showing that ADHD increased risk for later abuse/neglect in adolescence.At least two hypotheses may explain this result.Firstly, this association could be accounted for by the continuation of ADHD symptoms and conduct problems into the adult years.While this hypothesis can partly account for this longitudinal association, it cannot explain it completely as we found the association to be significant over and above ADHD symptoms at age 18, indicating that young adults with remitted ADHD are nevertheless at risk for experiencing abuse/neglect.Secondly, ADHD and conduct problems may have a long-lasting influence on relationships.Despite ADHD symptoms having remitted, it is possible that others’ presumptions about one’s behaviors are what preserve the pattern of relationships that are difficult to change in later life.Our findings add support to the growing body of evidence suggesting that children’s temperament and behavior influence the response and reaction of others towards them.They also emphasize an important role for preventative monitoring of children with ADHD and conduct problems to reduce their risk for harm as parents may struggle to cope with children’s behaviors and demands, possibly influencing children’s risk for experiencing adversity.Close monitoring of this risk should be included as part of routine assessment with health professionals.Future research should examine the role of possible mediators, such as parenting skills and distress tolerance.Our findings do not support previous conclusions that childhood maltreatment is an environmental risk factor for ADHD in adulthood.Nevertheless, we cannot completely rule out the possibility that exposure to abuse/neglect can increase vulnerability to developing ADHD symptoms, as reported recently.Previous studies have generated findings linking biological disruptions associated with adverse childhood experiences, including maltreatment, to greater risk for a variety of chronic diseases well into the adult years.There is growing evidence for the extent to which both the cumulative burden of stress over time and the timing of specific environmental insults during sensitive developmental periods can create structural and functional disruptions that lead to a wide range of physical and mental illnesses later in adult life.However, as our findings do not support a causal link between abuse/neglect and ADHD, we suggest a careful interpretation of findings that may suggest that child maltreatment causes ADHD.For the first time, we found strong associations between abuse/neglect in adolescence and ADHD in young adulthood.These associations are robust to control for potential confounders and using ADHD symptoms scales and informant reports.Nonetheless, we found these associations to be nonspecific, as they extended to other forms of victimization.Our findings are consistent with previous studies demonstrating that adult mental health is similarly influenced by a wide range of adverse exposures.Different from childhood, we found that this association in young adulthood is not accounted for by other 
behavioral or psychiatric disorders.This is despite the high prevalence of comorbidity.This suggests differences between ADHD in childhood and in adulthood, and points to the need for further studies to explore the unique features of adult ADHD and its predictors.We also found that this association in young adulthood is environmentally-driven.This can be explained by the process of gaining more independence during these years, and the new interactions with people outside the family and the education system.Our findings highlight the importance of taking into consideration victimization in adolescence and examining its consequences.Furthermore, the assessment of adolescents and young adults with ADHD should include inquiry about exposure to victimization in adolescence in addition to the childhood years.The strengths of our study include the use of prospective as well as repeated measures of both abuse/neglect and ADHD up to young adulthood in a nationally-representative cohort.However, our findings should be considered in light of some limitations.First, the assessment of victimization in adolescence covered a longer period of time compared to young adult ADHD, which covered symptoms in the past year only.However, participants were interviewed face-to-face using a well-established measure and the assessment referred to a specific time-frame.In addition, referring to this time-period enabled us to gather detailed information regarding the exposure to victimization throughout adolescence.Second, due to our relatively small sample size, we had limited statistical power when looking at twins’ differences among our group of MZ twins.A larger sample size would facilitate further examination of twins’ differences in twin pairs discordant for abuse/neglect.Third, the E-Risk sample is composed of twins, so the results may not generalize to singletons.Reassuringly, the prevalence of childhood abuse/neglect as well as the prevalence rates of victimization exposure between 12 and 18 years in our sample matches recent UK general population estimates.The prevalence of childhood ADHD at each age in our sample is well within the range of 3.4%–11% estimated previously and our rate of ADHD persistence is similar to that found in a meta-analysis.We provided additional evidence regarding the robustness of the associations between maltreatment and ADHD, emphasising the important role of comorbid conduct disorder.We also showed that this association is not limited to childhood and not specific to abuse/neglect.Our findings highlight the possibility of a long-term effect of disruptive behavior on the risk of experiencing violence victimization, rather than the other way around.Although our study does not support previous causal inferences regarding child maltreatment leading to adult ADHD, it emphasises the complexity of establishing causality.Additional research using prospective longitudinal designs is important to examine whether our findings can be replicated.Another factor to be considered when examining the direction of the association from ADHD to abuse/neglect is the presence of ADHD symptoms among the parents of a child with ADHD.Since ADHD is a heritable condition, it is probable that at least one of the parents of a child with ADHD also experiences similar symptoms.This adds to the complexity of parent-child relationships.Further research is needed to examine to what extent parents’ ADHD symptoms influence their parenting, especially when parenting a child with ADHD.Our study also has clinical 
implications.First, our findings emphasize that clinicians treating people with ADHD, and especially those with comorbid conduct disorder, should be aware that their patients are at heightened risk of current and future maltreatment and of other forms of violence victimization.This indicates the importance of conducting an evaluation of concurrent and past victimization during routine assessment and treatment planning of people with ADHD.Second, our findings suggest that along with interventions focusing on children’s ADHD and conduct disorder symptoms there is a need to provide guidance and support to carers.Knowledge about behavioral problems might help carers to better understand the potential challenges they are facing.Teaching carers strategies that can be used to manage behavior and support functioning might give them more effective ways to deal with the child’s behavior.Third, our findings suggest that while maltreatment may not directly cause ADHD, maltreatment and ADHD are associated, and mental health professionals and clinical services that are in contact with children, adolescents and adults who experienced maltreatment should be aware of their higher risk of having ADHD.The E-Risk Study is funded by the Medical Research Council.Additional support was provided by National Institute of Child Health and Human Development, National Society for Prevention of Cruelty to Children and Economic and Social Research Council, The Avielle Foundation, and by the Jacobs Foundation.Adi Stern is supported by The Haruv Institute’s Post-Doctoral Students Fellowship and by the Humanitarian Trust Fellowship.Helen L. Fisher is supported by an MQ Fellows Award.Louise Arseneault is the Mental Health Leadership Fellow for the UK Economic and Social Research Council.
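As referenced in the analysis description above, the following is a minimal illustrative sketch of two of the analytic steps, not the authors' Stata or Mplus code: a logistic regression of ADHD diagnosis on abuse/neglect with Huber-White (cluster-robust) standard errors to account for twins nested within families, and correlations of within-pair difference scores computed for all pairs and then for MZ pairs only. The data layout and all variable names (familyid, twin, zygosity, adhd_dx, abuse, sex, adhd_sym, polyvict) are hypothetical placeholders.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per twin, two twins per family.
# Placeholder columns: familyid, twin (1/2), zygosity ('MZ'/'DZ'),
# adhd_dx (0/1), abuse (0/1/2), sex, adhd_sym (symptom count), polyvict.
twins = pd.read_csv("twins.csv")

# Logistic regression of ADHD diagnosis on abuse/neglect exposure, with
# Huber-White (sandwich) standard errors clustered by family to account
# for the non-independence of the two twins within each family.
model = smf.logit("adhd_dx ~ C(abuse) + C(sex)", data=twins)
result = model.fit(cov_type="cluster", cov_kwds={"groups": twins["familyid"]})
print(result.summary())

# Within-pair difference scores: per family, the difference between the
# two twins in ADHD symptoms and in poly-victimization.
wide = twins.pivot(index="familyid", columns="twin",
                   values=["adhd_sym", "polyvict", "zygosity"])
diff = pd.DataFrame({
    "d_sym": wide["adhd_sym"][1] - wide["adhd_sym"][2],
    "d_vict": wide["polyvict"][1] - wide["polyvict"][2],
    "zygosity": wide["zygosity"][1],
})

# Correlation of difference scores in the full sample (MZ + DZ pairs),
# then within MZ pairs only, which additionally removes genetic confounding.
print(diff[["d_sym", "d_vict"]].corr())
print(diff.loc[diff["zygosity"] == "MZ", ["d_sym", "d_vict"]].corr())
```

The cross-lagged longitudinal models and the remaining robustness checks reported above are not shown; the sketch only illustrates the clustering adjustment and the twin-difference logic.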
Child maltreatment has consistently been found to be associated with attention deficit/hyperactivity disorder (ADHD). However, the robustness of this association and the direction of the link between maltreatment and ADHD remain unclear. We used data from the Environmental Risk (E-Risk) Longitudinal Twin Study, a cohort of 2232 British twins, to investigate the associations between exposure to abuse/neglect and ADHD in childhood and in young adulthood, and to test their robustness and specificity. We also aimed to test longitudinal associations between abuse/neglect and ADHD from childhood to young adulthood, controlling for confounders. Results indicated strong associations between abuse/neglect and ADHD in childhood and also in young adulthood. In childhood, the association was concentrated among children with comorbid conduct disorder. Longitudinal analyses showed that childhood ADHD predicted abuse/neglect in later years. This association was again concentrated among individuals with comorbid conduct disorder. Abuse/neglect in childhood was not associated with later ADHD in young adulthood after adjusting for childhood ADHD. Our study does not provide support for a causal link between child abuse/neglect and adult ADHD but highlights the possibility of a long-term effect of disruptive behaviors on the risk for experiencing abuse/neglect. These findings emphasize the need for clinicians treating people with ADHD, especially those with comorbid conduct disorder, to be aware of their increased risk for experiencing abuse/neglect. Interventions aimed at reducing risks of abuse/neglect should also focus on the environment of individuals with disruptive behaviors.
95
Multi-part segmentation for porcine offal inspection with auto-context and adaptive atlases
Segmentation of non-rigid biological objects into their constituent parts presents various challenges.Here we address a segmentation task in which parts are organs in body images captured at abattoir.This constitutes one stage in an envisaged on-site system for screening of pathologies; these are characteristically organ-specific.The spatial arrangement of organs in an image is only weakly constrained and their shape is variable.Furthermore, their appearance changes due to factors including cause of pathology, surface contaminants, and specular reflections.There can be limited control over orientation, severe occlusions between parts, and parts may be missing altogether.In this paper we describe adaptations to the auto-context segmentation algorithm to address such a task.We apply these to segment heart, lungs, diaphragm and liver in porcine offal.The groups of inter-connected organs are called plucks, examples of which are shown in Figs. 2 and 3.Auto-context is an iterative technique that combines contextual classification information with local image features.AC is relatively flexible and easy to implement, and has been applied to various biomedical imaging problems .The context features used by AC to inform class label inference at a pixel location are posterior class probabilities produced by the previous iteration.These probability values are typically sampled at a fixed set of locations relative to the pixel in question.Additionally we design integral context features obtained by summing probability values over sets of locations.In the application considered here we argue that sums over rows and sums over the entire foreground are appropriate.One attractive feature of AC is that a prior atlas can be used as a source of contextual data for the initial iteration.Such an atlas can be obtained by averaging rigidly registered manual segmentation maps.However, a single averaged map does not provide a good representation of the multi-modal map distribution that arises as a result of the variations mentioned above, such as occlusions and missing parts.We describe weighted atlas auto-context, a method that adapts an atlas representation to be relevant to the current image.This improved atlas is used at the next iteration as an additional source of information together with the label probability maps.In this paper we combine integral context and WAAC into one system, extending work reported in conference papers on integral context and WAAC .We report a direct comparison of all of these methods applied to segmentation of multiple organs in pig offal, and we also compare with a conditional random field method.We evaluate performance in terms of Dice coefficient distributions, pixel-wise classification and quadratic scores.Post-mortem inspection is an important means of ensuring the safety and quality of meat products, enabling the detection of public health hazards and pathologies, and providing useful feedback to farmers.There are moves towards visual-only inspection of pig carcasses and offal without palpation, in order to minimise risk of cross contamination .This, along with the potential to detect a greater number of pathologies with improved reproducibility than is currently possible with manual inspection, motivates the development of automated visual inspection.Reliable segmentation of organs would constitute an important step towards this goal.In this context, even modest improvements in organ segmentation could be significant, as regions assigned to the wrong organ may ultimately lead to missed 
or falsely detected pathologies.Applications to meat production deal mostly with estimation of proportions of muscle, fat and bone either in vivo or post-mortem, sometimes involving segmentation of organs without distinguishing them individually .Tao et al. segmented poultry spleen from surrounding viscera as an aid to detection of splenomegaly.Jørgensen et al. segmented gallbladders in chicken livers from images acquired at two visible wavelengths.Stommel et al. envisaged a system for robotic sorting of ovine offal that would involve recognition of multiple organs.Most literature on segmentation of multiple organs deals with human abdominal organs in CT or MR imaging through techniques including level set optimisation , statistical shape models , and atlas-based methods .Segmentation methods that incorporate spatial context information include those combining inference algorithms based on belief propagation with models like conditional random fields .Disadvantages common to many such techniques that aim to capture context information include their reliance on fixed spatial configurations with confined neighbourhood relations and complex training procedures.There is extensive literature dealing with the construction of unbiased atlases for multi-modal data, especially in brain magnetic resonance image analysis, as in the work of Blezek and Miller and Zikic et al. .Some related work makes use of AC.Kim et al. , for example, employed an approach similar to that of Zikic et al. , training multiple models, each based on an individual annotated image, so that the probability map of a new image was obtained by averaging maps predicted by individual models.Zhang et al. proposed a hierarchy of AC models whose bottom level is similar to the set of models used by Zikic et al. and Kim et al. .Given a new image, only the best models in the hierarchy are selected to contribute to the final probability map.Model training via these techniques can be computationally expensive.We perform segmentation using methods built around the auto-context algorithm of Tu and Bai .AC learns to map an input image to a multi-class segmentation map consisting of posterior probabilities over class labels.It iteratively refines the segmentation map by using the label probabilities in a given iteration as a source of contextual data for the following iteration.Label probabilities at a set of locations relative to the location to be classified are concatenated with local image features to form a combined feature vector for training the next classifier.In our implementation of AC, context probabilities for a location are extracted at 90 surrounding stencil points as well as at the location itself.At the first iteration, context consists of the 5 class label probabilities provided by the prior atlas at each of the 91 associated context points; at subsequent iterations, it consists of the label probabilities output by the classifier at the previous iteration, at the same context points.This gives 91 × 5 = 455 context features per image point (a schematic code sketch of this context-feature construction, including the integral features introduced below, is given at the end of this article).We use multi-layer perceptron classifiers; these can be trained to directly estimate posterior probability distributions over the class labels.Context data can be enhanced by including integral features, i.e. 
sums of class label probabilities.We augment the context features described above with two types of integral context features suitable for our application.The relative positions of organs along the vertical direction vary little from image to image, given that each pluck hangs from a hook and the part of the pluck that is attached to the hook is very consistent across plucks.Thus, given a point on an image, class probabilities averaged over the row to which the point belongs provide the classifier on the next iteration with useful information as to which organs are likely to occur at that particular height.For example, a row containing heart is likely to contain also lungs, but very unlikely to contain liver.In contrast, relative positions of organs along the horizontal direction vary considerably from image to image, given lack of control over the orientation of the pluck around the vertical axis.The heart, in particular, is sometimes fully occluded.Nevertheless, organs are fairly consistent in volume from pig to pig.Thus, class probabilities averaged over the whole image reflect the proportions of the pluck covered by each visible organ, and provide the next classifier with useful information on which organs are likely to be visible and how visible they are.For example, a small proportion of visible diaphragm is consistent with a hidden heart and a large proportion of lung.We use IC to refer to methods in which these integral context features are included, i.e. the sum of the label probabilities in the row and the sum of label probabilities in the entire image.WAAC uses the same number and spatial arrangement of context points as AC; in other words, there is no additional spatial context.At each iteration, WAAC combines information from two sources that are very different in nature: the probability maps output by the classifier; and a weighted atlas obtained from the ground-truth component of training data.The dataset consisted of 350 annotated colour images of plucks in an abattoir production line.The images were acquired under LED illumination using a single-lens, tripod-mounted, reflex digital camera.Each image had a resolution of 3646 × 1256 pixels.Four organ classes were manually annotated in each image: the heart, lungs, diaphragm and liver.A fifth class, upper, was used to mark the portion of the pluck located above the heart and lungs usually consisting of the trachea and tongue.Fig. 
2 shows some pluck images along with annotations showing the regions occupied by each organ class.The 350 available images were randomly divided into 10 subsets of 35 images.Those subsets were used to carry out 10-fold cross validation experiments comparing the performance of CRFs, conventional AC, and the proposed WAAC method.We used local appearance features based on a multi-level Haar wavelet decomposition .Each image was converted to the CIELUV colour space .For each component, the approximation wavelet coefficients, as well as the horizontal, vertical, and diagonal squared detail coefficients, were obtained at three levels of decomposition.This resulted in 36 feature maps, all rescaled to match the original dimensions of the image.We then sub-sampled each feature map and each label map by a factor of 20 along both dimensions.This resulted in 180 × 60 points per map, which was found to provide sufficient detail for our purposes.MLPs had a softmax output and a hidden layer of 20 neurons with logistic activation functions.They were trained with an L2-regularised cross-entropy loss using scaled conjugate gradients optimisation in the Netlab implementation .The CRF model used for comparison was implemented with the toolbox for Matlab / C++ made available by Domke .A 180 × 60 pairwise 4-connected grid was created to match the dimensions of our feature and label maps.CRF models were trained for five iterations of tree-reweighted belief propagation to fit the clique logistic loss, using a truncated fitting strategy.We first discuss some example results obtained using AC, WAAC and CRF.Fig. 2 shows pixel labellings, obtained by assigning labels with the highest probabilities, from three pluck images.The CRF method produced smooth inter-organ boundaries but made gross labeling errors; some regions were labeled in implausible locations, for example small regions of heart and diaphragm near the top of the upper segment in Fig. 2, and upper regions below the lungs in Fig. 2.When the highest probabilistic outputs from AC and WAAC were used to obtain class labels, high-frequency class transitions occurred.The use of the adaptive atlas in WAAC tended to improve spatial coherence compared to AC.Note that these results are presented without any attempt to apply post-processing to smooth the labellings.The organ-specific atlas components obtained at the final iteration of WAAC are also shown in Fig. 2.The atlas has clearly adapted differently to the different spatial configurations in the input images.In Fig. 2 it has adapted to exclude the heart, which is indeed occluded in that example.Fig. 2 shows a difficult example for which all the methods failed.In this unusual case the liver, which is nearly always present, is missing entirely.This eventuality was not well represented in the training data.The methods did manage to exclude liver from their results but the mismatch resulted in poor localisation of other organs in the image.For a further two test images, Fig. 3 shows results obtained without and with integral context.Note that a simple denoising post-processing step would have improved the quality of segmentation results, but we left that step out to more clearly show the effect of adding integral context.The importance of integral features is most visible in cases like that of Fig. 3, in which standard context was not enough to yield a confident segmentation of the heart.Fig. 
3 illustrates the reverse situation, where integral features helped to dissipate a mistakenly segmented heart.In this case, the integral features representing class probabilities averaged over the whole image will have reflected the small area occupied by the diaphragm and large area covered by the liver, thus helping to identify a pluck whose dorsal aspect faced the camera, hiding the heart.Table 2 gives confusion matrices obtained from CRF and WAAC+IC when used to perform classification at the pixel level by assigning each pixel to the class with the highest probability.After its 5th iteration, the CRF method performed at a similar level to AC before context iterations.The largest improvement apparent in the WAAC+IC result was observed for the heart.Being relatively small, the heart is the organ whose two-dimensional projection on each image is most affected by the orientation of the pluck around its vertical axis: it can be fully visible near the centre, partially or fully visible on either side of the pluck, or completely hidden.Thus, it is not surprising that the ability of WAAC to deal with multi-modality had a larger impact on the segmentation associated with this organ.Integral context features also helped to deal with the unpredictability of the heart’s presence and position in the image.Execution times were measured running on a regular desktop machine, using only the CPU.Processing an image at test time was dominated by feature extraction which took 7.2s.One iteration of AC took 0.14s whereas an iteration of WAAC took 0.73s due to the extra computation needed to compute the weighted atlas.The feature extraction and atlas computation routines were implemented in Matlab.The computation of weighted atlases would be easily adapted for faster execution on a GPU.We introduced the problem of multiple organ segmentation at abattoir and proposed solutions based on an auto-context approach.Specifically, we described two modifications of auto-context for multi-part segmentation.Firstly, the stencil-based context features were augmented with integral features.Secondly, a weighted atlas was iteratively adapted and made available for the extraction of features to complement those used in the conventional approach.Experiments on the task of segmenting multiple organs in images of pig offal acquired at abattoir demonstrated the effectiveness of this approach.It outperformed an alternative CRF method and was able to deal with parts whose spatial arrangement, appearance and form varied widely across images, most noticeably when segmenting the heart which was often severely occluded.Taking advantage of the iterative nature of AC, WAAC is able to identify the training label maps that are most relevant for a given test image and use that knowledge to steer the segmentation process, thus helping to avoid the erroneous localisation of parts within conflicting contexts.Future work could include the computation of weighted atlases in a class-wise fashion, the use of alternative similarity measures in the computation of the atlases, and the use of other types of classifier within the WAAC algorithm which is not restricted to MLPs.We used auto-context to obtain a sequence of relatively shallow classifiers incorporating label context to achieve semantic segmentation of organs.In recent years, deep neural networks have been designed for semantic segmentation, achieving impressive results in a range of applications albeit on datasets with greater numbers of annotated images .It will be interesting to compare 
this approach on our inspection task in future work with more annotated images.The segmentation task evaluated here constitutes a component in an envisaged automated post-mortem inspection application.We describe elsewhere a method for detection of porcine pathologies in masked images of pre-segmented organs .This could be integrated with the segmentation methods described in this paper.These methods should also be applicable to other problems involving the segmentation of non-rigid objects into their constituent parts, such as anatomical structures in medical images of various modalities, or sub-cellular compartments in microscopy images.
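The following is a minimal illustrative sketch of the context-feature construction referenced earlier, not the authors' implementation: one auto-context iteration in which stencil-sampled class probabilities and the two integral context features (per-row and whole-image averages of the class probability maps) are concatenated with local appearance features before classification. The array shapes, the stencil offsets, the wrap-around border handling and the predict_proba-style classifier interface are assumptions made for the example, and averages are used in place of sums, which differ only by a constant factor.

```python
import numpy as np

def context_features(prob, offsets):
    """Sample class probabilities at stencil offsets around every pixel.

    prob: (H, W, C) probability maps from the previous iteration (or atlas).
    offsets: list of (dy, dx) stencil displacements, including (0, 0).
    Returns an (H, W, len(offsets) * C) array of stencil context features.
    """
    feats = []
    for dy, dx in offsets:
        # shifted[r, c] == prob[r + dy, c + dx], with wrap-around at borders
        shifted = np.roll(prob, shift=(-dy, -dx), axis=(0, 1))
        feats.append(shifted)
    return np.concatenate(feats, axis=2)

def integral_context(prob):
    """Integral context: per-row mean and whole-image mean of each class map."""
    H, W, C = prob.shape
    row_mean = prob.mean(axis=1, keepdims=True)                  # (H, 1, C)
    row_feat = np.broadcast_to(row_mean, (H, W, C))              # constant along each row
    img_feat = np.broadcast_to(prob.mean(axis=(0, 1)), (H, W, C))  # constant over the image
    return np.concatenate([row_feat, img_feat], axis=2)          # (H, W, 2C)

def auto_context_iteration(appearance, prob_prev, offsets, classifier):
    """One AC iteration: concatenate appearance, stencil and integral context,
    then predict refined per-pixel class probabilities with the given classifier."""
    H, W, _ = appearance.shape
    x = np.concatenate(
        [appearance, context_features(prob_prev, offsets), integral_context(prob_prev)],
        axis=2,
    ).reshape(H * W, -1)
    prob_next = classifier.predict_proba(x).reshape(H, W, -1)
    return prob_next
```

At the first iteration, prob_prev would be the prior atlas; subsequent iterations feed back the classifier's own output, and in WAAC the adaptively weighted atlas maps could simply be appended as an additional block of features in the same way. The classifier is any model exposing a predict_proba method over the five organ classes (an MLP in the work described here).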
Extensions to auto-context segmentation are proposed and applied to segmentation of multiple organs in porcine offal as a component of an envisaged system for post-mortem inspection at abattoir. In common with multi-part segmentation of many biological objects, challenges include variations in configuration, orientation, shape, and appearance, as well as inter-part occlusion and missing parts. Auto-context uses context information about inferred class labels and can be effective in such settings. Whereas auto-context uses a fixed prior atlas, we describe an adaptive atlas method better suited to represent the multimodal distribution of segmentation maps. We also design integral context features to enhance context representation. These methods are evaluated on a dataset captured at abattoir and compared to a method based on conditional random fields. Results demonstrate the appropriateness of auto-context and the beneficial effects of the proposed extensions for this application.
96
The impact of multi-layered porosity distribution on the performance of a lithium ion battery
A number of recent studies have highlighted that lithium ion batteries have become the dominant battery technology for many automotive and transport applications.One of the primary reasons for their adoption is their relatively high energy density and high power density compared to alternative battery technologies.However, it is well understood that there is an inherent trade-off that must be optimised when designing a new cell to achieve the energy and power targets expressed by a number of vehicle manufacturers and national research bodies.Recent studies, through both numerical simulation and experimental evaluation, have attempted to quantify the trade-off for different cell chemistries and manufacturing processes.The underpinning trend in the results presented is that increasing the power density is only possible at the cost of reduced energy density .Power density can be improved either by replacing a portion of the active material with conductive fillers and using large pores for ion transport, or through the use of thinner electrodes.Both approaches to cell design and manufacture result in less active material within the cell, which consequently leads to a reduction in energy density .A consensus does not yet exist as to the optimal design of a battery cell, in terms of both chemistry and form-factor.There is significant research characterizing the different chemistries, including: lithium cobalt oxide, lithium iron phosphate, lithium nickel cobalt manganese and lithium titanate oxide.The battery performance that can be achieved is guided by the choice of material, the design of the battery, its internal resistance, the electrode properties and the voltage limit for the side reactions .Narrowing the gap between the theoretical and actual useable capacity is a key requirement for improving battery performance .Several research studies have been published that employ novel optimisation approaches to improve the design of high performance batteries .Miranda et al. performed geometry optimisation of a lithium ion battery using the finite element method.Their research took into account different geometries, including conventional and unconventional shapes such as horseshoe, spiral, ring, antenna and gear batteries.Mitchell and Ortiz applied computational topology optimisation methods in an attempt to improve the performance of lithium ion batteries that employ a silicon anode.They addressed the structural and conduction design criteria to concurrently minimise the volume expansion of the anode and to maximise its electronic conductivity.A similar study is reported by Ji et al. 
in which key material properties within the cell are optimised to improve the low temperature performance of the cell, namely increasing the energy capacity of the cell under a 1C discharge at a temperature of −20 °C.Irrespective of the exact choice of battery chemistry, there are important design parameters which can be varied within the manufacturing process to improve cell performance.These include electrode thickness, particle size, porosity, electrode surface area, geometry and the dimensions of the current collectors .Newman and co-workers applied new mathematical approaches to optimise the design variables of a lithium ion battery.They developed a simplified battery model to facilitate the mathematical formulation of the problem and to allow the optimisation of the porosity and electrode thickness .Discharge time and cell capacity were found to be the most significant factors affecting their final design.Further, they also investigated the influence of different particle size distributions on the operation of porous electrodes .Their research continued as they developed a full cell model to evaluate the ohmic related energy loss of the solid electrolyte interface layer.Their model was used to optimise the design of a graphite–iron phosphate cell .In addition, the authors continued to develop a comparable model to optimise and evaluate the performances of both graphite and titanate negative electrodes .Other researchers have employed higher fidelity models and carried out parametric studies to quantify the impact of specific design variables on cell heat generation and the electrical performance of a lithium ion battery .Wu et al. developed a coupled electrochemical–thermal model to evaluate the impact of particle size and electrode thickness on battery performance and heat generation rate.Du et al. introduced a new surrogate modelling framework to map the effect of design parameters, such as cathode particle size, diffusion coefficient and electronic conductivity on battery performance in terms of specific energy and power.They quantified the relative impact of various parameters through a global sensitivity analysis using a cell-level model in conjunction with methods such as kriging, polynomial response, and radial-basis neural network.In addition to these numerical works, Singh et al. investigated experimentally the amount of energy that may be extracted from a cell manufactured using thick electrodes compared to cells that employ a thinner electrode for a Gr/NMC chemistry.They observed a significant capacity loss when using the thicker cells at C-rates of C/2 due to poor kinetics.The authors suggest that the proposed thick electrodes could be advantageous for certain applications where a continuous low C-rate is required.Transport properties, ionic and electronic conductivity have all been shown to have a significant impact on lithium ion battery performance.Corroborating the study by Singh et al. 
, results presented by Doyle and Newman highlight that thicker, less porous electrodes are a better choice for applications that require a long discharge time, whereas thinner electrodes with higher porosity are more suited for high power, short discharge applications.The battery power density is related to both ionic and electronic transport rates.Ion transport occurs in both the separator and electrodes, and transport resistance can be reduced by either optimising the separator or by reducing the ionic resistance within the electrode .It is noteworthy that there is a trade-off between ionic and electronic conductivity, and neither electronic nor ionic conductivity in isolation can achieve optimal specific energy or power.The design trade-offs that exist between ionic and electronic conductivity have been quantified in a study by Chen et al. .Increasing the power density of a cell while maintaining the energy density is a common challenge when developing lithium ion batteries.This is particularly an issue for high energy cells with thick electrodes, as the power reduces due to the longer transport length of the ions and electrons.Understanding the 3D distribution of potential, current, reaction rate, temperature, heat generation, state of charge and other properties is a pre-requisite in optimising the design and manufacturability of lithium ion batteries for larger scale applications.Much of the research presented within the literature is underpinned by the formulation of new electrochemical–thermal models.Examples of such models can be seen in the literature for high power applications , larger cell capacities, cell designs with different tab locations , the modelling of complete battery packs , and studies of the heat generation from the cell and its relationship to the overall pack-level thermal performance .Most electrochemical–thermal models within the literature assume a spatially uniform porosity within the electrode or separator.However, it is known that porosity has an impact on material properties such as conductivity, heat capacity and density.It is argued therefore that an optimised spatial variation of porosity within the electrode could enable a more uniform temperature distribution within the cell .The spatial variations in porosity have not been extensively covered within the literature for battery applications .Relevant studies include Chiang et al. , who have developed a bipolar device with a graded porosity structure.The authors claim that such structures can improve transport properties by removing tortuosity and reducing diffusion distance.Ramadesigan et al. employ a 1D analytical model to optimise the spatial porosity profile across the electrode, for a porous positive electrode made of lithium cobalt oxide.They found that for a fixed amount of active material, optimal grading of the porosity could decrease the ohmic resistance by circa 15–33%.Golmon et al. 
optimised the design properties, porosity and radii distributions with respect to the stress level in the cathode particles. They performed their simulations for a LiMn2O4 cathode through the use of half-cell models in addition to a full cell model with a carbon anode. They found that porosity variability had a higher impact on energy capacity than particle radii. Moreover, the improvement in capacity that came from a graded porosity design was in the order of 22% higher than that of a non-graded cell. In addition to the battery domain, the benefits of varying the porosity of the electrode are being actively studied for fuel cell applications as well. For example, recent results highlight that a non-uniformly dispersed porosity of gas diffusion layers can lead to better water management within polymer electrolyte membrane fuel cells. This study aims to extend the research discussed above by investigating the impact of a multi-layered porosity distribution on the electrical and thermal performance of batteries of different cathode chemistries, such as LFP or NMC, which has not been reported before. To facilitate this study, for the first time a coupled 3D full cell model containing both electrodes is developed. The model enables the authors to investigate the spatial variation along the battery height, which is presented here for the first time. To further increase the accuracy of the model, the electrochemical model is coupled with a thermal model by considering the porosity dependency of the thermal parameters, which has not been considered in previous works. The paper is structured as follows: Section 2 discusses in greater depth the formulation of the problem statement and the different use cases that form the basis of the optimisation study. Section 3 defines the creation of a reference model that is validated against both electrical and thermal data presented within the literature. Sections 4–6 present simulation results for the different electrode porosity profiles. Section 7 discusses a new case study containing a high energy cell. Further work and conclusions are discussed in Sections 8 and 9, respectively. The following use cases are presented to study the impact of varying the porosity of the cathode on cell performance. The scenarios are: a multi-layered structure of the porosity across the electrode thickness and a multi-layered structure of the porosity across the electrode height, as presented in Fig. 1a and b. As discussed above, variations of εs will impact a number of material properties, such as the ionic conductivity, electronic conductivity, lithium diffusion coefficient in the electrolyte, thermal conductivity, heat capacity and finally the density. For each use case, the specific energy, the specific power and the temperature profile along the surface of the cell are quantified. For this design case the height of the electrode is divided into N layers with different porosities, as shown in Fig. 1b. A full 3D electrochemical–thermal model is developed for a single electrode-pair of a lithium ion battery with an LFP cathode. Each pair is assumed to be a sandwiched model of different layers: a negative current collector, a negative electrode, a separator, a positive electrode and a positive current collector. The current model is an extension of the previously developed and validated 1D model, which is based on a similar, experimentally validated model presented by Li et al.
The inputs to the model are the load current, geometrical design parameters, material properties and ambient operating temperature. Through the proposed case studies, discussed in Section 2, only one of the design variables was modified: the volume fraction of the active material, denoted as εs. As explored in the previous section, changing εs will in turn change other properties such as ionic conductivity, electronic conductivity, lithium diffusion coefficient in the electrolyte, thermal conductivity, heat capacity and finally the density. The outputs from the model are the responses of the cell to the load, i.e. terminal voltage, generated heat, temperature profile and SOC. Additional internal variables include: lithium concentration in the electrodes and the separator, potential distribution of the different phases, reaction current, and electronic and ionic current. The primary motivation for using Li et al.'s model for verification is that the authors have presented a complete set of electrochemical parameters for the cell along with a definition of parameter variations with temperature. Since the electrochemical model contains temperature dependent variables, coupling it with a thermal model yields more accurate results. Furthermore, the model presented in that work has been validated using experimental data. The parameterisation data and experimental results presented therein allow for the creation of a Reference Model to ensure the accuracy and robustness of the model, before proceeding with the two use case studies defined within Section 2.2. The developed model is based on P2D electrochemical–thermal equations which are solved in COMSOL Multiphysics for a 3D cell geometry. A segregated time-dependent solver has been used for all the simulations. A cut-off voltage of 2.5 V has been used to terminate the cell discharge. The output from the model shows a high degree of correlation with the published results in terms of the terminal voltage response when the cell is discharged, the surface temperature gradient and other internal parameters. Further validation details are discussed below. The terminal voltage discharge curve for the 10 Ah LFP pouch cell under 1C to 5C discharge current rates at ambient temperature is presented in Fig. 2 and shows good agreement between the simulation results and the experimental data. The simulation results are generated by the 3D COMSOL model used in this study and the experimental data are those reported by Li et al. The initial design parameters of the Reference Case are those reported by Li et al., presented in Table 2. The following parameters have been employed to fully define the reference model. The validated model is used as a Reference Case to evaluate the specific energy, specific power, total heat generation and the resulting temperature profile. The model is further applied to use cases with a multi-layered porosity profile, to account for any changes in the energy or power that may arise from such a porosity distribution. The reaction current and the local SOC of the 10 Ah LFP lithium ion battery are shown in Fig.
3a and b, respectively. Fig. 3a shows the current distribution through the positive electrode thickness during a complete 5C discharge. L is the electrode thickness, where x = 0 represents the separator/electrode interface while x/L = 1 is the electrode/current collector interface. At the early stages of the discharge process, the peak value of the reaction current occurs at the separator/positive electrode interface and moves towards the electrode/current collector interface as the discharge proceeds. This happens when the effective conductivity of the solid phase is higher than that of the liquid phase, i.e. σeff > κeff. In general, the peak location depends upon the magnitude of σeff compared to κeff. In addition, the ionic current density is always at its maximum at the separator/electrode interface, whereas the maximum value for the electronic current is seen at the electrode/current collector interface. The local SOC is defined as the ratio of the lithium ion concentration at the surface of the active material to the maximum lithium concentration of the bulk electrode. Hence, by definition the SOC profile follows the same trend as the lithium concentration presented in Fig. 3b. Within the positive electrode, the particles near to the separator firstly approach the fully charged state. This can be attributed to the reaction current, which itself is a function of the conductivity of the different phases. This is shown in Fig. 3a and b and is consistent with theoretical predictions. The observed dip in the SOC profile versus x/L is due to the non-homogeneous distribution of the reaction current. The amount of activation heat is proportional to the reaction current density and the surface over-potential. Generally, the average value for the activation heat is known not to change much within the positive electrode during the electrochemical reaction. In this example, it ranges from 0.239 W/m3 at t = 10 s to 0.253 W/m3 at the end of the discharge event, when the cut-off voltage of 2.5 V is reached. The peak for the reaction heat appears at the electrode/separator interface during the early period of the discharge, until t = 300 s, and then moves towards the electrode/current collector interface as it proceeds to the end of the discharge, see Fig. 4. As shown in Fig. 3a, a change in the location of the maximum reaction heat is due to a shift in the location of the reaction current as well as a change in the location of the electrochemical over-potential. The average value for the ionic heat is around 0.284 × 10^5 W/m3 at the beginning of the discharge, reduces slightly to 0.225 × 10^5 W/m3 at t = 90 s and increases again monotonically until the end of the discharge event to 0.509 × 10^5 W/m3 at t = 670 s. The spatial profile of the ionic heat is related to the potential gradient of the electrolyte and, as a consequence, to the local current density within the electrolyte. It can be seen from Fig.
4 that the location of the maximum ionic heat is at the separator/positive electrode interface during the entire discharge process. That is in agreement with the ionic current profile through the electrode thickness. Within the next section, the different use cases, defined in Section 2.1, are further studied to understand the causality between variations in εs within the electrode and how this affects the charge resistance and spatial profile of the heat generation, as well as the internal electrochemical parameters of the cell. As discussed above, ohmic resistance is a major source of voltage drop, especially within thicker electrodes. Improved ion transport kinetics within a composite structure such as an electrode can be achieved by adjusting the ionic conductivity relative to the current distribution. As the ionic current can be higher near the electrode/separator interface, a higher ionic conductivity in this region can improve the transport rate. This means that having a higher porosity near to the separator can help improve power density, while a higher fraction of active material in the depth of the electrode helps to retain a higher energy capability; an illustrative numerical sketch of this effect is given at the end of this text. The following case study indicates the variation of the electrode volume fraction versus the dimensionless distance across the electrode. A distance of x = 0 indicates the separator/positive electrode interface, while x/L = 1 represents the positive electrode/current collector interface. The simulation results of the different cases are presented in Table 4. In case studies 1–5, the minimum εs is at the electrode/separator interface, while the maximum εs is close to the current collector. In case studies 6–10, the distribution of εs is in the reverse direction, as shown in Table 3. The results of the first group do not highlight a significant improvement in battery performance, in terms of either specific energy or power density. For the best-case scenario, the energy and power increased by 0.33% and 0.44%, respectively. Moreover, the maximum variation of the peak temperature compared to the Reference Case is approximately 0.9 °C. Even though no significant improvement was observed for the linear and non-linear porosity distributions, a porosity distribution in the reverse direction can deteriorate cell performance significantly. Hence, by having a high active volume fraction near the separator and a low volume fraction near the current collector, it is not possible to improve the performance of the battery, regardless of its chemistry or size. The results of case studies 6–10 confirm the significant loss in both energy and power, along with a higher peak temperature and temperature gradient. For the worst-case scenario, e.g. case 8, the specific energy and the specific power of the battery decreased by circa 36% and 4.1%, respectively. In terms of the peak temperature, case 6 is the worst condition, with a 13.72 °C increase in peak temperature. Case 3 has the highest power density among the test cases, even though the power variation between the different test cases in group 1 is almost negligible. The following sub-sections compare in greater depth the reaction current, SOC and heat generation of case 3 from group 1 with those of the Reference Case during the discharge event. The spatial variation of the reaction current and SOC within the cathode for case 3 is shown in Fig.
5a and b, respectively. At the beginning of the reaction, the reaction current is comparable with that of the Reference Case. However, as it proceeds to the end of discharge, the local variation within the current density increases and generates a larger charge gradient across the positive electrode. The same phenomenon is true when reviewing the SOC results. As shown in Fig. 5b, the local variation of SOC for case 3 is much higher compared to the Reference Case. This is particularly true at the end of the discharge event and causes a non-uniform utilisation of the active material, which is not desirable. The reason for this is the increased local ionic resistance to charge transfer within the electrode regions closer to the current collector, which exhibit a relatively high volume fraction of active material. Fig. 6a compares the heat generation of the different sources for case 3 with the heat generation modes of the Reference Case over the same time interval. The values represent the average heat evolution of the positive electrode during the discharge. The results show that for case 3, both the activation heat and the ohmic heat decrease with the multi-layered porosity structure, which in turn leads to a significant reduction of the total heat generation. The ionic heat, which is the major contributor to heat generation, decreases by 11–24% compared to the Reference Case during the 5C discharge process. The reduction in electronic heat is approximately 28–38% over the discharge. From Fig. 6, the activation heat appears to be unchanged during the electrochemical reaction until t = 345 s. Immediately after this time, it begins to increase, while in the Reference Case it remains almost constant. The underlying reason is that within case 3 the activation over-potential increases after this point, even though the reaction rate is lower. Combining all the heat generation modes together, the total heat generation of the positive electrode for case 3 reduces by 4.2–14%, with a time-averaged value of 10.4%, compared to the Reference Case presented in Fig. 6a. The heat evolution of the positive electrode is almost 45% of the total heat generation of the battery, as shown in Fig. 6b. Considering this, a total heat reduction of 4.2% is obtained for the whole cell. This scenario considers a multi-layered porosity distribution along the z axis of the electrode. Six case studies were investigated. These can be divided into two groups; group 1 with low εs at the top and high εs at the bottom of the cell and the second group with a reverse distribution, i.e.
with high εs at the top and low εs at the bottom. The profile of the porosity from the minimum to the maximum value is either linear or non-linear, as shown within Table 5. The average volume fraction of the layers is 0.5, equal to the overall volume fraction of the Reference Case model defined in Section 3.2. In general, the test cases from group 1 show improved battery performance compared to those represented within group 2, see Table 6. However, the authors acknowledge that the differences are not significant. The only exception to this is test case 6, for which the discharge process terminated after a very short time because the terminal voltage of the cell dropped below the threshold value of 2.5 V. It is for this reason that the value of Ecell is very low. From the results obtained from group 2, it seems that the ionic current at the top of the cell, where the active volume fraction is maximum, is very close to zero, meaning that the low porosity region is a barrier for ion transport and the electrochemical reaction cannot proceed in that region. Among the test cases of group 1, cases 1 and 3 show a significant reduction in energy capability, while maintaining the same power capability as the reference model case. However, case 2 shows a 0.8% power reduction compared to the Reference Case while maintaining the same energy. Moreover, case 2 exhibits the highest temperature among all of the design options within group 1, equal to 51.9 °C, a 12% increase compared to the Reference Case. The high temperature can partly be attributed to its lower power, as a result of its reduced cell voltage. Case 2 from group 1 was chosen for further analysis, due to it having the highest specific energy and temperature, and is explored further in the following sub-sections. Fig. 7 presents the spatial distribution of the internal parameters for case 2 and, similarly, the spatial variation of the electrochemical parameters for the Reference Case during the 5C discharge process at a time t = 500 s. For case 2, the reaction current is at its minimum at the bottom of the cell, where the active volume fraction is highest. This result is consistent with theoretical predictions, as at higher εs there is an increased ionic resistance that limits the rate of the electrochemical reaction. From the results obtained, the current seems almost uniform along the rest of the cell, where the active volume fraction is lower than 0.6. The cathode SOC in Fig. 7 shows a large gradient for case 2, in the order of 53.8% within the electrode, which in turn yields a very non-uniform reaction rate, as shown in Fig. 7. In contrast to case 2, the local variation of SOC is only 1.3% for the Reference Case, accompanied by a much lower gradient of the reaction rate across the electrode surface. The non-homogeneous distribution of the electrochemical parameters within the positive electrode explains the increased temperature gradient that can be seen in case 2 when compared to the Reference Case. The temperature profile of both cases is shown in Fig.
8. It is noteworthy that with a porosity profile of this shape, not only do the peak temperature and temperature gradient vary, but the location of the hot spot also moves from the positive tab towards the bottom of the cell. This highlights the correlation of the temperature gradient with electrode design. To investigate the sensitivity of the model to the thermal parameters, a sensitivity analysis was undertaken. Model outputs such as the temperature of the cell as well as the location of the hot spot were investigated over a wide range of thermal parameters. The results show a small variation of temperature versus the thermal parameters. Moreover, the location of the hot spot was similar to those presented within Section 6.2. Nano-sized LFP electrodes are well known for their high power capabilities, and the results defined within Sections 5 and 6 highlight limited improvement for such cells through a multi-layered porosity structure. To extend this research, a different cell with 53 Ah capacity was investigated for comparison. The detailed study and optimisation of the cell is defined as future work, see Section 8. The NMC/graphite cell is made of a 111 µm positive electrode along with a 202 µm negative electrode. The pertinent design parameters are summarised in Table 8. Similar to the LFP cells, a multi-layered porosity structure across the electrode thickness is applied for the NMC cell, and three case studies are presented considering a multi-layered anode, a multi-layered cathode and a combination of both. The case studies and the achieved energy and power are presented in Table 9. The results show that for this specific cell a significant improvement is obtained by a varied porosity structure across the anode. The improvement can be seen within both the energy and the power characteristics of the cell. The gains in energy and power compared to the Reference Case are around 8.37% and 2.6%, respectively. A varied porosity across the cathode does not yield an improvement for this cell. One conclusion may be that the porosity structure of the cell is not yet optimised and therefore there is the potential for a performance improvement. Further analysis is required to fully understand the causality between the porosity distribution within the electrode and whether the power capabilities of the cell can be improved without compromising energy density. This study highlights the effect that a multi-layered porosity structure within the positive electrode may have on different internal parameters, as well as quantifying the general performance implications for the battery. While the numerical results presented are only applicable to the specific battery type employed, the authors assert the modelling approach and rationale are transferable to other cell types and chemistries. Further research is required to better understand the transferability of these results to larger cells that have higher energy capacity and increased physical dimensions, potentially operating at higher C-rates. In addition, extending the parameterisation of the model to take account of different electrode chemistries will further highlight if the results presented here are representative of a wider cross-section of cell types. As seen from the results, variations in electrode porosity can impact the value and location of the peak temperature during the discharge event. A further refinement would be to extend the model to include degradation mechanisms within the cell. Example improvements to the model would include representing the solid-electrolyte interface growth
and its effect on the capacity fade within the battery. Finally, it is suggested that the optimal design should be manufactured, potentially using the new techniques being developed by Grant et al., accompanied by an experimental evaluation of the optimised cell versus an equivalent commercially available cell that has been manufactured using traditional electrode formulation methods. In this study, a 3D electrochemical–thermal model has been developed using COMSOL Multiphysics in order to investigate the impact of a non-homogeneous porosity distribution on cell performance. In order to facilitate a comparison, a reference model has been formulated and validated using published data and experimental results presented by Li et al. for a 10 Ah LFP pouch cell. Two different scenarios have been defined to vary the porosity across the thickness or height of the positive electrode. The scenarios are: a multi-layered structure of the porosity across the electrode thickness and a multi-layered structure of the porosity across the electrode height. The spatial distribution of internal parameters such as current density and SOC, as well as the overall cell performance, has been investigated for each case study for different discharge currents. From the results obtained, varying the porosity distribution increases the inhomogeneity of the electrochemical parameters. Even though no significant improvement has been observed for this specific cell type, the simulation results highlight that a multi-layered porosity distribution along the electrode thickness, with high porosity at the separator/electrode interface, seems to yield the most promising results. For example, case study 2 from scenario 1 showed a reduction in the total heat generation within the cathode by approximately 4.2–14% over the discharge event. Extending this research further, additional studies are currently ongoing in which more significant results may be obtained, for example, by re-parameterising the model to emulate a cell with a thicker electrode, where the impact of ohmic resistance is expected to be higher. In addition, it would be more promising to simulate the performance of a larger cell operating under higher C-rate conditions, where the inhomogeneity of the cell is known to be more pronounced. While the numerical results presented are only applicable to the specific battery type employed, the authors assert the modelling approach and rationale are transferable to other cell types and chemistries. An example of a high energy cell with a different chemistry type has been presented in Section 7, showing a significant potential for improvement.
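To make the porosity-grading argument above more tangible, the following minimal sketch estimates the electrolyte-phase ohmic drop across a layered positive electrode using the Bruggeman relation for the effective ionic conductivity. It is not the authors' COMSOL P2D model: the assumption of a linearly decaying ionic current, the Bruggeman exponent of 1.5, and all numerical values (thickness, conductivity, current density and porosity profiles) are illustrative placeholders rather than parameters of the Reference Case.

```python
import numpy as np

BRUGG = 1.5  # Bruggeman exponent (common assumption, not taken from this study)

def electrolyte_ohmic_drop(porosity_profile, thickness, i_app, kappa_bulk):
    """Electrolyte-phase ohmic drop across a layered porous electrode.

    Assumes a uniform volumetric reaction rate, so the ionic current decays
    linearly from i_app at the separator interface (x = 0) to zero at the
    current collector (x = L): delta_phi = integral of i(x) / kappa_eff(x) dx.
    """
    eps = np.asarray(porosity_profile, dtype=float)
    dx = thickness / eps.size
    x_mid = (np.arange(eps.size) + 0.5) * dx          # layer mid-points, separator side first
    i_local = i_app * (1.0 - x_mid / thickness)       # linearly decaying ionic current, A/m^2
    kappa_eff = kappa_bulk * eps ** BRUGG             # Bruggeman-corrected conductivity, S/m
    return float(np.sum(i_local / kappa_eff * dx))    # V

if __name__ == "__main__":
    L = 70e-6      # electrode thickness, m (illustrative)
    i_app = 50.0   # applied current density, A/m^2 (illustrative)
    kappa = 1.0    # bulk electrolyte conductivity, S/m (illustrative)

    uniform = np.full(5, 0.30)                             # constant porosity
    graded = np.array([0.40, 0.35, 0.30, 0.25, 0.20])      # high porosity at the separator
    reversed_ = graded[::-1]                               # high porosity at the collector

    for name, prof in [("uniform", uniform), ("graded", graded), ("reversed", reversed_)]:
        dphi = electrolyte_ohmic_drop(prof, L, i_app, kappa)
        print(f"{name:9s} mean porosity = {prof.mean():.2f}, ohmic drop = {1e3 * dphi:.2f} mV")
```

With the same mean porosity in all three cases, the graded profile (more porous towards the separator, where the ionic current is largest) gives the smallest drop and the reversed profile the largest, which is consistent with the qualitative trend reported for case studies 1–10 above.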
This study investigates the impact of a multi-layered porosity profile on the electrical and thermal performance of a lithium-ion battery. Consideration is given to key attributes of the battery, namely its specific power and energy and the temperature distribution that may be generated throughout the cell under electrical load. The COMSOL Multiphysics software tool has been employed to develop a 3D electrochemical–thermal model of a commercially available 10 Ah lithium iron phosphate cell. Through an extensive simulation study, for a fixed amount of active material, the impact of varying the porosity profile across both the thickness and height of the electrode has been studied. For each case study, the distribution of reaction current and the corresponding localised state of charge and temperature profile are quantified for a constant-current discharge of 5C. Simulation results highlight that a multi-layered porosity distribution across the thickness of the electrode has the potential to yield superior battery performance compared to when the porosity is varied along the electrode height. Moreover, the total heat generation within the cathode may be reduced by up to 14% compared to a Reference Case, along with 0.33% and 0.44% improvements in the specific energy and power, respectively.
97
Poisoning effects of H2S and HCl on the naphthalene steam reforming and water-gas shift activities of Ni and Fe catalysts
Syngas obtained during biomass/municipal solid waste gasification is mainly a mixture of carbon monoxide, carbon dioxide, hydrogen, methane and nitrogen, which can be utilized for electric power generation or liquid fuel synthesis. The biomass- and MSW-derived syngas, however, contains significant concentrations of impurities such as tar, HCl, alkali chlorides, particulate matter, ammonia, HCN and sulfur compounds. Tar, consisting of a mixture of aromatic hydrocarbons, causes equipment failure by its condensation and corrosion upon cooling of the syngas. The techniques that can efficiently remove tar compounds to acceptable levels are still under development. One of the prospective techniques is catalytic steam reforming, which converts tar into H2 and CO. Different types of natural minerals and synthetic catalysts have been proposed for tar reforming, among which Ni-based catalysts are the most common and commercially available. The utilization of Ni-based catalysts enhances syngas production due to steam reforming of hydrocarbons and other catalyzed reactions, including dry reforming, WGS and Boudouard reactions. Furthermore, Ni-based catalysts facilitate simultaneous decomposition of NH3 and HCN into N2 and H2 during the reforming process, resulting in lower NOx emissions. Besides nickel, other metals as well as bimetallic and polymetallic composites have been extensively investigated as reforming catalysts. For instance, monometallic Fe and bimetallic Ni-Fe catalysts have shown satisfactory reforming activity and high catalytic stability during reforming of tar compounds under certain conditions. Noichi et al. found that higher Fe content in Fe-Al catalysts enhanced the catalytic steam reforming activity by increasing naphthalene conversion efficiency. In NiO–Fe2O3–Al2O3 catalysts developed by Dong et al. and Margossian et al., syngas production and dry reforming activities of methane were influenced by the Fe content. Furthermore, catalysts with optimized Fe content were reported to enhance the thermal stability of Ni-Fe catalysts by mitigating coke formation during tar reforming. This superior effect was attributed to the formation of Ni-Fe alloys enriched with Fe-O species at the surface of nanoparticles that could catalyze coke oxidation. Yung et al. attempted to regenerate a spent Ni catalyst which was contaminated during catalytic tar reforming at 850 °C by 43 ppmv H2S in syngas produced from gasification of white oak. It was found that the Ni-S species in the catalyst could not be completely removed during the steam/air regeneration procedure. As a result, the catalytic activity of Ni was only partially recovered and was lower than its initial activity levels. The low melting point and high surface mobility of NiS can accelerate sintering, which may deteriorate the activity of the catalyst. Furthermore, sulfur species can increase the carbon deposition on the catalyst surface, which also decreases the catalytic activity. The presence of HCl in syngas was reported to decrease the reforming and WGS activities of Ni catalysts. Richardson et al. found that the conversion of methane was strongly inhibited in the presence of HCl, due to the chemisorption of HCl by Ni. Coute et al. demonstrated that HCl induced a detrimental effect on WGS activity during steam reforming of chlorocarbons. Veksha et al.
investigated the mechanism for the activity loss of Ni catalysts during naphthalene reforming in the presence of 2000 ppmv HCl and demonstrated that naphthalene conversion is not influenced by HCl, while WGS activity was poisoned due to the sintering of Ni. In the above-mentioned studies, either H2S or HCl was present in the gas streams during the reforming, while in real syngas these impurities are present simultaneously. To what extent the co-existence of both H2S and HCl in the gas can influence the catalytic activity during steam reforming has not yet been investigated. The purpose of this work was to investigate the influence of H2S and HCl on the poisoning of synthesized and commercial catalysts during steam reforming of tar. It is well known that Ni is an excellent metal for steam tar reforming. In this study, the addition of Fe is attempted, because Fe is a low-cost material and Fe species have high redox activity. Furthermore, the addition of Fe to Ni had a beneficial effect on the performance of bimetallic catalysts under certain experimental conditions. Four synthesized catalysts with different loadings of Ni and Fe on an alumina support and two commercial catalysts were tested in a fixed bed reactor at different temperatures with varying contents of H2S and HCl in the gas. Naphthalene was used as a model tar compound as it is one of the major tar species, which also has high stability during tar reforming. In this study, 50 ppmv H2S and 300 ppmv HCl were used as they are in the range of typical concentrations of H2S and HCl present in syngas produced from biomass/MSW. The individual and combined effects of the impurities on the reforming and WGS activities of the catalysts and the reversibility of the catalyst poisoning are discussed. Four catalysts with different Ni and Fe contents were synthesized using the method described elsewhere. Briefly, the catalysts were prepared by impregnation of aluminum hydroxide having particle sizes of 0.56–1.18 mm with known concentrations of Ni(NO3)2·6H2O and Fe(NO3)3·9H2O in aqueous solution. After evaporation of water in a rotary evaporator, the materials were dried overnight in an oven at 105 °C and calcined at 500 °C for 2 h in air, followed by sieving to obtain particle sizes between 0.56 and 1.18 mm. The synthesized catalysts are denoted as xNi – yFe, where x and y represent the calculated molar contents of the metals per 100 g of the resulting catalyst. Two commercial catalysts from different manufacturers were crushed and sieved to obtain 0.56–1.18 mm particles, and used as the reference materials. The catalysts were characterized by X-ray diffraction analysis with a Cu-Kα radiation source, X-ray photoelectron spectroscopy with a dual anode monochromatic Kα excitation source, X-ray fluorescence spectroscopy, transmission electron microscopy at 120 kV and N2 adsorption at −196 °C. Binding energies of elements in the XPS spectra were corrected against the adventitious carbon C 1s core level at 284.8 eV. The processing of XPS peaks was carried out in the CASA XPS software. TEM images were used to measure the size of Ni nanoparticles in the spent catalysts. The diameters were calculated using ImageJ software by analysing 150–200 Ni nanoparticles in each sample and assuming that the nanoparticles have an ideal spherical shape. Specific surface areas of the catalysts were calculated from N2 adsorption isotherms using the BET model. Total pore volumes were calculated from the N2 adsorption volume at P/P0 = 0.96. Temperature programmed reduction was performed in a 5% H2/N2 gas mixture at 30 mL min−1 flow rate with a temperature
ramp of 10 °C min−1 up to 900 °C. Carbon content in the catalysts was measured by a CHNS elemental analyser. The properties of the pristine Ni, Fe, Ni-Fe and commercial catalysts are presented in Table 1. Ni and Fe contents were determined from XRF analysis and used to calculate the molar quantities of Fe and Ni. The molar Ni and Fe loadings per 100 g of catalyst were close to the corresponding theoretical values of x and y in the xNi-yFe samples. The amount of Ni in 0Fe-0.4Ni and the two commercial catalysts loaded into the reactor for steam reforming of naphthalene was nearly the same due to the differences in bulk density, allowing comparison between the activities of Ni in the synthesized and commercial catalysts. The synthesized catalysts had higher BET specific surface areas and total pore volumes compared to the commercial catalysts. According to the high N2 adsorption volumes at relative pressures P/P0 > 0.1 and the hysteresis loops between the adsorption and desorption branches of the isotherms, the synthesized materials were mesoporous. Among them, 0Fe-0.4Ni had the largest porosity, which is one of the reasons for its better catalytic performance described in the following study. The specific surface areas and total pore volumes of the synthesized catalysts decreased with increasing Ni + Fe contents, which can be attributed to the impregnation of the porous alumina with the loaded metal species. The X-ray diffraction patterns of the synthesized catalysts in Fig. 2 consist of broad features with no sharp XRD peaks, indicating that the alumina, nickel and iron oxides have non-crystalline and/or nanosized structures, so that the alumina provides surface area for better dispersion of the catalyst. On the other hand, in the commercial catalysts, the XRD peaks of NiO and α-Al2O3 can be clearly identified. Fig. 3 depicts the Ni 2p and Fe 2p core level spectra of the four pristine synthesized catalysts. The Ni 2p spectra of 0Fe-0.4Ni, 0.1Fe-0.4Ni, 0.2Fe-0.4Ni and 0.5Fe-0Ni contain shake-up satellite peaks with a binding energy of approx. 862 eV and peaks with a BE of approx. 856 eV. In Ni-based catalysts, the binding energy of Ni2+ typically increases with the strength of the NiO–Al2O3 interactions, from approx. 854 eV for unsupported or weakly bound NiO to approx. 856 eV for NiO strongly bound to the support. At high NiO–Al2O3 interaction levels, the binding energy of Ni2+ in NiO of alumina supported catalysts becomes similar to that in NiAl2O4 spinel. Due to this shift in binding energy, it is uncertain whether the Ni2+ state in the catalysts is NiO or NiAl2O4 based solely on the XPS spectra. The similar binding energies of Ni2+ in all synthesized catalysts suggest that, independently of the Ni + Fe loading, strong interactions between NiO and alumina were maintained in the catalysts and there was no formation of new compounds with the Fe species. The same can be concluded from the Fe 2p core level spectra. The binding energies of Fe 2p for all catalysts were similar regardless of the presence of NiO and corresponded to Fe3+ in Fe2O3. TPR profiles provide useful information regarding the reducibility of the Ni and Fe oxides in the synthesized catalysts. The reduction of the catalysts occurred in a wide temperature range between 300 and 800 °C. The catalysts 0.2Fe-0.4Ni and 0.5Fe-0Ni contained a distinct reduction peak at 475 °C, which corresponds to the reduction of Fe2O3, i.e.
the main Fe species in the catalysts according to XPS. According to the TPR profile of 0Fe-0.4Ni, most of the nickel was reduced at 500–700 °C with the maximum reduction temperature at 590 °C, which can be assigned to highly dispersed NiO having strong metal-support interactions. Small shoulder peaks at 350, 425 and 770 °C were also observed. The reduction at 300–400 °C is typically attributed to bulk and/or unsupported NiO, while the reduction above 700 °C could be attributed to nickel aluminates formed due to sintering of NiO with Al2O3, indicating that minor quantities of these species could also be present in the synthesized catalysts. According to the similar positions of the H2 consumption peaks in the catalysts, the reducibility of the Ni species was not influenced by the addition of Fe2O3 and vice versa. XPS and TPR data of the commercial catalysts are shown in Fig. S2. As reported elsewhere, in both catalysts Ni2+ was in the form of NiO. However, in Commercial 2, NiO was more strongly bonded to the support compared to Commercial 1. Considering the similar Ni loading per 0.5 mL catalyst bed for Commercial 1, Commercial 2 and 0Fe-0.4Ni, this allows investigation of the effects of H2S and HCl on the activity of catalysts with different strengths of NiO–Al2O3 interactions, determined by the differences in porosity, crystalline structure, NiO dispersion, etc. The addition of Fe to the Ni-based catalyst provides further insight into the influence of H2S and HCl on the activity of catalysts with different metal compositions. Fig. 5 depicts naphthalene conversion over the six catalysts in the presence and absence of H2S and HCl at 850 °C. CO and CO2 were the only reaction products. No formation of C1–C5 hydrocarbons was observed during the process. Naphthalene conversion over the catalysts fluctuated during the first 30 min of the experiment and stabilized thereafter. For all catalysts containing Ni, the reforming activity was lower in the presence of H2S and HCl due to the poisoning effect. Furthermore, naphthalene conversion by 0.5Fe-0Ni was approx. 12% in the absence of H2S and HCl, and decreased to approx. 8% in the presence of H2S and HCl, suggesting the poisoning of Fe. Regardless of the presence of H2S and HCl, naphthalene conversion was stable during the 5 h tests. The synthesized 0Fe-0.4Ni showed conversion efficiency comparable with the commercial catalysts, which was likely due to the same amount of Ni loading per 0.5 mL bed in the three catalysts. These results suggest that there was a similar poisoning effect on the naphthalene reforming activity for the catalysts with different strengths of NiO–Al2O3 interactions. The reforming activity of 0.1Fe-0.4Ni was similar to 0Fe-0.4Ni, while the higher content of Fe in 0.2Fe-0.4Ni resulted in decreased naphthalene conversion. This could be attributed to the decreased porosity and specific BET surface area at the higher Fe content due to the occupation of surface sites. Unlike the Ni-based catalysts, 0.5Fe-0Ni merely achieved approx. 8% naphthalene conversion, indicating that Ni is a more active catalyst for naphthalene reforming compared to Fe. The lower catalytic toluene reforming activity due to Fe addition to a Ni/zeolite catalyst was reported by Ahmed et al.
, who found that the depletion in basicity strength of this Fe-Ni/zeolite catalyst led to suppressed steam reforming. Elemental CHN analysis of the pristine and spent catalysts suggests that there was no significant increase in the amount of carbon after the reforming, indicating that no coking happened in the presence of H2S and HCl. This can be attributed to the relatively high content of steam in the model gas that could assist in carbon gasification. Fig. 6 shows the TEM images of fresh 0.2Fe-0.4Ni after preheating in 20 vol% H2–80 vol% N2 and spent 0.2Fe-0.4Ni after 5 h of reaction at 850 °C in the presence of H2S and HCl. The comparison of the morphologies of the fresh and spent catalyst indicates the absence of carbon deposition during naphthalene reforming, which is consistent with the CHN analysis. After reforming, Ni was present in the form of discrete spherical nanoparticles. This is attributed to the sintering of Ni during the process. On the contrary, Fe was evenly distributed over the catalyst surface. Fig. S3 shows that in the other Fe-containing catalysts, Fe also remains in the dispersed state. The coverage of the entire surface of the spent 0.2Fe-0.4Ni catalyst by S and Cl indicates that the chemisorption of these species occurred on both the Ni and Fe sites, which explains the poisoning effect of HCl and H2S on the reforming activity of both Ni and Fe. Fig. 7 shows the XRD patterns of the spent catalysts after naphthalene reforming at 850 °C in the presence of 50 ppmv H2S and 300 ppmv HCl. In the spent samples containing Ni, the formation of a metallic Ni phase was observed, as suggested by the labelled metallic Ni XRD peaks. As there were no XRD peaks of NiO in the fresh catalysts, these results indicate that upon reduction and reforming, Ni undergoes sintering into larger size crystalline nanoparticles, which is consistent with the TEM data in Fig. 6a. Unlike Ni, the formation of crystalline metallic Fe in 0.5Fe-0Ni was not observed, as suggested by the absence of corresponding XRD peaks in this sample and the even distribution of Fe in the TEM images of the spent catalysts. According to the TPR data, the reforming temperature was sufficient for the reduction of Fe2O3 to metallic Fe. Therefore, it is likely that in the spent catalysts iron was in a metallic, non-crystalline state. These observations are consistent with the scanning TEM data in Fig. 6, showing the differences in Fe morphology compared to Ni. There was no change in the position of the metallic Ni XRD peaks in 0Fe-0.4Ni, 0.1Fe-0.4Ni and 0.2Fe-0.4Ni with the addition of Fe, which would have been observed with the formation of Ni-Fe alloys, indicating that there was no alloying between Ni and Fe in the spent catalysts. The amount of chemisorbed sulfur and chlorine species during reforming was typically low, which explains the absence of XRD peaks corresponding to metal chlorides and sulfides in all catalysts. The reaction temperature is one of the most important operating variables for steam reforming. 0Fe-0.4Ni, 0.1Fe-0.4Ni and Commercial 1 were further selected to investigate the effect of temperature on catalytic activity. Fig.
8 shows the naphthalene conversion at 790, 850 and 900 °C in the presence of H2S and HCl. Except for the decrease in conversion within the initial 30 min at 790 °C, the activity of the catalysts remained constant thereafter, indicating that it is possible to maintain a stable naphthalene conversion efficiency in the presence of H2S and HCl within the studied period of time at each temperature. The catalytic activities of the three catalysts were similar in the presence of H2S and HCl at each temperature, regardless of the strength of the NiO–Al2O3 interactions and the addition of Fe. The reforming activities of all catalysts were greatly influenced by the reforming temperature, increasing from approx. 40% to approx. 100% efficiency with the increase in reaction temperature from 790 to 900 °C. These results can be attributed to the increased reaction rate of naphthalene with steam and the decreased H2S poisoning effect at higher temperature. It is well known that H2S poisoning is caused by sulfur chemisorbed on the nickel surface of the catalyst; this chemisorption reaction is reversible. With increasing temperature, desorption of H2S increases, releasing surface active sites for the steam reforming reaction. To determine the respective and relative roles of H2S and HCl in the catalyst poisoning effect observed in Figs. 5 and 8, naphthalene reforming under four different conditions was compared: (1) 50 ppmv H2S and 300 ppmv HCl, (2) 0 ppmv H2S and 300 ppmv HCl, (3) 50 ppmv H2S and 0 ppmv HCl, and (4) 0 ppmv H2S and 0 ppmv HCl. The experiments were carried out at 790 °C, as the poisoning was the most prominent at this temperature. According to Fig. 9, in the absence of H2S, naphthalene conversion was approx. 100% both at 0 and 300 ppmv HCl. In the presence of 50 ppmv of H2S, naphthalene conversion decreased to approx. 40% both at 0 and 300 ppmv HCl. These results suggest that the poisoning of naphthalene reforming was caused by H2S, while HCl had a negligible effect on this reaction. Furthermore, since the naphthalene conversion in the presence of H2S was similar at 0 and 300 ppmv HCl, it can be concluded that H2S and HCl had no synergistic effect on the poisoning of reforming activity when both impurities were present in the stream. Based on the obtained data, during the reforming of naphthalene from gas streams containing both H2S and HCl, the poisoning of the catalysts is mainly caused by H2S and can be attributed to the decreased accessibility of surface active sites for hydrocarbons due to H2S chemisorption. The poisoning effect on naphthalene reforming activity was similar for the catalysts with different strengths of NiO–Al2O3 interactions and Ni + Fe contents. Increasing the reaction temperature could effectively improve the catalytic activity of the Ni and Ni-Fe based catalysts in the presence of H2S and HCl, leading to approx. 100% naphthalene conversion. Fig.
10 shows the ratios between CO and CO2 in the gas during naphthalene reforming over the catalysts at 850 °C in the presence of H2S and HCl. Steam reforming of hydrocarbons is typically presented as the combination of two reactions, namely partial oxidation of the hydrocarbon by steam into CO and H2, followed by the WGS reaction. Consequently, a lower CO/CO2 ratio can probably be attributed to a higher conversion of CO into CO2 over the catalysts via the WGS reaction. The dashed black line in Fig. 10 shows the CO/CO2 ratio at thermodynamic equilibrium; an illustrative calculation of this equilibrium benchmark is sketched at the end of this text. For all catalysts, the CO/CO2 ratios were higher than 0.52, indicating that thermodynamic equilibrium was not attained. This is because a lower space velocity and a longer residence time would be required for the equilibration of the WGS reaction over the catalysts. There were significant differences in the kinetics of the WGS reaction, as suggested by the different CO/CO2 ratios for the catalysts. The CO/CO2 ratios of the synthesized Ni and Ni-Fe catalysts increased from 0.9–1.0 to 1.2–1.5 during the 5 h tests, depending on the sample. These changes were much lower compared to the Commercial 1 and Commercial 2 catalysts, suggesting higher stability of the WGS activity of the synthesized catalysts. Based on the similar CO/CO2 ratios for 0Fe-0.4Ni, 0.1Fe-0.4Ni and 0.2Fe-0.4Ni, the addition of Fe did not alter the WGS activity of the catalysts. Furthermore, the lower CO/CO2 ratios over the synthesized Ni-containing catalysts compared to 0.5Fe-0Ni indicate that the WGS activity over Ni was higher than over Fe during naphthalene steam reforming. Although 0Fe-0.4Ni, Commercial 1 and Commercial 2 had similar NiO loadings per catalyst bed volume, the strength of the interactions between NiO and the alumina support was different in the catalysts, eventually leading to different Ni-support interactions in the reduced catalysts. Specifically, the strength of interactions increased from Commercial 1 to Commercial 2 and, finally, to 0Fe-0.4Ni, which is consistent with the increase in WGS activity in the same order. One reason behind the observed phenomenon is the mechanism of the WGS reaction over Ni-based Al2O3 catalysts. By combining density functional theory and microkinetic modelling, it was demonstrated that the Ni-support interface provides catalytically active sites for the WGS reaction, serving as a storage site for oxygenated Ni2+ species. Therefore, the decrease in the strength of the metal-support interactions in the catalysts can result in the observed loss of WGS activity. In comparison, for the steam and dry reforming reactions of methane, metal-support interactions were found to be less important, as the active sites for these reactions seem to be different. Assuming that the mechanisms for reforming of hydrocarbons are similar, this could explain the negligible differences in naphthalene conversion over 0Fe-0.4Ni, Commercial 1 and Commercial 2. Fig. 11a shows the standard Gibbs reaction energies for the oxidation of Ni as a function of the reforming temperature. It can be seen that ΔG° increases with temperature, suggesting that higher temperature favours the formation of metallic Ni. However, at the same temperature, ΔG° is lower when γ- and α-Al2O3 participate in the reaction, indicating that in the catalysts with strong NiO-support interactions there is a higher content of oxygenated Ni2+. From the corresponding thermodynamic equilibrium constants, the content of Ni2+ can be calculated at the experimental conditions. According to Fig.
11b, in the absence of NiO-support interactions, the content of Ni2+ slightly increases with temperature and is 1.3%, 1.4% and 1.5% at 790, 850 and 900 °C, respectively. In the presence of NiO-support interactions, the content of Ni2+ is much higher at each reforming temperature. Notably, γ-Al2O3 favors the stabilization of Ni2+ to a larger extent compared to α-Al2O3, highlighting the importance of the alumina material for the design of catalysts with tailored WGS activity. The provided thermodynamic calculations confirm that at the reforming temperatures, NiO-Al2O3 interactions can indeed stabilize nickel in the oxidized form due to the participation of the support in the reaction, which could in turn be responsible for the higher WGS activity of the synthesized catalysts. Since the XRD patterns of the spent catalysts contain only the metallic Ni phase, it is likely that the oxygenated Ni2+ species are mainly present at the NiO-Al2O3 interface. Previously, it was proposed that the exposure of catalysts to a high concentration of HCl during steam reforming of naphthalene causes the chemisorption of HCl on Ni followed by the sintering of Ni species into larger size nanoparticles. This process is irreversible and leads to a permanent loss of WGS activity. The poisoning of the WGS activity of catalysts by low concentrations of H2S and HCl has not been investigated. Fig. 12 presents the CO/CO2 ratios for 0Fe-0.4Ni, 0.1Fe-0.4Ni and Commercial 1 at 790, 850 and 900 °C. Among the tested catalysts, Commercial 1 showed the lowest WGS activity at each temperature. With the increase in temperature, the CO/CO2 ratios for the Commercial 1 catalyst decreased, indicating that the WGS activity of this catalyst could be improved by increasing the reforming temperature. Since the content of oxygenated Ni2+ species is relatively high at all reforming temperatures, this could be attributed to the faster reaction rate, which allows the system to approach closer to thermodynamic equilibrium, and/or to enhanced desorption of S- and Cl-species at higher temperature. Nevertheless, the CO/CO2 ratios for Commercial 1 remained high compared to those corresponding to thermodynamic equilibrium. The CO/CO2 ratios for the Ni and Ni-Fe catalysts were lower than those of Commercial 1 and closer to thermodynamic equilibrium at all temperatures, indicating higher WGS activity. Since the poisoning of Commercial 1 was more pronounced at lower temperature, the individual and combined effects of H2S and HCl on the WGS activities of two representative catalysts, namely 0Fe-0.4Ni and Commercial 1, were compared at 790 °C. Fig. 13a presents the CO/CO2 ratios at the four experimental conditions: (1) 50 ppmv H2S and 300 ppmv HCl, (2) 0 ppmv H2S and 300 ppmv HCl, (3) 50 ppmv H2S and 0 ppmv HCl, and (4) 0 ppmv H2S and 0 ppmv HCl. With respect to the WGS reaction, the presence of H2S and HCl had a negligible effect on the poisoning of 0Fe-0.4Ni, indicating high stability of its WGS activity to the action of both impurities. A deterioration of the WGS activity of Commercial 1 was observed even in the absence of H2S and HCl. This could be attributed to the lower strength of the NiO–Al2O3 interactions in this catalyst compared to 0Fe-0.4Ni. As shown in Fig.
13b and c, the sizes of the Ni nanoparticles were larger in the spent Commercial 1 compared to 0Fe-0.4Ni after using condition 4, which could result in the lower WGS activity. For Commercial 1, the CO/CO2 ratios increased under both condition 2 and condition 3, indicating that both impurities contributed to the poisoning of the WGS activity. The poisoning of the WGS activity in the presence of H2S was faster compared to HCl, as demonstrated by the rapid increase in the CO/CO2 ratio within the first 60 min of reaction. The poisoning of the catalyst was more pronounced in the presence of both H2S and HCl, indicating a detrimental synergistic effect of the impurities. According to Fig. 13b and c, at low concentrations of H2S and HCl, there was no change in the sizes of the Ni nanoparticles of 0Fe-0.4Ni and Commercial 1. These data suggest that, unlike at the 2000 ppmv HCl reported in the literature, low concentrations of H2S and HCl are unable to enhance Ni sintering, and the detrimental effect on the WGS activity of Commercial 1 was most likely associated with the poisoning of the catalyst surface solely via chemisorption. This could explain the increase in the WGS activity of Commercial 1 with the increase in the reaction temperature from 790 to 900 °C in Fig. 12, as higher temperature typically decreases chemisorption. If this hypothesis is correct and chemisorption is the main reason for the catalyst poisoning, then after desorption of the S and Cl species the WGS activity of the catalyst can be restored. On the other hand, if sintering causes the poisoning, as observed for high concentrations of HCl, the loss of WGS activity would be irreversible. To test the hypothesis, the spent Commercial 1 and 0Fe-0.4Ni after 5 h of naphthalene reforming at 790 °C in the presence of 50 ppmv H2S and 300 ppmv HCl (Exp. 1) were respectively used for a subsequent 5 h of naphthalene reforming at 790 °C in the absence of H2S and HCl (Exp. 2). According to Fig. 14a, while at the end of Exp. 1 the naphthalene conversions by 0Fe-0.4Ni and Commercial 1 were both only approx. 40%, they were restored to approx. 80% for Commercial 1 and approx. 85% for 0Fe-0.4Ni during Exp. 2, when H2S and HCl were removed from the gas stream. This improvement can be attributed mainly to the desorption of H2S, which has a detrimental effect on the steam reforming of hydrocarbons, as demonstrated in Section 3.2. Despite the approximately two-fold increase in catalytic activity, the naphthalene conversion during Exp. 2 was still lower compared to that of the fresh catalysts utilized in the absence of H2S and HCl. These data suggest that the desorption of H2S was incomplete. According to Fig. 14b, for 0Fe-0.4Ni the CO/CO2 ratio during Exp. 2 was similar to that during Exp. 1, indicating that the presence of H2S and HCl had a negligible effect on the WGS activity of 0Fe-0.4Ni. This observation is consistent with Fig.
13a, showing high stability of the WGS activity to the action of both impurities. However, after Exp. 1 the CO/CO2 ratio for Commercial 1 was 4.4; it drastically decreased to 2.5 during the first 30 min of Exp. 2 and remained stable for 4.5 h. This value is comparable with the CO/CO2 value obtained for the fresh Commercial 1 utilized in the absence of H2S and HCl, indicating that at low concentrations of impurities the poisoning effect on the WGS catalytic activity was reversible, thus confirming the hypothesis. The structure of the catalyst played an essential role in the WGS reaction but not in the reforming reaction. Stronger NiO–Al2O3 interactions provided a beneficial effect on catalytic activity, which can probably be attributed to the formation of a larger content of oxygenated Ni2+ species that serve as active sites for the WGS reaction. The poisoning effect of HCl and H2S on WGS was more pronounced in the catalyst with NiO weakly bonded to the Al2O3 support. At low H2S and HCl concentrations, the poisoning of the WGS activity proceeds via chemisorption of S and Cl species, and the loss of catalytic activity is reversible when H2S and HCl are removed from the gas stream. The effects of H2S and HCl on catalytic steam reforming of naphthalene were investigated using Ni, Ni-Fe and Fe catalysts supported on alumina at 790, 850 and 900 °C. Ni had higher reforming and WGS activities compared to Fe, and the activities of Ni were not significantly influenced by the addition of Fe. H2S poisoned the naphthalene reforming activity of the catalysts, while the addition of 300 ppmv HCl to the gas stream had no effect on this reaction at either 0 or 50 ppmv H2S. On the contrary, both HCl and H2S could poison the WGS activity of the catalysts, and the poisoning effect was more pronounced when both impurities were present in the gas stream. The activity poisoned by H2S could be only partially restored by removing H2S from the gas stream, indicating strong chemisorption of H2S on Ni. However, the H2S poisoning effect could be prevented by carrying out reforming of naphthalene at higher temperatures. Specifically, the increase in temperature from 790 °C to 900 °C increased naphthalene conversion from approx. 40% to approx. 100%. The poisoning of the WGS activity during naphthalene reforming was significantly influenced by the structure of the catalyst. Stronger NiO–Al2O3 interactions provided a beneficial effect, minimizing the loss of WGS activity. This beneficial effect could be attributed to the formation of NiO-support interfaces upon reaction, serving as active sites for the WGS reaction. At these concentrations of H2S and HCl, the loss of WGS activity was reversible when H2S and HCl were removed from the gas stream.
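To give a feel for how the thermodynamic equilibrium benchmark referred to above (the dashed line in Fig. 10) can be estimated, the following minimal sketch computes the water-gas shift equilibrium constant from the commonly used Moe correlation, Kp ≈ exp(4577.8/T − 4.33), and solves for the equilibrium extent of reaction by bisection. The inlet mole fractions used below are illustrative placeholders, not the model gas composition of this study, so the computed ratios will not reproduce the 0.52 value quoted in the text.

```python
import math

def wgs_equilibrium_constant(T_kelvin):
    """Water-gas shift equilibrium constant, Moe correlation (approximate).
    Dimensionless, since the mole number is conserved in CO + H2O <-> CO2 + H2."""
    return math.exp(4577.8 / T_kelvin - 4.33)

def equilibrium_co_co2_ratio(y_co, y_h2o, y_co2, y_h2, T_kelvin):
    """Solve CO + H2O <-> CO2 + H2 for the equilibrium extent xi by bisection."""
    K = wgs_equilibrium_constant(T_kelvin)

    def residual(xi):
        # Positive when the forward reaction has gone past equilibrium, negative before it.
        return (y_co2 + xi) * (y_h2 + xi) - K * (y_co - xi) * (y_h2o - xi)

    lo = -min(y_co2, y_h2) + 1e-12   # limit of the reverse shift
    hi = min(y_co, y_h2o) - 1e-12    # limit of the forward shift
    for _ in range(100):             # residual is monotonic in xi, so bisection converges
        mid = 0.5 * (lo + hi)
        if residual(lo) * residual(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    xi = 0.5 * (lo + hi)
    return (y_co - xi) / (y_co2 + xi)

if __name__ == "__main__":
    # Placeholder inlet mole fractions for a steam-rich reformer gas (illustrative only).
    feed = dict(y_co=0.15, y_h2o=0.35, y_co2=0.10, y_h2=0.25)
    for T_c in (790, 850, 900):
        ratio = equilibrium_co_co2_ratio(T_kelvin=T_c + 273.15, **feed)
        print(f"T = {T_c} C: equilibrium CO/CO2 ~ {ratio:.2f}")
```

Because the equilibrium constant of the exothermic WGS reaction falls as temperature rises, the equilibrium CO/CO2 ratio increases with temperature; measured ratios such as those in Figs. 10 and 12 lie above this benchmark whenever the WGS reaction is kinetically limited or poisoned.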
H2S and HCl are common impurities in raw syngas produced during gasification of biomass and municipal solid waste. The purpose of this study was to investigate the poisoning effect of H2S and HCl on synthesized and commercial catalysts during steam reforming of naphthalene. Four synthesized catalysts with different loadings of Ni and Fe on an alumina support and two commercial catalysts were selected and evaluated in a fixed-bed reactor at 790, 850 and 900 °C. The obtained results revealed that the reforming and water-gas shift (WGS) activities of the catalysts did not benefit from the Fe addition. The activities were influenced differently by H2S and HCl, indicating that the reactions were catalyzed by different active sites on the nickel surface. In the presence of H2S and HCl, the poisoning of naphthalene reforming activity was caused by H2S and was not affected by HCl when both compounds were present in the gas. H2S chemisorbs on the nickel surface, forming NiS and decreasing the accessibility of active sites to hydrocarbons. The poisoning effect was only partially reversible. On the contrary, the poisoning of WGS activity could be caused by both H2S and HCl, and the activity could be completely restored when H2S and HCl were removed from the gas. Unlike naphthalene reforming activity, which was comparable for catalysts with similar Ni loadings, WGS activity depended on the catalyst structure and was less susceptible to poisoning by H2S and HCl in the case of the catalyst with strong NiO-support interactions.
98
Service growth in product firms: Past, present, and future
Service growth in product firms has become one of the most active service research domains, to the point that it has been identified as a strategic research priority. This domain is concerned with product firms shifting from developing, manufacturing, and selling products to innovating, selling, and delivering services. This shift towards services is typically a strategic response to reaching the maturity phase in the product lifecycle and thus facing limited revenue growth. Services are a way to escape the product commoditization trap; for example, in the elevator industry, companies like Otis and Kone enjoy maintenance service margins of 25–35% compared with a margin of approximately 10% for new equipment. If successfully deployed, services can become an important source of revenue and profits, ensure customer satisfaction and loyalty, and support firms' growth. In addition, services can play a powerful role in building brand equity in business markets, especially in industries where it is difficult to maintain competitive product differentiation due to commoditization. Across industry sectors, firms are actively pursuing service growth strategies. Examples include traditional manufacturing corporations, such as General Electric and Siemens, as well as software firms like Microsoft and former hardware firms like IBM. For example, Microsoft is increasingly orienting towards services with strategic initiatives such as re-formatting its Office suite into a cloud-based subscription model, and IBM is transforming into a cognitive solutions and cloud platform company. The shift, however, is not limited to large firms, as many SMEs are also re-orienting towards services. Service growth is open to a variety of conceptualizations and has attracted interest from a variety of disciplines. Theoretical and empirical work has been accumulating, with a sharp rise in publications, special issues, and conferences in recent years. Well over 180 scholarly journal articles on this topic are published every year, as well as books geared towards academics and managers. Many findings have proven to be highly relevant to industry and have attracted management attention. While the history of research on service growth in product firms can be traced back to the mid-1980s, its antecedents go back to the mid-1800s, when the expansion of the railroad and the telegraph networks in the US set the stage for the vertical integration of manufacturers into marketing, sales, repair, financing, and purchasing activities. However, articles increasingly replicate existing knowledge in an exploratory and descriptive manner. The identification and investigation of small empirical gaps dominates current contributions and results in incremental theoretical improvements. Much of the research still lacks a strong theoretical foundation, and substantial theoretical extensions are rare. The purpose of this special section is to promote and bring together critical research that challenges prevailing assumptions and strengthens the theoretical foundations. As a way to frame the contribution of this special section, we discuss the past, present, and future of the research domain. The evolution of the research field can be divided into two distinct phases. However, critical analysis suggests that despite tremendous research interest and output, which suggest that the research tradition is well established, the research domain is still in a theoretically and methodologically nascent stage. Service growth strategy was identified as a recurring phenomenon, and
the boundary of the research domain was established during the last two decades of the last century, in what we call the first phase of the research evolution. The research started with the idea that services were customer service, that is, an add-on to products and an important part of the buyer-seller relationship and a means of competitive advantage. Conceptually, Bowen et al. suggested two alternative configurations of service orientations in manufacturing: service-oriented manufacturing and prototypic manufacturing characteristics. If manufacturers emphasize service-oriented goals, such as customer responsiveness and high customer contact, then they are urged to adopt a service-oriented manufacturing configuration based on organizational arrangements and resource allocation originating in the service literature. One of the seeds for this line of research was Vandermerwe and Rada's introduction of the term "servitization of business". While servitization today has become almost synonymous with service growth in product firms, Vandermerwe and Rada regard it as a competitive tool relevant for companies in all industries on a global scale. It allows companies to create value by blending services into the overall strategies of the company. Echoing Levitt's argument that "Everybody is in service", Vandermerwe and Rada argued that simplistic distinctions between goods and services were outdated: "Most firms today, are to a lesser or greater extent, in both. Much of this is due to managers looking at their customers' needs as a whole, moving from the old and outdated focus on goods or services to integrated 'bundles' or systems, as they are sometimes referred to, with services in the lead role". Such arguments resonated with practitioners and led to the formation of strategic, financial, and marketing arguments for service growth in product-oriented companies. The second phase starts around 2000 with the realization that taking advantage of strategic, financial, and marketing benefits requires different types of services. During this phase, the majority of the contributions and conceptualizations that built the intellectual core of the research field emerged. These conceptualizations include product-service systems, the transition from products to services, integrated solutions and systems integration, service infusion, and service business development. During this phase, research also explored barriers and key success factors for services in product firms. Gebauer et al.
point out that service growth is far from easy. Companies often face the service paradox: they invest in services, but do not earn the expected, corresponding returns. The emergence of the conceptual foundations of the field is closely intertwined with growing interest from practitioners looking to the field to answer questions like: how to achieve service growth, how to transform business models from selling products to selling solutions, how to innovate new services, how to change from giving services away for "free" to charging for services, and so on. Mathieu's distinction between service offerings related to the manufactured goods and more product-independent services focusing on the customer's processes is perhaps the most widely used classification of industrial services. For example, it serves as one of two dimensions in several service taxonomies. The most cited publication is Oliva and Kallenberg's field study of equipment manufacturers. The article proposed one of the first process theories for service growth, and its service transition concept has had major influence in the research domain, regardless of academic discipline. It found that in most of the firms sampled, the transition is a deliberate transformation effort that involves disruptive developments of new capabilities as a response to strategic threats and opportunities. For each of these disruptions they identified the series of triggers, goals, and actions normally deployed, and they argued that the adoption of new services seemed to be based on a trial-and-error, capability-centered development. It is, however, interesting to note that the transition framework that they used to design the inquiry has been interpreted as a proposal of a smooth and continuous evolution towards more services, although they clearly state that such evolution is not expected and, indeed, did not find evidence for it. Oliva and Kallenberg were also the first to articulate the potential cultural conflict between the existing product and the emerging service organizations. Previous work had suggested that a services operation run with manufacturing's emphasis on throughput and efficiency will result in eroding service quality standards. Indeed, Oliva and Kallenberg found that firms that successfully managed to deploy services tended to isolate the service organization early in the transition. Finally, while the trend towards integrated solutions can be traced back to the emergence of build-operate-transfer infrastructure projects in the 1980s, it was not until this phase that important conceptual and empirical works on integrated solutions were published. For instance, Tuli, Kohli and Bharadwaj proposed a new perspective on the concept of a solution; in contrast to extant product-centric views, they suggested that solutions should be conceptualized as a customer–supplier relational process comprising four distinct sub-processes: customer requirements definition, customization and integration of goods and/or services, deployment, and post-deployment customer support. Hence, solution projects require longer lifecycles than traditional products or systems, from high-level pre-bid negotiations to a long-term operational service phase, and the provider needs to acquire or develop skills to cover all four phases of the lifecycle. Table 1 summarizes the intellectual core of service growth in product companies. The individual contributions can be clustered into solution delivery, solution marketing, service business performance, service growth strategies,
Product-Service-Systems, and servitization. The research domain on service growth in product companies has become an established field producing over 180 articles every year. In the last few years, research has started to borrow from other management theories, such as industry lifecycles, business models, service systems, innovativeness, customer centricity, and the resource and capability perspectives for creating competitive advantage. For example, research on service-based business models and business model innovation is receiving attention through conceptualizations such as service business models and solution business models. Furthermore, as the limitations of dyadic studies of manufacturers and customers are increasingly acknowledged, a growing number of studies take a network perspective and investigate other actors such as dealers and service partners. However, with some exceptions, these studies mostly rely on qualitative data from the supplier. While the level of research output in the field is encouraging, we believe that the dominance of qualitative research points to a lack of theoretical development and validation. According to Edmondson and McManus, what is already known and what is being explored should drive the research strategy, and there needs to be a fit between the research question and the data and methods used to answer it. According to them, nascent theories, that is, research areas where the research questions are of an exploratory nature, require interviews, case studies, and direct observation of the phenomena. Iterative exploratory content analysis of these types of data yields new constructs and suggestive models of correlation. Research propositions and provisional causal models, the expected contribution of intermediate theories, require explicit interview protocols, survey work, and archival data to be processed via statistical analysis and pairwise comparisons. A fully mature theory contains precise models that capture hypotheses generated by the same theory. To generate these, it is necessary to establish quantitative measures of established constructs and statistically test them. Clearly, a field that is still dominated by qualitative research cannot get past suggestive models. Our goal with this special section was to promote and bring together critical research that challenges prevailing assumptions and strengthens the theoretical foundations. Our goal was to move the service growth theory further along the maturity framework. We are happy to report that the contributions to this special section do indeed move our theories forward, and we want to thank the authors for their submissions to the special section. The next section provides an overview of the contents of the special section and what we believe is their contribution to the theoretical developments. The articles in the special section each contribute in different ways by challenging prevailing assumptions and strengthening the methodological, empirical, and theoretical foundation. We have one meta-analysis of the literature with a very fresh perspective that challenges us to question our underlying assumptions about service growth; two empirical/hypotheses-testing papers assessing the effect of service growth on financial performance, thereby testing some of the fundamental premises of the drivers for servitization; and three conceptual papers, each attempting to make sense of some problem identified in the transformation process from products to services. In their systematic analysis, Luoto, Brax, and
Kohtamäki consider scientific texts as narratives and delineate the methodic concept of the 'model-narrative'. They identify four paradigmatic assumptions that have become institutionalized in research on service growth in product firms: 1) alignment to the western narrative of constant development; 2) realist ontology; 3) positivist epistemology; and 4) managerialism. Interestingly, these assumptions have remained fairly consistent throughout the investigated 25-year period. While qualitative research designs such as case studies dominate the data, the same assumptions were identified in the quantitative studies. Supporting some earlier observations, this study shows that there is a need for paradigmatic alternatives or multiple paradigms. The authors suggest that research in developing and emerging economies could validate, diversify, and enrich existing research with western origins. They also recommend that future research take a critical stance and examine whether service growth represents a viable strategy for all firms. For instance, larger populations may enable researchers to identify potential counter-evidence and seek alternative explanations. In addition, managerialism, which was identified as one of the central paradigmatic assumptions, implies that service-related failures can only be attributed to irrational management and poor process design. Critical studies should investigate both leadership issues and the role of other factors beyond managerial action. The next two articles are empirical studies that provide important insights related to the financial performance of manufacturers pursuing service growth. First, Böhm, Eggert, and Thiesbrummel examine whether a healthy financial situation is a necessary condition for successful service growth, something which no extant empirical study has previously investigated. They draw on configurational theory and employ fuzzy set qualitative comparative analysis of 294 manufacturers. The study demonstrates that an emphasis on services is a viable option both for manufacturers in a healthy financial situation and for those in financial decline. This is a notable result, given that previous qualitative and quantitative studies repeatedly emphasize the importance of a solid financial situation to deal with the investments required for strategic service initiatives. In addition, by considering sets of configurations that promote an emphasis on services, the research provides a more realistic and comprehensive view of the requirements for service growth than studies analyzing the net effects of single variables. By analyzing the interplay among context factors, their results confirm that resources and knowledge sources become important only in specific context situations, such as financial difficulties. While exploratory research shows that small firms can successfully pursue service growth and may in fact have advantages over larger competitors, the study further suggests that small firms are less likely to have a general recipe for service success. Larger firms are more likely to have the organizational slack and market power that are favorable conditions for success. As a fruitful avenue for research, the authors suggest investigating the conditions under which firms decide to reduce their service orientation, causing deservitization. Furthermore, while the study regards service orientation as a homogeneous entity, other empirical studies show that service orientation can be achieved in different ways, by focusing on specific service offerings. Additional research
could therefore analyze whether successful growth through specific service offerings requires different organizational characteristics. In the second empirical paper, Benedettini, Swink, and Neely use portfolio theory as a novel theoretical lens to investigate the relationship of manufacturers' service offerings to their survival. It is the first study on service growth that addresses bankruptcy likelihood as a direct outcome variable. Estimating a conditional multivariable logistic regression model, their evaluation of secondary data on 74 bankrupt manufacturers and 199 matched non-bankrupt competitors shows that offering more services does not consistently increase a firm's chances of survival. This result challenges the notion from the conceptual literature that adding services increases the chances of survival. Specifically, they find that a focus on product-dependent services does not increase the chance of survival, while a diversified product business, offering more product-related services, decreases the likelihood of bankruptcy. The authors assume that their findings, which are based on data from mostly US-based companies, would transfer to Western European product firms. Further validation in other national contexts would therefore be valuable. Another natural extension of the study would be to examine different dimensions of service orientation, such as the relative emphasis placed on services. In the first conceptual article, Spring and Araujo question the assumption in much of the marketing and servitization literature that products can be treated as stable bundles of attributes that have been assembled through manufacturing, such as a "more or less pre-produced package of resources and features" and "distribution mechanisms for service provision". Instead, they adopt anthropologist Igor Kopytoff's notion of the product biography to reveal novel insights by challenging the conventional views of products in servitization research. Products are conceptualized as open-ended propositions that are constantly unstable, both physically and institutionally. They further use the context of the 'circular economy' to show how biographies of products can add to our understanding of service growth opportunities in product firms beyond the linear path from design to manufacture to disposal. Because of various forms of instability in their status or condition, products can enable entrepreneurial opportunities for service growth that go beyond restoration to original status, through reverse cycles of reuse, remanufacturing, and recycling. Furthermore, the Internet of Things, which is coevolving with the circular economy, permits connected and more comprehensive product biographies and thus enables new forms of service business models arising from continuous tracking of the biographies of individual products and a more fine-grained understanding of the interaction of multiple biographies in larger systems. Overall, these insights can facilitate the emerging debate on the plurality of potential service transition models and also give further structure to the nascent discourse on the institutional context of service growth. In the second conceptual contribution, Valtakoski puts forward the knowledge-based view of the firm as an integrative perspective to inform our understanding of antecedents and consequences of servitization and to offer explanations for servitization failure and deservitization. Knowledge-based theory posits that the purpose of the company is to facilitate the creation, integration, and
transfer of knowledge, and highlights the dynamics of learning and organizational renewal. By conceptualizing servitization as a dyadic phenomenon, Valtakoski identifies eight key knowledge processes that potentially explain why product firms fail in their service growth initiatives. The theoretical framework informs on the dynamics of servitization and provides a more nuanced view of the customer-supplier interaction than the extant literature. It suggests that the choice of knowledge sourcing depends on two contingencies: the structure of the planned solution and the knowledge bases of the collaborating customer and supplier firms. Furthermore, deservitization is regarded as a special case of industry evolution. In the final contribution, Forkmann, Ramos, Henneberg, and Naudé conceptualize service infusion as a business model reconfiguration. While the extant literature discusses service infusion mainly as an outcome, they further develop a process perspective to describe the addition and reduction of services as multidimensional processes affecting the transaction content, structure, and governance level of the business model. Service "diffusion" is introduced as a concept antonymous to service infusion, similar to the concept of deservitization. By employing a knowledge-based perspective on service growth, they show how service addition and reduction are driven by multiple tacit and explicit knowledge conversion mechanisms and may affect all three levels of the business model. In order to achieve stability across all three levels, service infusion may need to be followed by a reduction of services elsewhere in the network. Since the study could only show the reduction of services on a structural level, further research should examine the phenomenon on a content level. In addition, research should look at performance-related issues of the processes and strategies of adding and reducing services. Finally, since service growth in product firms necessitates a certain organizational ambidexterity in terms of managing the co-existence of product-centric and service-centric capabilities, further research should investigate the interrelationship between this ambidexterity, service business models, and the competitive advantage of the firm. As discussed above, the contributions to this special section move the service growth research agenda forward on several fronts. More interestingly, their findings and propositions open up several new research questions and fruitful research opportunities. In the following section, we address what we believe are the current challenges and opportunities of the service growth research agenda. The two empirical studies reported in this issue address the question of whether adding services improves the financial performance of the firm. The empirical evidence from these studies is welcome, but clearly more research is needed in this area. Much of the push for service growth has been a response to eroding margins in the product market and a way to gain revenue stability through business cycles. While the weaknesses of the pure-product firm are well understood, the evidence that 'more services' is an effective way to address them is scant. Broader evaluations of the impact of service deployment on profitability across industries, countries, and types of products and services, as well as a solid understanding of the environmental factors that affect it, should be a high priority of the service growth research agenda. Establishing where the service growth strategy works and under
what conditions is a fundamental first step to justify its effectiveness and will be instrumental in building the credibility for research to influence practice. The conceptual studies in this special section point to important gaps in our understanding of the actual transformation that takes place in the manufacturing organization that decides to deploy services. First, as pointed out by Luoto, Brax and Kohtamäki's literature review, Valtakoski's conceptualization, and the empirical investigations of Böhm, Eggert and Thiesbrummel and Benedettini, Swink and Neely, service growth is often considered to be organic. However, firms often acquire other product companies to increase the number of installed products for which services can be marketed, and mergers and acquisitions play a key role in achieving service growth. Bosch Packaging, for example, relies on acquisitions for service growth and for a more cost-efficient utilization of its service resources. Xerox acquired Affiliated Computer Services, the world's largest diversified business process outsourcing company, in 2010. Such M&As might be considered interesting "anomalies" to the current theoretical assumptions on internal growth, and the theoretical lenses and methodologies from the M&A literature could be used to question whether it may make economic sense to acquire specialized service companies as a strategy for servitization. Second, as suggested by Böhm et al. and Benedettini et al., service growth is frequently assumed to be achieved by moving along a continuum from products to services. As discussed above, this continuum has often been interpreted as a smooth and gradual transition into more services, despite the evidence of capability-related steps. Furthermore, as it is unlikely that firms will precisely know a priori what service offerings will be successful in the market, an evolutionary perspective suggests tentative steps of trial and error. This experimentation, adding and reducing services to the market offer, is something that has been ignored in the literature and needs to be more carefully explored. The conceptual studies by Forkmann, Ramos, Henneberg and Naudé and by Valtakoski in this issue are two of the very few articles that discuss deservitization. However, as Böhm et al.
point out, "empirical studies have almost exclusively tested conditions that render servitization; deservitization is not well understood." A second implication of the continuum perspective of servitization is the assumption that service growth results from taking a position in the continuum line. Such a single position is associated with a specific type of service offering or business model, e.g., after-sales service providers, availability providers, performance enablers. In practice, however, one firm has multiple positions along the continuum: it may offer basic services for one customer segment, provide services for improving product availability for a second segment, and other services for enhancing customer performance for a third segment. Research should thus be reframed from how product firms change from services supporting one business model to another to how to manage multiple service offerings and business models in one organization. Of course, such a focus on multiple service offerings further enhances the need to understand the reversing and/or backing down from service offerings. Third, the articles in this special section point towards three interesting contextual dimensions that can tremendously improve our understanding of service growth. Luoto et al.'s literature review suggests that the provision of products and services in emerging economies has been neglected. The implication is that interesting learning opportunities might be available in those settings. For example, the German company Mobisol deployed a pay-per-use service to create a market for solar home systems in Tanzania. Instead of selling solar home systems, the company charged for the electricity these systems produced. Mobisol did not follow the traditional pathway towards service growth, but directly deployed pay-per-use as an advanced service. Research should investigate such deviations from the understood service growth paths and extend the service growth concept to emerging, low-income markets that do not have the established infrastructure available in developed economies. Research can borrow theoretical lenses and methodologies from entrepreneurship and emerging markets. The second contextual dimension highlighted by the empirical studies in this special section is the fact that most service growth research focuses on companies that face a certain maturity in their industry lifecycle and product commoditization; the expected setting for a service growth strategy. Many empirical studies, however, leave out the role of services in the early stages of the industry or product lifecycle. Related to the industry lifecycle is the idea that product technologies become increasingly mature, making it more difficult to achieve technological superiority through the actual product. Technological advancements might question this assumption. For instance, John Deere's tractors have increasingly become commodities, but by utilizing technologies surrounding the industrial internet, John Deere opened up a new service market including servicing all farming assets and data integration services about weather, seed quality, water irrigation, and soil. Such new markets are intertwined with new business models such as pay-per-usage and/or pay-for-results. As Spring and Araujo point out, smart connected products capable of self-configuring can help achieve both business and sustainability objectives. Finally, we should be aware that service growth has been mainly investigated in traditional product manufacturing firms. However, service growth is
relevant for other industries beyond manufacturing. Expanding service growth research beyond product manufacturing is the third contextual dimension that we believe might improve our understanding of service growth strategies. For example, service-based delivery models are increasingly common for software firms in the market for business applications. Corporations such as IBM and Oracle have been offering their corporate customers subscription-only software for years and, more recently, companies like Adobe and Microsoft have taken a similar move by renting software to consumers. Public and private utilities recognize that electricity, water, or energy provision have become commodity businesses with eroding margins and that growth opportunities might arise through services. Contract manufacturers, which specialize in production technologies rather than selling their own products, explore opportunities arising from services throughout the lifecycle of the ordered product. Research on service growth strategies in these industries is needed to ensure that we are not limiting our understanding of service to the biases and constraints that might be inherent to manufacturers. While much has been written on the process of servitization and many firms have indeed developed successful businesses following a service growth strategy, it is clear that there are still ample areas that require further research in this domain. This special section was called to revisit and challenge some of the core assumptions that lay at the foundation of the research agenda over the last two decades. We were delighted by the response we received from the research community and the insights they uncovered through their research. The opportunity to assess these articles as a group, however, has uncovered even further potential avenues for future research in the domain. We hope the research community embraces these opportunities and challenges. Christian Kowalkowski appreciates the support from Riksbankens Jubileumsfond; research grant number: P15-0232:1.
Service growth in product firms is one of the most active service research domains and is open to a variety of conceptualizations. This article provides a critical inquiry into the past, present, and future of the research domain. The evolution of the research on service growth is discussed in two phases: (1) setting the boundaries of the research domain, and (2) emergence of the conceptual foundation. We find that while research in this area has a well-established tradition in terms of output, theoretically it is still largely in a 'nascent' phase. Next, we highlight the contributions of the papers in this special section, emphasizing their challenges to prevailing assumptions in the research domain. We conclude by identifying, from the contributions to this special section, suggested themes for further research on service growth: the assessment of empirical evidence of the impact of service growth on firm performance, the role of mergers & acquisitions in the service growth strategy, the exploration of single/multiple positions along the transition line, the process of adding or removing services, and expanding the context of service growth beyond product manufacturing firms.
99
Political economy of fiscal unions
One of the most intriguing questions of economics concerns the conditions under which deeper integration is possible and the circumstances that make integration fail. And fail it does remarkably often: more than 100 new countries emerged in the course of the 20th century alone. Clearly, political and cultural motives such as a sense of separate identity and nationalism are of paramount importance as factors behind secessionist tendencies. Nevertheless, economic considerations also play an important role. Among them, the fact that unions tend to use fiscal policy to redistribute income across regions is often controversial. Such fiscal unions can feature inter-regional transfers that have been agreed upon, negotiated, and formalized explicitly, or that occur because of centralized automatic stabilizers such as progressive income tax, unemployment benefits, and the like. Disagreements about inter-regional fiscal redistribution can become an important driver of disintegration; fiscal transfers, and their perceived unfairness, played an important role in the break-up of Czechoslovakia and have significantly contributed to inter-regional tensions in Belgium, Spain, and the United Kingdom. Nevertheless, fiscal transfers also have an important benefit in that they facilitate risk sharing. This aspect of integration has been highlighted by, among others, Beetsma and Jensen, Galí and Monacelli, and Farhi and Werning. These studies emphasize the benefits – higher welfare due to consumption smoothing – that accrue to the participating countries when they enter into a mutual-insurance arrangement. As Farhi and Werning point out, these benefits are particularly large when fiscal policy is the only tool at the government's disposal and when financial markets are incomplete. Furthermore, the bigger and the more persistent are the shocks, the more attractive it is to form a fiscal union. The aforementioned contributions, while insightful, focus on the economic and welfare implications of fiscal unions. In this paper, instead, I consider the political economy of such arrangements. In a nutshell, a mutual-insurance arrangement that is optimal ex ante may be rejected by one of the parties ex post, once the shocks are realized. I formulate a model that is a dynamic version of the static model of Bolton and Roland. It features a union composed of two countries with a centrally provided public good. As long as integration continues, fiscal policy reflects the union median voter's preferences which, in turn, depend on the aggregate effect of regional shocks. The two regions thus constitute an implicit fiscal union: fiscal redistribution occurs through centralized fiscal policy rather than by means of explicit inter-regional transfers. The regions, however, have the option to secede and implement their own preferred fiscal policy if the utility gain from doing so outweighs the cost of secession. Because of the shocks, a union that was previously stable can break up following a particular regional shock, whether positive or negative. The opposite is also true; a region that preferred independence initially can come to prefer integration in the wake of a particular shock. The analysis suggests that two aspects of shocks are important: the symmetry of shocks across regions and their persistence over time. With respect to the former, holding everything else constant, positively correlated shocks are good for the stability of integration. This is because the shocks change both regions' preferred fiscal policies in a similar manner:
either both prefer more extensive redistribution or both prefer to scale it down. In this, my results echo the main finding of the optimum currency area theory, which considers currency unions with common monetary rather than fiscal policy. The situation becomes more complicated when shocks are negatively correlated. In this case, fiscal-policy preferences diverge but the regions benefit from mutual insurance: under centralized fiscal policy, the region with a positive shock makes a net transfer to the region hit by a negative shock. This is where the persistence of shocks proves crucial. With temporary shocks, the disutility from having suboptimal fiscal policy is short-lived and may be compensated by the benefits from risk sharing. When shocks are permanent, however, fiscal transfers become largely deterministic and unidirectional. The cost of having to put up with suboptimal fiscal policy, likewise, becomes long lasting. As a result, either region, or both, can prefer to secede in such a case so as to implement the region's preferred fiscal redistribution. To illustrate the workings of the model, consider the disintegration of Czechoslovakia in 1993. The model predicts that a previously stable union can unravel due to asymmetric and persistent shocks. In Czechoslovakia's case, the shock was precipitated by the economic reforms initiated in 1990–91. While the reform took place in both parts of Czechoslovakia, it affected Slovakia much more severely than the Czech Republic: per-capita GDP fell by 12 percent in the Czech Republic during 1991–92 and by 20 percent in Slovakia; Czech unemployment, similarly, remained low, 2.6 percent in 1992, while the Slovak figure was 11.8 percent. This asymmetric effect of the reform shock was largely due to the greater dependence of Slovakia on trade with the former Eastern Bloc: much of the Slovak industry was built during the communist period so that the economy was highly dependent on trade with other communist countries. This trade essentially collapsed after the communist regime and central planning were abandoned. The reform thus constituted a negative and persistent shock, which affected Slovakia more severely and more persistently than the Czech Republic. The greater cost of reform translated into greater support for redistribution in Slovakia than in the Czech Republic, which was reflected in the outcomes of the 1992 election. The nature of the reform-induced shocks should have given an incentive to the Czech Republic to push for a break-up: it experienced a less severe shock and it was also richer and therefore cross-subsidized Slovakia fiscally. However, the Czech Republic was twice the size of Slovakia, so that it had much more sway over fiscal policy than Slovakia. It was, therefore, the poorer country that pushed for the break-up. As I argue below, the poor region may prefer secession if income inequality in the union is high enough and/or the negative shock is sufficiently severe: then, the poor region can choose to secede in order to impose higher taxes and redistribute more, even if this comes at the cost of losing the fiscal transfer from the rich region. Therefore, the break-up of Czechoslovakia can be attributed, at least in part, to the asymmetric repercussions of the economic reforms, and to the size and persistence of the adverse shock experienced by Slovakia. Had the shocks been more symmetric, or had the asymmetric repercussions in Slovakia been of a less persistent nature, Czechoslovakia might well have survived.
There is already a rich body of literature analyzing the incentives that countries face to secede: Alesina and Spolaore, Alesina et al., Alesina and Perotti, Bolton and Roland, Goyal and Staal, Le Breton and Weber, Hindriks and Lockwood, and Lülfesmann et al. However, much of this literature is static in nature: it considers the trade-off between heterogeneity of preferences and efficiency gains from integration, without giving much thought to the factors that might drive preferences further apart or closer together as time passes. My analysis, in contrast, is concerned with offering insights on the reasons why unions that were originally stable subsequently break up. Alesina and Perotti also consider fiscal integration between regions that are subject to idiosyncratic shocks. Their analytical framework, however, differs from mine in several important aspects. First, they consider shocks that are permanent and perfectly negatively correlated across regions. As such, their analysis does not allow inferences on the importance of either the correlation or the persistence of shocks. Second, they model shocks in a way that ensures that they do not affect income distribution and, correspondingly, they do not change the preferences over fiscal policy in the participating regions. Therefore, shocks in their model make the tax base stochastic but not the tax rate. Third, they assume that income distribution in each region is discontinuous: individuals belong to three discrete income classes. This means that the median voter in the union is always the same, regardless of the shocks. This, together with their assumption on the nature of shocks, implies that the tax rate under fiscal centralization becomes stochastic: specifically, it depends only on the shock to the region of the median voter. The tax base, in contrast, is constant under fiscal centralization because the region-specific shocks cancel each other out. Hence, their main conclusion is essentially the same as that of the static political economy literature discussed above: while fiscal integration offers some benefits in terms of risk sharing, this comes at the cost of increased heterogeneity in policy preferences. The paper is structured as follows: The next section introduces the model. Section 3 outlines the regions' incentives for secession and shows how the stability of integration is determined by the nature of shocks. Section 4 concludes. The fourth assumption may come across as counter-intuitive: the median individual must be either from region a or b and therefore is exposed to the shock affecting that region only. However, after the shocks are realized, the income distributions in the two regions shift relative to each other. Therefore, while the regional median voters are always the same individuals, the union's median individual is different every period.
"The identity of the union median voter changes after every realization of the two regional shocks in such a way that the income of the new median voter differs from the income of the previous period's median voter by an amount that equals to the average shock.Therefore, assumption follows from the assumption that the regional income distributions are continuous, and it constitutes a crucial difference between my model and that of Alesina and Perotti, who assume that the union median voter always belongs to the same country.8, "In contrast, in my model, the union's fiscal policy depends on the average economic conditions in the union: it is the level of median income and not the nationality of the median-income individual that matters for fiscal redistribution.9",The following numerical example helps illustrate the rationale behind assumption: Region a has 150 citizens with income distribution such that the poorest individual has an income of 100, the next individual has 100.5, then 101, and so on in increments of 0.5 until the 49th individual whose income is 124."The 50th individual's income is 125, followed by 126, and so on in increments of 1 until the 150th individual whose income is 225.The income distribution in region b is identical with one exception: it has 149 citizens and the highest income is 224.The average income, at 154.2, exceeds the median income of 150.Each region has exactly one individual with an income of 150 and therefore the pivotal voter can originate from either a or b.Now consider the case where region b individuals are hit by a per-capita shock of 20 while incomes in region a remain unaffected.Correspondingly, the median income in a remains the same as before while the median income in b falls by 20.The union median income falls by 10–140.Note that not only the median income but also the identity of the median voter changes.Before the shock, there were two individuals with an income of 150, one in each region."The one in a still has the same income, whereas her counterpart in b now has an income of 130, and neither of them is the union's median individual after the shock.As before, the new median voter can be either from a or b as both will have exactly one individual with an income of 140.Whether the new median voter is from a or from b does not matter.What matters is the level of median income in the union and how it compares to the mean income.The tax rate thus depends on the skewness of income distribution: the greater the difference between the average and median incomes, the higher the tax rate."Importantly, the union tax rate and public good provision change, as argued above, not because the preferences of the union's median voter change but because a different individual with a different income becomes pivotal in the union after the regional shocks have been realized. "The region's preferred tax rate thus depends on that region's income distribution and the realization of the region-specific shock.Unless the income distributions and shocks are identical in both regions, their preferred tax rates will be different from each other and both will, in turn, differ from the union tax rate.Therefore, without efficiency gains, economies of scale, or other benefits of integration, the two regions would always prefer independence and fiscal autonomy to fiscal integration."The tax rate in Eq. 
The tax rate in Eq. maximizes the consumption of the union's median voter during each period. The tax rates preferred by the two regional median voters are generally different from the union tax rate as well as from each other: they would be the same only if the two regions had the same income distributions and faced exactly the same shocks. Integration thus carries the cost of compromising over fiscal policy. On the other hand, integration brings about two important benefits. First, it implies efficiency gains and economies of scale because of free trade, unrestricted flow of factors of production, and access to a larger market. Second, and this is particularly important in the context of my analysis, integration is associated with risk sharing. Note that risk sharing and inter-regional redistribution are not explicitly determined in the present model: the regions do not vote on or bargain about inter-regional transfers. Instead, risk sharing occurs automatically because tax collection and fiscal transfers are determined at the union-wide level: they reflect the union-wide income distribution and the average of the two regional shocks. Moreover, risk sharing is only a side effect of fiscal policy: its main objective is redistribution from rich to poor. The rich region may be making a net transfer to the poor one even if the former is hit by a negative shock, as long as it remains richer than the poor region – but the size of the net transfer is sensitive to the shock. Each period, the regions decide whether to remain in the union or secede. This decision takes place before the region-specific shocks are realized. Therefore, the decision is based on the expectations of the current period's shocks, which in turn depend on the past shocks and their persistence. I assume that the persistence of past shocks is common knowledge. The decision on fiscal policy, on the other hand, is made after the shocks have been revealed, and therefore taxes and transfers reflect the actual realization of shocks in the current period. The union breaks up whenever at least one region votes for secession. Secession comes at a cost λ_{k,t} ≤ 0, which I assume to be independent of the regional shocks. This cost reflects the loss of efficiency due to disintegration as well as the cost of creating a new regional government, military, etc. The cost is likely to be substantial immediately after breaking up the union and may fall thereafter. Specifying a particular time profile for this cost is not material for the model's results, however, given that the decision on secession reflects the shocks and the cost of secession is independent of those. Finally, the cost need not be symmetric: one of the regions can find secession less costly, for example, because of considerations such as national pride, patriotism, or historical legacies. The outcome of the vote on secession therefore depends on the realization of the previous period's shocks and their persistence. As a digression, Eq.
is a necessary but not sufficient condition for secession. Whether secession occurs depends on the net present value of the gain from secession, NPVS_{k,t} ≡ ∑_{s=0}^{∞} δ^s E_t[Δ_{k,t+s}]. The sufficient condition for secession then is NPVS_{a,t} + NPVS_{b,t} > 0, reflecting the fact that as long as at least one region prefers integration, it can offer a concession to the other region to prevent it from seceding. This, however, would introduce the possibility of strategic behavior, especially if λ_{k,t} is not observable: either region could threaten to leave the union in order to elicit concessions from the other region. While interesting, such considerations are largely orthogonal to the question of the effect of shocks on integration. Therefore, I do not include them in this paper, but they are discussed in the final section. The first term in Eq. reflects the differences in income distributions between the union as a whole and region k. The greater the difference, the greater the incentive for the region to leave. Note that the incentive to secede increases with the absolute difference: even the poor region can gain from seceding because it can implement its preferred fiscal policy in that case. The second term captures the difference in tax base: the higher region k's mean income compared to the union's mean income, the greater the incentive to secede. Finally, the last term captures the cost of secession, which is always negative by assumption. For the relatively rich region, therefore, the first two terms are both positive. If secession were costless, the rich region would always want to secede. For the poor region, the first term is positive but the second term is negative, so that this region may prefer continued integration even if secession were possible at no cost. It is, however, possible that the poor region would prefer to secede, for example, if its mean income is only slightly lower than the union mean income but its income distribution is much more skewed than in the other region. In such a case, the benefit from being able to implement the region's preferred fiscal policy might outweigh the cost of losing the net transfer from the richer region. The break-up of Czechoslovakia, instigated by Slovakia, the poorer country in the federation, may be an example of this. Next, I turn to the role played by the region-specific shocks. Voters in one or both regions may be induced to vote for secession either in response to the home region's shock or because of the other region's shock: either shock can raise or reduce the incentive for secession captured by the expected gain from secession, Δ_{k,t}. The upshot of Proposition 1 is that for a given realization of the region's own shock, ε_{k,t−1}, either region is more likely to secede if the other region encountered a negative shock in the preceding period, ε_{−k,t−1} < 0. The intuition underlying this result is simple. For a given own shock, ε_{k,t−1}, a positive shock in the other region reduces the expected union tax rate and raises the expected level of government spending. The transfer effect increases consumption in both regions. The tax effect is different, though. The median voter in region a prefers a lower tax rate than the union tax rate by assumption A1. A positive shock in region b decreases the expected union tax rate, so that the expected disparity between region a's preferred tax rate and the union tax rate shrinks. The transfer effect also implies that the incentive for region a to secede falls after a positive shock in region b.
On the other hand, region b's preferred tax rate is higher than the tax rate chosen by the union median voter. Thus, as the expected union tax rate falls, the expected disparity between the two tax rates widens even further. Hence, the tax effect and the transfer effect go in opposite directions for region b. The response of region b will therefore be smaller than the response of region a, even though the overall effect is unambiguously positive for both regions. Without assumption A2, the sign of Eq. may be positive, so that the home region would be more likely to secede after a positive shock in the other region. It also implies that the median voter of the richer region would prefer a zero tax rate if pivotal in the union. Both of these effects are rather counter-intuitive and difficult to rationalize, and therefore I disregard this possibility. In region b, the median voter's preferred tax rate is above the union's tax rate. A positive shock results in the reduction of both the expected union tax rate and region b's expected preferred tax rate. The expectation of the region's preferred tax rate falls by more, and the difference in this case thus shrinks. The transfer effect on region b is similar to the effect on region a described above. Hence, for region b, the tax and transfer effects go in opposite directions. Depending on how different the two regional income distributions are from each other, the overall effect can therefore be positive or negative. The upshot of Proposition 3 is that if shocks are sufficiently short-lived, they will not give a sufficient incentive for either region to secede: the gain from seceding would be so small as to be outweighed by the efficiency loss due to disintegration. On the other hand, when shocks are sufficiently persistent, they can bring the union down.
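A minimal numerical sketch can make the role of persistence concrete. In the snippet below the expected per-period gain from secession, E_t[Δ_{k,t+s}], is simply assumed to decay geometrically at a persistence rate ρ (a stand-in for the shock persistence discussed above, not a quantity derived from the paper's tax and transfer expressions), and a one-off secession cost is subtracted; the discount factor, the gains, and the cost are all hypothetical values chosen only to show how the sufficient condition NPVS_{a,t} + NPVS_{b,t} > 0 can flip sign as ρ rises.

```python
# Illustrative sketch only: the paper derives Delta_{k,t} from its tax and
# transfer expressions, whereas here the expected gain is assumed to decay
# geometrically at the persistence rate rho. All numbers are hypothetical.

def npv_policy_gain(gain_now, rho, discount, horizon=500):
    """Discounted expected policy gain from secession when the gain inherited
    from the last shock decays geometrically: sum_s discount^s * rho^s * gain_now."""
    return sum((discount ** s) * (rho ** s) * gain_now for s in range(horizon))

discount, secession_cost = 0.95, 1.5      # hypothetical parameters

for rho in (0.2, 0.9):                    # transient versus persistent shocks
    npvs_a = npv_policy_gain(2.0, rho, discount) - secession_cost    # region favoured by the asymmetry
    npvs_b = npv_policy_gain(-0.4, rho, discount) - secession_cost   # region still receiving net transfers
    breaks_up = (npvs_a + npvs_b) > 0     # sufficient condition with side payments, as in the text
    print(f"rho={rho}: NPVS_a={npvs_a:6.2f}, NPVS_b={npvs_b:6.2f}, union breaks up: {breaks_up}")
```

With these hypothetical numbers, the same inherited asymmetry leaves the union intact when ρ = 0.2 but triggers a break-up when ρ = 0.9, which is the mechanism behind Proposition 3.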
So far, I have been assuming that the regional shocks are fully independent of one another, i.e. each shock only affects incomes in one region. In open economies, this is unlikely to be the case: shocks have spillover effects because of trade, migration and investment flows, due to remittances from migrants, or because of dividend payments on past investments. Therefore, I now consider the case when shocks have spillover effects. Proposition 4 (Correlation): Positive correlation of shocks reduces the probability of secession, whereas negative correlation increases that probability, taking the persistence of shocks as given. The last two propositions contrast with the key insight of the OCA literature, which is concerned with monetary integration. In that literature, only the correlation of shocks plays a role: currency unions are predicted to be viable if the shocks are positively correlated. The present paper adds another dimension: the persistence of shocks. In particular, fiscal unions can be stable even with negatively correlated shocks, as long as these shocks are sufficiently transient.
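To see how the two dimensions interact, the brief sketch below treats the regional shocks as AR(1) processes with persistence ρ and innovation correlation corr, and proxies the per-period net transfer by half the gap between the two shocks; both this process and the discount factor β are hypothetical simplifications rather than the paper's model. The closed forms illustrate that positive correlation shrinks the typical inter-regional asymmetry, while high persistence inflates both the asymmetry and the discounted transfer burden that follows a given gap.

```python
import numpy as np

# Back-of-the-envelope sketch (not the paper's model): regional shocks are AR(1)
# with persistence rho and unit-variance innovations correlated with coefficient
# corr; the per-period net transfer is proxied by half the shock gap eps_a - eps_b.
beta = 0.95   # hypothetical discount factor

def gap_std(rho, corr):
    """Stationary standard deviation of the inter-regional shock gap,
    sqrt(2*(1 - corr) / (1 - rho^2))."""
    return np.sqrt(2.0 * (1.0 - corr) / (1.0 - rho ** 2))

def discounted_transfer(rho, initial_gap=1.0):
    """Expected discounted one-way transfer after an asymmetric shock:
    0.5 * sum_s (beta * rho)^s * initial_gap."""
    return 0.5 * initial_gap / (1.0 - beta * rho)

for rho in (0.2, 0.95):
    for corr in (-0.8, 0.0, 0.8):
        print(f"rho={rho:4.2f}, corr={corr:+.1f}: "
              f"typical asymmetry = {gap_std(rho, corr):5.2f}, "
              f"discounted transfer after a unit gap = {discounted_transfer(rho):4.2f}")
```

With these hypothetical parameters, the asymmetry is smallest when shocks are positively correlated and transient, and largest when they are negatively correlated and persistent, which is exactly the configuration the text identifies as most dangerous for the union.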
"This implies that the smaller region is more likely to find itself preferring secession following a particular realization of either its own shock or the other region's shock.The relative size helps explain why disintegrations often involve the secession of a relatively small part of the original union, such as Slovakia breaking away from the Czech Republic, the Baltic countries from the Soviet Union or Slovenia and Croatia from the former Yugoslavia.Furthermore, allowing more than two regions in the model would be equivalent to asymmetric size.For instance, assume that there are three equally sized regions.From the point of view of each region, the decision on secession involves considering its own shock, and the aggregate shock in the rest of the union, with the rest of the union being twice the size of the home region.In this paper, I seek to shed light on the reasons why unions that survived for decades or centuries suddenly unravel.While there are often many very diverse motives for secessionist movements, my focus is on fiscal policy and fiscal redistribution.On the one hand, inter-regional fiscal transfers provide insurance against asymmetric and region-specific shocks.In doing so, they help smooth consumption over time.However, fiscal redistribution typically requires that regions delegate control over tax setting and redistribution to a higher level of government and may end up with fiscal policy that deviates from their preferences.The net balance of costs and benefits of fiscal integration can, correspondingly, be positive or negative.I show that two aspects region-specific shocks are crucial for the stability of fiscal unions: the correlation of shocks and their persistence.When the participating regions experience symmetric shocks, they do not have much incentive to quit.Things become more complicated when shocks are uncorrelated or negatively correlated.The benefits of risk sharing can be substantial in this case.However, a sufficiently large and persistent shock can turn risk sharing into a long-term unidirectional transfer."Europe abounds with examples of this: former Czechoslovakia, Belgium, Germany, or the UK.Such transfers constitute redistribution caused by past shocks, not insurance against future ones.The reluctance with which these transfers are made, and the conditions, limitations, and safeguards attached to them, underscore the importance of political economy aspects of integration.Importantly, fiscal unions may be politically unpopular also in the countries on the receiving end of fiscal transfers.The protests in Greece and Spain against austerity measures imposed from Brussels and Frankfurt demonstrate this; the break-up of Czechoslovakia, instigated by Slovakia, the poorer partner in the federation, is another example.Quitting the fiscal union allows the richer country to lower taxes and scale down redistribution.The poorer country can do the opposite, increase taxes and redistribute more.In both cases, the post-independence fiscal policy better reflects the preferences of the median voter: secession brings the government closer to the people, in the rich and poor country alike.Depending on other factors, such as, for example, the relative sizes of the countries, it can be the richer or the poorer region that elects to secede.A number of potentially important considerations have been left out of the analysis.First, in the presence of continued uncertainty about future shocks, maintaining the fiscal union is associated with an option value of waiting which is 
A number of potentially important considerations have been left out of the analysis.

First, in the presence of continued uncertainty about future shocks, maintaining the fiscal union is associated with an option value of waiting, which is likely to be positive. In particular, the temptation to secede that stems from past shocks can be undone by future economic fluctuations. Assuming that secession is irreversible, countries therefore face a strong incentive to be cautious. This aspect of decision making under uncertainty has been analyzed extensively by Dixit and Pindyck and others.

Second, the incentive to secede would be different if the regions participating in a fiscal union could borrow to mitigate the adverse effects of negative shocks. In this case, the regions could engage in intertemporal rather than inter-regional insurance. The role played by financial markets is analyzed by Farhi and Werning, who argue that the ability of regions to self-insure against asymmetric shocks diminishes the gains from fiscal integration. When considering the political economy implications, anything that makes regions less dependent on inter-regional transfers also diminishes their incentive to secede in the wake of particularly large shocks. Therefore, fiscal unions that allow the participating regions to self-insure in the financial markets will entail lower gains from integration but should also prove more stable over time.

Finally, unless both countries wish to secede, there is scope for bargaining and side payments, whereby one country agrees to compensate the other sufficiently to stop it from seceding. As Claeys and Martire argue, using examples from Spain and Italy, side payments are often incorporated into real-world fiscal federalism regimes. Harstad shows, however, that bargaining and side payments create an incentive for both regions to behave strategically. In particular, the secession-prone region is likely to appoint a negotiator who values integration less than the median voter does, so as to extract the maximum transfer from the other side. While interesting and certainly worth exploring, this would add a further layer to the analysis. My model sheds light on the factors that shape the underlying incentive to secede, leaving the strategic aspects of bargaining over secession aside.
Fiscal unions often use fiscal transfers to counter asymmetric shocks, but such transfers may be politically controversial. I present a model of a two-region fiscal union with region-specific shocks where the threat of secession imposes a limit on fiscal redistribution between regions. I show that both correlation of shocks across regions and their persistence over time are important for political support for integration. The gains from inter-regional risk sharing are potentially large when shocks are negatively correlated and temporary. In contrast, unions with negatively correlated permanent shocks are likely to be fragile.