Depth as a driver of evolution in the deep sea: Insights from grenadiers (Gadiformes: Macrouridae) of the genus Coryphaenoides
The deep oceans are vast three-dimensional habitats characterized by decreasing sunlight, low temperatures, and increasing hydrostatic pressure with depth. Life in these extreme habitats is almost entirely reliant upon the organic nutrients that rain down from the photic zone. These habitats were long believed to be environmentally homogeneous with limited barriers to dispersal, leading to the assumption that species in the deep sea have vast ranges with little opportunity for divergence. However, our view of this habitat has been transformed as exploration of the deep sea has revealed complex topographic features, high species richness, and species turnover rates of 45–80% over hundreds to thousands of kilometers. With less than 1% of the deep-sea floor having been explored, and with sampling efforts concentrated in the Northern Hemisphere, our knowledge of the deep sea lags behind that of shallow systems. However, some broad-scale patterns concerning the distribution of biodiversity in the deep sea have emerged. Studies show that abundance and biomass decrease with depth in most locations. Furthermore, there are differences in vertical assemblages from the continental slope to the abyss. Other datasets indicate increasing diversity of benthic and demersal species with increasing depth from the continental shelf, reaching a peak at upper bathyal depths and then decreasing in deeper water. This unimodal pattern has been shown across a diversity of taxa in the North Atlantic and on a global scale. However, with so few studies, the generality of the pattern remains unknown. A hypothesis explaining mid-slope peaks in species richness proposes that most diversity in the deep sea originated at the heterogeneous bathyal depths. The genus Coryphaenoides, known as grenadiers or rattails, is a diverse group of fishes found worldwide from tropical to polar seas. There are 66 recognized species, found across a large depth range from the euphotic zone to the deep abyss, with most species occurring between 700 m and 2000 m. Although members of this genus often comprise large portions of the demersal biomass, few species are commercially harvested; the exception is C. rupestris, which supports a large fishery in the North Atlantic. Two species are considered circumglobal and another shows a possible anti-tropical distribution. Eight species are known from abyssal depths and another two have been recorded at the edge of the deepest bathyal habitat. Based on morphology, Cohen et al. tentatively divided the genus into five subgenera: Bogoslovius, Chalinura, Coryphaenoides, Lionurus, and Nematonurus. However, the generic and subgeneric designations within Coryphaenoides are poorly resolved by either morphological or molecular data. Cohen et al.
acknowledged the likely paraphyly of the largest subgenus, Coryphaenoides, and the apparent lack of diagnostic characters has left the group in 'taxonomic limbo.' Although there is uncertainty in the classification scheme put forward by Cohen et al., it provides a framework in which to test for congruence between morphological and molecular characters. In a phylogeny constructed for the order Gadiformes, Roa-Varón and Ortí indicated that Coryphaenoides was paraphyletic, with the monotypic species Albatrossia pectoralis nesting within the lineage, a finding that is supported by allozyme, peptide-mapping and DNA sequence data. The genus Coelorinchus is the sister group to Coryphaenoides. The study of Roa-Varón and Ortí, which was designed to resolve taxonomic relationships at the family and subfamily level among Gadiformes, included 11 species of Coryphaenoides but none that reside at abyssal depths. Unfortunately, other phylogenies that have focused on Coryphaenoides suffer from poor taxonomic sampling and are based on a single molecular marker. Despite their presumably low resolution, each of these phylogenies indicates that abyssal species cluster together or even form a separate clade. Here we investigate the evolution of diversity across the bathyal-abyssal interface using the most complete phylogenetic treatment of species of Coryphaenoides to date. This investigation is based on sequence data from two mitochondrial and two nuclear markers from 29 of the 66 recognized species in the genus. First, we provide the first independent appraisal of the subgenus designations put forward by Cohen et al. and confirm the placement of A. pectoralis within Coryphaenoides. Second, we include seven of the eight abyssal Coryphaenoides to determine whether they are monophyletic, as indicated by previous, but incomplete, phylogenies. Finally, we use biogeographic models to assess the evolutionary history of the group, and evaluate the depth distributions of species of Coryphaenoides to determine whether there is greater species diversity at bathyal depths, as might be expected under the depth-differentiation hypothesis. A total of 73 specimens across 29 of the 66 recognized species of Coryphaenoides were obtained for this study. According to a previous phylogeny, the genus Coelorinchus is the sister taxon of Coryphaenoides, and so we rooted our phylogenetic trees using sequences from Coelorinchus labiatus. Total genomic DNA was extracted from tissues using either a phenol-chloroform protocol or the E.Z.N.A. extraction kit following the manufacturer's protocol, and was subsequently stored at −20 °C. Two mitochondrial and two nuclear genes were used in this study, amounting to a total of 3086 bp. We resolved 626 bp of the mitochondrial cytochrome oxidase I (COI) gene using the primers FishF2 and FishR2 of Ward et al. We also resolved 1010 bp of the 16S mitochondrial ribosomal fragment and 715 bp of exon three of recombination activating gene 1 (RAG1) using the primers of Roa-Varón and Ortí. Lastly, we resolved 735 bp of the MYH6 gene using two pairs of primers, 459F/1325R and 507F/1322F, from Li et al. Both RAG1 and MYH6 required a nested PCR approach. In these cases, 1 μl of a 1:20 dilution of the first PCR product was used as the template for the second PCR. For a few low-quality samples it was necessary to increase the template concentration to achieve successful amplification of MYH6. Polymerase chain reactions were carried out in a 20 μl volume containing 1 μl of extracted DNA, 0.4 μl of each primer, 0.4 μl of dNTP mix, 2.4 μl of MgCl2, 4 μl of 5× Green GoTaq Flexi Buffer, 0.1 μl of GoTaq DNA polymerase, and deionized water to volume.
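The reaction recipe above lends itself to a simple master-mix calculation. The sketch below only restates the per-reaction volumes given in the text and scales them for a batch of reactions; the 10% pipetting excess is an assumption, and the primer-specific annealing temperature is left out because it is not stated.

# Per-reaction PCR recipe (20 uL total), volumes in uL, as described above.
RECIPE = {
    "template DNA": 1.0,
    "forward primer": 0.4,
    "reverse primer": 0.4,
    "dNTP mix": 0.4,
    "MgCl2": 2.4,
    "5x Green GoTaq Flexi Buffer": 4.0,
    "GoTaq DNA polymerase": 0.1,
}
TOTAL_VOLUME = 20.0
RECIPE["deionized water"] = TOTAL_VOLUME - sum(RECIPE.values())  # water to volume

def master_mix(n_reactions: int, excess: float = 0.10) -> dict:
    """Scale the per-reaction volumes (minus template) to a master mix.

    `excess` is an assumed 10% pipetting surplus, not stated in the text.
    """
    scale = n_reactions * (1.0 + excess)
    return {k: round(v * scale, 2) for k, v in RECIPE.items() if k != "template DNA"}

if __name__ == "__main__":
    for reagent, vol in master_mix(24).items():
        print(f"{reagent}: {vol} uL")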
PCRs used the following cycling parameters: an initial denaturation at 95 °C and a final extension at 72 °C, with an intervening 35 cycles of 30 s at 95 °C, 30 s at the annealing temperature, and 45 s at 72 °C. Amplification products were purified using USB ExoSAP-IT. Following the manufacturer's protocol, we incubated 5 μl of PCR product and 2 μl of ExoSAP-IT reagent at 37 °C for 15 min followed by 15 min at 85 °C. DNA sequencing was performed with fluorescent-labeled dideoxy terminators on an ABI 3730XL Genetic Analyzer at the Durham University DBS Genomics facility. Sequences for each locus were edited using the DNA sequence assembly and analysis software Geneious Pro v. 5.5.6. Following Morita, we calculated the average rate of transversions between sequences for COI using the pairwise distance function in MEGA v. 7.0.14. Sequence alignments were conducted using the MUSCLE algorithm as implemented in Geneious Pro. Both the 16S and RAG1 alignments contained gaps. The single gap in RAG1 was also recovered by Roa-Varón and Ortí and consisted of a 3–9 bp indel. For this gene we constructed a maximum likelihood (ML) tree for a truncated alignment that omitted the gap using MEGA. The topology of this tree was highly similar to the ML tree that resulted from the full RAG1 alignment, with all major nodes retained. Therefore, all subsequent analyses were conducted using the full RAG1 alignment. To improve the alignments and minimize gaps in RAG1 and 16S we used the Gblocks v. 0.91b web server, with the codon option for RAG1 and allowing for gaps in both markers. In both cases, gaps were reduced, with >97% of positions retained. We used MESQUITE v. 3.04 to assign codon positions in our protein-coding markers by minimizing stop codons, and we translated the sequences to ensure that no stop codons were present. Each locus was tested for saturation using Xia's test as implemented in DAMBE5. We conducted these tests for the combined 1st and 2nd codon positions and for the 3rd codon position. We estimated the proportion of invariable sites using the neighbor-joining tree algorithm, as recommended by Xia and Lemey. The default of 60 replicates was used and we considered only fully resolved sites, as recommended. Only COI showed signs of saturation at the 3rd codon position. Sequences were concatenated and the best-fit partitioning scheme and substitution models were investigated using PartitionFinder v. 1.1.1. One run was performed using the "search all" algorithm with branch lengths "linked" between partitions, and a second, "greedy" search was performed with branch lengths "unlinked". Best-fit schemes were identified using the Bayesian Information Criterion (BIC). In order to examine the topology of the phylogenetic trees for individual fragments we conducted ML analyses on each marker separately using MEGA. We first selected the most appropriate model of evolution for each marker, based on the BIC, using the default settings implemented in MEGA. Subsequently, we ran ML analyses invoking the appropriate model and applying 500 bootstrap replicates. Trees were rooted using the closely related spearsnouted grenadier, Coelorinchus labiatus.
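The concatenation and partitioning step described above can be sketched as follows. This is not the authors' pipeline; it is a minimal illustration assuming one FASTA alignment per locus with matching taxon labels (the file names are hypothetical). It produces a concatenated matrix and a RAxML-style partition file of the kind that PartitionFinder or RAxML can take as input.

# Sketch: concatenate per-locus alignments and record partition boundaries.
# Assumes four FASTA files (hypothetical names) with identical taxon labels.
from Bio import AlignIO  # Biopython

LOCI = ["COI.fasta", "16S.fasta", "RAG1.fasta", "MYH6.fasta"]

alignments = [AlignIO.read(path, "fasta") for path in LOCI]
taxa = sorted({rec.id for aln in alignments for rec in aln})

concatenated = {t: "" for t in taxa}
partitions = []  # (locus, start, end), 1-based inclusive
position = 1
for path, aln in zip(LOCI, alignments):
    length = aln.get_alignment_length()
    seqs = {rec.id: str(rec.seq) for rec in aln}
    for t in taxa:
        # Pad with '?' if a taxon is missing from this locus (the matrix was 97% complete).
        concatenated[t] += seqs.get(t, "?" * length)
    partitions.append((path.split(".")[0], position, position + length - 1))
    position += length

with open("concatenated.phy", "w") as out:  # relaxed PHYLIP
    out.write(f"{len(taxa)} {position - 1}\n")
    for t in taxa:
        out.write(f"{t}  {concatenated[t]}\n")

with open("partitions.txt", "w") as out:  # RAxML-style partition definitions
    for name, start, end in partitions:
        out.write(f"DNA, {name} = {start}-{end}\n")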
A Bayesian Markov chain Monte Carlo (MCMC) analysis, as implemented in MrBayes v. 3.2, was also conducted. Partitions were assigned according to the PartitionFinder results. Posterior probabilities were calculated using 10 million iterations of four chains, replicated in four independent runs, with a sample frequency of 1000 and a burn-in fraction of 0.25. The branch-length prior was set to exponential and unconstrained (no molecular clock). Under these parameters, standard deviations between independent runs stabilized and resulted in split frequencies below 0.01. Subsequent phylogenetic analyses of the concatenated dataset were performed on the CIPRES Science Gateway v. 3.1. ML analyses on this larger dataset were conducted using the Randomized Axelerated Maximum Likelihood (RAxML) software v. 7.3.0. We set RAxML to estimate all model parameters, and partitions were assigned according to the PartitionFinder results. Eight independent runs were performed and the best trees from the individual runs were compared to assess concordance in topology and to ensure that the ML search was converging on the optimal area of tree space. In addition, an ML analysis with 1000 bootstrap replicates was performed to estimate support for individual clades in the tree. A Bayesian MCMC analysis, as implemented in MrBayes, was performed on the concatenated dataset as described above. Again, standard deviations between independent runs stabilized and resulted in split frequencies below 0.01. A strict-consensus maximum parsimony tree was generated in PAUP* v. 4.0b10 from a heuristic search with TBR branch swapping, 1000 random additions and 500 bootstrap replicates. In order to test for congruence across loci, partitioned Bremer support indices were calculated for each node using TreeRot v. 3 and PAUP* by performing heuristic searches with 1000 random additions. In order to reconstruct the biogeographic state of ancestral nodes, Bayesian Binary MCMC (BBM) and S-DIVA analyses were performed in RASP v. 3.2.
We sampled 10,000 trees at random from a Bayesian phylogenetic MCMC analysis generated from the concatenated alignment. The S-DIVA analysis was run on all trees and ancestral nodes were plotted on a majority-rule consensus tree. Our objective in using biogeographic analyses was to infer the geographic origin of the major lineages, and we therefore assigned distributions to terminal nodes based on ocean basin: Pacific Ocean, Atlantic Ocean, and Southern Ocean. Most species are found within a single ocean basin and only four are found across several ocean basins. We allowed all three areas to be included in any single ancestral distribution. We assigned all regions to the root to avoid biased reconstructions at the base of the tree. Using this same dataset, we ran the BBM analysis for 1 million generations with a sampling frequency of 100; 10 chains were run with a temperature of 0.1. We discarded 1000 samples as burn-in before calculating the state frequencies. The Fixed Jukes-Cantor model was applied for state frequencies, with a gamma shape parameter for among-site rate variation. The analysis was run twice to ensure that estimates were converging. Depth range estimates were based on data provided in FishBase. When possible, these values were checked against the literature. However, it should be noted that obtaining accurate depth records for deep-sea species is difficult. In some cases poor sampling may underestimate the full species range, while errant records can inflate depth ranges. Furthermore, the inclusion of records of larvae or very small juveniles may lead to artificially broad depth ranges, as many species of fish are known to exhibit pelagic tendencies and/or ontogenetic downslope migration. A total of 3048 bp of DNA were resolved after editing with Gblocks. Our data matrix of 73 individuals and four loci was 97% complete. Of the 3048 bp, 2276 were invariant and 655 were segregating sites. Descriptive statistics for each locus are given in Table 1. No stop codons were detected in our protein-coding markers COI, MYH6, and RAG1, and only COI showed signs of saturation at the third codon position. All conspecific samples were monophyletic across the two mitochondrial trees. The best-fit partitioning schemes identified using the BIC in PartitionFinder are detailed in Table S4. Our phylogenies based on individual markers have many well-supported nodes and show no strong pattern of incongruence based on the TreeRot analyses. The MYH6 locus showed the lowest resolution, while the tree based on mitochondrial 16S offered the greatest resolution. In the mitochondrial phylogenies all taxonomic species were supported by high ML bootstrap values and posterior probabilities, with the exception of the distinction between the sister species C. filifer and C. cinereus in the 16S tree and between C. acrolepis and C. longifilis in the COI tree. In all cases C. pectoralis nested within Coryphaenoides and was most closely related to C. acrolepis and C. longifilis. Only in the mitochondrial datasets were the species-level designations of these three taxa well resolved, with the exceptions highlighted above. Coryphaenoides dossenus was conspicuously divergent at all loci. Of note is the presence of a well-supported and divergent abyssal clade, which included all species found at depths >4000 m. Only the COI tree failed to resolve this clade, but the relevant lineages were recovered as a polytomy that did not conflict with this grouping. Interestingly, the two non-abyssal species, C. striaturus and C. murrayi, clustered with the abyssal taxa in the mitochondrial trees, while the nuclear trees showed mixed topologies.
One sample of C. striaturus nested within the abyssal clade at all loci, whereas the other C. striaturus sample and the C. murrayi samples exhibited non-abyssal nuclear genotypes. The ML analyses based on the concatenated dataset consistently returned the same tree topology across eight independent runs. Similarly, our Bayesian analyses inferred the same tree topology across all runs. These two approaches produced consensus trees of highly similar topology, with strong bootstrap support and high posterior probabilities at most nodes. The Bayesian 50% consensus tree is presented in Fig. 3. The topology of this tree was largely consistent with the trees based on individual loci. All species-level designations were highly supported by our concatenated dataset, including the distinctions between C. filifer and C. cinereus and between C. acrolepis and C. longifilis. However, some of the deeper nodes were still poorly resolved. The earliest lineage to branch from the tree included three Southern Ocean species, with the divergent C. dossenus branching next. A Pacific Ocean lineage that included C. filifer, C. cinereus, C. longifilis, C. acrolepis and C. pectoralis was well supported and branched early. The abyssal lineage was recovered in our multi-locus dataset, with C. striaturus and C. murrayi branching from the basal node of this lineage, a pattern driven by the strong signal in the mitochondrial 16S. The splitting of the two C. striaturus samples in the concatenated dataset is driven by the divergence between these two samples at the nuclear loci: CSI001 nested in the abyssal lineage at all loci, while CSI002 nested in the abyssal lineage in the mtDNA trees but in the non-abyssal lineages in the nDNA trees. Bayesian and event-based reconstructions of ancestral biogeography based on extant taxa produced generally consistent results. In broad terms, the analyses suggest an origin for Coryphaenoides in the Southern and Pacific Oceans, with an early vicariance event that split the Southern and Pacific Ocean lineages. Migrations into the Atlantic appeared later, with isolation between the Pacific and Atlantic Oceans leading to further diversification. A more recent vicariant event led to the origin of the abyssal lineage. At the base of this lineage are two non-abyssal species, with reconstructions at this node indicating a Southern Ocean/Pacific Ocean origin. If we consider only the truly abyssal taxa, it appears that a vicariant event isolated Southern Ocean and Atlantic Ocean lineages, with subsequent diversification of the abyssal species occurring in the Atlantic. All abyssal species reside in the Atlantic Ocean with the exception of the recently derived taxon C. yaquinae. Using two mitochondrial and two nuclear loci we produced a well-resolved phylogeny for Coryphaenoides. We found that the morphological subgenera of Cohen et al. are inconsistent with the molecular data, and, as previously indicated, C. pectoralis nests within Coryphaenoides. Consistent with earlier studies limited by poor taxon representation and low resolution, we found that species inhabiting waters below the 4000 m isobath formed a well-supported lineage. Branching from the basal node of this divergent lineage were two species found at non-abyssal depths: C. striaturus and C. murrayi.
Based on morphological evidence from Cohen et al., and the placement of these two species in the mitochondrial trees, these may represent abyssal species that have moved back into shallower habitats. The finding that three of the four individuals of C. striaturus and C. murrayi possessed nuclear alleles that nested within the non-abyssal lineage may be evidence of historical introgression between the major lineages; however, the nuclear gene trees on their own are relatively poorly resolved, so incomplete lineage sorting is another possible interpretation. There are several range-restricted lineages within Coryphaenoides. The monophyletic C. acrolepis, C. longifilis, C. pectoralis, C. filifer, and C. cinereus are all restricted to the North Pacific Ocean, whereas C. serrulatus, C. subserrulatus and C. mcmillani are found in the Southern Ocean. All of the abyssal species reside in the Atlantic Ocean with the exception of C. yaquinae. Furthermore, four of the eight abyssal species are also found in other ocean basins. Coryphaenoides rudis is the only widely distributed non-abyssal species. In general, abyssal species have broader depth ranges than non-abyssal species and also tend to have broad horizontal distributions, a finding that may be attributed to the older and abiotically more uniform waters at depths below 1000 m. Our phylogeny is consistent with a secondary invasion of abyssal waters, perhaps from ancestors outside the Atlantic Ocean. However, most species at abyssal depths are found in the Atlantic, suggesting that adaptations associated with living at great depth originated there. Some species may then have become dependent on abyssal habitat, while the recent origin of C. yaquinae, together with its isolation in the Pacific Ocean, suggests that this species originated after its abyssal ancestor migrated into the North Pacific. Cohen et al. included five subgenera within Coryphaenoides based on morphological characters. Since then there has been some minor taxonomic "reshuffling" within the genus and six new species have been described. While the authors considered this arrangement to be a putative approximation, a revision has yet to be published, and therefore it remains the current hypothesis for intra-generic relationships. In general, the subgenera are not well supported by the molecular phylogeny presented here. The morphological subgenus Bogoslovius separates C. longifilis from all other Coryphaenoides, but in our treatment C. longifilis was part of a well-resolved lineage that includes the closely related C. acrolepis, C. pectoralis, C. cinereus and C. filifer. Similarly, the morphological subgenus Lionurus separates C. carapinus, but in our phylogeny this species nested among the other abyssal species. Moreover, the abyssal species are split among three morphological subgenera, whereas in our phylogeny they formed a monophyletic lineage. Lastly, the non-abyssal C. serrulatus and C. subserrulatus are classified among abyssal species in the morphological subgenus Chalinura, but here they formed a distinct Southern Ocean lineage at the base of the tree.
While there is no clear evidence at this time that any of these species groups warrant genus-level designations, it is clear that a taxonomic revision incorporating molecular data is warranted. Our findings support earlier, less complete phylogenies, which indicated that abyssal species form a monophyletic group. We were able to sample all but one abyssal species. Two other species have been recorded at 3900 m but were not sampled here. Due to inadequate sampling of much of the world's deep oceans it is difficult to obtain a full representation of the species named in the genus; however, we provide a broad representation and nearly complete data for the abyssal lineage. There is no available fossil record for Coryphaenoides and no reliable molecular clock has been calibrated, so any estimates of divergence times need to be interpreted with caution. Using COI, and assuming a rate of transversions of 0.3–0.7% per million years, Morita calculated the divergence time between the abyssal and non-abyssal Coryphaenoides to be between 3.2 and 7.6 My. Similar calculations based on the COI data presented here provided comparable estimates of between 3.7 and 8.7 My. These dates place the split between the abyssal and non-abyssal lineages in the late Miocene to Pliocene.
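The transversion-clock arithmetic described above is simple enough to restate explicitly. In the sketch below, the mean pairwise transversion distance is an illustrative value (about 2.6%) chosen only so that the output reproduces the 3.7–8.7 My range reported here; the measured distance itself is not quoted in the text.

# Divergence time from a transversion "clock": time = distance / rate.
# The distance below is illustrative (not reported in the text); the rate
# range of 0.3-0.7% transversions per million years follows Morita.
mean_tv_distance = 0.026             # assumed pairwise transversion distance (2.6%)
rate_slow, rate_fast = 0.003, 0.007  # proportion of transversions per My

t_max = mean_tv_distance / rate_slow   # slower clock -> older split
t_min = mean_tv_distance / rate_fast   # faster clock -> younger split
print(f"Estimated split: {t_min:.1f}-{t_max:.1f} My")  # ~3.7-8.7 My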
Species diversity for the genus is highest in the Pacific, with 35 of the 66 recognized species recorded only from that ocean. Only one of these is found at abyssal depths. Another 13 species are restricted to the Atlantic, including three abyssal species. If we consider only the abyssal species, a Southern/Atlantic Ocean origin is supported by the RASP results, with S-DIVA favoring an Atlantic origin. However, if we consider C. striaturus and C. murrayi to be abyssal species that have secondarily moved into shallower habitat, then a Southern Ocean/Pacific Ocean origin is favored by both analyses. There is circumstantial evidence to support a Southern Ocean origin for this lineage, including the restriction of the abyssal C. filicauda to the Southern Hemisphere. Furthermore, the Southern Ocean species C. serrulatus, C. subserrulatus, C. striaturus and C. murrayi are morphologically similar to several abyssal species, as indicated by their inclusion in the subgenus Chalinura with C. mediterraneus, C. brevibarbis, C. profundicolus, and C. leptolepis. While the origin of the abyssal lineage remains ambiguous, the geographic ranges of these species indicate that diversification within the lineage likely occurred in the North Atlantic. When the Atlantic Ocean first began to form, around 150 Mya, it consisted of two isolated basins in the north and south. Around 80–65 Mya a deep-water connection between the two basins was established, with modern circulation patterns becoming established around 35 Mya. While the ocean floor expanded as the continental plates moved away from each other at the Mid-Atlantic Ridge, the depth of the ocean floor also increased with time. As new crust formed at about 2600 m it cooled and contracted, increasing depths to more than 5500 m. The current topography of the Atlantic was achieved only around 10 Mya, very near the estimated time of colonization of the North Atlantic by the abyssal lineage of Coryphaenoides, but we urge caution when interpreting these values. Furthermore, the derived position of the abyssal lineage in the phylogenetic tree fits the concept of recent colonization of the deep sea by species from shallower habitats. Changes in species diversity and composition with depth are often attributed to strong bathymetric gradients in pressure, temperature, and food availability, factors that are thought to influence the scale and rate of evolutionary change. Compared to abyssal depths, the bathyal environment experiences a greater influx of nutrients, a more complex current regime, and more complex topography, which are reflected in the more pronounced horizontal heterogeneity of the demersal fauna. As evolutionary dynamics can be influenced by environmental heterogeneity and the intensity of environmental gradients, the depth-differentiation hypothesis predicts that rates of evolution are highest where heterogeneity is greatest and environmental gradients are most intense. Evidence for this has been found in deep phylogenetic-level breaks across bathyal depths in clams, gastropods, amphipods, polychaetes, and hydrozoans, while population-level signals have been detected in bivalves, fishes, and octocorals. For instance, in the Atlantic bivalve Deminucula atacellana, population differentiation was greater among individuals separated by hundreds of meters along a vertical slope than between individuals separated by thousands of kilometers across the ocean. If species origination is higher in the bathyal zone, as predicted under the depth-differentiation hypothesis, one might expect species diversity to be highest at these depths. Vinogradova was the first to show a unimodal diversity-depth pattern at broad taxonomic scales. She compiled global species records by depth and recovered a strong unimodal pattern, with an increase in species diversity from the continental shelf to bathyal depths but a dramatic drop in the number of species at abyssal depths. She found a peak in species diversity at around 2000 m, a pattern she also recovered from the analysis of individual groups. Since then, similar unimodal patterns have been recovered at regional scales across a diversity of taxa, although there are exceptions. Interestingly, species of Coryphaenoides mirror this pattern, with a peak in the number of taxa found at 1000 m that gradually declines toward abyssal depths. However, species diversity patterns are not solely driven by differential speciation rates. The bathyal habitats of the North Atlantic support larger population sizes, as well as greater numbers of species, compared with the abyssal seafloor, a pattern that is at least in part due to changes in food availability with depth.
The bathyal habitat is coupled to the deep-scattering layer, a mid-water mass of small fishes, cephalopods, crustaceans, and zooplankton that provides a rich and varied source of prey. However, beyond 1500 m depth along the continental slopes the benthic and pelagic systems become increasingly decoupled, resulting in a lack of food and a low influx of particulate organic carbon. With biomass in the abyss below ∼1 g m−2, it is difficult to imagine how populations at these depths are maintained. This has led some to speculate that the abyss may function as an evolutionary sink, with most populations maintained by immigration from bathyal depths, though this is disputed for wide ocean basins such as the Pacific. Our phylogenetic treatment of Coryphaenoides indicates that the morphologically based analyses used to date are insufficient to resolve the relationships among species. Taxa inhabiting waters deeper than 4000 m form a distinct and well-supported lineage, which also includes two non-abyssal species, C. striaturus and C. murrayi, that diverge from its basal node. Examination of individual gene trees suggests that these two species may have been involved in historical introgression events between abyssal and non-abyssal taxa, as their mtDNA is abyssal in origin but most of their nuclear alleles fall within the non-abyssal lineage, though further nuclear DNA data would help confirm this. All abyssal species are found in the North Atlantic with the exception of C. yaquinae, thus far found only in the North Pacific, and C. filicauda, thus far found only in the Southern Ocean. Biogeographic reconstructions indicate that the genus may have originated in the Southern/Pacific Oceans, with both dispersal and vicariant events playing important roles in the diversification of the group. Species distributions support this as well, with species diversity highest in the Pacific Ocean. The abyssal lineage seems to have arisen secondarily and likely originated in the Southern/Pacific Oceans, but maximum diversification of this lineage may have occurred in the North Atlantic Ocean. Importantly, our phylogeny indicates that adaptation to the deepest oceans happened only once in this group, suggesting that movement into the abyssal realm required unique adaptations; once this novel habitat was colonized, the group diversified.
Here we consider the role of depth as a driver of evolution in a genus of deep-sea fishes. We provide a phylogeny for the genus Coryphaenoides (Gadiformes: Macrouridae) that represents the breadth of habitat use and distributions for these species. In our consensus phylogeny species found at abyssal depths (>4000 m) form a well-supported lineage, which interestingly also includes two non-abyssal species, C. striaturus and C. murrayi, diverging from the basal node of that lineage. Biogeographic analyses suggest the genus may have originated in the Southern and Pacific Oceans where contemporary species diversity is highest. The abyssal lineage seems to have arisen secondarily and likely originated in the Southern/Pacific Oceans but diversification of this lineage occurred in the Northern Atlantic Ocean. All abyssal species are found in the North Atlantic with the exception of C. yaquinae in the North Pacific and C. filicauda in the Southern Ocean. Abyssal species tend to have broad depth ranges and wide distributions, indicating that the stability of the deep oceans and the ability to live across wide depths may promote population connectivity and facilitate large ranges. We also confirm that morphologically defined subgenera do not agree with our phylogeny and that the Giant grenadier (formerly Albatrossia pectoralis) belongs to Coryphaenoides, indicating that a taxonomic revision of the genus is needed. We discuss the implications of our findings for understanding the radiation and diversification of this genus, and the likely role of adaptation to the abyss.
Data for efficiency comparison of raw pumice and manganese-modified pumice for removal of phenol from aqueous environments—Application of response surface methodology
Table 1 shows the experimental conditions and results of the central composite design. The maximum phenol removal efficiency was 89.14% for RWP and 100% for MMP. Tables 2 and 3 present the estimated regression coefficients and ANOVA results from the central composite design experiments for RWP and MMP, respectively. Table 4 presents the analysis of variance for the fit of phenol removal efficiency by RWP and MMP. Table 5 shows the parameters of the Langmuir and Freundlich isotherms for phenol adsorption on RWP and MMP. The data obeyed the Langmuir isotherm for both RWP and MMP. Table 6 presents the kinetic model parameters; the data followed the pseudo-second-order model for both RWP and MMP. Fig. 1 illustrates the Fourier transform infrared spectroscopy (FTIR) spectra and XRD patterns of RWP and MMP. Fig. 2 shows the SEM images of RWP and MMP. Fig. 3 shows the trend of phenol removal efficiency by RWP. Fig. 4 shows the response surface plots for phenol removal efficiency by RWP. Fig. 5 shows the normal probability plot of residuals for phenol removal efficiency by RWP. Fig. 6 shows the response surface plots for phenol removal efficiency by MMP. Fig. 7 shows the normal probability plot of residuals for phenol removal efficiency by MMP. The raw scoria powder was initially prepared according to the study of Moradi et al. The coating of the particles with manganese was carried out as follows: 150 mL of 0.01 M Mn2+ solution and a certain amount of raw pumice powder were transferred to a beaker. The pH was then adjusted using 0.5 M HCl and NaOH. The beaker was placed on a shaker at ambient temperature for 72 h and the material was dried at 105 °C for 24 h. Uncoated Mn was removed by washing several times with distilled water, and the product was dried again at 105 °C for 24 h. FTIR analysis was conducted with a WQF-510 instrument. The chemical characteristics and surface morphology of RWP and MMP were determined by XRD and scanning electron microscopy, respectively. Because many parameters affect the experimental results, identifying the optimal experimental conditions is an important strategy for determining the effective parameters and reducing costs. Hence, mathematical methods were used to evaluate the obtained data. RSM based on a central composite design is a suitable method for determining the best experimental conditions while minimizing the number of experiments, and for examining the relationship between the measured response and a number of independent variables with the goal of optimizing the response.
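As an illustration of how a central composite design is laid out, a minimal sketch generating coded factor levels for four factors (pH, adsorbent dose, contact time and phenol concentration, as in the summary) is given below. This is not the authors' design matrix, which is given in Table 1; the axial distance and the number of center points are assumptions, and the actual factor levels are those of Table 7.

# Sketch: coded levels of a rotatable central composite design (CCD) for k = 4 factors.
from itertools import product

n_factors = 4
alpha = (2 ** n_factors) ** 0.25   # rotatable design: alpha = (2^k)^(1/4) = 2
n_center = 6                       # assumed number of center points

factorial = list(product([-1, 1], repeat=n_factors))                   # 16 runs
axial = [tuple(a if i == j else 0 for j in range(n_factors))
         for i in range(n_factors) for a in (-alpha, alpha)]           # 8 runs
center = [(0,) * n_factors] * n_center                                 # 6 runs

design = factorial + axial + center
print(f"{len(design)} runs")   # 30 runs for k = 4
for run in design:
    print(run)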
Table 7 lists the experimental ranges and levels of the independent variables. The sorption experiments were carried out in a batch reactor. Initial phenol concentration, adsorbent dose, pH, contact time and ambient temperature were selected as variables. Residual phenol was determined with a UV/Vis spectrophotometer at λmax = 500 nm. The Langmuir and Freundlich isotherm models were fitted to the equilibrium data, where Ce is the equilibrium concentration, qe is the amount of phenol adsorbed at equilibrium, q0 and b are the Langmuir constants related to the adsorption capacity and energy of adsorption, respectively, and Kf and n are the Freundlich constants corresponding to adsorption capacity and adsorption intensity, respectively. The kinetics were investigated via adsorption of a fixed concentration of phenol at different contact times. Kinetic studies are essential because they provide information on the factors affecting the reaction rate. Several kinetic models, including the pseudo-first-order, pseudo-second-order, intraparticle diffusion and Elovich models, were used to identify the controlling mechanisms of the adsorption process. The equations of the isotherm and kinetic models are expressed as follows:
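The equation images referenced above did not survive extraction. Their conventional forms, consistent with the variable definitions given in the text, are reproduced below as an assumption of what was shown; qt denotes the amount adsorbed at time t, and k1, k2, kid, C, α and β are the usual parameters of the respective models (this notation is ours, not the source's).

Langmuir: \( \frac{C_e}{q_e} = \frac{1}{q_0 b} + \frac{C_e}{q_0} \)

Freundlich: \( \ln q_e = \ln K_f + \frac{1}{n} \ln C_e \)

Pseudo-first-order: \( \ln(q_e - q_t) = \ln q_e - k_1 t \)

Pseudo-second-order: \( \frac{t}{q_t} = \frac{1}{k_2 q_e^2} + \frac{t}{q_e} \)

Intraparticle diffusion: \( q_t = k_{id}\, t^{1/2} + C \)

Elovich: \( q_t = \frac{1}{\beta} \ln(\alpha \beta) + \frac{1}{\beta} \ln t \)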
The present dataset collection was aimed at evaluating the efficiency of raw pumice (RWP) and Mn-modified pumice (MMP). Response surface methodology (RSM) based on a central composite design (CCD) was applied to evaluate the effects of independent variables, including pH, adsorbent dosage, contact time and adsorbate concentration, on the response function, and the best response values were predicted. Fourier transform infrared spectroscopy (FTIR), X-ray diffraction (XRD) and scanning electron microscopy (SEM) were used to characterize the adsorbents. Based on the acquired data, the maximum phenol removal efficiency was 89.14% for raw pumice and 100% for Mn-modified pumice. The data showed that pH was the most effective parameter for phenol removal among the different variables. Evaluation of the data using isotherm and kinetic models showed that both adsorbents fitted the Langmuir isotherm and pseudo-second-order kinetics. The data also showed that modification of pumice can improve phenol removal efficiency to meet effluent standards.
Preparation of oil-in-water nanoemulsions at large-scale using premix membrane emulsification and Shirasu Porous Glass (SPG) membranes
Nanoemulsions find a wide range of applications in the cosmetics, pharmaceutical and food industries and are defined by their droplet size, which has to be smaller than 1000 nm, 500 nm or 100 nm, depending on the definition used. Generally, nanoemulsions are oil-in-water emulsions, in which the oil phase is dispersed in the continuous water phase, but they can also be water-in-oil emulsions, in which the water phase is dispersed in the continuous oil phase. The terms mini-emulsions, ultra-fine emulsions or sub-micron emulsions are also used for this type of emulsion. Small droplet size confers high stability, unique texture and drug delivery properties. In dermatology or cosmetics, these general characteristics are complemented by specific properties such as uniform deposition on the skin, enhanced penetration thanks to the large surface area and small droplet size, modified release and drug carrier properties, film formation on the skin, and a pleasant aesthetic character and skin feel. Nanoemulsions are also the first step of numerous encapsulation techniques used to create nanocolloids such as nanocapsules or nanospheres. Nanoemulsions are produced by two main types of processes, low- and high-energy processes. Low-energy processes depend on the physicochemical properties of the system and therefore require the use of specific surfactants and/or co-surfactants at high concentration, relying on the spontaneous formation of oil droplets. Several techniques are available, such as phase inversion composition, phase inversion temperature, emulsification in the micro-emulsification domain and nanoprecipitation, which involves a water-soluble solvent. These techniques require specific compositions which may not be suitable for cosmetic or pharmaceutical applications. High-energy processes are, on the contrary, suitable for a larger range of formulations, as the nanoemulsions are generated using mechanical devices with intensive disruptive forces that break up the oil and water phases. Among these techniques, ultrasound or sonication is based on the cavitation mechanism. It requires a high energy input and can only be applied at a very small scale. High-pressure homogenization (HPH) also needs a high energy input. Unfortunately, only 0.1% of the energy input is actually used for emulsification, while the remaining energy is dissipated as heat. Both processes can generate nanoemulsions with very small droplet size, but broad size distributions are usually obtained with sonication, and several cycles are needed with HPH to obtain monodisperse droplets. More recently, other processes that require less energy, such as membrane emulsification, have been developed. The advantages of membrane emulsification are its low energy requirement, which leads to no temperature increase during emulsification, and its low shear rate, which gives better stability for shear-sensitive actives. In addition, membrane emulsification allows good control of the droplet size, which depends on the membrane pore size, and a narrow particle size distribution. The two main configurations are direct membrane emulsification (DME) and premix membrane emulsification (PME). In DME, the dispersed phase is pushed through the membrane pores into a stirred or cross-flowing continuous phase and small droplets are formed at the membrane/continuous phase interface. For the preparation of nanoemulsions, membranes with very small pores have to be used; therefore DME leads to very low flowrates of the dispersed phase and may not be suitable for scale-up. In PME, a coarse emulsion called the premix is pushed through the membrane pores, reducing the droplet size and narrowing the size distribution.
The mechanism of droplet formation in PME is related to the break-up of large droplets within the membrane due to the wall shear stress inside the membrane pores. In general, higher flowrates are more effective for droplet disruption due to the higher stress applied to the droplets inside the pores, which leads to a decrease in particle size and size distribution. Also, to reduce the droplet size and make the size distribution narrower, the emulsion can be passed through the membrane several times. PME has several potential advantages compared to DME. The flowrate of the product emulsion is generally much higher, higher droplet concentrations are obtained, the mean droplet size is smaller than in DME, the experimental set-up is simpler and the process is easier to control and operate. As in DME, the droplet size can be controlled by the membrane pore size. For the production of nanoemulsions, PME is particularly attractive as it can lead to higher flowrates than DME. Bunjes et al. prepared nanoemulsions by PME with droplet sizes around or below 200 nm and with narrow size distributions. Depending on the membrane material and thickness, up to 21 extrusion cycles through polymeric membranes were required, or only one extrusion cycle through Shirasu Porous Glass (SPG) membranes. This result was explained by the high pore tortuosity and thickness of the SPG membranes. SPG membranes are the most commonly used membranes for emulsification. They present the advantages of high porosity, interconnected micropores, narrow pore size distribution, a large range of available pore sizes and low manufacturing cost. Oh et al. reported the preparation of microemulsions by coupling microemulsification prior to the application of an SPG membrane process. Bunjes and Joseph described the production of a few milliliters of nanoemulsions using PME with SPG membranes. Indeed, the production of nanoemulsions by membrane emulsification is challenging, especially for large volumes at high flowrates. In general, scale-up of the production of nanocolloids is an issue. Membrane emulsification, which is known to be scalable, is a possible alternative to more classical processes. However, only a few studies have reported large-scale production of nanocolloids with membranes, for example for the production of liposomes. The aim of this study was to investigate the preparation of O/W nanoemulsions at large scale using PME. For that, a set-up based on PME with SPG membranes was developed with a high-pressure pump, allowing working pressures with SPG membranes up to 60 bar, flowrates up to 200 mL/min and preparation volumes up to 500 mL. In addition, to increase the flowrate of the nanoemulsions obtained, a membrane with a length of 125 mm was used in most of the experiments. With this set-up, the effect of several parameters was investigated, including process parameters and emulsion formulation. The emulsions obtained were characterized by their mean droplet size and/or size distribution. Ultrapure water was obtained using a Synergy unit system. Ethylhexyl palmitate (EHP) was purchased from Eigenmann & Veronelli, Tween 20 and Span 80 from Sigma Aldrich, and Derquim+ from Derquim. The experimental set-up used for the preparation of nanoemulsions by PME is shown in Fig. 1.
The set-up was composed of a high-pressure benchtop single-cylinder pump (BTSP 500-5). The pump is made of high-grade stainless steel and was equipped with a pressure sensor, two pneumatic valves for tank feeding and outlet delivery, a control panel and a storage tank of 500 mL. Pressurization was obtained via a motor-driven piston. A computer was connected to the pump for data acquisition. The flowrate, pressure and injected volume were recorded every second with the software. The maximum pressure delivered by the pump was 344 bar and the maximum flowrate 200 mL/min. The membrane module was connected to the pump with high-pressure fittings. Hydrophilic SPG membranes were provided by SPG Technology Co. Ltd. These membranes are tubular with an inner diameter of 8.5 mm and a thickness of 0.8 mm. In most experiments, the membrane length was 125 mm; however, for some tests, membranes with a length of 20 mm were used. Membranes with mean pore sizes of 0.2, 0.3, 0.4, 0.5, 0.6 and 0.8 μm were investigated; the mean pore size data are those given by the manufacturer. The membranes were able to withstand transmembrane pressures up to 60 bar. The membrane modules used were a tubular module and an external-pressure microkit module for the membranes with lengths of 125 mm and 20 mm, respectively. Both modules were supplied by SPG Technology. For the 125 mm membrane, the cross-flow tubular module was adapted to be used in PME. The premix was pushed from the external part of the tube to the internal part, in a similar way as in the external-pressure microkit. The effective length of the membranes was reduced due to sealing rings placed at both ends of the membrane tube. The effective length was therefore 12 mm and 115 mm for the 20 mm and 125 mm membranes, respectively, so the effective membrane area of the 125 mm membrane was about 10 times that of the 20 mm membrane. Ultrapure water was used as the continuous phase and EHP as the dispersed phase. The required HLB of EHP was given by the supplier as RHLB = 9. The surfactant system chosen to stabilize the emulsion was Tween 20 (HLB = 16.7) as the hydrophilic surfactant and Span 80 (HLB = 4.3) as the hydrophobic surfactant. In most experiments, the composition was 1.9% Tween 20 and 85% water for the continuous phase, and 3.1% Span 80 and 10% EHP for the dispersed phase. The overall surfactant concentration was then 5%. The surfactants and their high concentrations were chosen to ensure that the newly formed droplets were immediately covered with surfactant, hence preventing an increase in droplet size. The influence of oil concentration was investigated at 5, 10, 20, 30 and 40%. The surfactant concentration to oil concentration ratio was kept constant at 0.5, so the total surfactant concentration in the formulation was 2.5, 5, 10, 15 and 20%, respectively. In addition, the influence of surfactant concentration was investigated at 2.5, 5, 10, 15 and 20% while maintaining the oil percentage at 10%, so the surfactant concentration to oil concentration ratio was 0.25, 0.5, 1, 1.5 and 2, respectively. Preparations were all performed at room temperature. Both phases were first prepared separately. The continuous phase was obtained by dissolution of Tween 20 in water under magnetic stirring at 600 rpm, and the dispersed phase by dissolution of Span 80 in EHP using the same procedure. The two phases were then mixed under magnetic stirring for 10 min to produce the premix. This premix was then placed in the feed tank and drawn into the syringe pump. First, 20 mL of premix was injected in order to remove air from the experimental set-up and fill it with premix.
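As a check on the area ratio noted above, a minimal sketch is given below that computes the effective membrane areas from the stated tube dimensions and the corresponding transmembrane flux (flowrate divided by effective area). Taking the inner surface (8.5 mm diameter) as the reference area is an assumption, since the premix is pushed from the outside of the tube inwards.

# Effective membrane area (tubular, inner diameter 8.5 mm) and transmembrane
# flux = flowrate / effective area. Using the inner surface is an assumption.
import math

INNER_DIAMETER_M = 8.5e-3          # m
EFFECTIVE_LENGTH_M = {"20 mm membrane": 12e-3, "125 mm membrane": 115e-3}

def area_m2(length_m: float) -> float:
    return math.pi * INNER_DIAMETER_M * length_m

flowrate_m3_s = 200e-6 / 60        # 200 mL/min in m^3/s
for name, length in EFFECTIVE_LENGTH_M.items():
    a = area_m2(length)
    flux_lmh = flowrate_m3_s / a * 1000 * 3600   # L m^-2 h^-1
    print(f"{name}: area = {a*1e4:.1f} cm^2, flux at 200 mL/min ~ {flux_lmh:.0f} L/m^2/h")

ratio = area_m2(115e-3) / area_m2(12e-3)
print(f"area ratio (125 mm / 20 mm) ~ {ratio:.1f}")   # ~9.6, i.e. about 10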
Most of the experiments were then carried out with injection volumes of 40 mL to minimize time and material consumption. The nanoemulsion produced flowed from the membrane tube under gravity and was collected in a beaker placed beneath the module. Larger volumes of premix were prepared to test the scalability of the technique. In most experiments, the flowrate was set to 200 mL/min. To investigate the effect of flowrate, the following flowrates were also set: 5, 10, 20, 50, 75, 100 and 150 mL/min. The transmembrane flux is equal to the flowrate divided by the effective membrane area. Before each use, the membrane was carefully cleaned until recovery of its hydrodynamic resistance to water. For hydrodynamic resistance measurements, water was permeated through the membrane at different flowrates between 10 and 200 mL/min and the resulting pressure was measured. Rm was estimated from the slope of the pure water flowrate versus the resulting pressure. The cleaning procedure consisted of three injections through the membrane of 500 mL of a 1% Derquim+ solution at 70 °C and 200 mL/min, followed by three injections of 500 mL of pure water at room temperature and 200 mL/min. The membrane resistance to water was recovered after this treatment. All measurements were done in triplicate, and the values reported are the average of the three measurements. The droplet size was also measured by means of dynamic light scattering (DLS) using a Zetasizer Nano Z. Data processing of the DLS measurements was done with the Zetasizer software by both cumulants and distribution analysis. Results are reported as the z-average, which is the mean size, and the intensity-weighted size distribution. Before measurement, the nanoemulsion samples were diluted in ultrapure water. The measurements were performed at 25 °C and the values reported are the average of three repeated measurements. For the investigation of the effect of oil concentration, the dynamic viscosity of the emulsions was measured for each sample. The measurements were performed using an MCR 302 rheometer equipped with the CP50 module and the Rheocompass software at 25 °C. All experiments in this section were performed at a composition of 10% EHP and 5% overall surfactant concentration. The premix was obtained by the same procedure for all experiments, as described in Section 2. The droplet size distributions of the premixes were similar for all experiments and are presented in Sections 3.1.3 and 3.1.5. The volume of premix injected through the membrane was tested up to 500 mL with the 125 mm length, 0.5 μm pore size membrane at 200 mL/min, to investigate the scalability of the process. For all volumes injected, the resulting pressures were almost the same: 27.2 bar at 40 mL and 28.3 bar at 500 mL. As detailed below, the resulting pressure is the sum of the pressure needed for droplet disruption, the pressure drop due to flow of the premix through the membrane pores, and the pressure drop in the pipes from the pump to the membrane module. A constant pressure during the injection of 500 mL of premix indicates very low membrane fouling as well as the absence of filtration. This is highly favorable for scale-up and suggests that larger volumes of premix can be treated. Also, the droplet size was almost the same at the different volumes injected, with a mean droplet size of 678 ± 6 nm from 40 mL to 500 mL. This also suggests that the process can be scaled up and that large volumes of nanoemulsions can be obtained. Subsequent experiments were performed with 40 mL, as the resulting pressure and droplet size were not expected to be affected significantly by increasing the volume.
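The pressure balance invoked above can be written compactly with the symbols used in the remainder of the text:

\( \Delta P_r = \Delta P_{dis} + \Delta P_{flow} + \Delta P_{pipe} \)

where ΔPdis is the pressure required for droplet disruption inside the pores, ΔPflow the pressure drop due to flow of the premix through the membrane, and ΔPpipe the pressure drop in the pipes between the pump and the module.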
The effect of flowrate was investigated from 10 to 200 mL/min, on both the nanoemulsion droplet size and the resulting pressure, with the 125 mm length, 0.5 μm pore size membrane. The pressure profiles were measured at various flowrates. A typical pressure profile was divided into three parts. The first part corresponded to the pressurization of the fluid inside the high-pressure syringe pump, which led to a sharp increase in pressure. Then the pressure remained constant; this constant value, termed the "resulting pressure", was recorded for all experiments. Finally, when the premix was almost totally injected, a decrease in pressure was observed. The fact that the resulting pressure remained constant during almost the entire premix injection showed that there was no filtration, no internal and/or external membrane fouling, and no change in the product nanoemulsion. This was observed at all flowrates. Indeed, in PME, larger droplets can be retained by the membrane surface if the shear stress through the membrane pores is too low, leading to a filtration phenomenon. When increasing the flowrate, the resulting pressure increased from 9.06 ± 0.08 bar at 10 mL/min to 24.7 ± 0.20 bar at 200 mL/min, mainly because ΔPflow increased. The influence of flowrate on droplet size is presented in Fig. 4. The droplet size decreased with flowrate, while the span was higher at the lowest flowrate of 10 mL/min and then stabilized. The ratio between droplet size and membrane pore size was 1.52 at 10 mL/min and 1.34 at 200 mL/min. Indeed, at higher flowrate the wall shear stress applied to the droplets inside the membrane porous microstructure is higher, so smaller and more monodisperse droplets are obtained. Previous studies on the preparation of emulsions with droplet sizes of several microns also observed a decrease in droplet size with increasing flowrate. The influence of pore size on droplet size was tested for six membranes with pore sizes ranging from 0.2 to 0.8 μm, with a 125 mm length membrane at 200 mL/min, except for the smallest pore sizes. As expected, the membrane pore size greatly influenced the droplet size of the product nanoemulsions. The droplet size varied linearly with pore size, with a ratio between droplet and pore size equal to 1.26 and a regression coefficient R2 = 0.99. Therefore, the droplet size can be controlled by changing the pore size. When preparing emulsions with droplet sizes of a few microns, Vladisavljević et al. showed that the ratio between droplet and pore size was in the range from 1.51 to 0.98 as the mean pore size varied from 5.4 to 20.3 μm. The droplet to pore size ratio obtained here for nanoemulsions was therefore in the same range as the ones previously obtained for emulsions. However, the mean droplet size was here a linear function of the mean pore size, whereas a non-linear correlation was obtained at larger pore sizes. Depending on the membrane used, the droplet size of the premix was reduced by a factor of 15 to 58. The span was also reduced from 1.62 to a mean value of 0.49 ± 0.08. Vladisavljević et al. showed that most SPG membranes have a relative span of pore size distribution in the range 0.4–0.6. Therefore, the span of the nanoemulsions obtained by PME was close to the typical span of the SPG pore size distribution. A similar observation was reported for emulsions with droplet sizes of several microns obtained by DME.
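The linear fit reported above can be summarized compactly; the worked value below simply applies the reported ratio and is consistent with the sizes measured here for the 0.5 μm membrane.

\( d_{droplet} \approx 1.26\, d_{pore} \) (R2 = 0.99)

For example, a 0.5 μm pore membrane is expected to give droplets of roughly 0.63 μm, in line with the approximately 630–680 nm measured with that membrane.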
For the membranes with pore sizes larger than 0.3 μm, nanoemulsions were prepared at 200 mL/min. For the 0.2 μm and 0.3 μm membranes, the flowrate was set at 5 mL/min and 100 mL/min, respectively, to keep the resulting pressure below 60 bar. As can be seen in Fig. 5, the resulting pressure increased with decreasing membrane pore size due to the increase in ΔPflow. Moreover, at smaller pore sizes the disruption had to be more intense to create smaller droplets, so ΔPdis increased as well. As a result, the resulting pressure increased drastically with decreasing membrane pore size. It was therefore impossible to prepare nanoemulsions through a 0.1 μm pore size membrane for this nanoemulsion composition. Usually in PME, SPG membranes with a length of 20 mm have been used in a microkit module. Here, a 125 mm length membrane was used for all experiments; in addition, a 20 mm membrane was used to investigate the effect of membrane length. Both were 0.5 μm pore size membranes and were investigated at different flowrates. The longer membrane required lower pressure but led to larger droplet sizes than the short membrane. Indeed, the droplet size decreased with an increase in pressure, independently of the membrane length. Nearly the same droplet size and resulting pressure were observed at 10 mL/min with the short membrane and at 100 mL/min with the longer membrane. Indeed, in these two experiments the transmembrane flux was almost identical, so the wall shear stress in the pores, which governs droplet disruption, was the same. In addition, ΔPdis and ΔPpipe did not change with membrane length. The 125 mm membrane could therefore be used at a flowrate 10 times that of the 20 mm membrane with no change in resulting pressure and droplet size. Regarding droplet size, the shorter membrane led to smaller droplets. The droplet to pore size ratio even reached 1.05 at 200 mL/min for the short membrane. This means that nearly half of the droplet size distribution was below the pore size. This is explained by a phenomenon called "snap-off", which occurs at the high shear stress induced by high flowrate within the pores; localized shear forces create droplets smaller than the pore size. This phenomenon was only observed with the 20 mm membrane, because its transmembrane flux was then 10 times that of the 125 mm membrane. In this case, the premix droplets underwent a more intense snap-off, whereas with the 125 mm membrane, droplet break-up mechanisms due to interfacial tension and steric hindrance between droplets were predominant. The effect of cycle number was investigated with the 125 mm length, 0.5 μm pore size membrane at 200 mL/min. After one cycle, the droplet size of the premix was reduced by a factor of 22, then by a factor of 1.06 from the first to the second cycle, 1.03 from the second to the third cycle, and 1.01 for each following cycle. The span of the premix, equal to 1.89, decreased to 0.43 after one cycle and was almost constant during the following cycles. Therefore, one cycle was sufficient to obtain nanoemulsions with a droplet size around 630 nm. The following cycles did not significantly disrupt the droplets further. Previous studies on PME with SPG membranes have investigated the effect of cycle number for emulsions with droplets of several microns.
emulsions with droplets of several microns.The cycle number required to reach a constant droplet size and span depended on a number of parameters such as membrane pore size, viscosity of the emulsions and pressure applied.Generally, more than three cycles were needed.In our study, only one cycle was sufficient to produce droplets with a small size and span.This may be due to the higher pressure applied, which means higher shear stress and more effective disruption.The resulting pressure showed the same trend: during the first cycle, it stabilized at 26.7 bar, at 10.2 bar during the second cycle, and remained constant at 8.45 ± 0.4 bar for all following cycles.The constant pressure obtained after the second cycle can be explained by the fact that no more droplet disruption occurred in the membrane pores, which means that ΔPdis = 0 and therefore ΔPr = ΔPflow + ΔPpipe was constant.The effect of cycle number has been investigated previously with polymeric membranes for the preparation of nanoemulsions.Polymeric membranes are very common for size reduction of liposomes with a technique called filter extrusion, so their application to the production of nanoemulsions is particularly attractive.In PME with polymeric membranes, several cycles were required to reach a constant droplet size and dispersity.For polycarbonate membranes, about 10 cycles were needed, depending on the flowrate, and usually 21 extrusion cycles were performed with most polymeric membranes.Our study confirms that SPG membranes decrease the cycle number needed compared to most polymeric membranes, as previously reported.This may be attributed to the more tortuous pore structure and greater thickness of SPG membranes.All experiments in this section were performed with a 125 mm long, 0.5 μm pore size membrane.The premix was obtained by the same procedure for all experiments, as described in Section 2.The droplet size distributions of the premixes were similar for all experiments and are presented in Sections 3.1.3 and 3.1.5.The effect of the dispersed phase concentration was investigated at 6 concentrations from 1% to 40% at a flowrate of 150 mL/min.A flowrate of 200 mL/min could not be used at 40% oil because the premix could not pass through the membrane at a pressure below 60 bar, so for all concentrations the flowrate was set to 150 mL/min.In addition, for all preparations, the surfactant concentration to oil concentration ratio was 0.5 in order to have the same stabilization conditions.The effect on droplet size and resulting pressure is presented in Fig.
8.The resulting pressure increased proportionally to the oil concentration, with a regression coefficient R2 = 0.96.On the other hand, the viscosity of the nanoemulsions also increased with oil content, but at a much higher rate.The pressure was multiplied by around 2 from 20% to 40% oil, while the dynamic viscosity was multiplied by around 4.Indeed, an increase in oil content led to an increase in ΔPdis as a result of the larger volume of droplets to be disrupted.Also, ΔPflow and ΔPpipe increased due to the higher viscosity of the nanoemulsion.Both phenomena led to an overall increase in ΔPr.However, the viscosity may have less impact than the oil concentration, as ΔPr increased linearly with oil percentage and not exponentially like the viscosity.In addition, the droplet size was expected to remain constant as the surfactant to oil ratio was kept constant, but surprisingly the droplet size decreased proportionally to the oil content.This can be explained by the fact that the overall concentration of surfactant increased with oil concentration even though the ratio was maintained constant.Break-up due to interfacial tension effects might therefore have been more effective and so led to smaller droplets.It can also be explained by the increase in emulsion viscosity with increasing oil content, leading to increased shear stress inside the membrane pores and a smaller droplet size.Disruption at 40% oil concentration was very effective and resulted in droplets with a size around 502 nm, close to the membrane pore size.The effect of surfactant concentration was tested from 2.5% to 20%, maintaining the EHP concentration at 10% and the flowrate at 200 mL/min.Droplet size decreased with surfactant concentration, from 677 nm to 570 nm at 2.5% and 20% respectively.The decrease in droplet size can be explained by the fact that surfactant concentration governs two phenomena: first, the interfacial tension, which induces break-up due to Rayleigh and Laplace instabilities, and secondly, the adsorption kinetics of the surfactant at the interface.The kinetics of adsorption of the surfactant at the newly created interfaces depend on the local concentration of surfactant in both phases.At high surfactant concentrations, the resulting droplets are stabilized faster than at low concentrations, so there is not sufficient time for droplet coalescence to occur and the resulting droplet size is smaller.The resulting pressure decreased from 33.1 bar to 17.7 bar when increasing the surfactant concentration from 2.5% to 20% of total surfactant.However, the dynamic viscosity of the premix emulsion increased significantly with increasing surfactant concentration, as Tween 20 and Span 80 are viscous liquids.Similarly to the previous section, viscosity may not be the major parameter explaining the ΔPdis variation.Indeed, ΔPdis decreased because disruption required less energy when more surfactant was used in the formulation.However, the pressure stabilized at about 15 bar at the higher surfactant concentrations.At the highest pressure of 33.1 bar, the 2.5% surfactant concentration did not lead to the smallest droplets, in contrast to what was observed previously, where the highest resulting pressure gave the smallest droplet size.This confirms that the interfacial tension at equilibrium and the dynamic interfacial tension are key factors that govern the resulting pressure and droplet size in PME for nanoemulsion production.This was already pointed out for the preparation of emulsions by PME.All nanoemulsions were tested for stability at room temperature
by measuring the droplet size distribution versus time by DLS and LD.Depending on droplet size, the dispersions underwent a reversible creaming process, at different kinetics, due to the difference in density between the dispersed and continuous phases.However, none of them showed irreversible coalescence leading to a significant increase in droplet size within 9 months.In Fig. 11, the size distributions of the nanoemulsion obtained with a 125 mm long, 0.2 μm pore size membrane at 5 mL/min are presented as an example of the stability results obtained.It was observed that DLS measurements obtained with the distribution method and LD measurements did not give exactly the same size distribution and mean droplet size.The average sizes are not expressed by the same parameters: the Z-average measured by the cumulant method in DLS and the D50 in LD, derived from the intensity- and volume-weighted size distributions respectively.The measurements are also based on different principles.Nevertheless, both methods suggest no significant increase in droplet size within 9 months at room temperature.This means that no irreversible phenomenon such as coalescence or Ostwald ripening occurred.The nanoemulsions formed were very stable because the amount of surfactant was sufficient for long term stability.Moreover, the energy input in PME was not high enough to create new interfaces that were not well stabilized by surfactant.In addition, PME creates nanoemulsions with low polydispersity, which are less sensitive to Ostwald ripening.This long term stability is sufficient for applications in cosmetics or pharmaceutics.In this study, O/W nanoemulsions were prepared successfully by PME with SPG membranes at high flowrate.For that, a controlled set-up was developed including a high pressure syringe pump with data acquisition.The maximum values for the pressure, volume of premix treated and flowrate were respectively 60 bar, 500 mL and 200 mL/min, except for the membranes with the smallest pores, which were used at lower flowrates to keep the pressure below 60 bar.The effect of several parameters was investigated, related to the process (volume of premix, membrane pore size, flowrate, cycle number) and to the formulation (oil and surfactant concentrations).The process was shown to be scalable up to 500 mL.Indeed, 500 mL of nanoemulsion produced had the same droplet size as 40 mL.In addition, the pressure was constant during injection of 500 mL, which suggested no membrane filtration, no membrane fouling and no change in the nanoemulsion obtained.The resulting pressure was found to be a key parameter governing the production of nanoemulsions by PME.First, it has to be minimized so that the premix can pass through the membrane pores at moderate pressure.Then, the pressure controlled the transmembrane flux and therefore the wall shear stress inside the micropores, which allowed droplet disruption.In general, nanoemulsions with smaller droplets were obtained at higher pressures.The resulting pressure was the sum of ΔPflow, ΔPdis and ΔPpipe.For each parameter investigated, the relative influence of these three terms was discussed.In addition, the droplet size of the nanoemulsion product was highly dependent on the other process parameters and on the formulation.In particular, a linear relationship was found between droplet size and pore size, which suggests that the droplet size can be easily tuned.Parameters which influence the emulsion characteristics at the micro-scale are also important at the nano-scale.However, the formulation characteristics such as oil or surfactant concentrations appeared
to have a greater effect than expected.As the oil/water interfacial area increases with decreasing droplet size, the effect of oil and surfactant concentrations seems more important at the nano-scale.In conclusion, this study showed that PME with SPG membranes produced monodisperse nanoemulsions down to 260 nm with controlled size and very long stability over time.The nanoemulsions were produced in only one cycle at moderate pressure, which can be appropriate for the encapsulation of sensitive actives.The technique is expected to be scalable to larger volumes and usable as a continuous process with two high pressure syringe pumps operated in parallel.
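A minimal numerical sketch of the two relationships highlighted in this conclusion is given below. It is illustrative only and not code from the study: it applies the droplet-to-pore-size ratio of 1.26 reported above for the 125 mm SPG membranes and expresses the resulting pressure as the sum of the ΔPflow, ΔPdis and ΔPpipe contributions discussed in the text; the function names are ours.

```python
# Illustrative sketch (not from the study): key PME relationships reported above.

DROPLET_TO_PORE_RATIO = 1.26  # slope of the linear droplet size vs pore size fit (R2 = 0.99)

def predicted_droplet_size_nm(pore_size_um: float) -> float:
    """Expected nanoemulsion droplet diameter (nm) for a given SPG membrane pore size (um)."""
    return DROPLET_TO_PORE_RATIO * pore_size_um * 1000.0

def resulting_pressure_bar(dp_flow: float, dp_dis: float, dp_pipe: float) -> float:
    """Resulting pressure (bar) as the sum of the flow, disruption and pipe-friction terms."""
    return dp_flow + dp_dis + dp_pipe

if __name__ == "__main__":
    for pore in (0.2, 0.5, 0.8):  # um
        print(f"{pore} um pore -> ~{predicted_droplet_size_nm(pore):.0f} nm droplets")
```

For the 0.2, 0.5 and 0.8 μm membranes this simple rule predicts roughly 250, 630 and 1010 nm droplets, consistent with the ~260 nm and ~630 nm products reported above.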
Nanoemulsions are increasingly used in cosmetics, pharmaceutics and food. They are usually produced by low- or high-energy techniques. In this study, a process involving moderate pressure in the range 10–60 bar was proposed as an alternative, in particular for the encapsulation of sensitive actives or for applications that require precise droplet size control. Oil-in-water (O/W) nanoemulsions were prepared by premix membrane emulsification (PME) using a set-up with a controlled high pressure syringe pump and a 125 mm long SPG membrane. A coarse emulsion (premix) was injected through a membrane with a pore size between 0.2 and 0.8 μm in order to reduce and homogenize the droplet size. The effect of several parameters was investigated: process parameters (scalability, cycle number, membrane pore size, flowrate) and formulation (oil and surfactant concentrations). Nanoemulsions were prepared at large scale, up to 500 mL, at a production rate of up to 200 mL/min, a pressure below 60 bar and in a single cycle. The droplet size was linearly related to the membrane pore size, and highly monodisperse nanoemulsions of around 260 nm in diameter, stable for 9 months at room temperature, were achieved with the smallest pore size membrane (0.2 μm). Moreover, the mechanisms involved in PME for nanoemulsions were discussed, such as flow through the membrane pores and droplet disruption by wall shear stress inside the membrane porous structure.
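The span values quoted for this study (about 1.9 for the premix and 0.4–0.5 for the PME product) follow from percentile diameters of the size distribution. The short sketch below assumes the conventional laser-diffraction definition span = (D90 − D10)/D50, which the text does not restate, and the percentile values used are hypothetical, chosen only to reproduce spans of the reported magnitude.

```python
def span(d10: float, d50: float, d90: float) -> float:
    """Distribution span, conventionally defined as (D90 - D10) / D50."""
    return (d90 - d10) / d50

# Hypothetical percentile diameters (nm), for illustration only.
print(round(span(4000, 10000, 23000), 2))  # coarse premix -> 1.9
print(round(span(480, 630, 770), 2))       # PME product   -> 0.46
```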
203
Biochemical principles enabling metabolic cooperativity and phenotypic heterogeneity at the single cell level
To be competitive in their environment, cells from all kingdoms of life have evolved a series of mechanisms to sense nutrients and take up the metabolites needed for their growth and survival.Metabolite uptake saves energy, carbon and nitrogen, and is further beneficial by reducing the number of biochemical reactions that need to run in parallel, increasing metabolic efficiency.At the same time, biosynthetically active cells are known to release a complex spectrum of metabolites, and within communities, such as colonies, biofilms and tissues, these metabolites can enrich the extracellular space.The export of metabolites coupled with the preference for cellular import allows for the exchange of metabolites between cells, and when this occurs within microbial communities, such metabolite exchange permits the survival of otherwise unculturable cells.The extent of this metabolite exchange in microbial environments is demonstrated by current estimates that up to 90% of bacterial species are not metabolically viable outside a community environment.Further, it is becoming increasingly clear that metabolite export and import are equally important for cells that do not completely depend on metabolite sharing for growth, and that a broad spectrum of metabolites is involved in these exchange events.Moreover, when individual cells switch from biosynthesis to uptake for a metabolite, their physiology is fundamentally altered.In this review we discuss these exchange interactions from the point of view that they do not solely emerge to confer a selective advantage in the ecological and evolutionary sense, but also arise as a consequence of basic biochemical properties that underlie the function of the metabolic network.In order for metabolite exchange to be of biological relevance, a few basic conditions need to be fulfilled.First, cells have to export metabolites at a rate at which relevant extracellular concentrations can be achieved in the given community, tissue or environment.The rate of accumulation is constrained by the environment and the cell density of the community or tissue; but equally, what a 'relevant' concentration is depends on the molecule's chemical nature, the cost of its biosynthesis, and how essential its function is.The extracellular presence of highly costly metabolites, such as thiamine, can be physiologically relevant at sub-micromolar concentrations, while much higher concentrations are involved when cells share abundant cellular metabolites, like amino acids, nucleotides or polyamines.Related to this is the second requirement, that neighbouring cells need to sense and take up a particular metabolite.Indeed, mechanisms that meet these conditions exist for a broad range of metabolites, released by both eukaryotic and prokaryotic cells.The impact of metabolite uptake on cell physiology depends on the ability of cells to re-configure their metabolism to prioritise import over self-synthesis.Without the ability to reconfigure their metabolism, metabolite import would not provide any physiological advantage.Furthermore, due to flux coupling within the metabolic network, these re-configurations have been shown to have a system-wide impact, affecting regulation at the transcriptome, proteome and metabolome level; as a consequence, the physiology of the cell, its stress resistance and growth, as well as its response to gene deletions, can be altered to various degrees.The mechanisms inducing metabolic re-configuration have been best studied for amino acids and nucleotides.The response to
extracellular nutrient presence is, at least in part, explained by concerted feedback inhibition of the biosynthetic pathways involved.This is demonstrated by feedback-resistant mutants, which have been identified for several amino acid producing pathways, where cells are deficient in sensing or uptake processes and continue to produce a particular metabolite even though it remains present in the extracellular environment.Why do cells produce more metabolites than they themselves need?First, there are important ecological-evolutionary models to consider with respect to the evolution of metabolite exchange, cheating and cooperativity.However, in this review we concentrate only on several metabolic network properties that result in metabolite release, irrespective of there being an ecological benefit to the cell.Indeed, many groundbreaking findings concerning the principles of metabolite overproduction and release are not described within the ecological context but emerge from research into biotechnology.In the fermentation process, for example, metabolite export can be either desired or undesired.We illustrate this situation for the brewing process, a well-studied example, in which high quantities of metabolites including ethanol, amino acids and flavour compounds are released from yeast, defining the very nature of the product, the fermented beverage.One overriding reason for the overproduction of certain metabolites can be found in the topological structure of the metabolic network itself.The metabolic network is constrained by the underlying chemistry, in particular the stoichiometry of chemical reactions, thermodynamics, and the availability of catalysts and cofactors.It has increasingly been speculated that many reactions within the metabolic network originate from a time of non-enzymatic chemistry, wherein the evolution of early metabolic pathways was constrained by the reaction spectrum achievable with metabolism's first catalysts – and not by the exact needs of the cell.Furthermore, as metabolic reactions can also interfere with one another, due to common moieties and structural similarity between metabolites, the metabolic network is also constrained in its expansion.The result of these processes is a remarkable conservation of the metabolic network topological structure among the kingdoms of life, which is in stark contrast with the great variability of the individual metabolic requirements of different organisms within different environmental niches.Therefore, despite metabolism being highly regulated, it still needs to overcome this deterministic stoichiometry.As a result, in order for metabolic flux to allow all metabolites to be produced at a rate that fulfils the minimum demand, some need to be overproduced.This is a consequence of the high interconnectivity within the metabolic network, which prevents fluxes through biosynthetic pathways from occurring independently of one another.For example, in yeast, >25% of metabolites are involved in >3 reactions, leading to system-wide flux coupling, whereby an increase in the production of one metabolite commonly leads to flux changes for several other metabolites across the metabolic network.And with thousands of metabolites being connected by hundreds of enzyme-catalysed reactions, a frequent solution for achieving a growth-optimal flux distribution is to export, rather than recycle, a subset of the metabolites that have been overproduced.A prominent example of this is Escherichia coli, which, remarkably,
lacks a degradation pathway for many expensive amino acids, leaving metabolite export as the sole option to avoid accumulation beyond their optimal concentration range.Moreover, an imbalance in amino groups can cause overflow metabolism whereby some amino acids can be used as sinks connected to transamination reactions.Another example is the preferential export of uracil for balancing pyrimidine biosynthesis.Here, when there is excess biosynthetic flux in the pyrimidine pathway of E. coli, feedback inhibition of a downstream pathway step occurs, with uridine monophosphate kinase being inhibited by uridine triphosphate, in effect triggering the export of uracil.When both oxygen and nutrients are abundant, cells often switch from respiration to fermentation, a process which can be related to the 'Warburg effect' in cancer cells or the 'Crabtree effect' in yeast.Under these conditions, there is a high degree of overflow metabolism, leading to the extracellular accumulation of energetically expensive carbohydrates.Despite their high ATP demand, rapidly growing cells export lactate, ethanol or acetate, instead of oxidizing pyruvate in the Krebs (citric acid) cycle, which would generate more ATP through oxidative phosphorylation.Several explanations have been proposed for why overflow metabolism occurs in this context.One of the most recently discussed causes is proteome resource allocation.Under nutrient rich conditions, respiratory enzymes are substituted by smaller glycolytic enzymes, allowing extra space, for example, for additional substrate transporters and proteome allocation to other processes such as translation.Glycolysis consequently becomes the pathway of choice when intermediates need to be turned over quickly, despite enzymes in both pathways having similar kcat values, meaning both modes of metabolism operate at the same speed, and despite oxidative phosphorylation generating more mmol ATP per gram dry weight per hour.Therefore, after correcting for protein production, the small molecular weights of glycolytic enzymes mean that glycolysis is more catalytically efficient per unit protein than oxidative phosphorylation for generating ATP.In view of the textbook-promoted picture of ATP production as a limiting factor in central carbon metabolism, some additional restrictions need to be considered.This perception most likely originated from, and applies to, the study of energy metabolism in skeletal muscle, which was the focus of many key metabolic studies in the 1960s and 1970s.When exercising, the ATP available in skeletal muscle cells is consumed rapidly.The ATP pool then needs to be replenished readily, first from metabolites with a higher phosphotransfer potential, such as phosphocreatine.Once phosphocreatine becomes limiting as well, other metabolic sources of ATP have to be activated.This is achieved in the following order: a) glycolysis, b) the Krebs cycle and oxidative phosphorylation, which occur during mid-level exercise, and c) β-oxidation of fatty acids, during long-term exercise and in the adult heart.In the context of the Warburg effect, one has to consider, however, that there is a fundamental difference between ATP consumption in the active muscle and when a cell needs ATP for biomass growth.The latter process by definition couples ATP consumption to anabolism, which requires access not only to ATP, but also to biosynthetic precursors.Full glucose oxidation over the Krebs cycle produces more ATP, but has a negative carbon balance: in each complete round of the Krebs cycle, two carbon
atoms are converted into CO2 and are lost to the cell in the absence of carbon fixation mechanisms.The carbon balance is equally negative once carbon equivalents are converted into fatty acids, as β-oxidation depends on the Krebs cycle to generate ATP from acetyl-CoA, with two carbon units entering the Krebs cycle as acetyl-CoA and then being released as CO2.For this reason, mammals cannot recreate a sufficient glucose pool from carbon equivalents once they have been oxidized in the Krebs cycle or converted into fatty acids.Instead, glucose oxidation over glycolysis has a much more favourable carbon balance, permitting the cell greater metabolic efficiency; during fermentative metabolism lactate is excreted from cells, but it is not 'lost', as lactate can be re-imported into cells, serving as a substrate for gluconeogenesis.In mammals, this process is known as the Cori cycle, an archetypical example of metabolic cooperativity between tissues.The Cori cycle also appears to be implicated in metabolic decision making.The circulated lactate can indeed not only be used as a source for gluconeogenesis, but can also serve as a substrate for the Krebs cycle to replenish ATP when required.Indeed, recent results indicate that in mice, the majority of Krebs cycle activity is fed through lactate.This situation also appears to be associated with the ongoing exchange of lactate between cancer cells, a metabolic feature typical of many glycolytic tumour cells.Another facet of this problem is described in the Membrane Real Estate Hypothesis.This addresses the problem that oxidative phosphorylation has its own metabolic demands: it depends on a supply of oxygen to function and, concomitantly, needs to export the CO2 produced in the Krebs cycle.When cells grow bigger, they have a less favourable surface-to-volume ratio, limiting gas exchange for oxidative phosphorylation.For similar reasons, a restriction in oxidative phosphorylation also emerges in solid tumours that are characterized by hypoxia.The choice between fermentative and oxidative metabolism also affects redox homoeostasis, with the electron transport chain and glycolysis having different effects on the release of reducing or oxidizing molecules during ATP generation.The maintenance of redox balance is indeed one reason why microbes use overflow metabolism when there is excess glucose available, as secreted metabolites can act as electron sinks for NADH, to regenerate NAD+ for glycolysis to occur.In E.
coli, the overflow of acetate and other fermentation metabolites, and in Saccharomyces cerevisiae, glycerol and ethanol production, for instance, can be manipulated by interfering with the NADH/NAD+ balance.Indeed, redox homoeostasis is reflected mainly by the redox cofactors nicotinamide adenine dinucleotide (NAD+ and NADH) and flavin adenine dinucleotide (FAD).Although the redox potential of a cell is affected by both oxygen and glucose utilisation, when glucose is in excess the cellular response is similar regardless of aerobic or anaerobic conditions - metabolic intermediates accumulate.This is due to the rate of glucose consumption being greater than the capacity for reduced redox cofactors to be oxidised.In glycolysis, glucose is oxidised to two molecules of pyruvate; additionally, two molecules each of ATP and NADH are formed.To acquire enough ATP through this low ATP yield pathway, cells consume high amounts of glucose and subsequently generate high levels of reducing equivalents.As NAD+ is a substrate for glyceraldehyde-3-phosphate dehydrogenase in glycolysis, NADH needs to be reoxidised to maintain glycolytic flux.One way this is achieved is by transferring the reducing equivalents to partially oxidised metabolic intermediates, yielding products such as lactate and ethanol, which are then released by the cell.Overflow product formation is therefore considered a rapid way in which cells restore a high NAD+/NADH ratio, resulting in the excretion of multiple catabolites such as lactate, acetate, succinate, alcohols and CO2.We consider it an unanswered question which of the above listed arguments is the main cause of overflow metabolism in cells showing Warburg/Crabtree effects.It is, however, intuitive that multiple metabolic constraints apply in parallel.Depending on the species and metabolic state, the different constraints may contribute to a different extent.In other words, it is intuitive that there is more than one single cause of the Warburg or Crabtree effect.Nonetheless, this example shows that in typical overflow, metabolites are removed from cells through active and energy consuming transport processes, revealing that metabolite export must confer an advantage in typical metabolic situations; otherwise cells would employ alternative mechanisms of metabolite removal, such as catabolism and degradation.In parallel, however, another important source of extracellular metabolites is the leakage of metabolites through the cell membrane and non-selective transport.Membranes can only have obtained their present structures upon the evolution of modern transport systems, which explains why our cells could not have evolved with completely sealed membranes already in early evolution.It is therefore most likely that membranes evolved to be highly selective barriers over time, with changes in their lipid composition and a myriad of transport systems originating in order to fulfil the specific function of regulating the intracellular pool of metabolites.Membrane permeability, however, although intrinsic, increases with age, cell size, adverse environmental conditions and/or the properties of the leaking metabolites themselves.As a consequence, any variable that causes a change in membrane permeability also has a direct impact on metabolite leakage.Some of these variables are temperature, pH, osmolarity and nutrient availability, as well as the cell cycle, growth rate and changes in lipid metabolism.Many membrane transporters and channels also demonstrate varying degrees of specificity: exporting metabolite
A can often not be achieved without also exporting at least some of a structurally similar metabolite B, wherein the given transport systems, such as efflux pumps and transmembrane channels, lack the required discriminatory power to distinguish between the two metabolites.The basic problem underlying this is the finite structural diversity that exists within the small-molecule metabolome, with the vast majority of cellular metabolites possessing homologues with highly overlapping structural features.Recently, it was proposed that these conditions could explain the existence of low affinity transporters when nutrients are plentiful, as they allow sufficient import while preventing leakage of expensive metabolites.A related cause of overflow metabolism is so-called metabolite repair.Not only transporters but also enzymes are promiscuous, and together with non-enzymatic reactions, a large number of metabolites lacking biological function are formed within the cell.Some of these metabolites are toxic for cells, in many cases for the simple reason that they possess structural similarity to the metabolites they derive from, and hence act as competitive inhibitors of the associated metabolic enzymes.By exporting metabolites formed by non-enzymatic or promiscuous reactivity for which no specific export system exists, promiscuous clearance subsequently prevents deleterious effects on metabolism.Under some conditions, programmed or spontaneous cell death can also be a highly relevant source of extracellular metabolites, and can indeed be an evolutionary adaptation.In microbes, the recovery of metabolites released through cell death when nutrients become limiting helps the remaining living cells to obtain resources against competitors, exploiting rapid cell growth followed by programmed cell death.This type of 'harvesting' strategy at the community level has also been shown to confer population survival for up to several years, whereby cell death and growth are balanced by nutrient input originating only from dead cells.Interestingly, at the point of starvation prior to cell death, some cells are also known to release expensive secondary metabolites with antibiotic properties as a survival strategy.Such secondary metabolites, when taken up by neighbouring competitors, lead to cell death and the release of nutrients that can be taken up by the producer cells to exit starvation.During apoptosis and other forms of programmed lysis, the degradation of cellular proteins, nucleic acids, lipids and polysaccharides by endogenous enzymes is known to occur; these breakdown catabolites, such as amino acids and sugars, are subsequently exported from the cell, at the expense of a decline in cell biomass and cell density, and an increasingly leaky cell wall before death occurs.In bacteria, cell death has also been shown to be involved in biofilm formation, providing cells with nutrients, enzymes and polymers required for the biofilm matrix, or with signals that trigger specific developmental and evolutionary processes such as sporulation or horizontal gene transfer.Moreover, programmed cell death in yeast and bacteria has been shown to support community metabolism and the feeding of younger cells when nutrients become limiting.Successful survival in competitive environments, where resources are limited, relies on nutrient sensors and transporters to efficiently bind and take up the metabolites required by the cell.These are active within communities and cellular tissues, where the source of matrix metabolites is the co-growing cells.Cells
nurtured from the so-called 'pool of shared goods' can feedback-inhibit their respective biosynthetic pathways and hence become metabolically different from a cell producing the respective metabolite.As the metabolic network is tightly interlinked with the cell's response to its environment, the change from metabolite synthesis to uptake has a broad impact on gene expression and cell physiology.Phenotypically, this situation is relevant as metabolism is highly interconnected with the stress response machinery, demonstrated by the fact that a metabolically reconfigured cell will respond differently to a stress perturbation and have a different chance of survival, relative to the same cell that had not undergone any metabolic changes.A helpful model for studying the biological impact of cells re-configuring their metabolism from biosynthesis to uptake is provided by auxotrophic marker alleles, which confer nutrient dependency on otherwise prototrophic species.Auxotrophic markers have been exploited as genetic selection markers in laboratory experiments for the specific reason that they are essential metabolic deficiencies that can be complemented with extracellular metabolites.In S. cerevisiae, it has been shown that the transcriptional response induced through four commonly exploited auxotrophic markers, interfering with histidine, leucine, uracil and methionine metabolism, affects the expression of up to two-thirds of cellular transcripts.Differentially expressed transcripts have also been shown to overlap with one third of the broad range of transcriptional changes reported across a series of independent gene expression datasets, revealing that a majority of cellular responses to gene loss are confounded by metabolism.The rationale for this observation is the tight interconnectivity within metabolism, leading to network-scale reconfiguration when cells switch between metabolite uptake and synthesis.Indeed, the transcriptional changes induced by auxotrophy correlate strongly with the metabolic flux distribution.As a consequence, the metabolic background does not confound gene expression experiments in a linear manner but leads to different impacts on cell physiology when the same perturbations are applied.These metabolic differences affect the survival chances of cells in a variety of stress situations.For instance, yeast cells which differ in either taking up or synthesizing methionine show altered survival when exposed to oxidative stress induced by the thiol-oxidizing reagent diamide.This phenotype depends on NADP+ to NADPH reduction in the pentose phosphate pathway.When cells do not need to synthesize methionine, NADPH availability increases for the anti-oxidative machinery.Another example is the dissimilar response of uracil producing and uracil consuming cells to the oxidant hydrogen peroxide, even when these cells grow together under the same conditions within the same colony.Here, the mitochondrial network in uracil producers undergoes a significantly higher degree of mitochondrial fission compared to uracil consumers, and at the same time these cells mount a higher resistance to oxidant treatment, suggesting that their altered uracil metabolism confers a benefit to their survival.A third case is the role of the polyamine exporter TPO1, which determines the timing of the oxidant-induced cell cycle through the export of antioxidant polyamines that, as a consequence, become available to co-growing cells during stress conditions.Not only does the exchange of metabolites have a direct metabolic role but also those
metabolites with a signalling function contribute to a cell's stress response.When it comes to interspecies landscapes, these metabolite interactions can have sizeable physiological consequences.Gut bacteria, for instance, have been shown to secrete N-acyl amides that activate G-protein coupled receptors of the host's intestinal cells; similarly, the physiology and fitness of the fruit fly, Drosophila melanogaster, have been shown to be influenced by the metabolites produced by its gut microbiome.The intrinsic ability of cells to take up extracellular metabolites, and the physiological changes that emerge from this process, can understandably alter the phenotype of the cell to a broad extent.As cells sense nutrients independently, this situation can be associated with phenotypic differences that arise between cells in a population.Phenotypic heterogeneity is known to enable bet-hedging strategies, providing a population of isogenic cells with the flexibility to adapt to a constantly changing environment.It has also been associated with infection, the formation of persister cells, resistance to environmental stresses and the triggering of specific developmental processes.In yeast, metabolic divergences between cells adapted to the uptake of a specific metabolite and those which have a broad metabolic functionality allow the community both to adapt efficiently to new environments and to grow at a competitive rate when conditions stay constant.Non-genetic phenotypic differences in metabolism between single cells can partially be explained by stochastic noise in gene expression.This “noise” is believed to propagate to the cell cycle, growth rate, epigenetic modifications and differences in transcriptional activity.Alternatively, metabolite exchange interactions that allow the specialisation of single cells in metabolism are a biochemical source of heterogeneity.For instance, cell-to-cell differences in gene expression attributed to noise have been shown to decline when amino acids are supplemented, giving reason to speculate that gene expression variability at the single cell level is, at least in part, caused by metabolism.Indeed, a metabolic cause of cell heterogeneity has been supported by multiple observations: at the single cell level, measuring lactulose abundance in an E.
coli population showed that the levels of this enzyme are variable across the population, causing growth fluctuations and heterogeneity.Another example is provided by self-establishing communities (SeMeCos), which allow the tracking of individual metabotypes in the community context.In SeMeCos, a progressive increase in metabolic co-dependencies as a colony forms is achieved via the stochastic segregation of mini-chromosomes that contain essential metabolic enzymes, to complement genomic auxotrophies.This way, cells overcome their inability to cooperate as observed upon direct mixing of auxotrophs.A key lesson to be learned from SeMeCos is that there are several metabotypes that do not make successful cooperators, while other combinations of the same auxotrophic alleles are compatible with effective cooperation.One potential explanation for the latter is that all metabolite export, sensing and import is semi-selective.This means that although yeast cells release a broad spectrum of metabolites, several belong to the same chemical category, such as aromatic or branched-chain amino acids, and are therefore coordinately regulated, synthesized and transported.Co-synthesis and co-transport hence put constraints on the ability of cells to exchange connected metabolites independently from one another.Metabotypes that are successful cooperators in SeMeCos diverge strongly in their stress tolerance in a metabolism-dependent manner.In contrast, inefficient cooperators do not diverge in stress tolerance, even though they possess the same auxotrophic alleles and co-grow inside the same colony.This indicates that active metabolite exchange is responsible for the phenotypic diversity of the cooperating single cells.A role of metabolic cooperativity in the establishment of cellular heterogeneity may also be of therapeutic relevance, as it indicates a therapeutic window to address cellular heterogeneity without genetic intervention.While noise in gene expression is difficult to target pharmacologically, metabolic exchange interactions are accessible by targeting the extracellular space, and could therefore be altered by using intelligently designed metabolic inhibitors.Finally, metabolic cooperativity can also arise as a consequence of spatial heterogeneity, whereby cells diverge in stress tolerance due to spatial and, concomitantly, temporal differences in access to nutrients.For bacteria and yeast, where cells can grow into colonies on agar, the cells located closer to the bottom of the colony have a more abundant supply of nutrients than cells located near the top.As the colony develops, subpopulations emerge with mixed metabolic specializations, caused by the varying access of colony cells to available nutrients.Cells with different spatial localisation will subsequently undertake different uptake and self-synthesis activity.The metabolic niche will therefore determine the phenotype of the cells, which then also leads to divergence in stress tolerance between individual cells within the same colony.A related situation has recently been described in bacterial biofilms, in which metabolic exchange activity results in collective growth oscillations that spread over spatial distances, allowing populations to have increased resilience to chemical attack, as well as to increase in size and viability.Metabolite exchange interactions are an indispensable feature of cellular physiology in both prokaryotic and eukaryotic cells, and can affect the physiology of both auxotrophic and prototrophic
cells.The underlying principles of metabolite export and import emerge both from evolutionary adaptations to metabolite exchange and from the consequences of fundamental functional constraints operating within the metabolic network.Overflow metabolism, as in cells exhibiting the Warburg and Crabtree effects, as well as metabolite diffusion through leaky membranes or non-selective transport, all lead to the enrichment of the cellular environment with a diverse array of metabolites.In parallel, cells have evolved the ability to sense a wide spectrum of metabolites and typically tend to prefer import over biosynthesis.This situation enables individual cells to exploit the exometabolome in order to specialize in metabolism, that is, to streamline the number of active metabolic reactions necessary for growth, while also optimising their survival chances.As the reprogramming of metabolism has wide-ranging physiological implications, the affected cells with different metabotypes can subsequently diverge extensively at the phenotypic level.Elucidating the biological impact of metabolism-induced non-genetic phenotypic heterogeneity, wherein cells dynamically reconfigure metabolism based on nutrient availability, will shed light on this key but barely understood feature of single-cell physiology.
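The carbon- and redox-balance arguments developed in the overflow-metabolism discussion above can be made concrete with a small bookkeeping sketch. The per-glucose yields used here are standard textbook stoichiometries (net 2 ATP for glycolysis, roughly 30 ATP for full oxidation, with estimates varying by P/O ratio), not values taken from this review, and the class and field names are ours.

```python
# Bookkeeping sketch comparing two fates of one glucose molecule (textbook stoichiometries).
from dataclasses import dataclass

@dataclass
class GlucoseFate:
    name: str
    net_atp: float            # net ATP per glucose
    nadh_reoxidised_via: str  # how the 2 glycolytic NADH are returned to NAD+
    carbons_lost_as_co2: int  # out of the 6 carbons of glucose

fermentation = GlucoseFate(
    "glycolysis + lactate fermentation",
    net_atp=2, nadh_reoxidised_via="pyruvate -> lactate", carbons_lost_as_co2=0)
full_oxidation = GlucoseFate(
    "glycolysis + Krebs cycle + oxidative phosphorylation",
    net_atp=30, nadh_reoxidised_via="electron transport chain", carbons_lost_as_co2=6)

for fate in (fermentation, full_oxidation):
    print(f"{fate.name}: ~{fate.net_atp} ATP, NAD+ regenerated via {fate.nadh_reoxidised_via}, "
          f"{fate.carbons_lost_as_co2}/6 carbons lost as CO2")
```

The sketch captures why fermentation is redox-balanced and carbon-conserving (the excreted lactate can later feed gluconeogenesis via the Cori cycle), whereas full oxidation maximises ATP per glucose at the cost of releasing all six carbons as CO2.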
All biosynthetically active cells release metabolites, in part due to membrane leakage and cell lysis, but also in part due to overflow metabolism and ATP-dependent membrane export. At the same time, cells are adapted to sense and take up extracellular nutrients when available, to minimize the number of biochemical reactions that have to operate within a cell in parallel and, ultimately, to gain metabolic efficiency and biomass. Within colonies, biofilms or tissues, the co-occurrence of metabolite export and import enables the sharing of metabolites as well as the metabolic specialization of single cells. In this review we discuss emerging biochemical concepts that explain why cells overproduce and release metabolites, and how these form the foundations for cooperative metabolite exchange between cells. We place particular emphasis on discussing the role of overflow metabolism in cells that exhibit either the Warburg or the Crabtree effect. Furthermore, we discuss the profound physiological changes that cells undergo when their metabolism switches from metabolite synthesis to uptake, providing an explanation for why metabolic specialization results in non-genotypic heterogeneity at the single cell level.
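As a toy illustration of the synthesis-to-uptake switch described in this review, the sketch below simulates a single biosynthetic pathway subject to end-product feedback inhibition together with saturable import of the same metabolite. It is entirely our own construction, not a model from the review, and all parameter values are arbitrary.

```python
# Toy model: feedback-inhibited synthesis versus saturable uptake of one metabolite.

def steady_state(external_conc: float, steps: int = 5000, dt: float = 0.01):
    """Integrate a one-pool model and return (synthesis flux, internal pool) at steady state."""
    v_syn_max, k_inhib = 1.0, 0.5   # synthesis capacity and feedback-inhibition constant
    v_up_max, k_m = 2.0, 1.0        # Michaelis-Menten uptake parameters
    k_use = 0.8                     # first-order consumption of the metabolite by growth
    pool, synthesis = 0.0, v_syn_max
    for _ in range(steps):
        synthesis = v_syn_max / (1.0 + (pool / k_inhib) ** 2)       # end-product inhibition
        uptake = v_up_max * external_conc / (k_m + external_conc)   # saturable import
        pool += dt * (synthesis + uptake - k_use * pool)
    return synthesis, pool

for ext in (0.0, 0.1, 1.0, 10.0):
    syn, pool = steady_state(ext)
    print(f"external supply {ext:>4}: synthesis flux {syn:.2f}, internal pool {pool:.2f}")
```

With no external supply the cell maintains a substantial synthesis flux; as the external concentration rises, uptake fills the internal pool and feedback inhibition largely shuts synthesis down, mimicking the feedback-driven prioritisation of import over self-synthesis discussed above.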
204
Differing response properties of cervical and ocular vestibular evoked myogenic potentials evoked by air-conducted stimulation
The vestibular apparatus has strong connectivity to both the eyes and the neck, mediating the vestibulo–ocular and vestibulocollic reflexes.Developments in vestibular research have given rise to non-invasive methods for assessment of these pathways by means of short latency evoked responses in the target muscles.Originally recorded over the sternocleidomastoid (SCM) muscles, the earliest response was termed a vestibular evoked myogenic potential.These potentials are now commonly referred to as cervical VEMPs (cVEMPs).A subsequently discovered myogenic response in peri-ocular locations was termed, by analogy, the ocular VEMP (oVEMP).Both recording sites are characterised by a series of short latency positive and negative waves which occur both ipsilaterally and contralaterally to a monaural air-conducted (AC) stimulus.For the cVEMP montage these include the ipsilateral p13, n23 and contralateral n12, p24 and n30 peaks, and for the oVEMP montage these are the contralateral n10, p16, n21 and ipsilateral n13 peaks.Only the earlier potentials have been firmly established as being vestibular-dependent and, for the cVEMP montage in particular, the later peaks are unlikely to be of vestibular origin.VEMPs have proven to have useful diagnostic applications as well as providing a tool to investigate the properties of the human vestibular system.It is generally agreed that VEMPs, when activated by acoustic stimulation, are a manifestation of the otolith-ocular or otolith-collic pathways, but different modes of acoustic stimulation may produce different patterns of end-organ activation.There is evidence that mid-frequency AC sound stimulation may be selective for the saccule, whilst low-frequency vibration of the head appears to be more selective for the utricle, especially if the direction of vibration is aligned within the plane of morphological polarisation of utricular hair-cells.Recent work by Zhang et al. has provided evidence that both sound and vibration may produce distinct resonances at about 100 and 500 Hz, suggesting that the two resonance peaks are not specific to the two modes of stimulation but rather to the different dynamic responses of the vestibular end organs.The matter remains controversial, however.Whilst the saccule has been shown to be responsive to acoustic stimuli, the projections to the eyes have been reported to be weak and the utricle has been proposed as an alternative source of the AC oVEMP.At this stage there is consensus that the responsible fibers are likely to arise from the otolith organs and travel via the superior vestibular nerve.A fundamental property of any reflex is the input–output relationship – how the reflex response varies as a consequence of changes in the afferent input.An early study of the cVEMP identified the adequate air-conducted stimulus as being of high intensity, with larger responses occurring with higher stimulus intensities.Lim et al. reported a linear relationship between click intensity, measured in decibels, and reflex cVEMP amplitude.No similar study has been performed for the oVEMP using AC stimuli, although Todd et al.
showed that low-frequency vibration-evoked oVEMPs followed a power-law relationship.The present study was designed to explore systematically the behaviour of the cVEMP and oVEMP reflexes and associated potentials in response to changes in stimulus amplitude whilst controlling for the effects of background activation.Our objective was to determine whether the relationship between intensity and reflex amplitude was the same for the different peaks recorded using the cVEMP and oVEMP montages and, more specifically, between the early cVEMP and oVEMP potentials, a finding that might be expected if both arose from the same receptor.A possible complicating factor is saturation of the cVEMP, which is known to be an inhibitory reflex.In addition, we wished to compare the thresholds for the responses as this might also indicate whether the same end organ was likely to generate both.One patient with superior canal dehiscence (SCD) was studied to compare with our findings in healthy subjects.Fifteen healthy adults aged 18–57 with no history of vestibular dysfunction participated in this study.Eleven subjects were tested at Prince of Wales Hospital, Sydney and 4 subjects at the University of Manchester.One patient with unilateral superior canal dehiscence also participated and was tested in Sydney.Dehiscence of the left superior canal had been previously confirmed in this patient using high-resolution CT imaging of the temporal bone and VEMP testing.Subjects gave written consent according to the Declaration of Helsinki before the experiment and the study was approved by the local ethics committees in Sydney and Manchester.Stimuli were generated using custom software and a CED laboratory interface, and signal amplification was achieved using a custom amplifier.Subjects were presented with sinusoidal 500 Hz, 2 ms tone bursts at a rate of ∼5 Hz.Stimuli were delivered using audiometric headphones.The output was calibrated using a type 4192 pressure field microphone with a 4153 artificial ear and a 2260 sound level meter.The stimulus polarity was alternated to reduce stimulus artefact.Electromyographic (EMG) activity was recorded simultaneously from the SCMs and below the eyes using self-adhesive Ag/AgCl electrodes.AC cVEMPs and oVEMPs obtained concurrently or separately yield the same results, and we employed the simultaneous recording technique to shorten the procedure and to ensure the same conditions were applied to both reflexes.For the cVEMP montage, the active recording electrodes were placed on the upper third of the muscle belly and the reference electrodes on the sternal end of the clavicles.An earth electrode was placed above the lateral third of the clavicle.Subjects reclined to ∼30 degrees above horizontal and were required to lift their heads to activate the SCM muscles for the duration of the recording.For the oVEMP montage, electrodes were placed on the orbital margin inferior to both eyes and reference electrodes were positioned approximately 3 cm below them.A custom-made headband was used to secure a small laser pointer that projected a red spot onto the ceiling.The pointer was positioned to produce an elevated gaze of ∼30 degrees for each subject and this was used as a constant point of reference for eye elevation regardless of slight changes in head position.Amplitudes were measured from the extraocular muscles both contralateral and ipsilateral to the stimulated ear.EMG was recorded for both the cVEMP and oVEMP montages from 20 ms before to 100 ms after stimulus onset and averaged over 200–250 individual
trials using SIGNAL software.Peaks were named using polarity and mean latency.For clarity, as we have analysed a number of peaks for both recording sites, we have used the prefix i- or c- when referring to a peak ipsilateral or contralateral to the stimulus.For the SCD patient, fewer individual trials were conducted at the high intensities due to the response being easily detected and also to minimise patient discomfort.Each subject was stimulated in one ear, which was chosen using a pseudo-randomised approach.The SCD patient was stimulated using her left ear.Electrode impedance was maintained below 10 kΩ before recordings commenced.Intensity values are expressed as dB peak sound pressure level (pSPL) and the intensity used initially was 135 dB.Stimulus intensity was then decreased to 129 dB, and successively reduced in 3 dB steps thereafter, with 105 dB being the lowest intensity recorded for all subjects.The order was reversed for one subject, and for another subject testing began at 114 dB, increased in 3 dB increments to 129 dB, and then finished with 111, 108, and 105 dB.Some subjects were not tested at 126 and 120 dB and 135 dB.Recordings were repeated for all intensities for one subject, from 123 dB and below for seven subjects, from 120 dB and below for two subjects, and from 117 dB onwards for four subjects.One subject had repeat recordings for intensities 123, 120 and 117 dB only.A grand average record was made for each intensity but measurements of all individual recordings were also made.The recording protocol for the SCD patient was the same as for the healthy subjects but recordings at lower intensities were added as responses were still clearly present.Repeat recordings for the SCD patient were made from 93 dB and below.For all subjects, the initial and repeat recordings at each intensity were checked offline for reproducibility of peaks within subjects and averaged to produce a single file for that intensity.A set criterion of 2.5 standard deviations above or below the mean prestimulus activity was used to determine objectively whether a response was present for any given stimulus level.The latencies of any significant peak also had to be appropriate.Amplitudes were measured where the peaks were above the significance criterion.When they were not, to avoid bias, the amplitude value at the average latency for the peak was used.Correlations between peak amplitudes were performed for group-averaged amplitude values.Latencies were only measured for significant peaks.Signal-to-noise ratio (SNR) was defined as the ratio of the peak amplitude in question to the prestimulus standard deviation.The prestimulus standard deviation was measured for the lowest three intensities as a guide to the sensitivity of our method.A threshold for each peak was determined for all subjects.We allowed a peak to fail to reach significance at a single intensity if it returned for at least the next lowest intensity.Subjects with no responses to the loudest and second-loudest stimuli were assumed to have a threshold of 141 dB for the missing peaks.The overall thresholds for each subject were determined based on the c-n10 for the oVEMP, and on the i-p13 for the cVEMP.Comparison of data between laboratories showed only one peak amplitude difference and two peak latency differences at baseline, and all analyses were conducted on the combined data.An ANOVA showed a significant side-to-side difference for only a single peak and stimulus condition; thus, channels for the subjects stimulated on the right were
exchanged so that a grand average could be made, with the side of stimulation effectively being on the left in all cases.There was no significant difference in background SCM EMG activity across intensities.The prestimulus standard deviation was 3.0 μV for the grand averaged cVEMP and 0.1 μV for the grand averaged oVEMP.For the individual subjects these values were higher, being on average 6.4 μV for the cVEMP and 0.3 μV for the oVEMP.We tested the reliability of our significance criterion by measuring how many of the averaged trials showed a peak above our criterion during the prestimulus record.Of 290 averaged recordings for each modality, 4.8% of cVEMPs and 18% of oVEMPs showed a peak exceeding the 2.5 times standard deviation criterion during the prestimulus interval.Grand average traces are shown in Figs. 2 and 3 and represent the mean of approximately 3000 individual trials.The baseline recordings indicated that multiple peaks were significant by our criteria.For the cVEMP montage these were the i-p13, n23 and the c-n12, p24, n30 peaks.For the oVEMP montage, the c-n10, p16, n21 and i-n13 peaks were above our criterion.These peaks were therefore measured at this and the remaining intensities in all subjects.The SNR for the baseline peaks varied from 32 to 3.0.Measured using our significance criterion for the grand averaged recording of the cVEMP, the i-n23 had the lowest threshold and the c-n12 had the highest.Using the grand averaged oVEMP record, the i-n13 had the lowest threshold whilst the c-p16 and c-n21 had the highest.The mean baseline amplitudes obtained for the individual measurements of the cVEMP and related peaks were 96.9 μV, 133.1 μV, 21.9 μV, 39.1 μV and 49.5 μV.For the oVEMP the mean baseline amplitudes were 3.8 μV, 2.6 μV, 2.1 μV and 2.7 μV.At the baseline intensity, the number of subjects with responses meeting our criteria varied: for the cVEMP, the responders numbered 14, 6, 10 and 10, and for the oVEMP 13, 11, 10 and 13.The proportion of subjects showing responses fell with reducing intensity, but even for the least intense stimulus, 6 of 15 subjects still showed a significant cVEMP i-p13 peak and 4 of 15 showed an oVEMP c-n10 peak.The two subjects who were tested with a different intensity order showed responses similar to those of the other subjects.The mean thresholds measured from the individual data were mostly similar to those using the grand average.For the cVEMP montage, the i-n23 had the lowest threshold overall and the c-n12 the highest.One subject had responses at every intensity for the i-p13 peak whilst two other subjects had similar responses for the n23 peak.For the oVEMP montage, the c-n10, p16 and n21 all had mean thresholds around 120 dB.One subject had consistent responses for both the c-n10 and c-p16 peaks, whilst one subject had the same for the i-n13.ANOVA analysis of individual thresholds, with Bonferroni correction, showed that the cVEMP c-n12 had a significantly higher threshold than the i-p13 and i-n23 responses and than the oVEMP i-n13 response.The oVEMP c-n10 response had a significantly higher threshold than the i-n13 response.There was a trend for the cVEMP i-p13 response to have a lower threshold than the oVEMP c-n10 but this did not reach significance after correction.The raw amplitude versus sound intensity plots were curvilinear for all the potentials measured and all showed highly significant quadratic components.The logarithmically-transformed amplitudes were more linear and were regressed against sound intensity.For the cVEMP peaks,
the gradients of the regressions ranged from 0.379 to 0.787 and for the oVEMP peaks, from 0.374 to 0.553.Testing the transformed amplitudes using the quadratic fit showed different findings for the cVEMP and oVEMP peaks.For the cVEMP, only the n23 showed a significant improvement in fit with the addition of a quadratic term, whilst the other 4 peaks were fitted well using a linear relationship alone.In contrast, for the oVEMP, only the i-n13 was well fitted with the linear regression alone whilst the other peaks showed significant improvements in fit with a positive quadratic term, indicating a concave gradient with increasing intensity.Comparing the regressions fitted using the lower and upper 5 intensities confirmed the findings with the quadratic regression.The cVEMP i-n23 showed a significant reduction in gradient for the higher intensities.The oVEMP c-n10, c-p16 and c-n21 potentials all showed significant increases in gradients for the higher intensities.Fig. 5 shows the behaviour of the cVEMP i-p13-n23 potential compared to that of the oVEMP c-n10-p16 potentials for the lower and higher intensities.The oVEMP amplitudes for the lower intensities were larger than expected from the relationship shown for the higher intensity stimuli.For individual subjects, the gradients for the transformed cVEMP i-p13-n23 response varied from 0.57 to 0.99 and for the oVEMP c-n10-p16 response from 0.09 to 1.07.The mean latencies for the peaks recorded using the cVEMP montage showed no significant change with decreasing intensity.For the oVEMP, there was no significant change in response latency with decreasing intensity for either the c-p16 or the c-n21 peaks.The c-n10 latency increased as the intensity decreased with latencies at 126–108 dB being significantly longer than at 135 dB.Latencies for the i-n13 peak also increased as intensity decreased with latencies at 105 dB being significantly longer than latencies at all intensities from 135 to 114 dB.The SCD patient showed substantially larger responses than the normal subjects at all intensities for the oVEMP c-n10 and p16 peaks and for all but the highest intensities for the cVEMP i-p13 and n23 responses, with saturation occurring for the patient for the cVEMP potential and possibly for the oVEMP at the most intense stimuli.For both the cVEMP i-p13 and oVEMP c-n10 the patient’s thresholds were 93 dB pSPL.The oVEMP c-n10/p16 responses were persistent from baseline down to 93 dB, as were responses for the cVEMP i-p13.The patient’s cVEMP i-p13-n23 gradient was significantly less than that of the normal subjects’ whilst the oVEMP c-n10-p16 gradient lay within the normal range.We have used an objective measure of the presence or absence of a response for both the cVEMP and oVEMP montages and have examined their properties over a 30 dB range.Our findings complement the normative values reported by Rosengren et al. by defining the changes with intensity for cVEMPs and oVEMPs.Our baseline values are higher than those reported by Rosengren et al., probably due to the slightly more intense stimulus we used and the younger average age of our subjects.Lim et al. 
conclusion that raw cVEMP reflex amplitude was linearly related to stimulus intensity was based upon observations at only 3 intensities, a sample not adequate to detect the non-linearity of the relationship.McNerny and Burkard compared cVEMPs for AC and BC over a 30 dB range but reported the relationship was simply “monotonic” without further characterising it.Measuring responses with low intensity stimuli is difficult and requires consideration of signal to noise ratio.Todd et al. showed that the subjective detection of both cVEMPs and oVEMPs followed similar relations when plotted against SNR, with a steep increase in the proportion of true positives occurring with SNR of 2 and above.We tried to reduce the subjective element in determining the presence or absence of a response by making this determination using a statistical criterion.Our approach has demonstrated that some normal subjects can have responses to relatively low intensity stimuli.In particular, some normal subjects can have responses to stimuli of 105 dB pSPL for both the i-p13 and n23 peaks of the cVEMP and the c-n10 peak of the oVEMP.A power law relationship implies no definite threshold but diminishingly small responses as the stimulus gets less intense.Any threshold determination therefore will be strongly affected by the number of trials averaged.Our estimates of threshold nevertheless were similar to previous reports for AC thresholds for cVEMPs and oVEMPs, with the cVEMP i-p13 threshold being on average 7.1 dB lower than that for the oVEMP c-n10 response using individual records.The cVEMP i-p13 and oVEMP c-n10 thresholds were closer however when measured using the grand average traces.We have shown that the relationship for the potentials recorded with the cVEMP montage are well fitted using a logarithmic transformation of reflex amplitude and, in particular, the fits for the mean p13 and n23 potentials had r2 values of over 0.95.The dB intensity measure for the stimulus is proportional to the energy in the waveform which Rosengren et al. have shown is an important determinant of p13-n23 cVEMP amplitude.The n23 response however did show evidence of significant curvature, probably due to saturation of the underlying inhibitory pathway.The average gradient for the two potentials, 0.76, implies that the cVEMP p13-n23 reflex amplitude increased by 2.4 times for a 10 dB increase in intensity over the range tested.The fit was not confined to peaks of proven vestibular origin as the i-n30 peak, which is likely to be of cochlear origin, was also well fitted, albeit with a lower gradient.Todd et al. 
found that a power law relationship fitted the oVEMP responses evoked by head acceleration over a 50 dB range.They reported a gradient of 0.66, slightly higher than our findings for the oVEMP c-n10 and c-p16 peaks evoked by AC stimuli.In contrast to the peaks from the cVEMP montage, most of the oVEMP peaks showed a relationship which was not well fitted using a simple linear relationship even after logarithmic transformation.Nearly all showed a significant increase in the gradient of the relationship once the stimulus exceeded a certain level, the sole exception being the one ipsilateral response.It may be significant that it was the crossed pathways for the oVEMP which showed the apparent thresholds whereas the ipsilateral projections for both the oVEMP and cVEMP showed consistent behaviour throughout the range of stimuli presented.For the lower intensity levels, the average oVEMP peak amplitudes were all less than 1 μV.The amplitudes predicted from the relationship based upon more intense stimulation would have been very small and it is possible that, despite our bipolar recording montage, that other non-myogenic sources might be contributing to the contralateral potentials for the low stimulus intensities.For example, Todd et al. reported that there were deep sources, possibly within the cerebellum, which were co-active with oVEMPs and auditory-evoked responses have been recorded from the cerebellum.It may be that small responses recorded with the oVEMP montage do not originate solely from extraocular muscles, in contrast to what has been directly demonstrated for responses to intense stimuli.When the oVEMP response was initially reported it was assumed that the AC-evoked response was likely to arise in the same way as the AC-evoked cVEMP and initially this appeared to be the case.Todd et al. 
found similar tuning for AC-evoked cVEMPs and oVEMPs, with a broad peak between 400 and 800 Hz which they explained as likely to be a consequence of the resonance properties of the saccule.Conversely they found evidence of a lower resonant frequency, around 100 Hz, for what they took to be utricular responses, observations that they explained in terms of the structures of the two otoliths.Dissociations in the findings for AC-evoked cVEMPs and oVEMPs have been recognised to occur in vestibular neuritis, a condition with preferential involvement of the superior division of the vestibular nerve.Typically the AC-evoked oVEMP is lost in this condition whereas the AC-evoked cVEMP is often less affected.The latter authors suggested that saccular fibres travelling in the superior division of the vestibular nerve might thus be responsible for evoking the oVEMP.Alternatively it has been proposed that the effects of AC stimuli are mediated by utricular fibres because the saccular projection to extraocular muscles is weak when intracellular recordings have been made.A problem with accepting these intracellular findings as relevant to humans is that a crossed projection from the utricule to inferior oblique motoneurons, the proposed basis of the c-n10 potential, has not been demonstrable using these techniques in cats.One explanation of our findings would be that the apparent oVEMP behaviour is indicative of recruitment of utricular afferents causing increase in the gradient of the responses.The behaviour of the cVEMP c-n12 response, which is consistent with a crossed utricular effect, might be expected to be a guide in this regard, but this peak had the lowest SNR of the group and cannot be relied upon too heavily.One reason to be cautious in attributing the threshold and change in gradient to recruitment of utricular afferents is the high gradients shown using the more intense stimuli for the oVEMPs recorded contralaterally.This implies involvement of afferents with a high affinity for the stimulus.The gradients are even higher than those for the p13 and n23 peaks of the cVEMP, peaks which may be taken to be indicative of the pattern of recruitment of saccular afferents.Alternatively, the change in gradient may simply be a property of the crossed pathway that mediates the responses.One way to resolve this issue may be to investigate the pattern of response to a stimulus which is more specific for utricular fibres, to see whether the gradient change is still present.Our findings about the differing properties of the cVEMP i-p13/n23 response and the oVEMP c-n10/p16 response might also be relevant to their differing responses to disease.A power-law relationship clearly cannot continue as intensity increases.All reflexes, inhibitory or excitatory, will eventually saturate.It is likely that the underlying relationship is closer to sigmoidal and that our observations represent the behaviour of several of the reflexes before there is an inflection in the curve.In some patients this saturation was evident and in our series the cVEMP i-n23 potential gradient fell with increasing intensity.In SCD a greater proportion of the sound energy is diverted to the vestibular apparatus thus causing much more effective stimulation than in healthy subjects, including afferents arising from the superior canal.This condition illustrates the reflex changes occurring with relatively more intense stimuli.For the SCD patient, the thresholds for both reflexes were the same and lower than for all our normal subjects.For the 
cVEMP, where amplitude differences between SCD patients and healthy subjects are known to be less reliable using conventional intensities, our patient confirms that the greatest separation from normal values occurs using less intense stimuli.In contrast, the separation for the oVEMP was large for our patient for nearly all intensities, including the loudest.More observations will be required using patients with SCD to determine the optimum level of stimulation for separation from normal responses.
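As a rough illustration of the analysis steps described above, the sketch below covers (a) the 2.5 standard-deviation prestimulus criterion and the SNR definition used to decide whether a peak is present, (b) the comparison of linear and quadratic fits of log-transformed amplitude against stimulus intensity, and (c) the conversion of a power-law gradient into a fold-change per 10 dB. It is a minimal sketch rather than the authors' SIGNAL-based pipeline: the array names, sampling rate and analysis windows are assumptions, and the dB/20 scaling of intensity (so that the slope is the power-law exponent) is inferred from the reported link between a gradient of 0.76 and a 2.4-fold amplitude increase per 10 dB.

```python
import numpy as np
from scipy.stats import f as f_dist


def detect_peak(avg_trace_uv, fs_hz=10000, prestim_ms=20.0, window_ms=(8.0, 18.0)):
    """Apply the 2.5-SD prestimulus criterion to one averaged epoch (illustrative)."""
    n_pre = int(prestim_ms * fs_hz / 1000)          # samples before stimulus onset
    baseline = avg_trace_uv[:n_pre]
    mu, sd = baseline.mean(), baseline.std(ddof=1)
    lo = n_pre + int(window_ms[0] * fs_hz / 1000)   # search window for the peak
    hi = n_pre + int(window_ms[1] * fs_hz / 1000)
    seg = avg_trace_uv[lo:hi] - mu
    idx = int(np.argmax(np.abs(seg)))
    peak = float(seg[idx])
    return {
        "significant": abs(peak) > 2.5 * sd,        # response present?
        "amplitude_uv": peak,
        "snr": abs(peak) / sd,                      # peak amplitude / prestimulus SD
        "latency_ms": (lo + idx - n_pre) * 1000.0 / fs_hz,
    }


def intensity_regression(intensity_db, amplitude_uv):
    """Linear vs. quadratic fit of log10(amplitude) against log10(sound pressure)."""
    x = np.asarray(intensity_db, float) / 20.0      # dB pSPL = 20*log10(p/p_ref)
    y = np.log10(np.asarray(amplitude_uv, float))
    n = len(y)
    lin, quad = np.polyfit(x, y, 1), np.polyfit(x, y, 2)
    rss_lin = float(np.sum((y - np.polyval(lin, x)) ** 2))
    rss_quad = float(np.sum((y - np.polyval(quad, x)) ** 2))
    # Nested-model F-test: does the extra quadratic term improve the fit?
    f_stat = (rss_lin - rss_quad) / (rss_quad / (n - 3))
    p_val = f_dist.sf(f_stat, 1, n - 3)
    return lin[0], f_stat, p_val                    # gradient (exponent), F(1, n-3), p


# Sanity check of the quoted figure: a gradient of 0.76 corresponds to a
# 10**(0.76 * 10 / 20) ~ 2.4-fold amplitude increase for a 10 dB rise in intensity.
print(round(10 ** (0.76 * 10 / 20), 2))
```

On that scaling the returned slope can be compared directly with the gradients quoted for the cVEMP and oVEMP peaks.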
Objective: To determine the amplitude changes of vestibular evoked myogenic potentials (VEMPs) recorded simultaneously from the neck (cVEMPs) and eyes (oVEMPs) in response to 500 Hz, 2 ms air-conducted sound pips over a 30 dB range. Methods: Fifteen healthy volunteers (mean age 29, range 18–57 years old) and one patient with unilateral superior canal dehiscence (SCD) were studied. The stimulus was reduced in increments to 105 dB pSPL for the normals (81 dB pSPL for the SCD patient). A statistical criterion was used to detect responses. Results: Ipsilateral (i-p13/n23) and contralateral (c-n12/p24/n30) peaks for the cVEMP montage and contralateral (c-n10/p16/n21) and ipsilateral (i-n13) peaks for the oVEMP montage were present for the baseline intensity. For the lowest intensity, 6/15 subjects had responses for the i-p13 cVEMP potential and 4/15 had c-n10 oVEMP responses. The SCD patient showed larger responses for nearly all intensities. The cVEMP potentials were generally well fitted by a power law relationship, but the oVEMP c-n10, p16 and n21 potentials showed a significant increase in gradient for the higher intensities. Conclusion: Most oVEMP and cVEMP responses follow a power law relationship but crossed oVEMP responses showed a change in gradient above a threshold. Significance: The pattern of response to AC stimulation may be a property of the pathways underlying the potentials. © 2013 International Federation of Clinical Neurophysiology.
205
Inferences about moral character moderate the impact of consequences on blame and praise
A longstanding question in moral psychology is a concern with the criteria people use when assigning blame to others’ actions.Theories of blame highlight several critical factors in determining an agent’s blameworthiness for a bad outcome.The first step is detecting some bad outcome that violates a social norm.Next comes an evaluation of whether the agent caused the outcome, followed by an assessment of whether the agent intended the outcome.People are considered more blameworthy for harmful actions than equally harmful omissions because the former are viewed as more causal than the latter.Moreover, people are blamed more for intentional compared to unintentional harms.Causation and malintent are each alone sufficient to ascribe judgments of blame for bad outcomes.In the case of accidental harms, people blame agents for bad outcomes that they caused but did not intend.There is also evidence that people blame agents for bad outcomes that they intend or desire but do not cause.Other work has highlighted how inferences about moral character impact the assignment of blame and praise.For example, judges and juries frequently condemn repeat offenders to harsher penalties than first-time offenders for equivalent crimes, and conviction rates are correlated with jurors’ knowledge of a defendant’s previous crimes, particularly when past crimes are similar to a current offence.In the laboratory, people assign more blame to dislikable agents than likable agents.These observations are consistent with a person-centered approach to moral judgment, which posits that evaluations of a person’s moral character bleed into evaluations of that person’s actions.In other words, despite being instructed to assess whether an act is blameworthy, people may instead evaluate whether the person is blameworthy.In line with this view, there is evidence that evaluations of causation and intent are themselves sensitive to inferences about an agent’s character.That is, people tend to conflate moral evaluations of agents with their perceptions of agents’ intentions and causation.For example, in the culpable control model of blame, a desire to assign blame to disliked agents influences perceptions of their control over an accident.In an early demonstration of this phenomenon, participants were told that a man speeding home got into a car accident, leaving another person severely injured.The man was described as rushing home to hide either an anniversary present or a vial of cocaine from his parents.Participants judged the delinquent cocaine-hiding individual as having more control by comparison to the virtuous present-hiding man.Similar effects are seen when participants are given more general information about the agent’s character.People also judge an agent who breaks a rule as being more causally responsible for an outcome that breaks a rule than an agent who takes the same action but does not break a rule, suggesting negative moral evaluations increase causal attributions.Moral judgments of agents also affect evaluations of intent.For instance, harmful foreseen side-effects are seen as more intentional than helpful foreseen side effects, suggesting that negative moral evaluations lower the threshold for inferring intentionality.In a study where participants played an economic game with agents who were either trustworthy or untrustworthy, and then evaluated the extent to which the agents intended various positive and negative outcomes, the untrustworthy agent was more likely to be evaluated as intending negative outcomes 
than the trustworthy agent.Greater activation was seen in the right temporoparietal junction, a region implicated in evaluating intent, when assigning blame to an untrustworthy relative to a trustworthy agent.Thus there is a substantial literature supporting a ‘person-as-moralist’ view of blame attribution, which posits that people are fundamentally motivated to assess the goodness and badness of others, and perceive others’ intent and causation in a way that is consistent with their moral evaluations.To assign blame and praise it is necessary to infer an agent’s mental state based on their actions, by considering the likely end consequences of their action.Recent work has shown that from an early age people readily infer people’s intentions by observing their decisions, deploying a “naïve utility calculus” that assumes people’s choices are aimed at maximizing desirable consequences and minimizing undesirable consequences, where desirability is evaluated with respect to the agent’s preferences.This means that in situations where agents make deterministic choices, their intentions can be inferred from the consequences of their choices.Evaluations of moral character are intimately linked to inferences about intentions, where accumulated evidence of bad intent leads to a judgment of bad character.What remains unknown is whether, and how, the formation of character beliefs impacts on moral judgments of individual actions.In other words, when people repeatedly observe an agent bring about either harmful or helpful consequences, do learnt inferences about the agent’s character influence how people make judgments regarding the agent’s individual acts?,Our research addresses several open questions.First, although studies have shown that perceptions of character influence separate assessments of consequences, causation, and blameworthiness, it remains unknown how precisely character evaluations affect the degree to which consequences and causation shape blame attributions.Second, the bulk of research in this area has focused on judgments of blameworthiness for harmful actions with less attention to how people judge praiseworthiness for helpful actions.Furthermore, those studies that have investigated praiseworthy actions have generally used scenarios that differ from those used in studies of blame not only in terms of their moral status but also in terms of their typicality.For example, studies of blame typically assess violent and/or criminal acts such as assault, theft, and murder, while studies of praise typically assess good deeds such as donating to charity, giving away possessions or helping others with daily tasks.Thus, our understanding of how consequences and causation impact judgments of blame versus praise, and their potential moderation by character assessments, is limited by the fact that previous studies of blame and praise are not easily comparable.In the current study we used a novel task to explore how inferences about moral character influence the impact of consequences and causation on judgments of blame and praise for harmful and helpful actions.Participants evaluated the blameworthiness or praiseworthiness of several agents’ harmful or helpful actions.These varied across trials, in terms of their consequences and also in terms of the degree to which the actions caused a better or worse outcome for a victim.In Study 1, participants evaluated a total of four agents: two with good character, and two with bad character.In Study 2 we replicate the effects of Study 1 in a truncated 
task where participants evaluated one agent with good character and one agent with bad character.We used linear mixed models to assess the extent to which blame and praise judgments were sensitive to the agents’ consequences, the agents’ causation of the outcomes, the agents’ character, and the interactions among these factors.The advantage of this approach is that it allows us to capture the influence of consequences, causation, and character on integrated moral judgments, without requiring participants to directly report their explicit evaluations of these cognitive subcomponents.For example, we can measure whether the effects of perceived causation on blame differs for good and bad agents, without asking participants directly about the perceived causation of good vs. bad agents.With this approach we can more closely approximate the way assessments of consequences and causation influence blame judgments in everyday life, where people might assign blame using implicit, rather than explicit, evaluations of causation and consequences.We manipulated the agents’ consequences by having the agents choose, on each trial, between a harmful option that yields a higher monetary reward at the expense of delivering a larger number of painful electric shocks to an anonymous victim, and a helpful option that yields a lower monetary reward but results in fewer painful shocks delivered to the victim.Across trials we varied the amount of profit and pain that result from the harmful relative to the helpful option.Thus, an agent in choosing the harmful option might inflict a small or large amount of pain on the victim for a small or large profit.Likewise, for helpful actions, an agent might sacrifice a small or large amount of money to reduce the victim’s pain by a small or large amount.We predicted that participants would infer the agents’ intentions from their choices and assign blame and praise accordingly: an agent who is willing to inflict a given amount of pain for a small profit should be blamed more than an agent who is only willing to inflict the same amount of pain for a much larger profit.Likewise, an agent who is willing to sacrifice a large amount of money to reduce pain by a given amount should be evaluated as more praiseworthy than an agent who is only willing to sacrifice less money to achieve the same benefit.Such evaluations would be consistent with the idea that people infer the intentions of others according to a “naïve utility calculus” where agents choose so as to minimize costs and maximize rewards.We manipulated causation by having the agents cause the harmful and helpful outcomes either via an overt action, or via inaction.Previous work has shown that the well-documented ‘omission bias’, whereby harm brought about by an action is judged worse than harm brought about by a failure to act, can be explained primarily by causal attribution.That is, an agent who brings about harm via an action is seen as causing the harm more than an agent who brings about harm by failing to act.At the start of each trial, a default option was highlighted and the agent could switch from the default to the alternative by pressing a key within a time limit.On half the trials the agent switched, while on the other half the agent did nothing.Crucially, across trials we matched the amount of help and harm that resulted from switching versus doing nothing, so that the agents brought about identical outcomes both via action and inaction.We predicted that participants would assign more blame for the same harmful 
outcomes brought about via action than inaction, and that they would assign more praise for the same helpful outcomes brought about via action than inaction, consistent with previous studies.Finally, we manipulated character by having agents choose according to different exchange rates for money and pain: good agents required a high profit to inflict pain on others, and were willing to sacrifice large amounts of money to reduce a victim’s pain; bad agents required only a small profit to inflict pain and were only willing to sacrifice small amounts of money to reduce a victim’s pain.Previous studies investigating how people actually make choices in this setting demonstrated the amount of money people are willing to trade for others’ pain correlates with morally relevant traits, including empathy and psychopathy.Although good and bad agents by definition made different choices, on a subset of trials they made the same choice.Consistent with studies showing people assign more blame to disliked individuals, we predicted that bad agents would be blamed more than good agents, even when making identical choices.Two studies were conducted at the Wellcome Trust Centre for Neuroimaging in London, UK and were approved by University College London Research Ethics Committee.Participants in both studies completed a battery of trait questionnaires online prior to attending a single testing session.Each session included two participants who were led to separate testing rooms without seeing one another to ensure complete anonymity.After providing informed consent, a titration procedure was used to familiarize participants with the electric shock stimuli that would be used in the experiment.Subjects were then randomly assigned to one of two roles: the ‘decider’ who engaged in a moral decision task, or the ‘receiver’ who completed a moral judgment task.In Study 1, participants assigned to the role of the receiver completed the moral judgment task.In Study 2, participants assigned to the role of the decider in an entirely separate sample completed the moral judgment task after completing the moral decision task.Here, we focus on behavior in the moral judgment task alone.Data from the moral decision task in Study 2 is reported elsewhere.Healthy volunteers were recruited from the UCL psychology department and the Institute of Cognitive Neuroscience participant pools.All participants provided written informed consent prior to participation and were financially compensated for their time.Participants with a history of systemic or neurological disorders, psychiatric disorders, medication/drug use, pregnant women, and more than two years’ study of psychology were excluded from participation.Furthermore, to minimize variability in participants’ experiences with the experimental stimuli, we excluded participants previously enrolled in studies involving electric shocks.Power calculations indicated that to detect effects of moderate size with 80% power, we required a sample of at least 34 participants.The current samples were thus adequately powered to detect moderate effects of our experimental manipulations.As previously stated, participants entered the laboratory in pairs and were then randomized into the role of either ‘decider’ or ‘receiver’.Both participants were then informed of the decider’s task, which involved choosing between delivering more painful electric shocks for a larger profit, and delivering fewer shocks but for a smaller profit.For each trial of the moral decision task, there was a default option 
and an alternative.The default option would automatically be implemented if the decider did nothing, but deciders could switch from the default to the alternative by making an active response.The decider alone received money from their decisions, but shocks were sometimes allocated to the decider and sometimes allocated to the receiver.Participants were informed that at the end of the decider’s task, one of the decider’s choices would be randomly selected and implemented.Thus, participants assigned to the role of the receiver were aware that they could receive harmful outcomes resulting from the decisions of another person.Conversely, participants assigned to the role of the decider were aware that their decisions could result in a degree of harm to another person.In Study 1, participants completed a moral judgment task in which they evaluated sequences of 30–32 choices made by four fictional deciders, presented one at a time in random order, for a total of 124 trials.After observing a given choice, participants provided a moral judgment of the choice on a continuous visual analogue scale ranging from 0 to 1.In Study 2, participants completed a similar task where they evaluated sequences of 30 choices made by two agents, presented one at a time in random order, for a total of 60 trials.Participants in Study 1 were instructed that the agents whose choices they were evaluating reflected the choices of previous deciders and were not the choices of the current decider in the next room.Participants in Study 2 were instructed that the agents whose choices they were evaluating reflected the choices of previous deciders.For full instructions and trial parameters, see Supplementary Materials).Across trials for a given agent we manipulated the following factors:Consequences: the difference in the number of shocks and amount of money that resulted from the agent’s choice.These numbers could be negative or positive.The difference in number of shocks ranged from −9 to 9, while the difference in amount of money ranged from -£9.90 to £9.90.Thus, in Fig. 
2a, the difference in shocks was equal to 1 shock and the difference in money was equal to £5.00.The precise amounts of shocks and money resulting from harmful and helpful choices were sufficiently de-correlated across trials enabling us to examine independent effects of shocks and money on judgments in our regression analysis.Additionally, this manipulation enabled a parametric analysis examining harmfulness and profit on a continuous scale.Causation: on half the trials, agents chose to switch from the default to the alternative option.On the other half, agents chose to stick with the default option.Action and inaction trials were matched in terms of consequences so we could directly compare judgments of harmful actions with equally harmful inactions, and helpful actions with equally helpful inactions.Because actions are perceived as more causal than inactions, this manipulation enabled us to investigate the extent to which moral judgments are sensitive to differences in the agents' causal role for bringing about the outcome.Across trials, the default number of shocks varied from 1 to 20, while the default amount of money was always £10.00.Character: To investigate how character influences judgments we manipulated the moral preferences of the agents.Specifically, each agent's moral preferences were determined by a computational model of moral decision-making validated in previous experiments.In this model, the subjective cost of harming another is quantified by a harm aversion parameter, κ.When ln κ → −∞, agents are minimally harm-averse and will accept any number of shocks to increase their profits; as ln κ → ∞, agents become increasingly harm-averse and will pay increasing amounts of money to avoid a single shock.Participants in Study 2 only evaluated the choices of agents B1 and G1 after completing the moral decision task.This allowed us to focus our analysis on choices where agents faced identical choice sets and behaved similarly most of the time.In both studies, three sequences of trials were generated and randomized across participants.See Supplemental Materials for details about agent simulations.Also in both studies, after observing the full sequence of choices for each agent, participants rated two aspects of the agent's character and three aspects of the agent's choices.Each rating was provided on a continuous visual analogue scale ranging from 0 to 1.The exact wordings of the questions were as follows: Kindness: "In your opinion, how KIND was this Decider?"; Trustworthiness: "How much would you TRUST this Decider?"; Harmfulness: "Please consider all of the Decider's choices.In your opinion, what proportion of the Decider's choices were HARMFUL?"; Helpfulness: "Please consider all of the Decider's choices.In your opinion, what proportion of the Decider's choices were HELPFUL?"; Selfishness: "Please consider all of the Decider's choices.In your opinion, how SELFISH was this Decider?"Our primary analysis used a categorical character regressor in the model.However, to verify our results we also fit the model described in Eq.
substituting the categorical ‘character’ regressor with participants’ own subjective ratings of the agents’ kindness.We chose to focus specifically on the kindness character rating because our task was not designed to measure trust.We fit the data using a linear mixed-effects model with random intercepts in R.Estimates of fixed effects are reported along with standard error.Where possible, we confirmed the findings of our linear mixed-effects models with analyses that did not rely on a model.To do this we computed mean judgments for each cell of our 2 × 2 × 2 design on the subset of trials where good and bad agents made identical choices.We entered these mean judgments to a repeated-measures analysis of variance and compared the results of this analysis to the results from our model.Participants’ post hoc ratings of the agents suggested they accurately inferred the agents’ moral character from the choices they made.Relative to bad agents, participants rated good agents’ character as significantly more kind and trustworthy.Participants also rated good agents’ choices as more helpful, less harmful and less selfish than bad agents’ choices.Next we asked whether our within-task manipulations of consequences and causation exerted significant effects on moral judgments.To do this, we computed across all agents and trials the average judgments for each cell of our 2 × 2 × 2 design and subjected these mean judgments to a repeated-measures ANOVA.As expected, there was a significant effect of consequences on moral judgments , indicating that harmful choices were judged more blameworthy than helpful choices.As can be seen in Fig. 3a-b, judgments of helpful choices were above the midpoint of the scale and harmful choices were below the midpoint of the scale.This suggests that participants did in fact believe that helpful actions were deserving of praise, despite the fact that all helpful choices resulted in some degree of harm.The main effect of causation was significant in Study 1 , though not in Study 2 , indicating actions were judged as more praiseworthy than inactions for Study 1 alone.This was qualified by a statistically significant interaction between causation and consequences on moral judgments in both studies .Simple effects analyses showed that participants judged harmful actions as more blameworthy than harmful inactions, and helpful actions as more praiseworthy than helpful inactions.This analysis verified that within our design, moral judgments were strongly influenced both by the consequences of actions and by the causal role the agents played in producing the consequences.Next we examined the estimates from our model), which showed that controlling for all other factors, there was a small but significant effect of moral character on judgment.As predicted, bad agents were ascribed more blame than good agents.We repeated our analysis substituting the categorical ‘character’ regressor with participants’ own subjective ratings of the agents’ kindness and obtained the same results.Our model results were consistent with a complementary analysis in which we computed mean judgments for each cell of our 2 × 2 × 2 design on the subset of trials where good and bad agents made identical choices.Here, we observed a trend towards more favourable judgments of good than bad agents in Study 1 , and significantly more favourable judgments of good than bad agents in Study 2 .Thus, for the exact same choices, bad agents received slightly harsher judgments than good agents, Fig. 
4.Parameter estimates for shocks, money, and causation in Eq. were all significantly different from 0, indicating that moral judgments were independently affected by the number of shocks delivered to the victim, the amount of money received by the agent, and whether the agent made an active or passive choice.Furthermore, moral character moderated participants’ sensitivity to consequences.The interaction between character and shocks was significantly negative in both studies.The interaction between character and money was also significantly negative in both studies.Negative parameter estimates indicate that judgments of bad agents’ choices were significantly more sensitive to consequences than judgments of good agents’ choices.Meanwhile judgments of bad and good agents’ choices did not differ in terms of their sensitivity to causation.To illustrate these interaction effects, we estimated the shocks, money and causation parameters separately for the good and bad agents and display these in Fig. 5a–c.In an exploratory analysis we modelled the effects of character, consequences, and their interaction on moral judgments separately for trials where agents harmed vs. helped).Here we observed an effect of harm magnitude on judgments of harmful choices: increases in the number of shocks amplified ascriptions of blame for harmful choices, and decreases in the number of shocks amplified ascriptions of praise for helpful choices.Money also exerted an independent effect on judgments.Across both studies, harmful choices were less blameworthy when accompanied by larger profits.Meanwhile, helpful choices were less praiseworthy when they were accompanied by smaller relative to larger costs in Study 1.In other words, the presence of incentives mitigated both the condemnation of harmful choices and the praiseworthiness of helpful choices.However, praiseworthiness judgments were not influenced by profit magnitude in Study 2.Finally, consistent with the analysis described in Fig. 
3a and b and work on the omission bias, our linear model showed that harmful actions were judged as more blameworthy than harmful inactions, whereas helpful actions were judged to be more praiseworthy than helpful inactions.We next investigated the influence of moral character on participants’ sensitivity to consequences for harmful and helpful choices separately.The interaction of character with shocks was significant for both harmful choices and helpful choices.For both harmful and helpful choices, judgments of bad agents were more sensitive to the magnitude of shocks than judgments of good agents.In other words, inferring bad character amplified the effects of increasingly harmful outcomes on blame and also amplified the effects of increasingly helpful outcomes on praise.Character also impacted participants’ sensitivity to money, although these effects were less consistent across harmful and helpful choices.For harmful choices, the magnitude of profit was weighted more strongly in judgments of bad agents than good agents.In other words, the presence of personal incentives mitigated blameworthiness judgments of harmful choices made by bad agents more strongly than was the case for good agents.However for helpful choices, the magnitude of costs was weighted marginally stronger in judgments of bad agents’ choices than good agents in Study 1, but not Study 2.A person-centered perspective suggests moral judgments encompass evaluation of an agent’s character, in addition to evaluation of choice behavior itself.In the present study we investigated whether inferences about moral character shape the relative weights placed upon an agent’s consequences and the degree of imputed causation in attributions of blame and praise.To do this we employed a novel approach that involved modelling independent effects of consequences and causation on moral judgments, during observation of decision sequences made by ‘bad’ and ‘good’ agents.Each decision involved a trade-off between personal profit and pain to a victim, and could result from either actions or inactions.By linking agents to a range of harmful and helpful outcomes, that varied in their costs and benefits, we could evaluate how consequences affected judgments of blame and praise.By framing responses as either action or inaction, we could also assess the extent to which an agent’s causal role in bringing about an outcome influenced participants’ blame and praise judgments.We found that inferences about moral character affected the influence of consequences on moral judgments.Consequences were weighted more heavily in judgments of choices made by bad agents, relative to good agents.In other words, the degree of harm and the degree of personal profit resulting from the agent’s choice were more potent factors in blame and praise assessments of bad agents than was the case for good agents.We also found that although judgments were sensitive to whether agents caused the outcomes via an overt action, or via inaction, this factor was not moderated by the character of the agent.That is, causation was similarly weighted when making judgments of good and bad agents’ choices.We note that examining judgments of events caused by actions versus inactions is just one way to study the impact of causal attributions on blame and praise.Other possible approaches include contrasting events caused with physical contact versus no contact, events caused directly versus indirectly, and events caused as a means versus a side-effect.Future studies should test the potential 
importance of character on causation using different manipulations to investigate the generalizability of our findings across multiple manipulations of causation.In an exploratory analysis, we found that judgments were more sensitive to the magnitude of shocks not only for bad agents’ harmful choices, but also for their helpful choices.Our findings raise a question as to why participant’s praiseworthiness judgments were especially attuned to the helpful consequences of bad agents.Given that bad agents have historically made self-serving decisions, the more intuitive response might be to mitigate sensitivity to the magnitude of helping and consider their apparently ‘altruistic’ behavior as driven by situational factors.From a strict mental state attribution perspective, this finding is perhaps puzzling.However, an important aspect of our experimental design is that no a priori information was provided to participants about the morality of the agents.Instead, if participants were motivated to learn about the agents’ moral character, they had to gather information across trials to infer on how averse agents were to harming the victim.One possibility is that participants were especially motivated to build accurate predictive models of bad agents, relative to good, because avoiding those who may harm us is an important survival instinct.If participants were highly motivated to build a richer model of bad agents, then we would not expect them to neglect relevant information provided in helpful trials.Because people should be particularly motivated to learn about potential social threats, then they should be more attuned to all the choices threatening agents make.Our analysis indicated that harmful choices were less blameworthy when accompanied by larger profits, replicating previous work showing that observers assign less blame to moral violations resulting in large, relative to small, personal benefits.Furthermore, this effect was more pronounced for bad agents than good agents.That is, the presence of personal incentives mitigated blameworthiness judgments of harmful choices made by bad agents more strongly than was the case for good agents.Meanwhile, we obtained less consistent findings for the effect of personal incentives on judgments of helpful choices across Studies 1 and 2.First, the presence of incentives mitigated the praiseworthiness of helpful choices in Study 1, but not Study 2.Second, judgments of bad agents’ choices were marginally more sensitive to the magnitude of incentives for helpful choices in Study 1, but not Study 2.Thus, it is possible that character only moderates the effect of personal incentives on the blameworthiness of harmful choices, and not the praiseworthiness of helpful choices.However, we caution that the range in the magnitude of incentives for helpful choices was very small for bad agents.Furthermore, other work has shown that agents who help others, in the absence of personal incentives, are judged more favorably than those whose helpful choices can be explained by incentives.Thus, an alternative possibility is that the range in money for helpful choices was too small to observe a main effect of money for helping in Study 2, and an interaction between character and money for helping.Another limitation of our experimental design is that consequences were not dissociated from the intentions of the agents.Thus, it is unclear whether greater sensitivity to consequences for bad, relative to good, agents is driven by an increased sensitivity to intent or 
consequences.Future studies could dissociate intent and consequences using the current experimental design by randomly varying whether the agents’ intentions are actually implemented.We might speculate that the findings here are motivated by consequences rather than intentions in light of recent work on how people blame and punish accidents, which dissociate consequences and intent.Research on judging accidents shows that moral judgments are sensitive to both consequences and intent, but consequences may play a more dominant role when judging accidents.Notably, sensitivity to accidental consequences appear to matter significantly more when people are asked how much blame or punishments should be attributed to the behavior, than when asked how wrong or permissible it was.Martin and Cushman explain this finding by arguing that punitive behaviors signal to others to adjust their actions.In this sense, punishment is adaptive to the extent that it improves one’s own chance of engaging in future cooperation with past wrongdoers, and thus serves as a ‘teaching signal’.If punishment and reward serve as teaching signals, we might expect them to be more readily endorsed as a function of outcome severity when we infer bad character.That is, teaching signals should be preferentially directed towards those who need to be taught.While we do need to teach someone with a history of bad behavior right from wrong, this is less necessary when we consider someone who has already learned how to cooperate.We employed novel methods to investigate the effects of moral character on how people integrate information about consequences and causation in judgments of choices to help or harm a victim.We validated these methods by replicating previous findings that the magnitude of consequences and causation shape attributions of blame for harmful choices and praise for helpful choices.Character moderated the effects of consequences on judgments, with consequences weighting more strongly in judgments of bad relative to good agents.Our findings support a person-centered approach to moral judgment, and suggest avenues for future research investigating how impressions of morality are formed over time and how these evolving impressions shape subsequent moral judgments.
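The trial-level analysis described above (a linear mixed-effects model with by-participant random intercepts and character-by-consequence interaction terms) can be sketched as follows. The authors report fitting the model in R; this Python/statsmodels version uses hypothetical column and file names and is meant only to illustrate the model structure, not to reproduce their analysis script.

```python
# Illustrative only: hypothetical long-format data with one row per trial and
# columns for the judgment (0-1), shock and money differences, causation
# (action vs. inaction) and agent character (good vs. bad).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("judgments.csv")   # hypothetical file name

model = smf.mixedlm(
    "judgment ~ shocks * character + money * character + causation * character",
    data=df,
    groups=df["participant"],       # random intercept for each participant
)
result = model.fit()
print(result.summary())             # fixed effects for consequences, causation,
                                    # character, and their interactions
```

Estimating the interactions at the trial level is what lets the analysis ask whether sensitivity to consequences differs by character without eliciting separate explicit ratings of causation or consequences; the sign of the interaction coefficients depends on how character is coded.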
Moral psychology research has highlighted several factors critical for evaluating the morality of another's choice, including the detection of norm-violating outcomes, the extent to which an agent caused an outcome, and the extent to which the agent intended good or bad consequences, as inferred from observing their decisions. However, person-centered accounts of moral judgment suggest that a motivation to infer the moral character of others can itself impact on an evaluation of their choices. Building on this person-centered account, we examine whether inferences about agents’ moral character shape the sensitivity of moral judgments to the consequences of agents’ choices, and agents’ role in the causation of those consequences. Participants observed and judged sequences of decisions made by agents who were either bad or good, where each decision entailed a trade-off between personal profit and pain for an anonymous victim. Across trials we manipulated the magnitude of profit and pain resulting from the agent's decision (consequences), and whether the outcome was caused via action or inaction (causation). Consistent with previous findings, we found that moral judgments were sensitive to consequences and causation. Furthermore, we show that the inferred character of an agent moderated the extent to which people were sensitive to consequences in their moral judgments. Specifically, participants were more sensitive to the magnitude of consequences in judgments of bad agents’ choices relative to good agents’ choices. We discuss and interpret these findings within a theoretical framework that views moral judgment as a dynamic process at the intersection of attention and social cognition.
206
Micro-tensile strength of a welded turbine disc superalloy
The development of micro-scale experiments has been initiated by the need to evaluate the mechanical behaviour of small volumes of materials and also by a desire to determine how the mechanical properties of a material change when external dimensions are greatly reduced .Micro-tensile testing has been effectively used to characterise the mechanical properties of thin films, e.g. of MEMS.The methods often used in preparing micro-tensile specimens include material deposition on substrate , deep reactive ion etching and electro-discharge machining .All these methods have been successfully used, but cannot be adopted for applications where a particular site of interest within a large volume of material is needed to be characterised.Focussed ion beam is an invaluable tool for fabricating micron-sized structures, due to its ability to deposit Pt or W in controlled shapes, the availability of submicron ion beams, 3-D stages, and fully automated control .Aside from the high accuracy in local positioning through direct visual control, in situ micro-tensile testing also offers insight into the real time material deformation process.Micro-tensile tests have been carried out to characterise the local properties within fusion welds of HY-100 steel and of two dissimilar stainless steels .The strength at the centre of the weld was found to be considerably higher than that away from the centre.However, the dimensions of these microsamples are larger than many welds produced by solid state techniques.For instance, inertia friction welding often produces much narrower welds with a bond line zone about 50 μm wide compared with the 1300 μm of typical fusion welds.Solid state joining is increasingly used in the aeroengine industry for multicomponent turbine engines.Detailed material characterisation within the narrow weld zone, where microstructural variation from the parent material has occurred, is essential to understand the overall mechanical performance of welded joints.The hardness of an inertia friction welded RR1000 superalloy has been reported to be higher than that of the parent material .As a result the true yield stress of the weld zone cannot be measured by standard tensile testing procedures .This paper reports in-situ micro-tensile deformation of IFW RR1000, focussing on the yielding.RR1000 is a recently developed nickel base superalloy processed via powder metallurgy with a nominal chemical composition of 15.0 Cr, 18.5 Co, 5.0 Mo, 3.0 Al, 3.6 Ti, 2.0 Ta, 0.5 Hf, 0.015 B, 0.06 Zr, 0.027 C and balance nickel.A thin cross-section slice was extracted from an inertia welded tube in RR1000 by electro-discharge machining.It was ground and thinned before a wedge shaped sample, which contained both the parent and the weld regions, was cut out.The wedge shaped sample was further ground and polished to a thickness of about 100 µm.Micro-tensile samples with a 13 μm gauge length and 2 µm by 3 µm cross section were prepared from the wedge shape sample using a Quanta 3D FEG SEM FIB with a Ga+ ion source operated at 30 kV.The microtesting system used in this study includes a piezo-electric drive to apply the load to the sample via a miniature load cell with a load capacity of 0.5 N and a resolution of 0.001 mN.The tensile system was under displacement control at a resolution of 40 nm.The applied load and elongation were measured, and used to plot a stress–strain curve, together with the recorded SEM images.Electron back scatter diffraction was employed to determine the loading direction and the active slip 
systems, in order to evaluate the critical resolved shear stresses.Fig. 2 shows a micro-tensile specimen prepared from the parent RR1000.This specimen was tested by pulling to fracture.Fig. 3b is the stress vs. strain plot obtained from the tensile test.From both the in-situ observation of the formation of slip and the stress–strain results of this test, the yield strength of the alloy and the ultimate tensile strength were determined to be 619 MPa and 699 MPa, respectively.Another sample was prepared from the weld zone where constitutional liquation features were observed on the grain boundaries.EBSD shows that the grain boundary between grains A and B, and that between B and C, are typical high angle grain boundaries.It was observed that the sample started yielding in grain A, the slip bands then propagated into grains B and C, and the sample subsequently failed along the liquated grain boundary between B and C.This suggests that the high angle boundary, where a significant constitutional liquation product was observed, was weakened.Table 2 shows a summary of the tensile data from the experiment.It is generally expected that the yield strength within the weld of a γ' strengthened nickel base alloy should be greater than in the base alloy.This is because the re-precipitation of a high volume fraction of tertiary γ' in the size range 10–40 nm at the weld increases the strength.The size and distribution of γ' of the weld and the parent are compared in Fig. 6.It is evident from Table 2 that the critical resolved shear stress in the weld is indeed higher than that in the base alloy.Although there is a slight uncertainty in the alignment of the length axis of the sample and of the sample holder in the z direction, efforts were made to ensure that the flanks of the sample holder and of the sample head are parallel and equally spaced in order to minimise out-of-plane loading of the specimens, as illustrated in Fig.
2.The measured yield strength in the micro-tensile samples is lower than the expected yield strength in the bulk material.This could be a result of the size and crystallographic orientation of the micro-tensile sample.The CRSS of the parent and weld of RR1000 was measured using a micro-tensile test.The results of the micro-tensile tests have shown that the critical resolved shear strength within the weld region is 306 MPa, higher than that of the parent alloy of 255 MPa.The higher strength within the weld region is due to the high volume fraction of the re-precipitated tertiary γ' as confirmed via the formula of Brown.The liquated grain boundary appeared weaker in the tensile test.Due to the narrow weld zone and often over-matched strength associated with inertia friction welds, it is difficult to characterize the local mechanical properties of the welded region by using a standard large-scale testpiece geometry.A micro-tensile test method, using focused ion beam machining to make specimens only a few microns in size, for determination of the elastic and plastic properties of candidate materials, is developed.This experimental technique allows the measurement of tensile properties with a much smaller restriction on the volume of material available.The manuscript describes such work carried out on an inertia weld of RR1000, an advanced nickel disc alloy.The study was linked with parallel electron microscopy studies of the effect of precipitate size and distribution, and the effect of some welding features.This provides insight into the hardening and embrittlement of the weld.The results obtained from the current study could also potentially be used for the development and verification of computer models for simulating the mechanical properties of welded joints in RR1000.It is the first time that micro-tensile experiments have been performed on inertia friction welds in RR1000.
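The CRSS values reported above follow from combining the measured yield stress with the EBSD-determined orientation of the active slip system through Schmid's law, CRSS = σ_y · cos φ · cos λ. The snippet below is a generic illustration of that calculation rather than the authors' code: the loading direction is a placeholder, and the {111}<110> slip system is assumed, as is usual for an fcc nickel-base superalloy. For reference, the reported parent values (619 MPa yield strength, 255 MPa CRSS) imply a Schmid factor of about 0.41.

```python
# Generic Schmid-law estimate of CRSS from a measured yield stress and an
# EBSD-derived slip-system orientation (illustrative values only).
import numpy as np


def schmid_factor(load_axis, plane_normal, slip_direction):
    """Return cos(phi)*cos(lambda) for one slip system (vectors in crystal coords)."""
    t = np.asarray(load_axis, float)
    n = np.asarray(plane_normal, float)
    b = np.asarray(slip_direction, float)
    t, n, b = (v / np.linalg.norm(v) for v in (t, n, b))
    return abs(t @ n) * abs(t @ b)


load_axis = [1, 2, 3]                                 # placeholder loading direction
m = schmid_factor(load_axis, [1, 1, 1], [1, 0, -1])   # {111}<110> octahedral slip
sigma_y = 619.0                                       # MPa, measured parent yield strength
print(f"Schmid factor = {m:.3f}, CRSS estimate = {sigma_y * m:.0f} MPa")
```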
A micro-tensile testing system coupled with focussed ion beam (FIB) machining was used to characterise the micro-mechanical properties of the weld from a turbine disc alloy. The strength variations between the weld and the base alloy are rationalised via the microstructure obtained.
207
The nutritional content and cost of supermarket ready-meals. Cross-sectional analysis
The endemic nature of obesity in many countries has led to increasing attention being paid to dietary practices and choices. One growing area of concern is a perceived decline in home cooking with an increasing reliance on convenience foods, including ready-meals. Ready-meals have been defined as pre-prepared main courses that can be reheated in their container, requiring no further ingredients, and needing only minimal preparation before consumption. The UK has one of the most dynamic ready-meal markets, accounting for more than £1.4 bn in annual sales in the year to January 2014 – a 1.5% year-on-year increase. UK data suggest that in 2003 almost two-thirds of UK households consumed some ready-meals, and in 2006 40% of households ate ready-meals at least once per week. More recent, detailed and population-representative data on frequency of consumption are not available. More than 90% of ready-meals sold in the UK are supermarket own-brand products. Most supermarkets ‘brand’ their own-brand products into premium or luxury, ‘healthier’, economy or value, as well as standard ranges. Fresh and frozen varieties of many meals are available across these ranges. Consumers' reasons for choosing ready-meals particularly focus on the perceived convenience and value for money compared to home cooking. However, the health benefits of ‘healthy’ ranges and any nutritional benefit or loss associated with the price differentials of premium and economy ranges are not clear. Consumption of ready-meals has been associated with higher body weight. This is likely because ready-meals tend to contain high levels of fat and saturated fat. Although a number of previous studies of the nutritional content of supermarket ready-meals have been conducted, these have all been limited in scope. No previous study has systematically explored the nutritional content of the full range of popular ready-meals. Nor has any study explored the cost of ready-meals and any relationship between cost and nutritional content. Thus, our aim was to describe the nutritional content of supermarket ready-meals and explore associations between cost and nutritional content. We conducted a survey of the price and nutritional content of supermarket own-brand ready-meals sold in branches of large supermarket chains in one city in Northern England. Outlets operated by ten supermarkets were included in the study: Aldi, Asda, Cooperative Food, Iceland, Lidl, Marks & Spencer, Morrisons, Sainsbury's, Tesco, Waitrose. Together, these accounted for a combined grocery market share, at the time of data collection, of more than 95%. As previously, ready-meals were defined as pre-prepared meals, supplied in the container used for cooking, with no further ingredients or preparation required other than heating. We restricted the sample to supermarket own-brand ready-meals intended as single servings. In all cases, both frozen and chilled ranges were searched for and included if present. One supermarket did not sell own-brand ready-meals at the time of data collection and was excluded from further consideration. Meals in four ‘ranges’ were included – luxury, standard, value and ‘healthier’. Although the specific name of each range varied between supermarkets, it was not difficult to place all meals found in one of the four ranges based on explicit branding on packages. Examples of written branding used on packages in different ranges are given in Table 1, although other aspects of branding also play an important role in identifying meal ranges. However, not all supermarkets
sold meals in all ranges and some supermarkets had more than one label within a particular range. In these cases all eligible meals were included. We included six meal types: macaroni cheese, meat lasagne, cottage pie, fish pie, chicken tikka masala, and sweet and sour chicken. These reflect four meal types included in previous work as ‘popular choices’, as well as two additional meal types that reflect expanding tastes in UK ready-meal consumption. Brief descriptions of each meal type are provided in Table 1. Within each supermarket, we identified the number of eligible ranges present and assumed that all six eligible meal types were available in both chilled and frozen versions within these ranges. This gave a total number of potentially eligible meals. However, we cannot be sure that all these potentially eligible meals were produced and sold. The number of eligible meals found in all stores visited, in comparison to the total number of potentially eligible meals, is described in Table 1. At least one representative of each meal type was present in each meal range, and vice versa. All branches of included supermarkets within the study city boundaries were identified from supermarket websites and visited by one researcher over one week in April 2013. In each store, the researcher identified all ready-meals that met the inclusion criteria and recorded the price, weight and nutritional information shown on packaging. Specifically, total energy, fat, saturated fat, carbohydrate, sugar, protein, fibre and salt were recorded. Nutrient content per 100 g of product was also recorded. When meals that had previously been encountered during data collection were found again in a subsequent branch, weight and nutritional information were not re-recorded. Price was recorded on all occasions to allow for the potential for ‘price flexing’ – variations in price of the same product across different branches of the same chain. In these cases the average price of the meal across all branches in which it was found was calculated for use in analysis. A second researcher visited a 10% random sample of included stores during the same week as the first researcher and collected data independently. There was 100% agreement between researchers in the meals identified for inclusion and the price, weight and nutritional content of included meals. All analyses were conducted at the meal level. As there was evidence that some variables were not normally distributed, non-parametric methods were used throughout. The cost, weight and nutritional content of meals overall and within meal ranges and types were described using medians and interquartile ranges. Differences between meal ranges and types were explored using Kruskal–Wallis tests. As not all ready-meals have the same weight, similar analyses were conducted for both total nutritional content and nutritional content per 100 g. Median nutritional content per 100 g was compared to current UK guidance on front-of-pack nutrition, or ‘traffic light’, labelling – this indicates ranges for red/high, amber/medium and green/low content of fat, saturated fat, sugar and salt. The number of meals rated as ‘low’ for one, two, three or all four of these nutrients was also calculated. Differences in the number of nutrients that meals were rated ‘low’ for across meal ranges and types were explored using chi-squared tests. Associations between price and both weight and nutritional content, overall and within meal ranges and types, were explored using Spearman rank correlation tests. All analyses were conducted in Stata v13.0. As a large number of statistical tests were performed, a p-value of <0.01 was taken to indicate statistical significance.
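As an illustration of the front-of-pack comparison and the non-parametric tests described above, the sketch below classifies per-100 g values into red/amber/green bands and runs the same kinds of tests on a toy data set. The cut-offs are the per-100 g criteria commonly quoted from the 2013 UK front-of-pack guidance and should be treated as illustrative here; the meal data and column names are hypothetical, and the study itself used Stata rather than Python.

```python
import pandas as pd
from scipy.stats import kruskal, spearmanr

# Illustrative per-100 g cut-offs (low, high) in g, as commonly quoted from the
# 2013 UK front-of-pack ('traffic light') guidance for foods.
CUTOFFS = {
    "fat":     (3.0, 17.5),
    "sat_fat": (1.5, 5.0),
    "sugar":   (5.0, 22.5),
    "salt":    (0.3, 1.5),
}

def traffic_light(nutrient: str, per_100g: float) -> str:
    """Return 'green' (low), 'amber' (medium) or 'red' (high) for one nutrient."""
    low, high = CUTOFFS[nutrient]
    if per_100g <= low:
        return "green"
    return "red" if per_100g > high else "amber"

# Hypothetical meal-level data: one row per ready-meal, nutrients in g per 100 g.
meals = pd.DataFrame({
    "range":   ["healthier", "healthier", "standard", "standard", "luxury", "luxury"],
    "price":   [2.00, 1.80, 2.20, 2.40, 3.50, 3.00],
    "fat":     [2.5, 2.8, 9.0, 7.5, 19.0, 15.0],
    "sat_fat": [1.0, 1.2, 4.0, 3.5, 8.0, 6.5],
    "sugar":   [4.0, 4.8, 3.5, 5.5, 4.5, 6.0],
    "salt":    [0.5, 0.6, 1.2, 1.3, 1.6, 1.4],
})

# Number of front-of-pack nutrients rated 'low' (green) for each meal.
meals["n_green"] = meals.apply(
    lambda row: sum(traffic_light(n, row[n]) == "green" for n in CUTOFFS), axis=1)

# Non-parametric comparisons of the kind reported in the text.
print(kruskal(*[grp["fat"].values for _, grp in meals.groupby("range")]))
print(spearmanr(meals["price"], meals["fat"]))
```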
Ethical permission was not required for this study as it did not include any human or animal participants. Forty-one supermarkets met the inclusion criteria and were visited. Out of 360 potentially eligible meals, 166 were found and included in the analysis. There was no difference in the proportion of potentially eligible meals available by meal type, but there was evidence that availability varied by meal range. Meals in the standard range were most likely, and meals in the luxury range least likely, to be available. Table 2 summarises the total cost, weight and nutritional content of included meals and how these varied by meal range and type. Overall, meals cost a median of £2.20 and contained a median of 450 kcal. All variables, except sugar and fibre content, varied significantly across meal ranges. Cost was highest in luxury ranges and lowest in value ranges. However, value ranges also tended to be slightly lighter than other ranges. There was evidence that meals in ‘healthier’ ranges contained less total energy, fat, saturated fat and salt than meals in other ranges – indicating that they were ‘healthier’ on a number of parameters. However, fibre did not vary between meal ranges. In contrast, meals in the luxury ranges tended to have the least healthy profiles with the highest total energy, fat, saturated fat and salt content. Meals in value ranges were particularly low in protein. Although cost, weight and salt content did not vary significantly between meal types, all other aspects of nutritional content did. Total energy was lowest in fish pie and cottage pie and highest in macaroni cheese. Fat and saturated fat were lowest in sweet and sour chicken and highest in macaroni cheese. Sugar was lowest in fish pie and cottage pie, but highest in sweet and sour chicken. Protein was highest in chicken tikka masala and lowest in cottage pie. Fibre was highest in chicken tikka masala and cottage pie, but lowest in macaroni cheese. To take account of differences in product weight, Table 3 summarises the relative nutritional content per 100 g of product of included meals. In general, and despite variations in weight, differences across groups in Table 3 reflected those in Table 2. Shading in Table 3 reflects current UK guidance on front-of-pack nutrition labelling. Overall, meals were rated as medium for fat, high for saturated fat and salt, and low for sugar. As a group, meals in the ‘healthier’ ranges were low for fat, saturated fat and sugar, and medium for salt. Meals in luxury ranges were high for fat, saturated fat and salt, and low in sugar. Macaroni cheese meals had the least healthy profile, being rated, as a group, as high in fat, saturated fat and salt, but low in sugar. In contrast, sweet and sour chicken meals were low in fat and saturated fat but high in sugar and salt. Table 4 shows the number of meals in each category that were rated as low for one, two, three or all four of the front-of-pack nutrients. Overall, one-fifth of meals were low for all four nutrients. All meals were rated as low for at least one front-of-pack nutrient. The only significant differences in the number of front-of-pack nutrients that meals were low for were by meal range. No meals in the ‘luxury’ ranges were rated as low for all four nutrients and two-thirds were only low for one nutrient. In contrast, no meals in the ‘healthier’ ranges were low for just one nutrient and two-thirds were low
for all four.Correlations between price and both weight and nutritional content are shown in Table 5.Overall there was a strong positive correlation between price and protein content; moderate positive correlations between price and weight, energy and fat content; and weak positive correlations between price and saturated fat and fibre content.Similar correlations were seen within meal ranges and types.Meals that were low for one front-of-pack nutrient cost a median of £2.35, those that were low for two cost £2.20, those that were low for three cost £1.25 and those that were low for all four cost a median of £2.20.This is the first study we are aware of to systematically explore the nutritional content and cost of the full landscape of supermarket own-brand ready-meals.Across 41 branches of nine national supermarkets, we found 166 ready-meals that met our inclusion criteria.Nutritional content varied substantially according to meal range and type.Overall, meals were categorised as high in saturated fat and salt, and low in sugar according to current UK guidance for front-of-pack nutritional labelling.One-fifth of all meals were rated as low for all four front-of-pack nutrients, including two-thirds of meals in ranges specifically marketed as ‘healthier’, but none of the meals specifically marketed as ‘luxury’.The cost of meals was positively associated with weight, total energy, fat, saturated fat, protein and fibre.Meals that were rated as low for three out of the four front-of-pack nutrients were the cheapest, and those that met only one the most expensive.Our methods represent a significant improvement on previous methods used to study the nutritional content of supermarket ready-meals.Unlike previous work, we included a much fuller range of supermarket ready-meals currently available in the UK, identified using systematic methods, and provided a detailed analysis of nutritional content.In particular, we included a wider range of supermarkets, meal ranges, and meal types than previously.Uniquely we also explored the cost of meals, and associations between cost and nutritional content.The range of nutrients included does not include some important micro-nutrients, or other aspects of diet.In particular, we did not have information on the fruit and vegetable content of included ready-meals.This study was conducted in one city in Northern England.Whilst the nutritional content of supermarket ready-meals stated on packaging is unlikely to vary across the UK, there may be small variations in actual nutritional content from batch to batch, and between meals produced in different locations.We relied entirely on the nutritional content as stated on packaging and did not independently verify that this was accurate.In the UK, nutritional information on food packaging is permitted by law to vary by 20% from the values, to allow for fluctuation in manufacturing processes.It is, therefore, possible that there may be some error in the nutritional information we used.In order to capture variations in price across different branches of the same supermarket, we collected price data from all branches of included supermarket chains in the study city.This is likely to increase the generalisability of price data.It is also possible that ready-meal availability varies across the country and that studies in different cities would have identified different eligible meals.The 100% inter-rater agreement on all variables indicates that our data are likely to be highly reliable.Manufacturers of processed foods are 
constantly reformulating products. Similarly, the price of foods is not constant. We collected data over a period of only one week in order to avoid time-related changes in price or nutritional content. However, it is possible that both the price and nutritional content of supermarket own-brand ready-meals have changed in the time since data collection. We did not have any information on sales and were not able to take into account how popular different ready-meals in the sample were. Nor were we able to draw any conclusions on how the ready-meals in the sample contribute to the total diet of consumers: it is not necessarily the case that ready-meals are the least nutritious component of consumers' diets. Previous work has attempted to compare the absolute nutritional content of ready-meals with nutritional standards for meals. One important finding was that ready-meals often contain substantially fewer calories than is recommended for a meal. Whilst we considered conducting a similar analysis, ready-meals are probably more sensibly considered ‘ready-main-courses’ rather than complete meals. This may explain why the total energy content is less than might be expected. As such, we chose to use the cut-offs for front-of-pack nutritional labelling instead of whole-meal nutrient standards. These also allow for comparisons between products of different sizes. As in previous work, we found that ready-meals tended to be high in fat, saturated fat and salt. Unlike eating take-away meals at home, which are recognised by consumers to be less healthy, occasional ‘treats’, ready-meals are primarily seen as a convenient alternative to home cooking. Recent population-representative data from the UK suggest that around one-fifth of adults consume take-away meals at home once per week or more often. Although such high quality data on the frequency of consumption of ready-meals is not available, in 2006 it was estimated that 40% of UK households ate such meals at least once per week. The population impact and public health implications of ready-meals may, therefore, be much larger than those of take-aways – despite the latter receiving much more research and media attention. Further work exploring the relative contribution of different foods prepared outside the home, but consumed inside the home, to total diet will help guide intervention developers to the area most likely to achieve the largest population impact. We found that the nutritional content of ready-meals varied across meal range and type. In particular, we found that meals specifically labelled as ‘healthier’ were rated as low in fat, saturated fat and sugar and medium in salt overall, and were much more likely to achieve ‘low’ ratings of all four front-of-pack nutrients than meals in any other category. This suggests that healthier alternatives are available within the ready-meal sector and a simple public health message to avoid all ready-meals may be inappropriate. However, it is worth noting that meals in the ‘healthier’ ranges were not necessarily always ‘low’ in all four front-of-pack nutrients – although consumers may, perhaps, expect this. Further research is required to determine whether consumers are being misled by current branding and whether stricter rules are required on the circumstances in which ‘healthier’ branding can be used by food manufacturers. Qualitative research has found that being seen to eat healthily and eating foods overtly branded as ‘healthy’ can be socially damaging and is associated with being less popular in both young people and adults –
particularly those from less affluent backgrounds.Further work is required to understand how to increase consumer acceptance of healthier products.Avoiding overtly branding such products as ‘healthier’ could also be productive.It is difficult to untangle the relationship between consumer food preferences and manufactured food availability.It is possible that preferences are driven by what is available, or manufacturers may make available what is preferred.In practice, a combination of both of these two scenarios is likely to be operating.This suggests that changing the ‘food supply’ towards healthier manufactured food will not necessarily lead to changes in what consumers eat.It is possible that if the content of products is changed to be healthier, consumers will change what products they choose.The association of widespread reductions in the salt content in manufactured food across the UK with reductions in overall salt intake suggests that it is possible for healthy changes in ‘food supply’ to impact on population diets.The finding that healthier ready-meals are available reinforces the sophistication of the food industry in terms of product formulation.Previous studies have reported mixed findings in terms of reduction in salt content of ready-meals over time, suggesting that consistent progress is not taking place.Given that it is clearly possible to produce healthier ready-meals, more pressure could be placed on the ready-meal industry to improve the nutritional profile of all meals, and not just those specifically labelled as ‘healthier’.As the cost of meals in the ‘healthier’ ranges was not substantially greater than those in ‘standard’ ranges, improving the nutritional profile of ready-meals would seem to be unlikely to lead to any increase in cost to the consumer.We found that the cost of ready-meals was positively associated with weight, energy, fat, saturated fat, protein and fibre.Whilst fat and saturated fat are nutrients that, in population terms, we should be consuming less of, fibre is a nutrient that we should be consuming more of.Thus, consumers who choose more expensive ready-meals are, in general, receiving a mixed health benefit for this expense.This is reinforced by the finding that, in terms of nutrients included in front-of-pack labelling, the cheapest meals were those rated as low on three out of four of these nutrients.Supermarket ready-meals tend to be high in saturated fat and salt, medium in total fat, and low in sugar according to current UK guidance for front-of-pack nutritional labelling.However, nutritional content varied substantially and a number of meals that were low in all these nutrients were available, particularly amongst meals specifically marked as ‘healthier’.The cost of meals was positively associated with weight, energy, fat, saturated fat, protein and fibre, suggesting that consumers do not necessarily have to pay more for healthier meals.Further effort is required to encourage producers to improve the nutritional profile of the full range of ready-meals, and not just those specifically labelled as ‘healthier’.
Background: Over-reliance on convenience foods, including ready-meals, has been suggested as one contributor to obesity. Little research has systematically explored the nutritional content of supermarket ready-meals. We described the nutritional content and cost of UK supermarket ready-meals. Methods: We conducted a survey of supermarket own-brand chilled and frozen ready-meals available in branches of ten national supermarket chains in one city in northern England. Data on price, weight and nutritional content of meals in four ranges ('healthier', luxury, economy and standard) and of six types (macaroni cheese, meat lasagne, cottage pie, chicken tikka masala, fish pie, and sweet and sour chicken) were collected. Nutritional content was compared to ranges used to identify low, medium and high fat, saturated fat, sugar and salt in nationally recommended front-of-pack labelling. Results: 166 ready-meals were included from 41 stores. Overall, ready-meals were high in saturated fat and salt, and low in sugar. One-fifth of meals were low in fat, saturated fat, salt and sugar, including two-thirds of 'healthier' meals. Meals that were low for three out of the four front-of-pack nutrients were the cheapest. Conclusions: Supermarket ready-meals do not have a healthful nutritional profile overall. However, a number of healthier meals were available - particularly amongst meals specifically marked as 'healthier'. There was little evidence that healthier meals necessarily cost more. Further effort is required to encourage producers to improve the nutritional profile of the full range of ready-meals, and not just those specifically labelled as 'healthier'.
208
Effectiveness of cognitive behavioural therapy with people who have autistic spectrum disorders: A systematic review and meta-analysis
Autism spectrum disorders are a range of neurodevelopmental disorders characterised by difficulties with social communication and interaction across contexts, as well as restricted and repetitive patterns of behaviour, interests and activities. The phenotype incorporates a range of symptoms across multiple domains, including cognitive, behavioural, affective and sensory symptoms. Sleeping and eating difficulties, synaesthesia, as well as affective dysregulation, and difficulties with initiation, planning and organisation are often present. The prevalence amongst 4-year-olds has been estimated to be approximately 13.4 per 1000, while the adult prevalence has been estimated to be 9.8 per 10,000. There has been a marked increase in psychosocial interventions that aim to treat the symptoms or features of ASDs. In the United Kingdom, the National Institute for Health and Care Excellence recommended that people with ASDs should be offered age-appropriate psychosocial interventions for comorbid mental health problems and the core symptoms of ASDs. There are a large number of interventions claiming to treat symptoms of ASDs, even though the evidence base is poor. However, there is evidence to support the use of applied behaviour analysis in the treatment of symptoms of ASDs, and the authors of a Cochrane review concluded that early and intensive behavioural interventions can lead to improvements in adaptive and communicative behaviour, as well as social skills. Nevertheless, there are few studies examining the effectiveness of these types of interventions with adults, as opposed to children, with ASDs. Alongside this, psychiatric comorbidity amongst people with ASDs is elevated, prompting many to consider how to adapt and deliver psychological therapies for children, adolescents and adults with ASDs. Several meta-analytic or narrative reviews involving studies that recruited samples of children and adolescents have been completed in this area examining the effectiveness of cognitive behavioural therapy for anxiety disorders or social skills training. While all of the aforementioned studies have concluded that CBT and associated interventions for anxiety amongst children with ASDs appear to be promising, none have considered CBT across the lifespan. Further, none of the previously completed meta-analyses have: (1) considered CBT, as opposed to applied behavioural analysis, when used as a treatment for the actual symptoms or features of ASDs rather than for anxiety disorders; (2) included studies involving adult participants; or (3) included other affective disorders, such as depression, alongside anxiety disorders. In order to address these weaknesses, we completed a comprehensive meta-analysis and systematic review of the literature which aimed to investigate the effectiveness of cognitive behavioural therapy across the lifespan for either affective disorders more broadly, while focusing on anxiety disorders as well, or the symptoms and features associated with ASDs. A supplementary aim was to investigate whether there are differences in outcome for children, adolescents and adults. Relevant studies were identified by systematic searches of the following electronic databases: PsycINFO; MEDLINE; CINAHL Plus, Web of Science, as well as Google Scholar. The Cochrane Library was searched to identify any existing systematic reviews. The key search terms and how they were combined are found in Table 1. Terms were searched using English and American terminology, spelling, and truncation to ensure that all variant word
endings were identified. Alongside this, the ancestry method was used to identify any further papers that may have met eligibility criteria. The grey or fugitive literature was also searched in an attempt to minimise publication bias. An initial search was completed via http://www.opengrey.eu, which includes research reports, dissertations and conference papers. Dissertation Abstracts – International and the Comprehensive Dissertation Index were also searched, as well as trial registers. The final search for studies was completed on 29 January 2016. The review was registered with PROSPERO, an international database of systematic reviews in health and social care, in order to provide transparency to the review process and to avoid duplication of research effort. Initially, titles and abstracts were screened for eligibility, and studies were included if all of the following criteria were met: (1) participants had a diagnosis of Autism Spectrum Disorder, and the diagnosis was made by a qualified clinician and/or using a standardised diagnostic assessment; (2) the study used a control or comparison group design, e.g. waiting list or treatment as usual, with or without randomisation; (3) a clinician-led CBT intervention, either individual or group-based, incorporating both cognitive and behavioural components was used (interventions in which CBT theory and principles were utilised to teach or improve behavioural patterns, e.g. social skills, were included, provided that this was explicitly stated); (4) at least one validated and standardised outcome measure was used, covering either core features of ASDs, i.e. difficulties in social interaction, impaired social communication or restricted or repetitive patterns of behaviour and interests, or co-occurring symptoms of mental disorder, e.g. anxiety or depression; and (5) the study was written in English. Studies that aimed to treat affective disorders or symptoms of ASDs were analysed separately for two reasons: the “target” of the intervention differed, with one group of studies focusing on trying to treat symptoms of affective disorders while the other attempted to reduce difficulties or symptoms associated with having an ASD; and CBT for either incorporated psychoeducation, skills teaching, skills practice, behavioural experiments, and cognitive restructuring. However, the description of the interventions across studies was at times sparse, and it was at times difficult to ascertain the degree to which cognitive restructuring was used within some of the interventions. As a consequence, it was clear that the intervention incorporated both cognitive and behavioural components for some studies, while for others this was less clear, although in all instances the interventions were described by the authors as using both cognitive and behavioural methods. However, it is important to bear in mind that CBT incorporates both cognitive and behavioural components, although for some disorders there is a clear focus on behavioural interventions when delivering CBT. We excluded any studies that made use of behavioural methods alone. Studies were excluded if any of the following criteria were met: (1) the methodology used was a single case, case series, qualitative study, meta-analysis or review article; (2) the design of the study was such that the effect of the CBT intervention could not be isolated from other treatment methods, e.g.
psychotropic medication; (3) the primary intervention was applied behavioural analysis, behaviour modification, or behavioural activation as a stand-alone treatment; and (4) the dataset had been used within a previously included study (to avoid double counting of data). No limits were applied to the date of publication, the age of participants or whether the study had been published in a peer-reviewed journal. Studies that were non-randomised were not excluded. While this represents an inherent weakness by increasing the risk of bias, the decision was made to include non-randomised studies at this stage considering the likelihood that few definitive trials within this area have been completed. Following the removal of duplicate studies, the systematic search of the electronic databases returned 2332 potentially eligible studies. Following an initial screen of the titles and abstracts, 2263 were excluded. In addition to the remaining 69 studies, a further 102 were identified using the ancestry method, and two were located from searching the grey literature. The resulting total number of papers retrieved was 173, six of which were protocols. The authors of protocols were contacted directly to try to source outcome data; two of these research groups provided data, while the remaining four did not respond and were excluded. A further 107 papers were excluded because they did not include a comparison or control group, five were excluded because they had made use of a pre-existing dataset that had been previously included, four were excluded because they did not include cognitive-behavioural components within the intervention, one was excluded due to a lack of validated or standardised outcome measures, one was excluded because the effects of CBT could not be isolated and one was excluded because we were unable to trace the paper. The remaining 50 studies met the eligibility criteria, although two studies were excluded at this stage because the published data were insufficient and we could not calculate effect sizes; the authors did not respond to our request for further data. Forty-eight studies, involving 2099 participants, were therefore included in the quantitative synthesis. Fig. 1 depicts a PRISMA flow diagram, outlining the identification, screening and inclusion or exclusion of articles throughout the process. Reasons for article rejection are clearly indicated. The eligibility criteria were applied by two authors independently, and inter-rater reliability was excellent, 96.5%, k = 0.92, 95% CI. The standardised mean difference was calculated to estimate the difference between the treatment and control conditions. Cohen's d was transformed into Hedges' g using the correction factor J to correct for possible positive bias due to small sample sizes.
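As an aside for readers unfamiliar with the correction factor J, the sketch below shows one standard way of computing Hedges' g and an approximate variance from group summary statistics. It is an illustration of the general formula rather than the exact RevMan implementation, and the example numbers are hypothetical.

```python
import math

def hedges_g(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Cohen's d from a pooled SD, corrected to Hedges' g via the factor J."""
    df = n_t + n_c - 2
    sd_pooled = math.sqrt(((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2) / df)
    d = (mean_t - mean_c) / sd_pooled
    j = 1.0 - 3.0 / (4.0 * df - 1.0)      # small-sample correction factor J
    g = j * d
    # A common large-sample approximation to the variance of d, carried over to g.
    var_d = (n_t + n_c) / (n_t * n_c) + d ** 2 / (2.0 * (n_t + n_c))
    return g, j ** 2 * var_d

# Hypothetical group summaries (post-treatment anxiety scores; lower = fewer symptoms).
g, var_g = hedges_g(mean_t=22.0, sd_t=6.0, n_t=25, mean_c=28.0, sd_c=7.0, n_c=24)
print(f"g = {g:.2f}, SE = {math.sqrt(var_g):.2f}")
```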
"The magnitude of Hedge's g was interpreted using Cohen's convention as small, medium, and large.The variance and standard error of g for each study was calculated.As outcome measures may take the form of self-, clinician- or informant-reports, and there is evidence to suggest that people with ASD may have difficulties with judging their own social or communicative behaviour effect sizes were calculated individually for each type of outcome measure where possible.In this context, an informant-based outcome measure was a rating of clinical symptomatology provided by a third party who was not the clinician or the participant.Often, this person was a family member.The analysis was undertaken using RevMan Version 5.3.A random effects model was used for the following reasons: heterogeneity was anticipated as data came from a variety of sources and we could not assume a common effect size; and inferences made from random effects models are unconditional and can be applied to a population of studies larger than the sample.Heterogeneity was thought to be associated with whether CBT was delivered as a group or individually, the age range of participants, and symptom severity.This was explored using the I2 statistic, which describes the percentage of variation across studies due to heterogeneity, rather than chance."The I2 statistic has been chosen rather than Cochran's Q since it enables quantification of the effect of heterogeneity, providing a measure of the degree of inconsistency in results, and it does not inherently depend on the number of studies included in the meta-analysis.The degree and impact of heterogeneity was assessed using the categorisation of low, medium and high, in addition to a quality assessment of the methodology.A sensitivity analysis was also undertaken.Outliers were removed and the weighted mean effect size was recalculated.Publication bias was assessed graphically using funnel plots, plotting summary effect size against standard error; a skewed and asymmetrical plot may indicate a publication bias.Fail-safe N was used to assess the impact of bias by calculating an estimate of the number of new studies averaging a null result that would be required to bring the overall treatment effect to non-significance.A figure exceeding 5n + 10 would indicate that the results could be considered robust to the effects of publication bias.Quality appraisal of included studies was undertaken by two authors independently using the National Institute for Health and Care Excellence Quality Appraisal Checklist for Quantitative Intervention Studies, bearing in mind that the use of such scales has been criticised in the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidance.There was ‘moderate’ agreement between the two authors for internal validity, 72.0%; k = 0.48; 95% CI , and ‘good’ agreement for external validity, 84.0%; k = 0.66; 95% CI .The key characteristics of the 50 included studies are found in Appendix A, while the summary quality appraisal ratings for each study are found in Appendix B.A persistent problem across all studies was small sample size, contributing to reduced power.Freitag et al. 
included the highest number of participants, whilst eight of the studies included in the quantitative synthesis involved less than ten participants per group.Several of these studies were defined by the authors as pilot or feasibility trials.However, a number of studies that were not called pilot or feasibility trials, were in fact lower in quality and had smaller sample sizes than many clearly defined pilot or feasibility trials.Quality appraisal and risk of bias were therefore considered on a study by study basis and sensitivity analysis was conducted by removing studies deemed to be at high risk of bias, rather than those labelled as pilot or feasibility trials.Other common problems included the lack of reporting on participant engagement within intervention sessions, poor reporting on missing data, and minimal information on fidelity checks.Very few studies reported adequate allocation concealment and ten of the studies included in meta-analysis were non-randomised, contributing to a high risk of allocation bias.Due to the nature of the interventions involved, it is not possible for investigators to blind participants to intervention allocation.However, blinding of outcome assessors was possible but was not conducted in the majority of studies, contributing to detection bias.A final common difficulty across studies was failure to specify a primary outcome measure.This complicated the meta-analysis, particularly in studies where a high number of outcome measures were utilised or different measures were used to assess a range of constructs, because we were left to make the decision as to which outcome measure to use within our meta-analysis.We made this decision based upon the predominant hypothesis or research question under investigation.For example, where a study aimed to investigate the effectiveness of an intervention for social skills, we chose the instrument that was used to measure social skills so that the study could be included in our meta-analysis.In some circumstances, researchers made use of more than one measure which was associated with the predominant hypothesis; in these instances, we chose the most commonly used measure across studies in an attempt to reduce heterogeneity.Where there were no commonalities across studies, the authors did not specify their primary outcome measure, and there were multiple measures used, we chose the primary outcome measure at random.The lack of measures validated for use with individuals with ASD was noted, although this is clearly a wider issue that needs attention.Twenty-four of the included studies aimed to examine the effectiveness of CBT for affective disorders, with the bulk attempting to treat anxiety disorders, with others targeting depression or emotion regulation difficulties.Seventeen of these studies involved children and adolescents, whilst four included adult participants.Three studies included both adolescent and adult participants and were therefore assigned to a ‘Mixed Age’ subgroup for analysis.Fifteen of the 24 studies examined group-based CBT, whilst eight reported on individual CBT.The remaining study involved 21 group sessions, as well as three individual sessions.Since this study was predominantly group-based, the decision was made to include it in the ‘group-based’ subgroup when analysing mode of CBT delivery.The majority of studies targeted anxiety.As this was such a large group, a subgroup analysis was conducted to assess potential variations of treatment effects across age groups within this subset of 
studies. This included studies investigating the treatment of anxiety disorders that had been included in earlier meta-analytic work, but also included additional studies; two studies targeting symptoms of obsessive compulsive disorder were also included within this subset, as was a study investigating depression, anxiety and rumination and a study investigating depression, anxiety and stress. In the latter two studies, only outcomes pertaining specifically to anxiety were used to reduce heterogeneity within the quantitative synthesis as much as possible. In total, 19 studies were included within the anxiety subset. Of the remaining five studies, one targeted anger, one targeted general emotional regulation skills, one targeted insomnia, one targeted self-esteem, quality of life and sense of coherence and one targeted stress and emotional distress. Fourteen studies were defined as randomised controlled trials, seven of which compared a CBT intervention with a waitlist control group, and three compared CBT to treatment as usual. Three randomised controlled trials compared CBT to a non-CBT group-based treatment: either a social recreational program or an anxiety management group. The final randomised controlled trial compared a CBT group to a group which received a placebo drug. This study also included a condition in which participants received melatonin and a condition in which participants received both melatonin and CBT. Participants from these intervention arms were not included as the use of a drug-based comparison group was not utilised in any other included study. Three of the 24 studies investigating CBT for the treatment of affective disorders were quasi-experimental or non-randomised, whilst seven were called pilot studies. Three of the seven pilot studies within this group were randomised, whilst four were not, and six compared a CBT intervention to a waitlist control group, whilst one compared CBT to treatment as usual. As anticipated, there was extensive variation in the outcome measures used across studies. Many studies included outcome measures from various sources, with the most common report type being self-report within studies targeting co-occurring symptoms of affective disorder, followed closely by informant-report outcomes and clinician-rated outcomes. Only one study within this group used a task-based outcome measure. There was also considerable variation in the intensity and content of intervention. The number of sessions ranged from four to 50, whilst the length of each session ranged from 40 to 180 min. The majority of studies used a structured protocol, with 21 of the studies utilising “traditional” CBT methods, with common components including role play, exposure and teaching/rehearsal of emotional regulation skills. Common adaptations to CBT included an increased emphasis on behavioural rather than cognitive components, the use of social stories and vignettes and increased involvement of family members. One of the studies piloted a videoconferencing CBT intervention designed for delivery in a small, multi-family group format, whilst another study used a modified version of Mindfulness Based Therapy with cognitive elements omitted. Another used a modified Acceptance and Commitment Therapy protocol, and participants in the CBT group engaged in daily mindfulness exercises in addition to structured intervention sessions. There were 24 included studies that examined the effectiveness of CBT for symptoms or features of ASD. One study investigated both the effect of CBT on social skills and
anxiety, and the outcomes pertaining to social skills were included in the meta-analysis. Another intervention study focused upon both social communication and anxiety, but the findings were reported in two separate papers; the decision was made to exclude Fujii et al. as inclusion would have led to the double counting of data. Provencal and DeRosier et al. were excluded as attempts to obtain the data required to calculate effect sizes were unsuccessful. The majority of studies targeted social skills, while of the remaining six studies, four targeted Theory of Mind, one targeted affectionate communication and one targeted the perception of facial emotions. A number of studies targeted both social skills and aspects of social cognition. In these circumstances, the primary outcome measure was included, but there was extensive variation in outcome measures across studies. In situations in which the primary outcome measure was not specified, only outcome measures pertaining to social skills were included to avoid comparisons of different constructs across report types. The most common type of outcome measure was informant-report, followed by self-report. In contrast to studies investigating the effectiveness of CBT for affective disorders, seven studies within this group utilised task-based measures, for example Theory of Mind tasks. Fourteen of the studies were randomised controlled trials, one of which is the only Phase III trial in this area to date. This study compared CBT to treatment as usual, whilst thirteen of the RCTs compared a CBT intervention with a waitlist control group. The final RCT compared CBT to a facilitated play active control group. Three of the remaining ten studies were quasi-experimental or non-randomised, and seven were labelled pilot studies. These studies were included in the initial analysis, but the quasi-experimental studies involved a variety of control groups: Ozonoff and Miller compared CBT to no treatment, Laugeson, Frankel, Gantman, Dillon, and Mogil used a waitlist control group and Laugeson, Ellingsen, Sanderson, Tucci, and Bates and Laugeson and Park reported the use of an active control group based on a non-CBT social skills curriculum. Three pilot studies used a waitlist control group, two compared CBT to treatment as usual and one compared CBT to “no intervention”. The remaining study reported the use of an active control group with sessions consisting predominantly of leisure activities. Six of the seven pilot studies within this group were randomised, whilst the remaining study was quasi-experimental. There was considerable variation in the intensity and content of intervention. The number of sessions ranged from five to 70, with Laugeson et al.
reporting on an intervention in which children received 30-minute sessions five days per week over a period of 14 weeks. The length of each session ranged from 30 min to whole-day sessions. The majority of studies investigating the effectiveness of CBT for core features of ASD used a structured protocol. In terms of treatment content, studies within this group less commonly reported “traditional” CBT methods. Some studies did not directly refer to cognitive behavioural therapy per se, but they explicitly mentioned the inclusion of both cognitive and behavioural techniques in the intervention, and therefore met inclusion criteria for the current study. Content commonly included direct social skills teaching and role play, emotional identification work and problem-solving exercises or discussions. Common adaptations included increased use of social stories and vignettes, increased use of role play and the involvement of family members in intervention sessions and homework activities. Seventeen studies, including 645 participants, included self-reported outcome measures. One study utilised a relevant self-reported outcome measure but it was not possible to include this in the analysis as an attempt to obtain the data necessary to calculate the effect size was unsuccessful. The outcome measures used varied considerably across studies. A random-effects meta-analysis of these trials indicated a small to medium but non-significant effect favouring CBT over waiting-list, treatment as usual or active control as reported by participants, g = 0.24; 95% CI, z = 1.6, p = 0.11. The analysis revealed a significant amount of heterogeneity, with I² indicating that 69% of the variability in estimated treatment effect was due to heterogeneity rather than chance, p < 0.001. As one study had an SMD considerably higher than the other included studies, in which g ranged from −0.39 to 0.85, a sensitivity analysis was conducted and this outlier was removed. Exclusion of this study resulted in no significant treatment effect, g = 0.10; 95% CI, z = 1.21, p = 0.23, and I² reduced markedly to 4%, p = 0.41, indicating the considerable impact that the inclusion of this study had on the pooled SMD. A further sensitivity analysis to remove studies deemed to be at a high risk of bias resulted in a very similar effect, g = 0.09; 95% CI, z = 0.84, p = 0.40.
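Purely for illustration, the sketch below re-implements the kind of random-effects pooling, I² statistic and Rosenthal fail-safe N reported throughout this section, using the standard DerSimonian–Laird estimator; the study-level values are placeholders, not data from the included trials, and the study itself used RevMan rather than code like this.

```python
import math

def random_effects(effects, variances):
    """DerSimonian–Laird random-effects pooling with Q, tau² and I²."""
    w = [1.0 / v for v in variances]                       # fixed-effect weights
    fixed = sum(wi * gi for wi, gi in zip(w, effects)) / sum(w)
    q = sum(wi * (gi - fixed) ** 2 for wi, gi in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                          # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]         # random-effects weights
    pooled = sum(wi * gi for wi, gi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0    # % variability due to heterogeneity
    return pooled, se, i2

def failsafe_n(z_values):
    """Rosenthal's fail-safe N: number of null studies needed to push the
    combined z below 1.645 (one-tailed p = .05)."""
    z_sum = sum(z_values)
    k = len(z_values)
    return max(0, math.ceil((z_sum ** 2) / (1.645 ** 2) - k))

# Placeholder study-level Hedges' g values and variances.
g_values = [0.35, 0.52, 0.10, 0.80, 0.44]
g_vars = [0.04, 0.06, 0.05, 0.09, 0.03]
pooled, se, i2 = random_effects(g_values, g_vars)
print(f"pooled g = {pooled:.2f}, z = {pooled / se:.2f}, I² = {i2:.0f}%")
print("fail-safe N:", failsafe_n([g / math.sqrt(v) for g, v in zip(g_values, g_vars)]))
```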
Eleven studies were included within our sub-group analysis focusing on the treatment of anxiety disorders using self-report measures. A random effects meta-analysis of these trials revealed a non-significant small to medium effect size, g = 0.32; 95% CI, z = 1.50, p = 0.13. The analysis revealed a significant amount of heterogeneity, with I² indicating that 77% of the variability in estimated treatment effect was due to heterogeneity rather than chance, p < 0.001. To complete our sensitivity analysis, Chalfant et al. was removed as it was judged to be an outlier, which reduced the effect size to g = 0.08; 95% CI, z = 0.79, p = 0.43, I² = 0%, p = 0.63. Removing two further studies judged to be at high risk of bias reduced the effect size further, g = 0.01; 95% CI, z = 0.12, p = 0.90, I² = 0%, p = 0.78. The remaining six studies involved using CBT in the treatment of OCD; depression, anxiety and rumination; self-esteem; stress and emotional distress; and depression, anxiety and stress. Collectively, these studies were associated with a non-significant small effect size, g = 0.12; 95% CI, z = 0.66, p = 0.51. However, heterogeneity was not significant, I² = 41%, p = 0.13. Removal of three studies judged to be at high risk of bias increased the effect size to g = 0.27; 95% CI, z = 0.81, p = 0.42, I² = 66%, p = 0.05. Sixteen studies, including 620 participants, made use of informant-reported outcome measures. One study utilised a relevant informant-reported outcome measure but was excluded because we did not obtain the data necessary to calculate the effect size. The outcome measures used varied considerably across studies. The meta-analysis of these trials indicated a significant medium effect favouring CBT over waiting-list, treatment as usual or active control as reported by informants, g = 0.66; 95% CI, z = 3.49, p < 0.001. The analysis indicated a significant amount of heterogeneity, with I² indicating that 78% of the variability in estimated treatment effect was due to heterogeneity rather than chance, p < 0.001. Again, Chalfant et al. had an SMD, g = 4.27, considerably higher than the other included studies, in which g ranged from −0.39 to 1.21, and a sensitivity analysis was therefore conducted to remove this outlier. Exclusion of this study resulted in a lower treatment effect, g = 0.47; 95% CI, z = 4.17, p < 0.001, although it remained statistically significant. I² reduced to 38%, p = 0.07, again indicating the impact that the inclusion of this study had on the pooled SMD. A further sensitivity analysis to remove studies deemed to be at a high risk of bias resulted in a very similar effect, g = 0.45; 95% CI, z = 3.24, p = 0.001. Focusing only on the twelve studies that aimed to treat anxiety, our meta-analysis revealed that CBT was associated with a large effect size, g = 0.80; 95% CI, z = 3.42, p < 0.001. The analysis indicated a significant amount of heterogeneity, I² = 80%, p < 0.001.
A further sensitivity analysis, with Chalfant et al. removed, reduced the effect size to g = 0.49; 95% CI, z = 4.74, p < 0.001, I² = 2%, p = 0.42. Removal of studies judged to be at high risk of bias led to a further reduction in effect size, g = 0.46; 95% CI, z = 3.46, p < 0.001, I² = 22%, p = 0.25. The remaining four studies making use of informant-ratings focused on CBT as a treatment for anger, emotion regulation (including anger and anxiety), insomnia, or OCD. Collectively, they were associated with a small to medium non-significant effect size, g = 0.28; 95% CI, z = 0.82, p = 0.41. A significant amount of heterogeneity was also found, I² = 75%, p < 0.001. We removed Scarpa and Reyes as this study was deemed to be at a high risk of bias, and the effect size increased, g = 0.36; 95% CI, z = 0.88, p = 0.38, I² = 83%, p < 0.05. Thirteen studies, including 514 participants, made use of clinician-rated outcome measures, but there was substantial variation in the choice of measure. Two of these studies presented dichotomous data. In order to include these studies in a random-effects meta-analysis, the odds ratio was calculated and re-expressed as an SMD. A random-effects meta-analysis using the Generic Inverse Variance method was conducted, as estimates of effect were calculated for the two aforementioned studies. The analysis indicated a significant medium effect favouring CBT over waiting-list, treatment as usual or active control as rated by clinicians, g = 0.73; 95% CI, z = 4.05, p < 0.001. The analysis again indicated a significant amount of heterogeneity, with I² indicating that 69% of the variability in estimated treatment effect was due to heterogeneity rather than chance, p < 0.001. Two studies had SMDs, g = 2.51 and g = 2.47 respectively, considerably higher than the other included studies, in which g ranged from −0.31 to 1.38, and a sensitivity analysis was conducted to remove these outliers. Exclusion of these studies resulted in a lower treatment effect, g = 0.52; 95% CI, z = 4.06, p < 0.001, although it remained statistically significant. I² reduced to 36%, p = 0.11, again indicating the impact that the inclusion of these studies had on the pooled SMD. A further sensitivity analysis to remove studies deemed to be at a high risk of bias resulted in a very similar effect, g = 0.59; 95% CI, z = 4.48, p < 0.001. Turning to consider only anxiety, based on clinician-rated outcomes, CBT was associated with a significant large effect size, g = 0.86; 95% CI, z = 4.37, p < 0.001, across the eleven included studies. Heterogeneity was high, I² = 69%, p < 0.001. We removed Chalfant et al., and this decreased the effect size, g = 0.60; 95% CI, z = 4.59, p < 0.001, I² = 27%, p = 0.20, while removing studies judged to be at high risk of bias then increased the effect size, g = 0.63; 95% CI, z = 4.25, p < 0.001, I² = 35%, p = 0.15. The remaining two studies both investigated CBT as a treatment for OCD, and these were associated with a non-significant small effect size, g = 0.08; 95% CI, z = 0.23, p = 0.82. As only one study made use of this type of outcome measure, it was not possible to calculate the pooled SMD.
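Where a study reported only dichotomous outcomes, the odds ratio was re-expressed as an SMD before pooling, as noted above. One widely used conversion assumes an underlying logistic distribution, d = ln(OR) × √3/π; the sketch below illustrates that general approach with placeholder numbers and is not necessarily the exact conversion used in the review.

```python
import math

def log_odds_to_smd(odds_ratio: float, se_log_or: float):
    """Convert an odds ratio to an approximate standardised mean difference
    using d = ln(OR) * sqrt(3) / pi (logistic-distribution assumption)."""
    d = math.log(odds_ratio) * math.sqrt(3.0) / math.pi
    var_d = (se_log_or ** 2) * 3.0 / (math.pi ** 2)
    return d, var_d

# Placeholder values: an odds ratio of 3.5 with a log-OR standard error of 0.45.
d, var_d = log_odds_to_smd(3.5, 0.45)
print(f"SMD ≈ {d:.2f} (SE ≈ {math.sqrt(var_d):.2f})")
```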
Nine studies investigated the effectiveness of CBT in treating symptoms associated with ASD and included appropriate self-reported outcome measures. As indicated in Fig. 5, a random-effects meta-analysis of these trials indicated a small, but non-significant effect favouring CBT over waiting-list, treatment as usual or active control, as reported by participants, g = 0.25; 95% CI, z = 1.77, p = 0.08. Heterogeneity was not significant, although I² indicated that 40% of the variability in estimated treatment effect was due to heterogeneity rather than chance, p = 0.10. A sensitivity analysis to remove studies deemed to be at a high risk of bias resulted in no significant treatment effect, g = 0.10; 95% CI, z = 0.58, p = 0.56. Eighteen studies were included in this analysis, revealing a significant small effect favouring CBT over waiting-list, treatment as usual or active control as reported by informants, g = 0.48; 95% CI, z = 5.39, p < 0.001. Heterogeneity was not significant, although I² indicated that 36% of the variability in estimated treatment effect was due to heterogeneity rather than chance, p = 0.06. A sensitivity analysis to remove studies deemed to be at a high risk of bias resulted in a slightly larger medium treatment effect, g = 0.52; 95% CI, z = 5.63, p < 0.001, with a small reduction in heterogeneity, I² = 33%, p = 0.12. Six studies, including 153 participants, were included. One of these studies presented the outcome as dichotomous data, and therefore the odds ratio was calculated and expressed as an SMD; the estimate of effect was calculated using the generic inverse variance method. The analysis indicated a significant “medium” effect favouring CBT over waiting-list, treatment as usual or active control as rated by clinicians, g = 0.65; 95% CI, z = 2.30, p = 0.02. Heterogeneity was non-significant, although I² indicated that 47% of the variability in estimated treatment effect was due to heterogeneity rather than chance, p = 0.10. One study had an SMD, g = 2.43, considerably higher than the other included studies, in which g ranged from 0.08 to 1.51. Removing this outlier resulted in a lower treatment effect, g = 0.47; 95% CI, z = 2.40, p = 0.02, although it remained statistically significant. I² reduced to 1%, p = 0.40, indicating the considerable impact that the inclusion of this study had on the pooled SMD. A further sensitivity analysis to remove studies deemed to be at a high risk of bias resulted in a very similar but lower and non-significant treatment effect, g = 0.44; 95% CI, z = 1.90, p = 0.06. It is highly likely that this is related to the fact that the exclusion of the above studies left only two studies in the analysis, and as such, this analysis should be interpreted with marked caution. Seven studies, incorporating 237 participants, were included in this analysis, which revealed a significant small effect in favour of CBT over waiting-list, treatment as usual or active control on task-based measures, g = 0.35; 95% CI, z = 2.67, p = 0.008. Heterogeneity was not an issue, I² = 0%, p = 0.58. Removing studies deemed to be at a high risk of bias resulted in a very similar non-significant effect size, g = 0.30; 95% CI, z = 1.42, p = 0.16. Again, it is highly likely that this is related to the fact that the exclusion of the above studies left only three studies in the analysis, which should therefore be interpreted with marked caution. Further subgroup analysis using self-report outcome measures was not completed because our initial analysis indicated that CBT was not superior to control conditions when used to treat either affective disorders or symptoms associated with autism. While there were 16 studies that made use of informant-report outcome measures when
treating affective disorders, none of these included adult participants, and only one study looking at the treatment of symptoms related to autism included adult participants. As such, a subgroup analysis based on informant-report outcome measures was not completed. Subgroup analysis using clinician-rated outcome measures across different age groups was possible, but only for studies that aimed to treat affective disorders. There was substantial variability that appeared to be due to genuine subgroup differences rather than sampling error, I2 = 80.2%, p = 0.006, and a large combined effect size in favour of CBT for studies involving children and adolescents, g = 0.95; 95% CI , z = 4.64, p < 0.001, but not for studies involving adults, g = −0.04; 95% CI , z = 0.15, p = 0.88. Exclusion of two outliers from the studies involving children and adolescents resulted in a lower but significant effect size, g = 0.67; 95% CI , z = 5.28, p < 0.001. The comparison between studies involving children, adolescents and adults is inherently problematic and should be interpreted cautiously because only two studies involving adults were included. Visual inspection of Funnel plots did not reveal significant asymmetry for self-reported outcome measures used within studies that aimed to treat affective disorders. Fail-safe N was not calculated because CBT was not superior to control conditions. A similar analysis could not be completed for studies that focused on symptoms related to autism because there were fewer than ten. Turning to informant-based outcome measures, used both for studies that focused on affective disorders and for those that focused on symptoms associated with autism, no significant asymmetry was found. For studies involving affective disorders, 281 new studies averaging a null result would be required to bring the overall treatment effect to non-significance. For studies targeting symptoms related to autism, 287 new studies averaging a null result would be needed to bring the overall treatment effect to non-significance. These figures exceed 5n + 10, and the conclusion that these findings are robust to publication bias is valid. Considering clinician-rated outcome measures, there was no significant asymmetry for studies that treated affective disorders, while a Funnel plot was not created for studies that treated symptoms of autism because there were fewer than ten. Fail-safe N revealed that 227 new studies averaging a null result would be needed to bring to non-significance the treatment effect calculated using clinician-rated outcome measures taken from studies that treated affective disorders. The effect calculated using clinician-rated outcome measures taken from studies treating symptoms associated with autism would become non-significant if only 18 papers averaging a null effect were published, suggesting that this finding may be subject to publication bias and is influenced by the small number of papers in this area. For task-based outcome measures, it was not possible to examine studies that treated mental disorders, and a Funnel plot could not be created for studies that focused on symptoms related to autism because there were fewer than ten papers. However, fail-safe N revealed that only 5 new studies averaging a null effect size would bring the overall treatment effect to non-significance. This means that publication bias may feature, and the conclusions are heavily influenced by there being relatively few papers. The results of the meta-analysis indicated that cognitive behavioural therapy is associated with a small to medium
effect size when used to treat co-morbid affective disorders in children, adolescents, or adults who have ASDs, but this varied according to whether the outcome data were taken from self-report, informant-report, clinician-report, or task-based measures. CBT was associated with a small and non-significant effect size, g = 0.24, when the analysis was completed using self-report measures, and with significant heterogeneity; when studies at risk of bias were excluded, resulting in low heterogeneity, treatment was associated with a small non-significant effect size, g = 0.09. CBT was superior to control conditions when the analysis was completed with either informant- or clinician-report measures, both being associated with a medium effect size, but there was significant heterogeneity; sensitivity analyses reduced heterogeneity and revealed that CBT remained superior, with medium effect sizes of g = 0.45 and g = 0.59, respectively. Turning to consider CBT for symptoms associated with ASDs, the findings from the meta-analysis were very similar to those found for CBT when used to treat co-morbid affective disorders. CBT, when used as a treatment for the symptoms of ASDs rather than affective disorders, was associated with an effect size that ranged from small to medium, again dependent upon the type of outcome measure used. Using data from self-report measures, CBT was associated with a small non-significant effect size, g = 0.25, and while heterogeneity was not significant, excluding studies at risk of bias to reduce heterogeneity reduced the effect size; it remained small and non-significant, g = 0.10. There was evidence that CBT was significantly beneficial when the analysis was based on informant-report measures, resulting in a small effect size, g = 0.48, which increased to medium following our sensitivity analysis to account for heterogeneity, g = 0.52. Considering clinician-report measures, CBT was found to be significantly superior, and associated with a medium effect size, g = 0.65. Following the exclusion of studies thought to be at risk of bias to reduce heterogeneity, CBT was no longer superior, and was associated with a non-significant medium effect size, g = 0.44. Task-based measures, which are both less subjective and completed by the participant, were also evaluated to determine whether CBT is an effective treatment for symptoms of ASDs. The initial findings were significantly in favour of CBT as an effective treatment, associated with a small effect size, g = 0.35, but the exclusion of studies thought to be at higher risk of bias led to a non-significant treatment effect, falling in the small range, g = 0.30. Sub-group analysis based on the age of the participants was not completed for self-report measures, as there was no evidence that CBT was superior to control conditions, nor was this possible for informant-based measures, as few studies involving adults also included an informant-based measure. It was only possible to undertake a sub-group analysis for the treatment of affective disorders based on clinician-report measures, and the findings indicated that CBT was superior and associated with a large effect size, g = 0.95, when used with children and adolescents, which reduced to a medium effect size, g = 0.67, following our sensitivity analysis. These effect sizes are lower than those previously reported by Sukhodolsky et al.
and Kreslins et al., with both previous meta-analyses having included fewer studies. Turning to consider adults, the results indicated that CBT was not superior to control conditions and was associated with a small effect size, g = 0.04; interpreting this result is problematic because it is based on only two published studies. Within the current meta-analysis, and those completed previously which focused on the treatment of anxiety amongst children and adolescents, there are substantial differences in treatment efficacy dependent upon the type of outcome measure included within the analysis. Self-report measures, in contrast to informant- and clinician-report measures, are not reliably associated with significant change following treatment. Within the current meta-analysis, this was the case for studies involving children, adolescents or adults who received treatment for affective disorders more broadly. This was also the case for studies where CBT was used to treat the symptoms of ASDs. As discussed previously by both Sukhodolsky et al. and Kreslins et al., it may be the case that individuals with ASDs have difficulties with reporting symptoms because of the associated developmental challenges faced by this population, leading to difficulties with reliably reporting symptoms. Interestingly, Kreslins et al. suggested that children with ASDs may confuse symptoms of anxiety and ASDs, which may lead to difficulties with completing self-report measures of anxiety. However, it is apparent that adults with ASDs also have these difficulties, as while there are few trials involving adults, those that have been completed encountered similar difficulties with the use of self-report measures. Alongside this, trials of CBT used to treat symptoms of ASDs, rather than affective disorders, have also encountered similar difficulties with self-report measures. It is probable that individuals with ASDs find self-report measures difficult because of their associated developmental problems, and further work regarding the development of valid and reliable measures for use with this population is needed. However, it must also be mentioned that perhaps CBT does not bring about change for individuals with ASD, and that the results using both informant- and clinician-report measures have been subject to an observer-expectancy effect, considering that it is very difficult to mask informants, and not all studies made use of masked assessors, introducing significant bias. While this may not explain all the variability within the data, it has a role to play, and as such, it is vitally important that future trials ensure that they make use of masked assessors and have satisfactory arrangements for independent data management. Related to these difficulties, there were a variety of issues associated with the included studies, highlighted by the quality appraisal, which need to be considered further. First, the majority of the included studies involved small samples, and trials labelled as feasibility or pilot trials often had larger sample sizes than studies that were not identified as either a feasibility or pilot trial. Eight of the studies included in this meta-analysis had fewer than ten participants per group. This is problematic, as there are no large-scale definitive trials in this area making use of robust methodologies. As such, the conclusions reached within this meta-analysis, and previous meta-analyses, are potentially limited. This does not mean that the conclusions are entirely invalid, but it does allow some questions to be raised
about validity, which could be addressed in the future with the completion of several large-scale definitive trials by different research groups around the world. Related to these issues, the study by Chalfant et al. tended to have a relatively higher standardised mean difference. While this was a randomised trial, the assessors were not masked, and in fact were the therapists who carried out the intervention. Considering the lack of blinding and independent data management within this study, there is an inherent increased risk of bias. Several other studies included within this meta-analysis also had relatively higher standardised mean differences, and the majority of them did not make use of independent data management and analysis, something we would strongly recommend for future trials in this area. Second, studies often did not report sufficient information regarding participant engagement and fidelity, while third, there were issues with adequate allocation concealment that must be addressed within future studies. Fourth, it is important to note that ten studies were not randomised, and few reported that data were managed and analysed independently. Fifth, and again looking forward to the future, researchers in this area need to specify a primary outcome measure within their trials, and further work is needed to develop valid and reliable measures of outcome for use with participants who have ASDs. Sixth, it would be advantageous for researchers to describe their interventions more thoroughly or ensure that they are available for scrutiny, perhaps within public databases. Finally, it is recommended that future trials make use of and adhere to the CONSORT recommendations for reporting randomised controlled trials, to help increase the quality of the evidence that is available. There are a number of strengths associated with the current meta-analysis. We attempted to include studies that aimed to treat affective disorders more broadly, rather than just anxiety, and included studies that were designed to evaluate CBT as a treatment for the actual symptoms or core features of ASDs. As such, our work is comprehensive, capturing studies that have attempted to make use of CBT with individuals with ASDs for a variety of problems, and this is a marked strength over and above previously completed meta-analytic work. Alongside this, we have included studies with samples of children, adolescents, and adults, or mixed samples, while at the same time undertaking a subgroup analysis to compare differences between children/adolescents and adults, considering the developmental differences between these populations which may have an impact upon the process of engaging in and completing therapy. We have also made use of an appropriate analytic strategy, and of independent reviewers for both screening and the quality appraisal. As such, the current meta-analysis is the most comprehensive to date covering CBT used to treat either affective disorders or symptoms of autism. Turning to consider weaknesses, there are a variety of problems with many of the included studies, mentioned in the preceding paragraph, and these need to be considered when interpreting the results of this meta-analysis. While this does not necessarily invalidate our conclusions, it must be considered when interpreting the findings and considering future research. We would suggest that future studies in this area adhere to the following recommendations:
small-scale studies should be clearly described as feasibility or pilot trials; methods and interventions should be described fully, in line with CONSORT recommendations; standardised reporting and a more uniform approach to study design would help to minimise heterogeneity across studies; appropriate allocation concealment, randomisation, blinding procedures and independent data management should be considered a priority and should be described fully; where possible, consistent usage of pre-existing outcome measures across studies would be beneficial in order to increase comparability across trials; researchers should specify a primary outcome measure a priori; and participant engagement and fidelity should be clearly reported. Looking forward, and considering the marked number of small trials, well-designed definitive trials from different research groups around the world are needed in order to demonstrate that CBT is an empirically validated treatment for use with people who have ASDs. To date, there has only been a single definitive trial within this area. Bearing the aforementioned recommendations for future studies in mind, and considering the conclusions from both the current and previous meta-analyses, CBT is at least associated with a small non-significant effect size, and at best associated with a medium effect size, depending on whether you ask those receiving the treatment, those supporting the treatment, or those delivering the treatment. There are three further comments we would like to add to help in the design of future studies, including the interventions. First, there have been a variety of modelling and pilot studies across different countries, but very few researchers have developed interventions within the spirit of co-production with people with autism and their families. Co-production means working together with those who will receive the intervention when developing and running a clinical trial, to ensure that those who are likely to receive the intervention have also genuinely helped to design it. While some studies employed this approach, if used more commonly such a strategy could lead to improved engagement and outcomes, especially from the point of view of children and adults with autism. Second, many of the reviewed studies focused on delivering group-based interventions for a variety of different problems. While delivering interventions in a group may be more cost-effective, this may not be associated with greater effectiveness. The reason for this is that co-morbidity is high amongst people with autism, and within a group there may be participants who have obsessive-compulsive disorder, social phobia, generalised anxiety disorder, depression, or many other psychiatric problems, in addition to the difficulties associated with autism itself. While there are marked similarities, cognitive behavioural therapy for depression is different from cognitive behavioural therapy for obsessive-compulsive disorder, and delivering interventions within a group may have prevented therapists from being able to tailor the intervention adequately to address the needs of each individual within the group. Related to this, there are some individuals with ASDs who may be unable or unwilling to access group-based interventions. As such, we recommend that researchers begin to focus more heavily on formulation-driven and trans-diagnostic interventions delivered with individuals, rather than within a group, bearing in mind that there is evidence that individually delivered CBT is associated
with stronger effect sizes than group-based CBT for people with intellectual disabilities, another group which tends to have marked co-morbidity. Finally, little to no attention has been paid to therapist competence within this area, including therapist style, integrity, alliance and experience, all of which have been linked to outcomes in a variety of studies involving people without ASDs. Further research is needed into these factors within studies involving people with ASDs in order to potentially help improve outcomes. Related to this, little attention has been paid to the accreditation of cognitive behavioural therapists within the literature. While behavioural therapists are certified through the Behaviour Analyst Certification Board®, those offering cognitive behavioural therapy are not certified in a similar manner in many jurisdictions. In some countries, such as the United Kingdom, there are organisations which accredit cognitive behavioural therapists, namely the British Association for Behavioural and Cognitive Psychotherapies, but this does not mean that therapists have the appropriate clinical expertise and experience of working with people who have ASDs to ensure that they are able to adapt therapy in a way that is likely to be efficacious. Further, while CBT should be adapted to meet the needs of those with ASDs, we still know relatively little about the effectiveness of many of these adaptations, as they have not been investigated using experimental designs to determine whether they lead to substantial improvements in treatment engagement and outcome. While future definitive trials are certainly needed within this area, alongside this we also need greater experimental work examining the effectiveness of various adaptations to CBT for use with people who have ASDs.
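As a footnote to the analyses reported above, the core arithmetic used repeatedly in this review (re-expressing an odds ratio as a standardised mean difference, pooling study estimates by the generic inverse variance method, and computing a fail-safe N) can be sketched in a few lines. The sketch below is illustrative only: it uses the standard textbook formulas with hypothetical inputs, shows the fixed-effect form of inverse variance weighting for brevity rather than the random-effects weights used in the review, and is not the analysis code actually employed.

    import math

    def odds_ratio_to_smd(odds_ratio):
        # Hasselblad-Hedges conversion: d = ln(OR) * sqrt(3) / pi
        return math.log(odds_ratio) * math.sqrt(3) / math.pi

    def inverse_variance_pool(effects, variances):
        # Generic inverse variance pooling (fixed-effect form for brevity);
        # returns the pooled effect and its standard error.
        weights = [1.0 / v for v in variances]
        pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
        return pooled, math.sqrt(1.0 / sum(weights))

    def fail_safe_n(z_values, z_crit=1.645):
        # Rosenthal's fail-safe N: unpublished null studies needed to bring
        # the combined one-tailed result below significance.
        return max(0.0, (sum(z_values) / z_crit) ** 2 - len(z_values))

    # Hypothetical example: three studies, one reported as an odds ratio.
    smds = [0.40, 0.55, odds_ratio_to_smd(2.0)]
    variances = [0.05, 0.08, 0.30 * 3 / math.pi ** 2]  # var(d) = var(ln OR) * 3 / pi^2
    pooled, se = inverse_variance_pool(smds, variances)
    print(f"pooled g = {pooled:.2f}, SE = {se:.2f}, z = {pooled / se:.2f}")
    print(f"fail-safe N = {fail_safe_n([2.1, 1.6, 2.4]):.0f}")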
The aims of this study were to undertake a meta-analytic and systematic appraisal of the literature investigating the effectiveness of cognitive behavioural therapy (CBT) when used with individuals who have autistic spectrum disorders (ASDs) for either a) affective disorders, or b) the symptoms of ASDs. Following a systematic search, 48 studies were included. CBT, used for affective disorders, was associated with a non-significant small to medium effect size, g = 0.24, for self-report measures, a significant medium effect size, g = 0.66, for informant-report measures, and a significant medium effect size, g = 0.73, for clinician-report measures. CBT, used as a treatment for symptoms of ASDs, was associated with a small to medium non-significant effect size, g = 0.25, for self-report measures, a significant small to medium effect size, g = 0.48, for informant-report measures, a significant medium effect size, g = 0.65, for clinician-report measures, and a significant small to medium effect size, g = 0.35, for task-based measures. Sensitivity analyses reduced effect size magnitude, with the exception of that based on informant-report measures for the symptoms of ASDs, which increased, g = 0.52. Definitive trials are needed to demonstrate that CBT is an empirically validated treatment for use with people who have ASDs.
209
Co-precipitation, impregnation and sol-gel preparation of Ni catalysts for pyrolysis-catalytic steam reforming of waste plastics
World production of waste plastic grows year by year as a consequence of the huge demand for plastic materials in every commercial field. However, a significant proportion of waste plastics ends up in the waste stream, leading to many environmental problems. Plastics in the ocean are of increasing concern due to their persistence and their effects on the oceans, wildlife and, potentially, humans. The cumulative quantity of waste plastic is predicted to be nearly 250 million tonnes per year by 2025. Therefore, there is an urgent need to develop more effective methods to process waste plastics and improve their utilization efficiency. Chemical recycling processes such as pyrolysis are an effective option to recover energy from waste plastic. A wide distribution of products including gases, chemicals, chars and other products can be obtained from the pyrolysis of plastics. Furthermore, pyrolysis with a subsequent catalytic steam reforming process enables the conversion of plastic into more valuable gases such as hydrogen. A 60 g/h scale continuous tank reactor for plastic pyrolysis followed by a catalytic packed-bed reactor for steam reforming was designed by Park and Namioka for hydrogen-rich gas production from waste polypropylene and polystyrene, and the optimum operating conditions were also studied. Erkiaga et al. compared the products from pyrolysis-steam reforming of HDPE with those from a gasification-steam reforming system operated by the same authors. Results show that the former produced a high H2 yield of 81.5% of the maximum stoichiometric value, slightly lower than the latter, but it enables a more energy-efficient technology for plastics utilization. The pyrolysis and in-line steam reforming of waste plastics has been reviewed by Lopez et al., who reported that more than 30 wt.% H2 yield with up to 70 vol.% H2 concentration could be obtained. Catalysts can assist in chain-scission reactions and the breakage of chemical bonds during the pyrolysis-steam reforming process, allowing the decomposition of plastics to occur at a lower temperature and shortening the reaction time. Different types of catalyst such as olivine, Ru, Fe, and Ni catalysts have been investigated for gaseous products from the pyrolysis-reforming of waste plastics. Because of the high activation ability of C–C and C–H bonds on the Ni metal surface as well as the relatively low cost, Ni based catalysts have been a preferred choice in the process. There has been much reported work in the literature devoted to the selection of the optimum loading content, promoters and preparation method of Ni based catalysts for the pyrolysis-reforming of plastics. By varying the flow rate of the reduction gas and the metal addition to the Ni catalyst, Mazumder et al.
found that the acid–base properties, metal dispersion and crystal size of the catalyst can be greatly improved. Wu and Williams suggested that an increase in Ni loading could improve hydrogen production from polypropylene, and that a Mg-modified Ni catalyst showed better coke resistance than the non-modified catalysts. Other promoters such as Ce and Zr were also explored, and the improvement in catalyst intrinsic activity was ascribed to the enhancement of water adsorption/dissociation. In order to obtain a higher Ni dispersion, some novel assisted methods were developed to relieve the diffusion resistance of Ni into the inner structure of the catalyst. For example, ethylene glycol and ethylenediaminetetraacetic acid assisted impregnation methods were used to prepare Ni catalysts with good stability and activity for hydrocarbon reforming. The catalyst synthesis method, in particular the metal loading method, is a crucial factor to be considered for catalyst activity. The physical structure and chemical characteristics of the catalyst, including the porosity, reducibility and stability, are closely related to the preparation process. Impregnation is the most common method for catalyst preparation, because of the simple procedure and the flexibility to include different catalyst promoters. Co-precipitated Ni catalysts have been designed to minimize catalyst deactivation and promote hydrogen production from waste hydrocarbons. Meanwhile, sol-gel prepared catalysts have attracted more attention recently. A reinforced impact on Ni dispersion, with an average size of 20–24 nm, was found by using a sol-gel method, leading to superior catalyst activity towards methane reforming. Some reports have compared different catalyst preparation methods; for example, Bibela et al. used a Ni-Ce/Mg-Al catalyst for steam reforming of bio-oil, and found that the wetness impregnated catalyst showed higher carbon conversion than a catalyst prepared via co-precipitation at increasing pH.
A sol-gel prepared and promoted Ni/Al2O3 catalyst was reported to benefit the metal-support interaction, with better particle size uniformity than an impregnated catalyst. Around twice the hydrogen yield was produced from the steam reforming of ethanol with a Ni/SiO2 catalyst prepared by a sol-gel method compared with one prepared by an impregnation method. It is known that the co-precipitation, impregnation and sol-gel methods have been adopted as suitable metal loading alternatives for catalyst synthesis. However, the published literature comparing Ni catalysts made by these three methods for the catalytic thermal processing of waste plastics is limited. Considering this, the aim of the present work was to investigate Ni/Al2O3 catalysts prepared via co-precipitation, impregnation and sol-gel methods for the pyrolysis-steam reforming of waste plastics. The catalyst activity was evaluated in terms of hydrogen and carbon monoxide production, as well as catalyst coke formation. In addition, it has been shown that different plastics show different pyrolysis behaviour, producing different product hydrocarbons, which may affect the catalytic steam reforming process, catalyst coke formation and product distributions. Therefore, the influence of the type of plastic feedstock on the product selectivity and catalyst activity was also investigated. This work follows on from our previous reports, which investigated the influence of different types of catalyst and process parameters on the pyrolysis-catalytic steam reforming of waste plastics in relation to hydrogen production. Three different waste plastics, high density polyethylene (HDPE), polypropylene (PP) and polystyrene (PS), which are the most common plastic wastes worldwide, were supplied by Regain Polymers Limited, Castleford, UK. The plastics were collected from real-world waste plastics and mechanically recycled to produce 2–3 mm spheres. The ultimate analysis of the plastic wastes was determined using a Vario Micro Element Analyser, and the results are shown in Table 1. The proximate analyses, including the moisture, volatiles and ash contents of the waste plastics, were conducted according to ASTM standards E790, E897 and E830, respectively. Briefly, the moisture content was determined by placing 1 g of plastic uniformly in a sample boat in an oven at 105 °C for 1 h. The volatiles content was measured using a sealed crucible containing 1 g of plastic in an electric furnace at 950 °C for 7 min, while the ash content was obtained by placing 1 g of plastic in a sample boat in air at 550 °C for 1 h.
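The arithmetic behind these mass-loss determinations is straightforward; the short sketch below illustrates it with hypothetical weighings (the function name and the numbers are ours for illustration and are not part of the ASTM procedures).

    def mass_loss_percent(m_before, m_after):
        # Percentage mass loss of a single ~1 g test portion.
        return 100.0 * (m_before - m_after) / m_before

    # Hypothetical weighings (g); each determination uses its own test portion,
    # following the procedures described above.
    moisture = mass_loss_percent(1.002, 1.000)              # 105 C for 1 h
    volatiles = mass_loss_percent(1.001, 0.061) - moisture  # 950 C for 7 min, corrected for moisture
    ash = 100.0 - mass_loss_percent(1.000, 0.050)           # residue after 550 C for 1 h in air
    print(f"moisture {moisture:.1f} wt.%, volatiles {volatiles:.1f} wt.%, ash {ash:.1f} wt.%")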
The results are summarized in Table 1. As the plastics used in this work came from real-world applications rather than being pure polymers, some additives may have been present in the samples. For example, oxygen was detected in the elemental analysis, whereas it would not be present in the pure polymer. Waste HDPE was observed to have the highest ash content of 4.98 wt.%, while the other two plastics showed little ash content. The Ni/Al2O3 catalysts prepared using the co-precipitation, impregnation and sol-gel methods were tested to catalyse the pyrolysis-reforming of the waste plastics. Ni/Al-Im was obtained by a conventional wet impregnation method. 10 g of γ-Al2O3 and 5.503 g of Ni(NO3)2·6H2O were mixed in deionized water. The mixture was then stirred using a magnetic stirrer at 100 °C until it turned into a slurry. The precursor was dried overnight and calcined at 750 °C for 3 h. Preparation of the Ni/Al-Co catalyst involved mixing the metal nitrates, 7.43 g of Ni(NO3)2·6H2O and 99.34 g of Al(NO3)3·9H2O, together with 150 ml of deionized water, so that a 10 wt.% Ni loading was obtained. The solution was kept at 40 °C with moderate stirring, then the precursor was precipitated by adding NH4 dropwise until a final pH of around 8 was achieved. The precipitates were filtered and washed with deionized water, dried at 105 °C overnight, and then calcined at 750 °C in air for 3 h. The Ni/Al-Sg catalyst, with the same Ni loading of 10 wt.%, was prepared by a simple sol-gel method. 20 g of aluminium tri-sec-butoxide was first dissolved in 150 ml of absolute ethanol and stirred for 2.5 h at 50 °C. 2.210 g of Ni(NO3)2·6H2O was dissolved separately in 8 ml of deionised water to form the Ni precursor. The Ni solution was then pipetted into the support solution while maintaining stirring at 75 °C for 0.5 h. 1 M HNO3 was added to the above solution until a pH of 4.8 was obtained. After drying at 105 °C overnight, the precursors were calcined at 450 °C in air for 3 h. All of the catalysts were ground and sieved to a size range between 50 and 212 μm. The catalysts used in this work were reduced in a 5 vol.% H2 atmosphere at 800 °C for 1 h before each experiment. A schematic diagram of the pyrolysis-catalytic steam reforming reactor system for waste plastics is shown in Fig. 1. The experimental system consisted essentially of a continuous steam injection system using a water syringe pump, a nitrogen gas supply system, a two-stage stainless steel tube reactor, a gaseous product condensing system using dry ice, and a gas measurement system. The reactor has two separate heating zones:
a first-stage plastic pyrolysis reactor of 200 mm height and 40 mm i.d., and a second-stage catalytic reactor of 300 mm height and 22 mm i.d. The actual temperatures of the two zones were monitored by thermocouples placed in the middle of each reactor and were controlled separately. Calibration of the reactor temperature was performed before this set of experiments, and the temperatures reported in this paper are the measured values. For each experiment, 0.5 g of catalyst was loaded into the second stage, where the temperature was maintained at 800 °C. High purity nitrogen was supplied as the inert carrier gas. 1 g of plastic was placed in the first stage and heated from room temperature to 500 °C at 40 °C min−1, and the evolved volatiles passed into the catalyst reactor for reforming. Water was injected into the second stage at a flow rate of 6 g h−1. After the reforming process, the condensable liquids were collected in condensers, while the non-condensable gases were collected in a 25 l Tedlar™ gas sample bag for off-line gas chromatography measurement. Each experiment was repeated to ensure the reliability of the results. The gas products were separated and quantified by packed column GCs. A Varian 3380 GC packed with 60–80 mesh molecular sieve, coupled with a thermal conductivity detector (TCD), was used to analyse the permanent gases. CO2 was determined by another Varian 3380 GC/TCD. Argon was used as the carrier gas for both GCs. Hydrocarbons were analysed using a different Varian 3380 GC/FID coupled with a HayeSep 80–100 mesh molecular sieve column, using nitrogen as the carrier gas. The mass yield of each gas compound was calculated by combining the flow rate of nitrogen with the gas composition obtained from the GC. The yield of non-reacted pyrolysis oil was calculated as the mass difference between the fresh and used condenser system in relation to the total weight of plastic and steam input. The coke yield was determined from temperature programmed oxidation (TPO) analysis of the spent catalyst. The residue yield was measured as the mass difference between the fresh and used whole reactor system in relation to the total weight of plastic and steam input. The mass balance was therefore calculated as the sum of the gas, liquid and residue obtained in relation to the total plastic and steam input. X-ray diffraction (XRD) analysis of the fresh catalysts was carried out using a Bruker D8 instrument with Cu Kα radiation operated at 40 kV and 40 mA. In order to explore the distribution of active sites on the catalysts, the Debye-Scherrer equation was used to obtain the average crystal size from the XRD results. The porous properties of the fresh catalysts were determined using a Nova 2200e instrument. Around 0.2 g of each sample was degassed at 300 °C for 2 h prior to the analysis. The specific surface area was calculated using the Brunauer, Emmett and Teller (BET) method. The total pore volume was determined at a relative pressure P/P0 of 0.99, and the pore size distribution was obtained from the desorption isotherms via the BJH method. In order to determine the actual loading of nickel in the catalyst, an Optima 5300DV inductively coupled plasma optical emission spectrometer (ICP-OES) was used. About 25 mg of catalyst was first dissolved in acidic solution and then diluted with deionized water to 50 ml in preparation for analysis. The morphologies of the freshly prepared catalysts and of the coke deposited on the used catalysts were investigated using a Hitachi SU8230 scanning electron microscope (SEM), operated at 2 kV and a working distance of 3 mm. An energy dispersive X-ray spectroscope (EDXS)
was connected to the SEM to study the elemental distribution. A FEI Helios G4 CX Dual Beam SEM with a precise focused ion beam (FIB) was used to analyse the cross-sections of the prepared catalysts. Before the analysis, the catalyst was coated with platinum in order to protect the sample during the sectioning process. The fresh catalysts were further examined at higher magnification by high-resolution transmission electron microscopy (TEM) coupled with EDXS for microstructure and elemental distribution. For the TEM analysis, samples were first dispersed in methanol using an ultrasonic apparatus and then pipetted onto a carbon film coated copper grid. The coke deposited on the surface of the catalyst was characterized by TPO using a Shimadzu TGA 50. For each TPO analysis, around 25 mg of spent catalyst was heated from room temperature to 800 °C in an air atmosphere at a heating rate of 15 °C min−1, with a holding time of 10 min at 800 °C. The XRD patterns of the fresh catalysts are shown in Fig. 2. The Ni/Al2O3 catalyst prepared by the impregnation method produced sharp peaks compared to the other fresh catalysts. The easily identified peaks centred at 2θ = 44.5, 51.9 and 76.4°, corresponding to the (111), (200) and (220) planes respectively, confirmed the presence of Ni in the cubic form. The aluminium oxide peaks at 37.6, 45.8 and 66.8° were also detected. As no NiO was detected in the XRD results, this demonstrates that the nickel catalyst precursors had been completely reduced to the active form before each experiment. According to the Scherrer equation, the average crystallite size of Ni, based on the main peak at around 2θ = 44.5°, was determined to be 26.17, 52.28 and 19.69 nm for the Ni/Al-Co, Ni/Al-Im and Ni/Al-Sg catalysts, respectively. This indicates that a higher Ni dispersion and smaller Ni particles were obtained for the catalyst prepared by the sol-gel method compared with impregnation and co-precipitation. Table 2 summarizes the BET surface areas and pore size properties of the fresh Ni/Al2O3 catalysts. The Ni/Al-Co and Ni/Al-Im catalysts showed surface areas of 192.24 and 146.41 m2 g−1, respectively. The Ni catalyst produced via the sol-gel method showed a higher surface area of 305.21 m2 g−1 compared to the catalysts obtained by impregnation or co-precipitation. The Ni/Al-Sg catalyst also gave the highest pore volume of 0.915 ml g−1, while Ni/Al-Im gave the lowest. However, the average pore sizes of the three catalysts were similar, at around 6.6 nm. Therefore, the Ni catalyst prepared by the sol-gel method gives a more porous structure than the other two methods. The adsorption/desorption isotherms and pore size distributions of the fresh catalysts are shown in Fig. 3.
All of the physisorption isotherms for the three catalysts appear to be type IV according to the IUPAC classification. From the pore size distributions, the Ni/Al2O3 catalyst prepared by the sol-gel method shows a quite narrow pore size distribution, while the impregnation-prepared catalyst shows a broad distribution. This indicates that, compared with the Ni/Al-Co and Ni/Al-Im catalysts, the Ni/Al-Sg catalyst has a more uniform porous structure, with most pores having a size of around 6.64 nm. Therefore, it may be concluded that a mesostructured Ni/Al2O3 catalyst can be obtained by the sol-gel preparation method. The actual nickel loadings from ICP-OES analysis are also listed in Table 2. It can be seen that the real Ni content of the co-precipitated and sol-gel catalysts was a little lower than the designed value, while it was in excellent agreement for the impregnated Ni/Al-Im catalyst. In summary, the active Ni sites were successfully loaded onto the catalyst by each preparation method. The morphologies and the distribution of active metallic Ni for the fresh catalysts were determined by SEM-EDX analysis, as shown in Fig. 4. Compared with the Ni/Al-Co catalyst shown in Fig. 4, which shows a flat surface, the catalyst particles of Ni/Al-Im were irregular. The nickel catalyst prepared by the sol-gel method appears to be composed of many small particles in a loose structure. The Ni EDX mapping showed a uniform distribution of Ni particles in the catalysts. In order to investigate the inner structure of the fresh catalysts, the cross-sectional morphologies of the catalyst particles were examined by FIB/SEM. From Fig. 5, the Ni/Al-Co catalyst, which has a relatively low surface area, shows a tight structure, whereas the Ni/Al-Sg catalyst shows a porous inner structure. These observations agree well with the porosity results, which show that the Ni/Al-Sg catalyst has a higher surface area and higher pore volume than the other catalysts. This type of structure has been reported to benefit Ni penetration inside the catalyst particles and further promote catalyst activity. The fresh catalysts were further examined under high magnification by TEM, and the results are shown in Fig. 6. The images show obvious dark spots, which were ascribed to the presence of metallic Ni. As can be seen, all the Ni particles were well dispersed, and hardly any agglomeration was observed. Statistical analysis of the Ni particle size distributions in the three TEM images was carried out with ImageJ software, and the results are shown in Fig. 6. More than 95 percent of the Ni particles present were of a size less than 50 nm. The Ni/Al catalyst prepared by the sol-gel method showed the narrowest size distribution, with the smallest average particle size of 15.40 nm. Both the Ni/Al-Co and Ni/Al-Im catalysts have a size distribution concentrated at 15∼30 nm, but they show larger average particle sizes of 28.91 and 29.60 nm, respectively. Therefore, the sol-gel prepared catalyst exhibited the highest homogeneity and smallest active metal size among the three catalysts, which is in good agreement with the results from the XRD analysis. The EDX mappings of the sol-gel synthesized Ni/Al catalyst shown in Fig.
6 also demonstrate that both the Ni and Al were uniformly distributed within the catalyst. The use of the different nickel catalysts prepared via co-precipitation, impregnation and sol-gel methods for the pyrolysis-catalytic steam reforming of waste polyethylene was investigated in this section. The results for syngas production and gas composition are summarized in Table 3. The mass balances of all the experiments in this paper were calculated to be in the range of 92 to 98 wt.%. In addition, results from the repeated trials show that the standard deviations of the hydrogen and carbon monoxide yields were 0.26 and 0.29 mmol g−1 plastic, respectively. For the volumetric gas concentrations, the standard deviation was 0.26% for H2 and 0.08% for CO. These data indicate the reliability of the experimental procedure. From Table 3, the highest hydrogen yield of 60.26 mmol g−1 plastic was obtained with the Ni/Al catalyst prepared by the sol-gel method, followed by that prepared by the impregnation method. The lowest hydrogen yield of 43.07 mmol g−1 plastic was obtained with the Ni/Al-Co catalyst. Carbon monoxide production followed the same trend as the hydrogen yield. Syngas production reached its maximum with the Ni/Al-Sg catalyst; that is, each gram of polyethylene yielded 83.28 mmol of syngas. The gas composition is also shown in Table 3. It can be observed that the concentrations of H2 and CO2 steadily increased in the catalyst order Ni/Al-Co < Ni/Al-Im < Ni/Al-Sg, while the contents of CH4, CO and C2-C4 decreased correspondingly. During the pyrolysis-reforming of waste plastic, thermal decomposition of the plastic occurs in the pyrolysis stage (Eq. (1)). The pyrolysis volatiles are then steam reformed by the catalyst to produce more valuable gases such as hydrogen and carbon monoxide (Eqs. (2) and (3)). As the CH4 and C2-C4 concentrations from Ni/Al-Sg were rather lower, while the H2 and CO yields were significantly higher than those from the other two catalysts, it can be concluded that the steam reforming of hydrocarbons (Eq. (2)) was greatly promoted in the presence of the Ni/Al-Sg catalyst. In addition, the ratio of H2 to CO, which reflects the extent of the water gas shift reaction (Eq. (3)), reached its maximum of 2.62 with the Ni/Al-Sg catalyst. Therefore, the nickel catalyst prepared by the sol-gel method displayed the highest activity for both hydrocarbon reforming and the water gas shift reaction among the three catalysts investigated.

CxHyOz (plastic) → Gas + Tar + Char (1)

CxHy + x H2O → x CO + (x + y/2) H2 (2)

CO + H2O ⇔ CO2 + H2 (3)

Temperature programmed oxidation was used to investigate the coke deposition on the used catalysts. As shown in Fig. 7, the oxidation process involved three main stages: the removal of water in the range 100∼300 °C, the oxidation of Ni from 300 to 450 °C, and carbonaceous coke combustion from 450 °C onwards, which were also observed in our previous studies. The amount of coke was calculated based on the weight loss of the spent catalyst from 450 °C to 800 °C, and the results are shown in Table 3. It can be observed that, during the pyrolysis-catalytic steam reforming of waste polyethylene, the Ni/Al-Sg catalyst produced the highest coke yield of 7.41 wt.% among the three catalysts, yet displayed the highest catalyst activity for syngas production. In addition, the Ni/Al-Co catalyst, which generated the lowest hydrogen yield, produced the least coke. It should be noted that catalytic thermal cracking of the volatiles (Eq. (4))
may also be involved during this process. The results regarding syngas production and coke yield with the three catalysts indicate that the Ni/Al-Sg catalyst showed high catalytic activity for both the reforming reactions and the thermal cracking of the volatiles. The derivative weight loss thermograms in Fig. 7 showed two distinct peaks at temperatures around 530 and 650 °C. It has been reported that the oxidation peak at lower temperature is related to amorphous coke, while the peak at higher temperature is linked to the oxidation of graphitic filamentous coke. The coke deposited on the Ni/Al-Sg catalyst appears to be mainly in the form of filamentous carbon, which was also confirmed by the SEM morphology analysis shown in Fig. 8. For the Ni/Al-Co catalyst, the SEM results in Fig. 8 show more coke deposits without any regular shape. The larger production of amorphous coke on the spent Ni/Al-Co catalyst compared with the other two catalysts could also be responsible for its lower syngas production, since amorphous coke is considered to be more detrimental to catalyst activity than filamentous carbon. In addition, compared with the SEM results for the fresh catalysts shown in Fig. 4, the morphologies of the three nickel catalysts did not change significantly. For example, the catalyst prepared by the sol-gel method maintained its loose structure after the reforming process, indicating the good thermal stability of the catalyst.

CxHy → x C + (y/2) H2 (4)

Polypropylene was also investigated in the pyrolysis-catalytic steam reforming process for hydrogen production in the presence of the three different Ni/Al catalysts. The gas yields and concentrations are shown in Table 4. The Ni/Al-Sg catalyst displayed the most efficient catalytic activity for the steam reforming of polypropylene, as the gas yield was 144.03 wt.%, much higher than with the other two catalysts. In addition, much higher hydrogen and carbon monoxide yields were obtained using the Ni/Al-Sg catalyst. The syngas production from PP with the Ni/Al-Sg catalyst was slightly higher than that observed from HDPE, and this was also found with the Ni/Al-Co and Ni/Al-Im catalysts. This may be due to the higher hydrogen and carbon contents and lower ash content of PP compared with HDPE, which suggests more effective participation of the hydrocarbons in the reforming reactions to produce more syngas. The nickel catalyst prepared by the co-precipitation method showed the least activity for the reforming process, producing 46.05 mmol H2 g−1 plastic and 20.39 mmol CO g−1 plastic. The gases from waste polypropylene were mainly composed of H2, CH4, CO, C2-C4 hydrocarbons and CO2. The concentrations of H2 and CO with Ni/Al-Sg reached 59.38 and 26.57 vol.%, respectively. The CH4 and C2-C4 contents with the Ni/Al-Sg catalyst were lower than with the other two catalysts, which also indicates the higher catalytic activity of the catalyst made by the sol-gel method. The amount and type of coke deposited on the three catalysts from the pyrolysis-steam reforming of polypropylene were determined by TPO analysis, as shown in Fig. 9.
The amount of carbonaceous coke was calculated and the results are shown in Table 4. The Ni/Al-Co and Ni/Al-Im catalysts produced around 5 to 6 wt.% of coke, lower than the Ni/Al-Sg catalyst, which showed an 8.49 wt.% coke yield. From the derivative weight loss results, the peak associated with amorphous carbon was much larger than that of filamentous carbon for the Ni/Al-Co catalyst. However, the Ni/Al-Im and Ni/Al-Sg catalysts produced more filamentous carbon. This phenomenon was also observed with HDPE. It can be deduced that the nickel catalysts prepared by the impregnation and sol-gel methods favour the production of filamentous carbonaceous coke from the pyrolysis-steam reforming of waste plastics. The presence of both amorphous carbon and filamentous carbon on the Ni/Al-Co and Ni/Al-Im catalysts was further confirmed by the SEM images shown in Fig. 10. Fig. 10 shows that the deposits on the used Ni/Al-Sg catalyst were predominantly filamentous carbon. Furthermore, there was a dense covering of carbon on the catalyst no matter which type of catalyst was used. The amount of carbon deposits seen in the SEM images appears to be larger for PP than for HDPE, which is consistent with the TPO results. Wu and Williams used an incipient wetness prepared Ni/Al2O3 catalyst for the steam gasification of PP. A potential H2 yield of 26.7 wt.% was obtained, with the gaseous product containing 56.3 vol.% of H2 and 20.0 vol.% of CO. The syngas production can be calculated as 77.62 mmol g−1 plastic, which is close to the yield obtained in this study. However, their coke deposition was higher than that in this study, which might be due to the lower surface area of their catalyst compared with the Ni/Al-Im used here. High hydrogen yields of 21.9 g g−1 PP and 52 wt.% were obtained by the same authors from polypropylene with a co-precipitation prepared Ni-Mg-Al catalyst and a co-impregnated Ni/CeO2/Al2O3 catalyst, respectively. It should be noted that the Ni catalysts in those reports were prepared either at a higher loading or with added promoter, or were used at a higher catalysis temperature. This suggests that hydrogen production can be promoted by the use of effective catalyst promoters or by regulation of the operational parameters. Czernik and French concluded that many common plastics can be converted into hydrogen by a thermo-catalytic process, using a microscale reactor interfaced with a molecular beam mass spectrometer. A bench-scale plastic pyrolysis-reforming system was also operated by the same authors using PP as a representative polymer, and 20.5 g/h of H2 was generated at a PP feeding rate of 60 g/h. The product distributions in terms of gas yield and composition from the pyrolysis-catalytic steam reforming of waste polystyrene with the different catalysts are displayed in Table 5. The Ni/Al-Co catalyst produced a H2 yield of 51.31 mmol g−1 plastic, a little lower than the yield of 55.04 mmol H2 g−1 plastic with the Ni/Al-Im catalyst. Among the three catalysts, the Ni/Al-Sg catalyst produced the maximum H2 and CO yields per mass of plastic feedstock, as was also observed with HDPE and PP. However, compared with HDPE and PP, PS gave a comparatively higher yield of CO, with values up to 36.10 mmol g−1 plastic with the sol-gel prepared catalyst. Most of the concentrations of CO and CO2 obtained from PS were also larger than the corresponding data from PP or HDPE. This may be due to the higher content of elemental carbon in the feedstock. In addition, as the coke yield produced using PS was lower than that from PP or HDPE, except with the
Ni/Al-Co catalyst, this suggests that most of the carbon in PS was converted into gas products by participating in the catalytic steam reforming reactions (Eq. (2)) or the water gas shift reaction (Eq. (3)). The hydrogen content of the product gases fluctuated slightly, between 57.90 and 59.32 vol.%, depending on the catalyst applied. The hydrocarbon content of the final gas product was relatively low for PS whichever catalyst was used, and the concentration of C2-C4 was less than 1.10 vol.%. The water gas shift reaction (Eq. (3)) was promoted by the Ni/Al-Co catalyst, as the H2/CO ratio was higher than with the other catalysts. TPO analysis of the used catalysts from the pyrolysis-catalytic steam reforming of polystyrene was also carried out to characterize the carbonaceous coke deposited on the catalyst, as shown in Fig. 11. The calculated amounts of coke produced are shown in Table 5. The Ni/Al-Sg catalyst produced the highest coke yield of 6.14 wt.%, even though it also gave the largest hydrogen production amongst the three catalysts. This suggests that both the steam reforming (Eq. (2)) and the decomposition of hydrocarbons (Eq. (4)) were significantly facilitated by the Ni/Al-Sg catalyst during the pyrolysis-steam reforming of waste polystyrene. As for the type of carbon deposits, overlapping derivative weight loss peaks were observed with the Ni/Al-Im and Ni/Al-Sg catalysts, indicating that both amorphous and filamentous coke were produced. This is in agreement with the morphologies observed in the SEM images shown in Fig. 12. The Ni/Al catalyst prepared by co-precipitation displayed deposits largely in the amorphous form, and the derivative TPO peak at the lower temperature was more significant than that at the higher oxidation temperature. The yields of hydrogen and carbon monoxide from the pyrolysis-steam reforming of waste plastics varied with the catalyst preparation method used. Overall, despite the differences in feedstock, the sol-gel prepared nickel catalyst gave the highest syngas production, while the co-precipitation prepared catalyst gave the lowest among the three catalysts investigated. In addition, the maximum carbonaceous coke deposition on the catalyst was also obtained with the Ni/Al-Sg catalyst. This suggests that both the hydrocarbon reforming reactions and the hydrocarbon thermal decomposition reactions were promoted more in the presence of the sol-gel prepared catalyst. For example, the largest production of H2 and CO was obtained with the Ni/Al-Sg catalyst and waste polypropylene, at 67.00 mmol H2 g−1 plastic and 29.98 mmol CO g−1 plastic, together with the highest coke yield of 8.49 wt.%. This is in agreement with previous results from Efika et al.
that a sol–gel prepared NiO/SiO2 catalyst generated a higher syngas yield than a catalyst made by an incipient wetness method, and the former also appeared to have more carbon formation on its surface. Although the syngas production reached its maximum in the presence of Ni/Al-Sg, it should still be noted that the CO content was relatively high. This may be related to the high reforming temperature, which is unfavourable for the exothermic water gas shift reaction (Eq. (3)). Furthermore, a dual-functional Ni catalyst combining catalysis and CO2 sorption, for example a sol-gel prepared Ni/Al catalyst coupled with CaO, is suggested for further study in order to promote the WGS reaction for a higher H2 yield. The catalytic performance in terms of hydrogen yield and CO production was also influenced by physicochemical characteristics, e.g. the porosity and the type of coke deposited. In particular, an increase in surface area and pore volume can not only improve the dispersion of the metal ions, but also facilitate the interaction of reactant molecules with the catalyst's internal surface. In addition, the catalyst is generally deactivated by two types of carbonaceous coke: amorphous and filamentous carbon. Filamentous carbon has been found to have little influence on catalytic activity, while amorphous carbon has been reported to be more detrimental. Furthermore, these two factors are associated with each other, as Li et al. have suggested that catalyst activity can be improved by uniform Ni dispersion, while uneven distribution and large Ni particles are the main reason for the formation of non-filamentous coke, which leads to the loss of catalyst activity. In this work, the sol-gel prepared Ni catalyst showed a high surface area and uniform Ni dispersion, as evidenced by the BET and TEM results. Furthermore, the coke obtained was in the filamentous form. Therefore, the sol-gel prepared Ni/Al-Sg catalyst presents an excellent catalytic performance towards syngas production in the pyrolysis-catalytic steam reforming of waste plastics. However, for the co-precipitation prepared Ni catalyst, the coke deposits were found to be of the monoatomic or amorphous type, even though this catalyst showed a higher surface area than the Ni/Al-Im catalyst. This suggests that it may have had a high activity in the initial reaction stage, but then experienced rapid deactivation by detrimental coke deposition. The hydrogen and carbon monoxide yields from the pyrolysis-catalytic steam reforming of PP were higher than those observed for HDPE, no matter which catalyst was applied, indicating that more syngas can be obtained per mass of PP compared to HDPE in this work. This may be due to the fact that PP had higher H and C elemental contents than HDPE, while the ash content of HDPE was relatively higher. In addition, PS was found to produce the highest CO and syngas yields among the three plastics. Barbarias et al.
investigated the valorisation of PP, PE, PS and PET for hydrogen production by pyrolysis-catalytic steam reforming. The pyrolysis volatiles at 500 °C were identified, and the results show that nearly 100 wt.% of PS was converted into volatiles with 70.6 wt.% of styrene, while more than 65 wt.% of wax was obtained from the polyolefins. Therefore, the higher syngas production from PS in this study may be due to the fact that more styrene from PS pyrolysis, rather than wax from the polyolefins, was introduced into the steam reforming stage. Those authors concluded that the H2 yields from PS were lower than those from the polyolefins, whereas the H2 production from PS in this study was comparable to, or even higher than, those from HDPE and PP. Around 38, 35 and 30 wt.% of hydrogen yield were achieved by the same authors at a space time of 16.7 gcat min g−1 plastic and 700 °C from HDPE, PP and PS, respectively. The differences between those values and the yields in this work are attributed to the different reactor system as well as the operational parameters. However, in this study it is still difficult to evaluate the ability of each plastic for H2 and CO production in relation to its C or H elemental content. Therefore, the CO conversion (Eq. (5)) and H2 conversion (Eq. (6)) were calculated to reveal the proportion of the feedstock C or H reporting to the gas product. These two indicators essentially reflect the reforming ability of the catalyst towards CO or H2 for each plastic. In addition, the coke conversion was also calculated, as Eq. (7).

CO conversion = (mass of C in CO) / (mass of C in the plastic feedstock) × 100% (5)

H2 conversion = (mass of H in H2) / (mass of H in the plastic feedstock) × 100% (6)

Coke conversion = (mass of C in coke) / (mass of C in the plastic feedstock) × 100% (7)

The results of these indicators with the Ni/Al-Sg catalyst are taken as an example in relation to the different plastics, and the results are presented in Fig. 13. From Fig. 13, the H2 production ability of HDPE and PP was rather close, as the H2 conversion obtained was around 95 wt.%. However, the H2 conversion was significantly higher with PS, giving the highest conversion of 145.11 wt.%. A conversion of over 100 percent is possible because H2 is also produced from H2O. The CO conversion gradually increased in the order HDPE < PP < PS, suggesting that the steam reforming of hydrocarbons (Eq. (2)) was more favourable with PS, generating more H2 and CO. The maximum syngas production of 98.36 mmol g−1 plastic was obtained using PS with the Ni/Al-Sg catalyst. In regard to the gas compositions from the different plastics, the molar ratio of H2/CO was in the range of 1.72 to 2.62, and was relatively higher for the polyolefin plastics. Therefore, there should be potential in industrial applications in that the H2 to CO ratio could be tuned to a desired value by adjusting the mixing proportions of different plastics. From the TPO results for catalyst coke deposition, PP generated the highest coke yield of 8.48 wt.%, but from Fig. 13 it can be seen that the calculated coke (carbon) conversion was in the order PP > HDPE > PS, even though PS has more C in the feedstock. The results suggest that coke formation by the decomposition of hydrocarbons (Eq. (4)) was more favourable in the presence of PP. Wu and Williams also found that PP generated the highest coke deposition on used Ni catalysts, compared with HDPE and PS, when the catalyst temperature was 800 °C with a water flow rate of 4.74 g h−1. Also, PS produced a relatively lower coke yield among the three plastics under variable process conditions. A similar trend was also reported by Acomb et al.
when exploring the pyrolysis-gasification of LDPE, PP and PS, as higher residue yields were obtained from LDPE and PP. Furthermore, the reforming temperature in this work was 800 °C, and Namioka et al. also found that coke deposition from PP was more apparent than from PS at higher reforming temperatures. In relation to the type of carbon deposited on the catalyst, the results show that the carbon from waste HDPE and PP was mainly of the filamentous type (Fig. 9), while more amorphous carbon was produced from PS. This was especially evident for the Ni/Al-Im and Ni/Al-Sg catalysts, which generated both types of carbon. It can be explained by the difference in gas composition: Angeli et al. suggested that an increase in the carbon number of the mixed gases favours the formation of filamentous carbon, and Ochoa et al. reported that the carbonization of adsorbed coke to form multi-walled filamentous carbon can be promoted by CH4 dehydrogenation. In this work, HDPE and PP produced a higher content of C1-C4 hydrocarbon gases than PS, so the two polyolefin plastics produced more filamentous carbon on the used catalyst. In conclusion, the Ni/Al catalyst prepared by the sol-gel method generated higher H2 and CO yields from waste plastics than the catalysts prepared by co-precipitation and impregnation, owing to its higher surface area and fine, uniformly dispersed nickel particles. The Ni/Al-Co catalyst prepared by co-precipitation produced the lowest syngas yield among the three catalyst preparation methods investigated. From the TPO results, the carbon deposited on the Ni/Al-Co catalyst was mainly of the amorphous type, while it was in filamentous form for the impregnation and sol-gel prepared catalysts. Thermal decomposition reactions were more favoured with the polyolefin plastics, producing higher hydrogen and coke yields, whereas the steam reforming reactions were more significant with polystyrene. The maximum H2 yield of 67.00 mmol g−1 plastic was obtained from the pyrolysis-catalytic steam reforming of waste polypropylene, with more hydrocarbons in the product gases, while waste polystyrene generated the highest syngas yield of 98.36 mmol g−1 plastic, with more oxygen-containing gases in the product gas.
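To make the conversion indicators defined above concrete, the short Python sketch below shows how they can be computed from the measured gas yields (in mmol per gram of plastic), the coke yield from TPO, and the ultimate analysis of the feedstock. This is only an illustrative sketch: the function name and all numerical inputs are hypothetical placeholders rather than values from this work, and the coke is assumed to be pure carbon.

```python
# Illustrative calculation of the CO, H2 and coke conversion indicators defined above.
# All numerical inputs below are hypothetical placeholders, not values from this work.

M_C, M_H2 = 12.011, 2.016   # molar masses of carbon and H2, g/mol


def conversions(h2_mmol_per_g, co_mmol_per_g, coke_wt_pct, c_wt_pct, h_wt_pct):
    """Return (CO, H2, coke) conversions in wt.% for 1 g of plastic feedstock."""
    c_in_feed = c_wt_pct / 100.0                    # g of C per g of plastic
    h_in_feed = h_wt_pct / 100.0                    # g of H per g of plastic
    c_in_co = co_mmol_per_g / 1000.0 * M_C          # g of C leaving as CO
    h_in_h2 = h2_mmol_per_g / 1000.0 * M_H2         # g of H leaving as H2
    c_in_coke = coke_wt_pct / 100.0                 # g of coke, assumed to be pure C
    return (100.0 * c_in_co / c_in_feed,
            100.0 * h_in_h2 / h_in_feed,
            100.0 * c_in_coke / c_in_feed)


# Hypothetical polystyrene-like feed (85 wt.% C, 8 wt.% H)
co_conv, h2_conv, coke_conv = conversions(
    h2_mmol_per_g=62.0, co_mmol_per_g=36.0, coke_wt_pct=3.0,
    c_wt_pct=85.0, h_wt_pct=8.0)

print(f"CO conversion:   {co_conv:.1f} wt.%")
print(f"H2 conversion:   {h2_conv:.1f} wt.%")   # can exceed 100% because steam also supplies H
print(f"Coke conversion: {coke_conv:.1f} wt.%")
```

As in the text, an H2 conversion above 100 wt.% simply reflects hydrogen drawn from the steam rather than from the plastic itself.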
Three Ni/Al2O3 catalysts prepared by co-precipitation, impregnation and sol-gel methods were investigated for the pyrolysis-steam reforming of waste plastics. The influence of the Ni loading method on the physicochemical properties and on the catalytic activity towards hydrogen and carbon monoxide production was studied. Three different plastic feedstocks, high density polyethylene (HDPE), polypropylene (PP) and polystyrene (PS), were used and compared in relation to syngas production. Results showed that the overall performance of the Ni catalysts prepared by the different synthesis methods was correlated with the porosity, the metal dispersion and the type of coke deposited on the catalyst. The porosity of the catalyst and the Ni dispersion were significantly improved using the sol-gel method, producing a catalyst surface area of 305.21 m2/g and an average Ni particle size of 15.40 nm, leading to the highest activity among the three catalysts investigated. The least effective catalytic performance was found with the co-precipitation prepared catalyst, which was attributed to its non-uniform Ni dispersion and the amorphous coke deposits on the catalyst. With regard to the type of plastic, polypropylene experienced more decomposition reactions at the conditions investigated, resulting in higher hydrogen and coke yields. However, the catalytic steam reforming ability was more evident with polystyrene, producing more hydrogen from the feedstock and converting more carbon into carbon monoxide. Overall, the maximum syngas production was achieved from polystyrene in the presence of the sol-gel prepared Ni/Al2O3 catalyst, with production of 62.26 mmol H2 g−1 plastic and 36.10 mmol CO g−1 plastic.
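The discussion above suggests that the H2/CO ratio of the product gas could be tuned by co-feeding different plastics. The sketch below gives a rough first-order estimate of the ratio for a blend, under the simplifying (and here hypothetical) assumptions that per-gram yields combine linearly with feed composition and that the plastics do not interact during co-processing; the PP and PS hydrogen values follow the yields quoted in the text, while the CO value for PP and both HDPE values are placeholders.

```python
# First-order estimate of the H2/CO molar ratio for a blend of plastics, assuming
# (hypothetically) that per-gram yields combine linearly and the plastics do not interact.

def blend_h2_co_ratio(yields, fractions):
    """yields: {plastic: (H2 mmol/g, CO mmol/g)}; fractions: {plastic: mass fraction}."""
    h2 = sum(frac * yields[name][0] for name, frac in fractions.items())
    co = sum(frac * yields[name][1] for name, frac in fractions.items())
    return h2 / co


# Per-plastic yields in mmol per g of plastic (placeholder values where not quoted).
yields = {"HDPE": (55.0, 21.0), "PP": (67.0, 26.0), "PS": (62.0, 36.0)}

print(blend_h2_co_ratio(yields, {"PS": 1.0}))                 # ~1.7, CO-richer gas
print(blend_h2_co_ratio(yields, {"HDPE": 0.5, "PS": 0.5}))    # ~2.1, more H2-rich gas
```

In practice, interactions between the plastics during pyrolysis and reforming would cause the real blend behaviour to deviate from this linear estimate, so the sketch is indicative only.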
Tribology of electric vehicles: A review of critical components, current state and future improvement trends
Electric vehicles (EVs) powered by battery, fuel cell or fuel cell hybrid systems have gained great attention around the world over the past few years as a viable solution to decrease greenhouse gas emissions and to maintain a clean and healthy environment, curtailing the adverse effects of internal combustion engines in the transportation and energy production sectors. In comparison to internal combustion engine vehicles (ICEVs), EVs require new components and energy infrastructure to operate efficiently, which implies additional manufacturing and maintenance costs. For example, in the case of battery-powered EVs, 45.3% of the EV's cost corresponds to the battery. Thus, a main challenge for the automotive industry is to develop advanced battery systems that best complement this technology. Other energy infrastructure components required include hydrogen fuel cells, hydrogen storage systems, supercapacitors, photovoltaic cells, automotive thermoelectric generators, regenerative braking systems, charging technology for energy storage, etc. Given these required technologies, the cost of an EV is currently expected to be higher than that of an ICEV, since much of this technology has not yet been efficiently developed and made commercially available. Nonetheless, EVs generate lower operating costs than ICEVs. For instance, the energy cost of an EV is about 2 cents/mile, while that of an ICEV is around 12 cents/mile. In addition, according to the recent reports of Holmberg et al., only about 21.5% of the total fuel energy supplied to an ICEV is used to move the vehicle, in contrast to EVs, which use around 77% of the total grid electric energy supplied. This suggests that EVs are about 3.6 times more efficient than ICEVs. The growing interest in employing EVs, not only as passenger cars but also as heavy-duty vehicles and buses, has stimulated developments and research on energy storage, hydrogen cells, electric motors, micro-electro-mechanical systems and sensors, autonomous driving, increases in thermal and electric efficiency, etc. In conjunction with a series of multidisciplinary efforts, the challenge is to consolidate this technology as fully viable and efficient for the transportation sector. Although EVs already present a substantially high efficiency in terms of energy consumption, the challenge is to increase it even further. This can be achieved by reducing the energy losses produced in EMs and power electronic devices, in the charging and discharging of the battery, in cabin heating and ventilation, in aerodynamic drag and in friction, the last of which can potentially be decreased via tribological solutions. In line with the recent report of Holmberg and Erdemir, about 57% of the total electric energy supplied to an EV is used to overcome friction losses. The total friction losses considered in that report are distributed as follows: 1% in the EM (with a capacity of 75 kW), 3% in the transmission, 41% in rolling resistance, and 12% in the brakes. Since ICEV tyre technology is applied largely unchanged in current EVs, and considering that EVs are expected to operate under rolling conditions similar to those of ICEVs, the rolling resistance losses in EVs can approach values similar to those of ICEVs. Other friction losses should also be considered, since commercial passenger EVs involve additional tribological components which can increase friction losses, negatively affecting efficiency and durability. The most common tribological components required in EVs to operate as similarly as possible to ICEVs
can be seen in Fig. 1.For the analysis in this paper, they were classified into: motor, transmission, steering system, tyres, wheel bearings, constant-velocity joints, kinematic energy recovery system, comfort and safety devices, suspension and MEMS.Each component employing different tribological elements which together produce a considerable amount of friction losses in the vehicle.This paper aims to address a literature survey on the current state and future improvement trends for critical tribological components in EVs with regards to the friction loss reduction and durability enhancement.The review gives an understanding of the most recent achievements in terms of tribological solutions applied to the components and the identification of research gaps for further work and developments according to the improvement of efficiency of EVs.Although ICEVs use various EMs for different secondary operating functions, namely, engine running, fuel pumping, steering, etc., EVs use EMs for propulsion primarily.The efficiency of EMs is about three times superior to ICEs.As a reference, it can be considered the simplest and least efficient EM).It reaches 78% efficiency between 40 and 50 kW .Another of the most remarkable advantages of EMs in comparison to ICEs is that they do not produce soot through their operation.It contributes to the environmental care and no-contamination of lubricating oil with soot, which may extend the oil service life and avoid increase of oil viscosity.More than 100 different topologies of EMs can be found in the configuration of modern vehicles , the most popular being, by means of rotor topology, the DCM, the induction motor, the synchronous permanent-magnet motor, the reluctance motor and synchronous brushed motor, while by means of stator topology, the coreless machine, multiple phases power systems and the in-wheel motor .The rate of efficiency from 1 to 5 for different EM topologies is given in Table 1 .It suggests that SPMs present the highest efficiency while DCMs the lowest.Thus, considering other advantages, namely, high-power density, compact size, reliability, very low noise and minimum maintenance requirements , SPMs are the best current option for the traction in most modern EVs as considered by most automotive industries and researchers around the world .In general, the losses occurring in EVs can be divided into electrical, magnetic, mechanical and stray losses.Electrical and magnetic losses are the largest sources of loss .As a reference, Lukaszczyk reported the percentage distribution of the above losses occurring in a modern IM.Electrical losses being around 64.1%, magnetic losses around 26.7%, mechanical losses above 3.53% and stray losses being around 5.58%.Although mechanical losses only contribute slightly, they can be reduced even more to improve the efficiency and durability of EMs.Even a small improvement in mechanical losses could be significant if all EMs used in an EV were considered.Besides, considering the total amount of EVs expected to replace all ICEVs in the world, even the smallest reduction of mechanical losses could represent significant energy savings and financial losses over the world.Mechanical losses in EMs mainly surge from friction generated in the rotor bearings and/or by sliding of brushes with slip rings, in the case of EMs working with brushes, and from windage or cooling .The friction losses are proportional to the rotor speed generating heat, vibrations and wear.A significant effort to improve the maintenance of EMs has 
been carried out specifically in rolling bearing lubrication since about 40–60% of all early motor failures are ascribed to the bearings.Most bearing failures are a consequence of improper lubrication with the wrong grease or missing relubrication .Hence, the rotor rolling bearings and the brushes/slip rings can be considered as the critical tribological elements in EMs to be optimized further."Typically, two kinds of rolling bearings are used to allow rotation and relative motion of mechanical elements by producing minimal friction in ICEV's components , but also, they are being transferred to the architecture of EVs taking advantage of the advances achieved in automotive rolling bearing technology.Most of literature about automotive rolling bearings is focused on ICEV applications.In a common ICEV, nearly 100 rolling bearings are used, mainly tapered roller bearings, thrust needle bearings and wheel bearing hubs .Although the friction loss occurring in a single rolling bearing is considered as minimal, even a slight reduction of such friction can represent a decrease in energy consumption if the amount of the total bearings is considered .Friction loss of rolling bearings is expressed in terms of the torque required for rotating it.It is the sum of the torque generated by churning, sliding, rolling and/or seal sliding friction .Churning friction loss may occur only in the churning phase.It is expected to be minimal for a good channelling grease.Sliding friction is related to the viscous shear of the lubricant if it is capable to generate a full lubricant film.In the case that full film cannot be generated, boundary lubrication conditions may be present, but they are expected to be as low as possible.The rolling friction is very large sometimes.It is the result of the non-symmetric elasto-hydrodynamic lubricant pressure distribution and will vanish in the absence of lubrication.The friction loss by rolling resistance can be reduced considerably by using low viscosity oils or greases.Nevertheless, it promotes a decrease of bearing life due to reduction of the lubricant film and achieving BL conditions , which could be minimized by means of lubricant additive technology.The friction loss distribution in some rolling bearings can be seen in Fig. 2.In the first case, the largest source of friction loss is the rolling/viscous shear.This is ascribed to the large viscosity of gear oil.The second case exhibits the sliding as the largest source of friction loss.It is associated with the sliding occurred between rollers and the cage in this kind of bearings.The rolling/viscous shear is lower due to the low viscosity of ATFs.In the last case, the largest source of friction corresponds to that produced by sliding.Since the complete hub unit was considered for this case, it was found that the friction generated by the sliding of the dynamic seal was the largest source of loss.Also, the rolling/viscous shear represents considerable friction losses due to the high viscosity of grease.Technically, three methods can be carried out to reduce friction loss in rolling bearings.One is by changing the design of the bearing without changing the main dimensions.The second one is by downsizing the bearing to decrease torque and weight meanwhile the third one is by optimizing lubrication in terms of the lubricant and/or bearing materials .One example of a successful achievement for reducing friction losses in bearings has been reported in Ref. 
.The bearings developed reduced friction losses by at least 30% compared to standard bearings.They were designed for light-to-normal load applications such as EMs, pumps, gearboxes and conveyors.Further improvements in rolling bearing life and friction loss can be achieved by implementing advanced coatings and lubricant technologies .Coatings play a role to be protective layers to reduce friction coefficient and increase wear resistance.Coated rolling bearings have been reported to have a ten-fold increase in fatigue lifetime and a seven-fold decrease in wear .Different coatings are being considered for decreasing friction and wear, the most popular being physical vapour deposition and diamond-like carbon coatings .However, other options, namely, coatings of Cr–N, TiN, Ni–SiC, AlMgB14, MoS2, WC/Co, TiAlN, W–C:H, AlMgB14–TiB2; composite coatings with TiN, TiC, or TiB2 particles embedded in Si3N4 or SiC ceramic matrix, and various nanostructured coatings could be also effectively applied to increase life time .The advances in grease and lubricant technology can be found in the following Sections 2.1.3 and 2.2.2, respectively.The brush/slip ring assembly is the most conventional system used in DCMs for conduction of electrical power or signals between stationary and moving parts through a sliding electrical contact.This sliding contact comprises a complex combination of parameters that influence the tribological behaviour .The efficiency of a brush/slip ring assembly depends on the material properties and geometry of both the brush and slip ring, surface boundary film and environmental and operating conditions.High wear resistance, good electrical and heat conductivities, low electrical contact resistance and low friction can enable efficient, reliable and long-term operation of this tribo-system.The brush/slip ring assembly is considered as one of the most critical tribological parts of DCMs .The classic brush/slip ring assembly for EMs consists of stationary brushes made of graphite pressed against a metal rotating slip ring, typically made of copper or bronze .Improvements in terms of contact-material combinations and design for increasing the assembly lifetime have had great attention and development in the last few decades .However, longer lifetimes and lower friction are still required for new applications.The most recent developments, presenting substantially enhanced tribological properties, comprise the application of new contact-material combinations for the brush/slip rings, such as: graphite/graphite and brass fibre and coin-silver/Au plating .Although there has been a significant progress in the performance of brushes/slip ring assembly, further research is still required to reduce friction and wear without compromising electrical conductivity in the electric sliding contact, and thus increase the efficiency of EMs.A conventional grease used for lubrication of bearings, joints or gears is composed of a base oil and different additives used to promote grease thickening.In general, the selection of the base oil and thickener type is based on the required temperature operational range and compatibility with the polymeric elements to be lubricated.The main advantage of greases over oils is their consistency, which prevents the grease from leaking out from the mechanical component.So, the lubrication of bearings and gears with grease is easier than with oils.Thus, about 80–90% of rolling bearings are lubricated with grease .However, the foremost disadvantage of greases is limited 
lubricity life compromising the prediction of the combined performance of the grease and the lubricated mechanical element.The lubrication mechanisms with grease become much more complex than those using oils.The evolution of grease development can be seen in Fig. 3.A calcium grease consisting on a combination of lime and olive oil or animal fats were the first greases used by humanity to lubricate plain bearings for carriages.The first modern greases based on mineral oil were calcium soaps.Then, aluminium and sodium soaps replaced those greases because they could resist higher temperatures.New greases based on calcium, lithium and barium were developed in the 1930–1940s , lithium-based grease being the most widely used today .The following grease advancements were between the 1950s and 1990s, developing greases in the following order: Aluminium complex and PTFE-based, lithium complex-based, polyurea and calcium sulfonate-based, and polymer grease.The optimization of grease performance is based on the variation of the bleed rate, consistency, mechanical stability, base oil viscosity and oxidation performance to improve both grease and mechanical component life times, to reduce friction and noise, and to resist higher or lower temperatures.Currently, the challenge in tribology research is focused on the fundamental understanding of the lubrication mechanisms by using greases.Besides, the creation of new greases capable to resist more severe lubrication conditions, the development of more accurate predictive tools for grease and lubricated component performance, and the optimization of the grease flow in mechanical components are strategies in progress to enhance the performance of greased tribological components .Recently, the trend of grease research is based on applying nano-technology as reinforcement for different greases since lubricity and grease life has demonstrated noteworthy improvements .The use of synthetic base oils and titanium complex thickeners have been other effective solutions given to generate lower friction torque and higher service life and to resist higher temperatures.The development of biodegradable or eco-friendly greases is another current research trend topic due to increasing demands for environmental protection .Different research works have addressed developments in this matter .Although these greases have demonstrated better lubricity properties than greases based on mineral or synthetic oils, thermal stability is their principal drawback, but it can be improved by using different types of nano-particles or chemical modifications applied to the biodegradable base oil, as explained in Section 2.2.2 .Greases reinforced with different advanced polymers such as rubber , polypropylene and methylpentene have also demonstrated upgraded properties, in particular, for high speed bearings application .Overall, the most successful additives for greases are the MoS2, graphite, polarized-graphite, poly-isobutylenes, non-toxic bismuth and calcium sulfonate complex .All the above solutions could be potentially applied for developing more efficient greases for lubrication of components either for ICEVs or EVs.In a recent report for electric driveline challenges , tackiness of grease has been remarked as the most important property to be improved.Tackiness is a lubricant property that allows sticking and formation of long threads between two separating surfaces while grease redistributes itself between the surfaces.It provides the grease the ability of traveling from one 
surface to another and sticking appropriately to work efficiently in sliding applications.An EV generally consists of different subsystems forming a coordination among themselves to allow the EV work efficiently.There are multiple technologies which can be applied in EVs to make all the subsystems work together.MEMSs are one of the most popular technology applied in modern vehicles.Examples of the most used MEMS in hybrid vehicles and EVs are illustrated in Fig. 7.They can be defined as miniaturized mechanical and electro-mechanical elements that are made using techniques of microfabrication.The physical dimensions of MEMS can vary from less than one micron to several millimetres .The most common MEMSs applications are MEMS-based sensors for airbags and vehicle stability control systems.MEMS-based sensors convert a mechanical signal to an electrical signal producing friction and wear due to its operation.Accelerometers, gyroscopes, inclinometers, flow and pressure sensors, energy harvesters, oscillators, IR sensors, etc., are examples of modern automotive MEMS-based sensors .Although, MEMSs are currently having good acceptation in different applications, high level of stiction, dynamic friction and wear are restrictive factors for the widespread reliability of MEMSs for EVs .Due to the small scales of MEMSs components, the capillary force, Vander Waal, chemical bonding and electrostatic contributes to adhesion, which in turn significantly influences friction.To overcome such negative factors, self-assembled molecular coatings, hermetic packaging, the use of reactive materials in the package and surface modification have been demonstrated as effective solutions .Therefore, the enhancement of tribological performance of MEMS devices has been reached by increasing the lubricity and hydrophobicity of MEMS surfaces through reducing the intrinsic friction and adhesion forces using thin films, namely, self-assembled monolayers, ionic liquids and DLC coatings .In more recent investigations, the use of different lubricating films, such as: perfluoropolyether , multiply-alkylated cyclopentanes , triazine dithiol monosodium , eco-biodegradable lubricant based on hydroxypropyl methylcellulose and dual or multilayer coatings , have demonstrated superior tribological properties, in particular, to reduce friction in MEMSs.The dual film coating, involving the lubrication synergy of solid and liquid combination, has become the most popular design concept for increasing service life and load carrying capacity for MEMS devices .Finally, most of the tribological components identified for EVs are comprised in the infrastructure of modern and high performance ICEVs.Those components are being transferred and implemented in EVs to take advantage of the advances and improvements achieved to date.However, although all those tribological components have been effectively boosted for use in ICEVs, they still represent a challenge for optimization considering the particular operating conditions of EVs, which has been barely investigated.The current state and gaps in tribological optimizations for the critical components identified for EVs were addressed and discussed generally in this pioneer review considering the most remarkable published literature.However, further specific identification and quantification of research quality and gaps for each tribological element for EVs comprises a more extended specific review work, which would be very valuable and complementary for specific research topics.It could be 
appropriately carried out through paper grading exercises like that proposed in the scientific article reported by Ishizaka et al. .Power transmissions have been widely investigated and enhanced for the best performance of ICEVs.However, they are being currently optimized for enhancing performance of EVs.Typically, EVs are being basically configured according to the three different drivetrain systems showed in Fig. 4.They are the IWM system, the central motor equipped with a single-speed transmission and the central motor equipped with a multi-speed transmission .The first configuration presents higher efficiency and lower mass due to less moving parts generating lower rotational inertia and avoiding friction losses in gear and differential mechanism .However, IWM systems require high torque traction motors for accelerating the vehicle from zero speed, which reduces efficiency due to heat loss by high current flow needed for that purpose .Also, since IWMs have a low and limited top speed, they are preferred to be used in applications where performance in stop-and-run driving is prioritized over comfort, such as in-city driving and sport cars .For example, it is considered as the unbeatable configuration for solar car competitions .Another disadvantage of this type of drivetrain system is the excessive tyre wear and overuse produced due to tyre slip."Tyre slip is generated by the control inaccuracy to compensate the speed difference of the EV's wheels in curves .In a similar way to ICEs, the efficiency of EMs depends on torque and speed of working, having an optimum working condition and decay of efficiency out of the optimum condition.It depends on the type of EM involving design, structure, materials, weight, etc.Therefore, one approach to increase the vehicle driving range and top speed is by modifying the vehicle powertrain by incorporating a reduction gear box/differential .It provides the EV with the capacity to run at a reduced single-speed or at multi-speed for driving in both city and highway.However, it has a negative effect on the EV efficiency due to more moving parts producing higher friction loses and rotational inertia in the powertrain.The single-speed transmission has been demonstrated to be capable of providing a satisfied dynamic performance .The most common cost-effective solution given for EVs is the configuration comprising a central motor drive adapted to a single-speed transmission acting also as differential.It reduces the drivetrain volume, mass, losses and cost .Nonetheless, the use of multi-speed transmissions/differential in EVs have been demonstrated by different research groups to improve the overall efficiency of the powertrain although it is not necessary in terms of dynamic performance.The multi-speed transmission may enable the EM to operate in higher efficiency regions reducing energy consumption.There are different commercial options of multi-speed transmissions, namely, automated manual transmission , automatic transmission , dual clutch transmission and continuous variable transmission , which has been investigated to be implemented in EVs with expectation to become autonomous in a future.Considering the above transmission options for EVs, the 2-speed transmissions for all the cases have shown the best balance between the advantages of a multiple-speed transmission system and the simplicity of a compact and lightweight drivetrain .On the other hand, strategies by using dual motor input powertrains either coupled to planetary gear transmission or 
coupled to a parallel axle transmission can provide higher overall efficiency than EVs equipped with a single motor input powertrain .Currently, different options of structure and components for the drivetrain of EVs are being investigated and developed till reaching the best efficiency and performance.Discarding the IWM drivetrains, the rest of drivetrains use a transmission either a single-speed or multi-speed, which represent higher energy consumption due to higher friction losses.Tribological research solutions can be potentially addressed to reduce deficiencies in such transmissions by improving the tribological behaviour of the mechanical elements, such as rolling bearings, gears, synchronizers, dry or wet clutches, lubricating oils, etc., involved in each type of transmission.There are various geared devices comprised in an EV, for example, the transmission, steering system, MEMS, etc.The gears in the transmission can represent the largest sources of friction losses in EVs.As a reference, about 2.75% of the energy supplied in an ICEV equipped with a manual transmission is used to overcome friction produced in gears generating around 8% from the total friction losses in the vehicle .Hence, reducing friction losses in gears of geared devices, in the transmission particularly, the efficiency of an EV could possibly benefited.It is well known that low friction in gears is achieved by using lower viscosity gear oils and efficient additives .The common gear oil additives themselves are based on sulphur and phosphorus chemistry, but other chemicals are introduced into the additive package to provide oxidation stability, anti‐corrosion protection, copper compatibility and good seal compatibility.Boron compounds have been shown to improve the antiwear and extreme‐pressure properties of gear oils, as well as oxidation stability, thermal stability and detergency for gears .Recent developments have demonstrated remarkable enhancements in friction and wear reduction for gears by adding spherical alumina nanoparticles and carbon and graphene nanoparticles .More advancements in lubricating oils are given in Section 2.2.2.In addition, laser surface texturing and coating technology have been applied in gears exhibiting considerable improvement.Coatings made of double-glow plasma surface alloying W–Mo , WC/C, WC/C–CrN, DC and plasma nitriding are the most recent research works reporting on substantial improvement on tribological behaviour of gears.Therefore, further friction and wear reduction in gears for EVs could be reached by developing more advancements on each part of the gear tribo-system, namely, lubricants, additives, surface textures or coatings.However, it may have a larger impact if all those advancements are implemented and improved together.An EV has different components similar to ICEVs which are required to be lubricated with oils; the driveline being the most important.In contrast to oils required for drivelines from ICEVs, those for EV driveline require other critical properties to operate effectively.There are three essential aspects being considered for development of novel EV driveline oils which have not been included in current ICEV driveline oil standards : the first involves the ability of the oil to limit corrosion of copper elements, mainly copper wiring, and to be compatible with polymers used in electric and electronic components, namely, sensors, resins, contacts, etc.It also includes the development of standard methods to evaluate such properties in realistic EV 
driveline operating environments at high temperatures.The second aspect is to achieve extreme low viscosity meanwhile the third one is the improvement of electric properties.Hence, an effective oil for EV driveline will be expected to provide high performance in electrical compatibility, at extreme low viscosities, at speeds higher than 20 000 RPMs in an operating condition of sustained excursions at high temperatures .The best traditional lubricants used for ICEVs with the possibility to be employed in EV automotive components are made of mineral-based oils formulated with different additives to meet stringent requirements.Currently, lubricants made of synthetic-based oils have a great acceptance due to better lubricity and thermal and oxidative stability than mineral-based oils, promoting a more prolonged life.There is a huge published literature about further optimization of lubricating oils for different applications.The most notable achievements have been done by research focused on low-viscosity oils, vapour phase lubrication, ionic liquids, and nanotechnology-based anti-friction and anti-wear additives .The growing concern over environmental protection has aroused a global widespread alarm on substituting mineral-based products with eco-friendly goods obtained from alternative renewable sources.Thus, bio-lubricants made of different sources, namely, animal fats and vegetable oils have been subject of numerous investigations .Vegetable oils have been demonstrated as the most promising alternatives to substitute automotive mineral-based lubricants since they have wide arrays of physicochemical properties meeting with the most of current requirements for engine lubricants, hydraulic fluids, compressor oils and gear oils, even exhibiting very low friction coefficients.The prospects being considered for future automotive bio-based lubricants are those produced from non-edible oil seed crops, namely, Jatropha curcas, Calophyllum inophyllum, Pongamia pinnata, Hevea brasiliensis, Ricinus communis L., and Simmondsia chinensis.Microalgae which is considered as the third-generation feedstock, has become the latest potential source of bio-based lubricant .The greatest advantages of bio-lubricants are renewability, biodegradability and better lubricity than mineral-based oils meanwhile the principal drawbacks are low oxidative stability and poor low temperature characteristics which limit their full potential for wide-scale usage.Nonetheless, the addition of additives, nano-additives, emulsification, and chemical modification have been recently demonstrated to diminish up those deficiencies .However, it still the subject of further research, in particular, for EV driveline application.Bio-based ionic liquids are also one of the most recent progresses.For example, ionic liquids derived from environmental-friendly and halogen free sources and synthesized from various sources of bio-polymers such as proteins and carboxylic acids have gained great attention for development of bio-lubricants .Dynamic seals, either reciprocating or rotary seals, are used in different components of EVs, for instance, transmission, steering gear box, wheel bearings, actuators, etc.Their function is to generate an elasto-hydrodynamic lubricating film between the seal and shaft surfaces with relative motion while a pumping action from one-side toward the other-side of the seal is formed to prevent fluid leakages .Different types of dynamic seals made of different materials are commercially available for automotive 
applications.Rubber, ethylene-propylene-diene monomer, nitrile-butadiene rubber, neoprene, fluorelastomer, silicone rubber, Polytetrafluoroethylene are the most common materials employed to manufacture seals at present .The efficiency of a seal is mainly determined by wear resistance, low friction and sealing capacity with the short and long-term use under a wide range of temperatures.The performance is dependent of the compatibility existing between the seal and the lubricant involved .A poor compatibility generates swelling and degradation of the seal with time.Thus, both seal and lubricant should be selected or developed primarily by considering compatibility.Reducing friction in the sealing gap without compromising sealing capacity and durability of dynamic seals could be an important achievement for energy consumption in EVs.However, optimizations have been barely achieved perhaps due to the complexity of different parameters involved, namely, non-linear viscoelastic properties of the seal material, swelling and changes in the seal properties, and long time-consuming performance tests.Recently, the most outstanding optimizations in performance of dynamic seals have been obtained by applying surface texturing techniques either to shafts or seals .The seal materials exhibiting the lowest friction coefficients are PTFE, polyimide, and polyetheretherketone, PTFE presenting the lowest friction coefficient values .These materials could be considered for development of new seals options for components in EVs.Overall, reduction of friction and increase in durability of seals can be achieved by modifying surfaces texture, developing novel composite materials for seals and advanced lubricants.However, studies of compatibility under realistic working environments should be considered primarily in the case of developing new seal materials or lubricants.The function of the steering system is to convert the turning movement applied to the steering wheel by the driver into a change in the steering angle of the steered wheels.Simultaneously, it informs the driver, by means of the haptic feedback, of the current driving situation and the road conditions ."In EVs, the steering system cannot be considered as a source of electric energy consumption since it can be operated by human energy by using pure mechanical steering systems, which could be optimized in terms of comfort and durability by decreasing friction and wear of the sliding components involved in the system.However, considering the future trend of EVs to become autonomous, electric power assisted steering systems will be required."The IWM systems do not need an additional steering system since they are capable to control vehicle's steering by controlling the speed of each wheel independently enabling the steered wheels to turn by the difference of speed in each wheel.In contrast, the EVs having the other drivetrain systems mentioned above, EPASSs are the best option in terms of efficiency due to its on-demand system feature, which allow an operation only when the steered wheels are required to be turned ."Besides, it consumes about 5% of the vehicle's energy in comparison to the hydraulic power assisted steering systems that consumes almost 15% of the total vehicle energy by the continuous action of pumping even if the system is not used .Other advantages of EPASSs are the reduction of steering torque and the return-to-centre performance of the steering wheel when it is steered, giving accuracy, comfort and steering road feeling for driving 
.According to the research works reported in Refs. , the efficiency of EPASSs in terms of energy consumption can be enhanced further by implementing advanced control techniques.The most common mechanical configurations used for EPASSs in vehicles with independent suspensions of the front axle, which is the case of most of passenger EVs, are the rack-and-pinion gear and the worm gear, having mainly some of the configurations illustrated in Fig. 5, due to lower steering elasticity, less need for space, less weight of the full steering system and lower production costs .Conventional standard EPASSs developed for modern cars are based on a dependable mechanical coupling between the steering wheel and the steered wheels comprising electromechanical components that consume energy and reduce the overall EV efficiency.In general, the steering column unit, the servo unit) and the tie rods are the key components in these systems.Different designs and concepts have been proposed and developed by vehicle manufacturers to improve performance of EPASSs in terms of energy consumption, comfort and safety.For example, EPASSs by means of elliptical wave train engine, belt driven rack, epicyclical gear and by wire, well known as steering-by-wire.The last being the most recent advancement for modern vehicles .It can provide a much better power steering feel, quieter wheel steering, and steering on-demand by eliminating mechanical connection between the steering wheel and the rack.It makes it simpler and more efficient, so it is being highly considered for the development of a new generation of autonomous EVs .In general, the components generating energy losses by friction in EPASSs are the steering column comprising tribological elements such as rolling bearings and universal joints, the electric motor comprising also rolling bearings and brushes/slip rings, the reduction/transmission mechanism involving the pinion and rack gear, the rack guide yoke, the rack bushings, dynamic seals, belt and pulleys and tie rods.If steer-by-wire systems are considered, universal joints from the column are discarded.Hence, the efficiency improvement in terms of friction losses in steering systems could be focused on reducing friction and improving durability of the elements mentioned above or by implementing either IWM or steering by wire systems in EVs.A considerable part of the energy supplied to a vehicle is consumed due to rolling friction of tyres.As stated in the study developed by Holmberg and Erdemir on friction losses in EVs, about 41% from the total electric energy supplied to the EV is used to overcome rolling friction in the tyre-road contact due to hysteresis in the elastomeric tyre during driving.Thus, the efficiency of vehicles, either ICEVs or EVs, could be significantly furthered by reducing rolling friction force, which can be achieved by optimizing the tyre design, operating parameters and materials.There are different reviews dealing with the reduction of rolling resistance of tyres .One of the most effective and classic strategy to reduce rolling friction in tyres is increasing air pressure and monitoring it .However, reducing rolling friction in tyres can imply a decrease in time of acceleration and breaking due to increase in tyre slippage, which is negative in terms of comfort and safety.Therefore, the goal is to achieve low rolling friction, but high traction and breaking friction, which can be reached by modifying the viscoelastic modulus of the tyre material .Different approaches to reduce 
hysteresis loss of rubber compounds have been conducted broadly .They include optimizations in elastomers, reinforcing particles, and curative packages.The most significant results have been presented by reducing the loading level of reinforcing particles.Nevertheless, the treadwear, traction, and handling performance of tyres is negatively affected.The use of elastomers with low Tg has been effective to reduce the hysteresis of rubber compounds at high temperatures at the expense of wet grip performance.Also, functionalized polymers can provide the flexibility of enhancing the rolling resistance performance of tyres without compromising wet and dry traction.Nowadays, the dynamic properties of rubber compounds used for tyres is being improved by using different reinforcing particles through a variety of methods developed specifically as new tools for compounding engineers and scientists.The new methods can increase the rolling resistance performance without compromising, or even by improving, other performances.In addition, nanoparticle technology can be adopted and investigated also as a new tool to reduce energy consumption by rolling friction and reducing weight and wear of tyres .Some of the most successful nanotechnologies applied for the improvement of tyres are based on rubber nanoparticles, silica carbide, core/shell polymer nanoparticles, poly-poly nano-particles, polyhedral oligomeric silsesquioxanes, carbon nanotubes, graphene, aerogels, nano-diamond and fullerenes .Although these technologies have demonstrated noteworthy progresses, barriers, namely, high cost, unreliable production techniques and uncertainty over environment, health and safety risks limit the use of nanotechnology in the production of commercial tyres, which are future challenges for research.Conventional rolling bearings are used in the road wheel hubs, which are unavoidable components in any vehicle moved by wheels.They permit reduce rotational friction and support radial and axial loads caused by the wheels rotation.Although the wheel hub bearing has changed a great deal with the development of the automobile itself, it is still a source of friction losses and a component that is susceptible to wear and rolling contact fatigue, decreasing durability.Those friction losses and durability can be a key factor for the overall performance and efficiency of EVs.The improvements of these components have been mostly focused on ICEV applications."Nevertheless, considering the great advances raised in ICEV's wheel bearings and the EVs architecture, which is similar to ICEVs, the wheel bearing technology achieved till now is being transferred and adapted to EVs.There are two kinds of automotive hub bearings with different configuration and characteristics used nowadays: double-row tapered roller bearing and double-row angular contact ball bearing.The first is primarily used in the United States while the second is commonly implemented in vehicles from Japan and European countries .The bearings are installed in hubs packed with a specific grease on assembly and sealed with elastomeric lip seals .The development trend of these components is based on a flexible compact structure, convenient maintenance, weight, endurance at high temperatures and speeds, sealing capacity of the seal, reducing fretting and resisting rolling contact fatigue.The advances in the lubrication technology of the bearing hub is also a key parameter to improve performance.The challenges for development of bearing hub greases are based on 
requirements, such as reducing degradation, resisting high temperatures and shear stresses, water and leakage resistance, compatibility with elastomeric seals and reducing wear and friction.The advances in rolling bearings and grease technology are mentioned in above Sections 2.1.1 and 2.1.3, respectively.Constant-velocity joints are designed and used to transmit power between two shafts with some degree of axial misalignment .Overall, they are used to transmit power from the main driveshaft to the wheels in modern vehicles, including most of EVs.Most developments and advances of these elements for automotive application has been based on ICEVs operating conditions.So, there is a lack of literature about the performance and improvement of these elements under EV operating conditions.These joints consist of an array of rolling elements held by a retaining cage between two raceways.The entire assembly is packed with grease and sealed inside a rubber boot.The rolling elements that can be balls, needles or rollers, the raceway grooves and the cage interact by contact between each other to operate.If the shafts are aligned, the rolling elements are stationary.Otherwise, the rolling elements undergo a low oscillation reciprocating motion producing a high load on the ball race contact and a low velocity.It may be dominated by BL regime.The consequence of these operating parameters are relatively high friction and failures due to contact fatigue and wear of both rolling elements and raceway .However, the most common cause of failure in these mechanical elements is damage to the boot promoting loss of grease and starving the rolling elements from lubrication .The reduction of friction and increase of service-life of constant-velocity joints can be directly achieved by optimization of the rolling elements, lubricity and service-life of grease, but mostly by improving the durability of the boot .Generally, the kinetic energy of moving vehicles is dissipated in terms of friction, wear and heat of breaking materials to deaccelerate or stop the vehicle.Instead of dissipating such energy, it can be recovered to be used further in the vehicle, generating much higher energy consumption efficiency.KERSs, also known as regenerative brakes, are the most effective way to recover kinetic energy from breaking.Basically, they recover the kinetic energy of the vehicle and store it for using it on-demand, mainly, in the acceleration stage, in which much energy is needed.It is an element particularly used in most of EVs and hybrid vehicles for enhancing their energy consumption performance.The best option for energy storage from regenerative brakes is the flywheel .A flywheel stores energy in kinetic form in a rotating mass.The energy stored in a flywheel is proportional to the product of its moment of inertia times the square of its angular velocity.The energy stored per unit mass can be increased by increasing the angular velocity of the flywheel.Flywheels, as storage devices, have many advantages in comparison to other energy storage options, such as: batteries, compressed air, hydrogen and super-capacitators .Flywheels were first developed to recover the braking energy in race cars, but now they are being used for buses and some passenger cars, in particular, hybrid and EVs .Flywheel devices are commonly coupled to the mechanical transmission through a clutch system allowing regenerative braking and power augmentation.Examples of two typical driveline configurations including CVTs and flywheels can be seen in 
Fig. 6.Those configurations must be capable of accepting power during braking and/or from the primary power source, as well as delivering power to the vehicle for traction and/or auxiliary power loads by engaging or disengaging the clutch, respectively .The energy recovery, storage and delivering efficiency depends strongly on the efficiency of power transmission of the clutch system and friction losses in the flywheel bearings that allow the rotation of the mass.The flywheel systems have recently re-emerged as a promising application for energy storage and KERS due to significant improvements in materials and technology, such as composites , low-friction bearings , magnetic bearings , and power electronics and control techniques .However, reduction of friction losses in bearings and the increase of power transmission efficiency in the clutch are the current challenges in terms of tribological solutions.Superconducting magnetic bearings have been developed and implemented to reach zero values of friction losses in the rotation of flywheels for high energy storage applications, for example, renewable energy plants .However, the operating conditions, namely, cryogenic temperatures and vacuum, required for achieving superconductivity can be a restriction to implement magnetic bearings in automotive flywheels due to the complexity of devices, maintenance and cost.Vehicle air conditioning systems have achieved numerous improvements since the 1940′s.Many changes have been made to accommodate new vehicle designs, improve fuel efficiency, gain environmental acceptability, enhance passenger comfort, provide health benefits and increase passenger safety.Since the entrainment of EVs to the passenger vehicles market, ACSs were required to be implemented in commercial passenger EVs.Thus, the most efficient ACSs developed for ICEVs are being transferred and implemented in modern EVs.Currently, the most common used types of compressors for ACSs are based on configurations such as rotary piston, scroll and variable displacement.The last configuration being the most popular for automotive applications.Scroll compressors are quickly becoming as popular as reciprocating compressors because they do not have as many moving parts and are therefore more reliable.The performance improvement of ACSs has been majorly focused in the compressor configuration and development of more efficient refrigerant and lubricants .The refrigeration industry has moved away from chlorofluorocarbon-based refrigerants such as R12 and R22 to more eco-friendly refrigerants such as the hydrofluorocarbon-based refrigerants R134A, R410A and isobutene R600a.Moreover, the use of CO2 as refrigerant to replace harmful chorofluorocarbon and hydrofluorocarbon refrigerants has gained great interest in the last decade.Nevertheless, the tribology in CO2 environments is not yet very well understood and advances in this area are highly required.The compressor lubricants mainly employed are synthetic polyolester and polyalkylene glycol due to miscibility issues with refrigerants .Recently, nanotechnology has been applied to reinforce compressor lubricants.For instance, the addition of Al2O3 and CuO nanoparticles in compressor lubricants have demonstrated considerable improvements at certain concentrations.The introduction of alternative refrigerants and advanced lubricants to rise the energy efficiency of compressors has also claimed for the development or application of novel materials and coatings in sliding components in the compressor.It allows 
withstanding severe operating conditions while reducing wear and friction caused by the tendency of using smaller clearances and increased speeds in advanced compressors.The most recent studies have been aimed in using coatings made of polymers , WC/C , TiN and TiAlN , and DLC for sliding components.The friction generated by the operation of the ACS generates a considerable amount of friction losses which contributes to higher energy consumption in EVs.Therefore, the enhancement of the ACS efficiency in terms of tribology can be a significant way to minimize energy consumption.The windshield wipers are needed and required in both ICEVs and EVs to assure a better driver visibility.It is achieved by cleaning the frontal and rear window-glasses removing dirt particles under wiping action using a washer fluid.The wiper also helps to remove water excess and leaves a thin film of water on the windshield in case of raining .Wipers have a slender strip of a rubber compound with carbon filler material supported by 4–8 clips in an arm.Typically, the arm applies a load from 10 to 20 N to the clip hinge centre being distributed almost uniformly along the blade.The speed can vary from 0.5 to 1 m/s generating friction coefficients about 1.4 in dry conditions while 0.15–0.2 in wet conditions .The wiper blade, commonly made of rubber, is expected to work in three different environments: dry, wet and tacky.Dry is considered when glass is dry without any water.Wet condition corresponds to a lubricated condition with water or chemical liquid while tacky condition, also known as drying condition, corresponds to a transitory regime between wet and dry glass when the water is evaporated from the glass surface .In general, during operation, wiper blades are subjected to sliding friction, stick–slip, wear, and different environmental effects , mainly, UV radiation, changes in temperature and unusually interaction with acids from acid rain.Nowadays, there is a growing research interest in the lubrication of windscreen wipers.Considering the high levels of friction generated even in wet condition, it represents a considerable energy consumption source for EVs, which makes it an important issue for optimization.In some extent, the reports provided in Refs. 
contribute with the understanding and optimization of the lubrication and contact phenomena existing in operation of wiper blades by proposing some parameters, materials and techniques to reduce friction, vibrations, noise and wear.Nevertheless, this topic is still scarcely investigated, so it is considered a subject of further research.Moreover, the entire windshield wiper system involves two wiper blades attached to two arms, which are connected to a mechanism being moved by an EM.In the mechanism, the bars are linked through rolling bearings or ball joints to allow rotation or oscillation producing low friction.An EV can comprise either one or two wiper systems for only the front or both front and rear screens, respectively.Additionally, a pumping system, commonly an EM pump, for the washer fluid is included.So, windshield wiper systems encompass various tribological elements producing friction losses which can be improved."Those elements are the wiper blades, rolling bearings or ball joints, the EM, pump and the washer fluid's lubricity.Ball joints are the most commonly used type of joints in parallel manipulators for ICEVs, but friction force and clearance of this kind of joints affect considerably the functional performance of mechanisms.High performance suspension systems from ICEVs are being implemented into modern EVs to take advantage of the advances and developments achieved in suspension elements, namely, ball joints, shock absorbers, springs, etc.The ball joints are used to allow the vertical motion of suspension system and the rotational motion of steering system simultaneously.Some aspects of handling, steering feel and ride-comfort of a passenger vehicle can be attributed to the tribological performance of the steering and suspension components, including the ball joints.High friction force is required in ball joints to inhibit vibration, fluttering and shimmy of the steering and suspension systems to a certain extent.However, high friction increases the torque required for the initiation of motion causing a heavy handling without smoothness, unpleasant steering feel and higher energy consumption.Both the low friction for smooth sliding initiation and the high friction for inhibiting vibration are required in ball joints .A ball joint consists of a ball stud, a bearing, a bearing plug and housing, grease and a seal.The most followed route to obtain the friction required in ball joints has been attained by the improvement of lubricating greases.However, surface treatment and coatings solutions based on PTFE and DLC for the ball stud have been applied lately to extend service life and approach the appropriate friction in ball joints.The tribological performance of ball joints has been enhanced not only by modifying the ball stud, but also by modifying the bearing in terms of elasticity, surface roughness, resistance to temperature, and by increasing the durability of the boot .Shock absorbers play a crucial role in the driving quality and performance of an EV.A smooth and comfortable ride is provided by the attenuation of the energy transmitted from the wheels to the car body by the reciprocating work of shock absorbers that can be those passives, semi-actives or actives.In EVs, the level of interior sound and vibrations are more evident than ICEVs because EMs produce much lower vibrations and sounds than ICEs.Besides, EMs generate other type of noises from electromagnetism and current."Although this sound pressure level can be considered low, the interior sound quality is 
In addition, the squeak noise produced by the shock absorbers becomes apparent even in modern EVs, so it represents a concern for drivers and passengers. Squeak noise is perceived by drivers and passengers as a quality deficiency or even as an EV malfunction. Although there are different hydraulic and pneumatic shock absorbers, namely twin-tube, mono-tube, spool-valve and magnetic types, the most popular for passenger cars are the hydraulic twin-tube and mono-tube designs. They operate using spring-loaded check valves and orifices to control the flow of an oil/hydraulic fluid inside the tubes through an internal piston moving in reciprocating motion. A dynamic seal is used to retain the oil/hydraulic fluid inside the shock absorber tube and avoid leakage. With long-term use and high speeds, shock absorbers exhibit problems caused by the lateral friction between the piston rod and the dynamic seal, leading to reduced performance, noise generation due to wear of the piston rod and seal, and oil leakage. This can be addressed by reducing friction at the piston rod and seal interface using advanced oils/fluids, advanced seal designs and materials, and novel shock absorber hardware configurations. The current status of achievements in terms of tribological solutions applied to these components, together with the research gaps identified for further work and development, has been reviewed and compiled. The main conclusions derived from this work are listed below. The most critical tribological components for electric vehicles were classified into the electric motor, transmission, steering system, tires, wheel bearings, constant-velocity joints, kinetic energy recovery system/flywheel, comfort and safety devices, suspension and micro-electro-mechanical systems. The most efficient electric motor being investigated for application in modern electric vehicles is the synchronous permanent-magnet motor. The tribological elements to be optimized in future electric motors are the rolling bearings and, in some electric motor types, the brushes/slip rings. This may be achieved through the development of more efficient greases that meet eco-friendly demands, as well as new materials and coatings. In addition, a better understanding of the tribology of complex sliding interfaces combining grease lubrication and electrical contacts is needed for potential optimizations. In-wheel motor systems and multi-speed transmissions are the most attractive driveline options for modern electric vehicles, which aim to become autonomous in the future. In-wheel motor systems can be improved in the same way as electric motors, while multi-speed transmissions comprise critical tribological elements to be optimized further, such as gears and dynamic seals. The improvement trend for gears is toward more efficient oils meeting eco-friendly demands, surface texturing and advanced coatings, while the trend for dynamic seals is toward advanced seal materials, texturing of seal and shaft surfaces, and increased compatibility with oils and greases. In-wheel motor and electric power assisted steering systems are considered the best options for the steering system in modern electric vehicles. However, the electric power assisted steering system has more critical tribological elements, namely rolling bearings, universal joints, the pinion and rack gear, the rack guide yoke, the rack bushings, dynamic seals, the belt and pulleys and the tie rods, which can be enhanced via tribological solutions.
Steer-by-wire systems are the most recent approach to reducing the number of tribological elements in an electric power assisted steering system, but they are still under research. The tyre-road contact produces the largest friction losses in electric vehicles. Friction in tyres can be reduced by optimizing the tyre design, operating parameters and materials. Lately, nanotechnology has demonstrated significant reductions in tyre friction losses, but high cost, unreliable production techniques and uncertainty over environmental, health and safety risks limit its usage in the production of commercial tyres. The wheel bearings can be significantly optimized in terms of lower friction and higher durability by using advanced greases, improved rolling bearings and dynamic seals. The reduction of friction and the increase in service life of constant-velocity joints can be achieved by optimizing the rolling elements and the lubricity and service life of the grease, but mainly by improving the durability of the boot. The best option for energy storage in regenerative braking for electric vehicles is the flywheel. To improve efficiency in regenerative braking/flywheel systems, increasing torque transmission in the clutch and reducing friction in the rotation of the flywheel are required. Friction generated in the flywheel could be reduced by enhancing the tribological performance of the rolling bearings or by implementing superconducting magnetic bearings to approach zero friction. Air conditioning systems and windshield wiper systems are unavoidable critical components in electric vehicles. The most common and modern air conditioning systems for vehicles are based on reciprocating compressors, which can be enhanced by developing new and advanced refrigerants with nanotechnology, and by applying coatings and reinforced polymers to sliding elements. Windshield wiper systems could be optimized by improving the wiper blades, rolling bearings or ball joints, the electric motor, the pump and the washer fluid's lubricity. Ball joints and shock absorbers are the main tribological elements generating friction in electric vehicles. Ball joints can be improved by modifying the ball stud and the bearing in terms of elasticity, surface roughness and resistance to temperature, and by increasing the durability of the boot, while shock absorbers could be optimized by reducing friction at the piston rod and seal interface using advanced oils/fluids, seal designs and materials, and novel hardware configurations. Micro-electro-mechanical-system-based sensors for airbags and vehicle stability control systems are currently being implemented in modern electric vehicles. However, high levels of stiction, dynamic friction and wear are restrictive factors for the widespread reliability of micro-electro-mechanical systems. Different solid lubricating films have demonstrated noteworthy improvements in recent investigations, so they can be used for developing advanced micro-electro-mechanical system devices. Overall, further friction and wear reductions in the tribological components of electric vehicles could be reached by developing further advancements for each tribological element, but a larger impact may be achieved if the advancements currently reported for each element or component are implemented and improved jointly. This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
Considering the growing interest around the world in substituting internal combustion engine vehicles with highly efficient electric vehicles, this paper contributes a literature review of the current state and future improvement trends for the optimization of critical tribological components used in passenger electric vehicles. The review gives an understanding of the most recent achievements in terms of tribological solutions applied to the critical components and identifies research gaps for further developments and efficiency improvements in EVs through novel component designs, materials and lubricant technologies.
211
Effects of biophilic indoor environment on stress and anxiety recovery: A between-subjects experiment in virtual reality
Human health and well-being have been affected by the quality of environments that people live in.Urban living is usually related to long working hours, heavy workload, tight deadline and unsatisfied working environments.Concurrently, the risk of mental disorders has been increased in the population bearing psychosocial work stressors in their working environments.Previous studies have shown that city living and urban upbringing could affect people’s neural social stress processing and are associated with higher rates of psychosis, anxiety disorders and depression than those growing up in rural areas.Moreover, mental disorders have already become one of largest factors in global disease burden.Approximately one in five adults in the U.S. experienced mental illness, including anxiety and depression, which are often associated with, or triggered by, high level of stress.Better understanding of interventions that ameliorate stress and anxiety are needed given their negative consequences on human health.Contacting with outdoor natural elements, settings and process has become a frequently used approach to seek relief from stressful urban lives, which could be explained by people’s innate affinity with nature since we were primarily exposed to nature during our evolutionary process.Consensus has been reached that experience of natural environments are associated with increased psychological well-being and reduced risk factors of some types of mental illness.The effects of exposure to natural environments on restorative benefits have been explored through many pathways with two dominant theories from environmental psychology perspective: attention restoration theory and stress reduction theory.ART proposed that natural environments abound with “soft fascinations” could replenish people’s cognitive capacity and thus reduce their mental fatigue and increase their focus and attention.SRT suggested that exposure to nature activate our parasympathetic nervous system and facilitate the psychophysiological stress recovery because of our innate preference for natural environment developed through evolution.Although these two theories are debating the mechanisms of how nature affect human health, they both emphasized that exposure to natural environments could improve restoring capacities, including attention restoration and psychophysiological stress recovery.Nowadays, we are living in a rapidly urbanizing world where accessibility to nature is typically limited.Moreover, based on the statistic from the National Human Activity Pattern Survey, people spend almost 90% of their time indoors, which indicates the further disconnected from nature.In recent decades, biophilic design, stemming from the concept of biophilia, which hypothesizes human have innate connection with nature, has become a new approach to incorporate the positive experiences of nature into the design of the built environment.By bringing nature into living and working building spaces, people could increase their time and frequency of connecting with natural elements while being indoors.Recently, building evaluating system, such as the WELL, Living Building Challenge, and The 9 Foundations of a Healthy Building have listed biophilia into their design categories as a key element that can be implemented into the indoor environment to positively impact mood, sleep, stress levels and psychosocial status.In clinic settings, studies found that the inclusion of natural sounds, aromatherapy, green plants and views of nature into hospital interior 
spaces reduced mental stress, increased pain tolerance and shortened hospital stays.Generally, although the effect of biophilic design on psychological responses had been previously summarized, study investigating how it affects the physiological response in stress recovery process is limited, and less is known about how different elements of biophilic design contribute to these health and well-being outcomes.Research on exploring independent effect of these biophilic elements is important for both research purposes and future design practices.Previously, most of studies on assessing impacts of biophilic design elements were based on post-occupancy evaluation, one of the commonly used design evaluation methods.It is conducted by users after the completion of construction which prone to bias subjectively.A pre-occupancy evaluation, on the other hand, could intentionally evaluate people’s psychological and physiological responses to biophilic design and improve design strategies based on those responses prior to the construction.Virtual Reality provides us an innovative approach to achieve this goal.By using simulated indoor environments in a laboratory setting, we could control variables, such as size and layout of the spaces and indoor environment quality, whilst scripting different types of biophilic elements in a convenient way, to estimate the impact of a particular design strategy.Moreover, for patients who experience reduced mobility, VR natural environment could be used as therapy for improving their mental well-being during therapy.To contribute to the literature on restorative impact of biophilic indoor environment, this experimental study investigated effects of simulated biophilic indoor environments in VR on stress reaction and anxiety level in the recovery process following acute mental stressor.Our research hypotheses were: recovery from stress and anxiety would be greater after exposure to biophilic environments compared to that in non-biophilic environment; different biophilic environments have different impacts on physiological and psychological responses.We recruited 100 healthy adults to participate in this study via the Harvard Decision Science Lab recruitment system from October to December in 2018.All qualified participants were Harvard affiliated faculty, staff and students.We posted the brief information of this study without disclosing the study objectives in HDSL’s recruitment system to reduce the potential bias from self-selection.Participants voluntarily signed up for experiment with $15 compensation.Through the prescreening process, we excluded participants who self-reported that they took stress recovery medicine or therapy.The study was approved by the Institutional Review Board of Harvard T.H. 
Chan School of Public Health, and all participants signed the consent form before the experiment. We used a between-subjects design for this study for two main reasons. First, to test the restorative effects of biophilic environments, we needed to first increase participants' mental stress level. Using the stressor only once for each participant produces the optimal effect on stress increase and avoids the potential carry-over effects of a within-subject design. Second, we intended to minimize the time spent wearing the VR headset to avoid potential negative feelings such as nausea and headache. Therefore, all participants engaged in a pre-designed stressor in VR to induce mental stress and were then randomly assigned to explore one of four virtual indoor office settings: one non-biophilic base office and three similar offices enhanced with different biophilic design elements. To test participants' responses in offices with different biophilic design elements, we simulated four three-dimensional virtual offices in Rhino5 software in advance and rendered them in real time during the experiment using Unity software. We categorized the different biophilic design elements into two conditions, "indoor green" and "outdoor view", for two reasons. First, we considered two major types of office spaces: with and without windows. Second, we re-organized biophilic elements based on their tangibility. Specifically, in the indoor green condition we incorporated living walls and potted plants, water, natural materials and biomorphic shapes, which are frequently used in interior design practice, into the indoor space; the outdoor view condition presented a long-distance natural view of trees, grass, water and daylight through windows, which shared the same size and location as the living walls in the indoor green condition. In addition, we designed an office combining both conditions, referred to as "combination", and used a non-biophilic office as the control setting. We kept the same size and a similar layout for the four conditions to maximize comparability. Except for the biophilic design interventions, all four offices were identical in all other elements. We measured participants' acute stress reaction through physiological indicators, including heart rate variability, heart rate, skin conductance level and blood pressure. Specifically, the Movisens EcgMove3 was worn by participants on a chest belt and acquired raw single-channel electrocardiography data, from which secondary parameters such as HRV and HR were calculated. For HRV, we calculated a time-domain indicator, the root mean square of successive differences between normal heartbeats, and a frequency-domain indicator, the low-to-high-frequency ratio. A higher RMSSD indicates increased parasympathetic activity, which results in stress relief. The LF/HF ratio is the ratio between the low-frequency band power and the high-frequency band power, which estimates the balance between parasympathetic and sympathetic nervous activity, with a low value indicating parasympathetic dominance. The HRV indicators were calculated internally every 30 s, the minimum time interval over which this sensor can calculate HRV. The HR output was the mean heart rate for each 30-s interval. The Movisens EdaMove3, worn on the left wrist, collected SCL data to reflect electro-dermal activity. SCL changes are caused by sweat gland secretions, which are controlled by sympathetic nervous system activity.
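As an illustration of how the time-domain HRV indicator described above is obtained, the snippet below computes RMSSD from a series of normal-to-normal (RR) intervals. This is a generic sketch and not the sensor's internal algorithm; the interval values and the single 30-second window are assumptions made for the example.

```python
import numpy as np

def rmssd(rr_intervals_ms: np.ndarray) -> float:
    """Root mean square of successive differences between normal heartbeats (ms)."""
    diffs = np.diff(rr_intervals_ms)           # successive RR differences
    return float(np.sqrt(np.mean(diffs ** 2)))

# Hypothetical RR intervals (ms) covering roughly one 30-second window.
rr = np.array([820, 810, 835, 845, 830, 815, 825, 840, 850, 830,
               820, 835, 845, 825, 815, 830, 840, 835, 820, 830,
               845, 850, 835, 825, 815, 830, 840, 835, 825, 820,
               830, 845, 835, 825, 815])

print(f"RMSSD: {rmssd(rr):.1f} ms")  # higher values indicate greater parasympathetic activity
# The LF/HF ratio would additionally require a power spectral density estimate of the RR series.
```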
To match the 30-s output interval of the ECG sensor, the EDA sensor also averaged the SCL data every 30 s. The Omron EVOLV wireless upper-arm blood pressure monitor was used to measure systolic and diastolic blood pressure. BP was measured at three timepoints: baseline, after the stressor induction tasks and after the 6-minute recovery period. Additionally, we measured the psychological indicator of anxiety level using the six-item short form of the State-Trait Anxiety Inventory. This short version consists of six questions and has been shown to yield a mean score similar to that of the full form of the STAI, which includes 20 questions on anxiety state. Test-retest reliability was maximized by preparing two test versions with different questions selected from the full STAI and administering them in random order for the pre- and post-recovery measures. Each short-version STAI included three anxiety-positive questions and three anxiety-negative questions. Items asked participants how they felt at that moment and were rated on a four-level scale; anxiety-positive questions were rated from one to four with higher scores indicating greater anxiety, and vice versa for anxiety-negative questions. The mean score of the six questions indicated the degree of anxiety. All experiments were conducted in the Harvard Decision Science Lab. The indoor environmental quality of the experimental settings, including temperature, relative humidity, CO2 and PM2.5 concentrations, was monitored using a real-time sensor package from Academia Sinica. Specifically, temperature and relative humidity were measured by an HTU21d sensor, CO2 by a SenseAir S8 sensor, and PM2.5 by a Plantower 5003 sensor. These IEQ indicators were collected every 5 min. The experiment included three parts: preparation and baseline, stressor, and recovery. In the preparation and baseline period, participants signed the informed written consent. Then, they put on the HTC Vive VR headset and bio-monitoring sensors with the assistance of research staff. After that, participants were given a five-minute break, and their baseline physiological measurements were recorded at the end of the rest. In the stressor period, participants were exposed to a virtual office with untidy conditions and background noise from traffic, machinery and household appliances. They were instructed to complete two stress-induction tasks. In the two-minute memory task, a series of three-digit numbers was displayed one after another on the screen of a virtual computer in VR. Each number was shown for only one second. After each series of numbers, participants had 20 s to put the numbers in the correct order. Each participant performed this task for four rounds, with the amount of numbers increasing from four to ten in increments of two. In the five-minute arithmetic task, participants were asked to keep counting backward from a random four-digit number in steps of a random two-digit number. During these two tasks, to keep participants alert, they were informed that they would be carefully monitored by the research staff and that a buzzer would sound when incorrect answers were given. After completing these stress-induction tasks, participants were given the pre-recovery blood pressure measurement and a short-version STAI. Then, they were randomly assigned to experience one of the virtual offices for a six-minute recovery, slightly longer than the 5 minutes shown in previous studies to be sufficient for inducing a restorative effect.
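A minimal sketch of the STAI-6 scoring described earlier in this section is given below. The item labels and responses are hypothetical placeholders; the reverse-scoring of the three anxiety-negative items and the use of the mean of the six items follow the description in the text.

```python
# Score a six-item short-form STAI response.
# Each item is rated 1-4; anxiety-negative items are reverse-scored
# so that higher scores always indicate greater anxiety.

ANXIETY_POSITIVE = {"tense", "upset", "worried"}      # hypothetical item labels
ANXIETY_NEGATIVE = {"calm", "relaxed", "content"}     # hypothetical item labels

def score_stai6(responses: dict) -> float:
    """Return the mean anxiety score (1-4) across the six items."""
    scores = []
    for item, rating in responses.items():
        if not 1 <= rating <= 4:
            raise ValueError(f"rating for {item!r} must be between 1 and 4")
        if item in ANXIETY_NEGATIVE:
            rating = 5 - rating       # reverse-score anxiety-negative items
        scores.append(rating)
    return sum(scores) / len(scores)

example = {"tense": 3, "upset": 2, "worried": 3, "calm": 2, "relaxed": 1, "content": 2}
print(round(score_stai6(example), 2))  # 3.0 -> relatively high state anxiety
```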
Participants could walk around and observe the indoor space freely for the first four minutes and then sat on a revolving chair and kept looking around for the remaining two minutes. After that, post-recovery blood pressure was measured and a STAI was administered again. Finally, all devices were removed and participants completed an online survey about their demographic information, general health condition, caffeinated beverage consumption and sleep quality the night before, and stress level. We administered these questions at the end of the experiment to avoid disclosing the study objective, which was to compare the stress recovery of participants among different indoor environments. The whole experiment lasted around 45 min. To test the effectiveness of randomization, we conducted ANOVA to test whether IEQ, baseline physiological measures, and stress and anxiety levels after the stressor were similar among the four conditions. A two-sided alpha level of 0.05 was used to determine statistical significance. To test the effectiveness of the stressor, we conducted paired t-tests, or Wilcoxon signed-rank tests if the observed variable was not normally distributed, on the pre- and post-stressor physiological measures to determine whether participants' physiological stress levels after the stress-induction tasks were significantly higher than their baseline measures. A one-sided alpha level of 0.05 was used to determine statistical significance. To better understand participants' physiological responses within the six-minute recovery process, we extended the mixed effect model to compare the effect of biophilic environments on the recovery rate of continuous outcome variables in each two-minute interval, representing the start, middle and end stages. Results are presented in four sections. First, we report the demographic information and test the baseline differences in demographics, IEQ and physiological measures to confirm the effectiveness of randomization. We also test the post-stressor differences in physiological measures to ensure there were no pre-recovery group differences. Second, we examine the effects of biophilic environments on pre-post changes for momentary measures. Third, we explore the same effects on recovery rates for continuous measures. Finally, we investigate those effects on time to complete recovery for the continuous measures. The overall characteristics of the 100 participants and the characteristics of the four conditions after randomization, in terms of demographics and the indoor environmental quality of their visits, are presented in Table 1. Participants had an average age of 29.2 ± 11.8 years; 63% were female and 41% were white. 81% of participants self-reported very good or excellent health conditions. 75% of participants reported good sleep and 44% had consumed a caffeinated beverage before they came to the experiment. Most were not stressed, and the average self-reported stress level was 2.2 ± 0.9. The indoor environmental quality was consistent during the experimental periods. For example, the average PM2.5, CO2, temperature and relative humidity were 0.3 ± 0.6 μg/m3, 716 ± 121 ppm, 21.3 ± 1.2 °C, and 36.7 ± 10.0%, respectively. There were no statistically significant differences in demographics and most IEQ indicators among the four conditions after randomization. One exception was that the average relative humidity was lower in the non-biophilic environment than in the biophilic environments. In addition, the baseline physiological measures were similar among the four conditions, with no significant differences. The absence of differences across baseline measures among the four groups indicated that the randomization was successful.
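The pre-post stressor checks and the recovery-rate comparison described above can be illustrated with standard Python tooling. The sketch below uses simulated data and hypothetical column names, not the study data; it shows a paired t-test (falling back to a Wilcoxon signed-rank test when normality is rejected) and a linear mixed-effects model for recovery rate, which is one plausible form of the mixed effect model mentioned in the text.

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Hypothetical pre/post-stressor heart rate for 100 participants.
pre = rng.normal(70, 8, 100)
post = pre + rng.normal(6, 4, 100)   # stressor raises HR on average

# Paired comparison: t-test if the differences look normal, Wilcoxon otherwise.
diff = post - pre
if stats.shapiro(diff).pvalue > 0.05:
    stat, p = stats.ttest_rel(post, pre)
else:
    stat, p = stats.wilcoxon(post, pre)
print(f"pre-post stressor test: stat={stat:.2f}, one-sided p={p / 2:.4f}")

# Mixed-effects model for recovery rate: outcome ~ time * condition,
# with a random intercept per participant (hypothetical long-format data).
long = pd.DataFrame({
    "subject": np.repeat(np.arange(100), 12),
    "time_min": np.tile(np.arange(0.5, 6.5, 0.5), 100),
    "condition": np.repeat(rng.choice(["control", "indoor_green",
                                       "outdoor_view", "combination"], 100), 12),
})
long["hr"] = 80 - 1.5 * long["time_min"] + rng.normal(0, 3, len(long))

model = smf.mixedlm("hr ~ time_min * condition", long, groups=long["subject"]).fit()
print(model.summary())
```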
Participants' mean and median physiological and psychological measures among the four groups at baseline, pre-recovery and post-recovery are shown in Fig. 3 and Table S3. Our results from paired t-tests and Wilcoxon signed-rank tests suggest that participants' physiological stress levels increased significantly after experiencing the stressor. In addition, our ANOVA results suggest that the effect sizes for between-group differences in BP, STAI, SCL, HR, and HRV were not significant. Therefore, there were no significant differences in stress and anxiety level after the stressor among the four groups. Compared to the non-biophilic environment, participants in biophilic environments had consistently greater decreases in both systolic and diastolic blood pressure during the recovery process. Specifically, the indoor green, outdoor view and combination conditions were associated with 3.1, 1.0 and 1.3 mmHg greater decreases in SBP and 4.5, 3.9 and 1.2 mmHg greater decreases in DBP, respectively. In general, participants reported lower STAI scores after recovery than before recovery in all four conditions, indicating that they recovered from anxiety. Compared with the decrease in STAI scores in the non-biophilic environment, participants in the outdoor view and combination conditions had 0.4 and 0.3 greater decreases in STAI score, respectively, reaching borderline significance. However, the difference in STAI decrease between the indoor green condition and the non-biophilic environment was close to the null and not statistically significant. Estimated differences in the mean recovery rate of HR, and percentage changes in the geometric mean recovery rate of HRV/SCL, in biophilic environments versus the non-biophilic environment during the 6-minute recovery period are shown in Fig. 4. We assumed a linear recovery rate during the recovery process to compare the overall restorative effect between biophilic and non-biophilic environments. We found that participants in biophilic environments had faster rates of RMSSD increase during the recovery process compared to the rate of change in the non-biophilic environment. In particular, the geometric mean increase rate of RMSSD was 2.1% faster in the indoor green condition, suggesting significantly better stress recovery in this environment. In addition, we found that the relative effect on RMSSD differed across the three stages of the recovery process. Specifically, in the middle stage, the geometric mean increase rate of RMSSD was 4.7% faster in the indoor green condition and 4.3% faster in the outdoor view condition, respectively. However, we did not find significant differences in the recovery rates of the LF/HF ratio, HR and SCL between biophilic and non-biophilic environments. Estimated hazard ratios of complete recovery for the physiological measures in biophilic environments compared to the non-biophilic environment over the 6-minute recovery period are shown in Fig. 5.
Since the SCL measure in all four groups remained stable during the recovery period rather than returning to baseline, we excluded it from the time-to-event analysis. After excluding participants whose stress level did not increase after the stressor, we had n = 70, n = 45 and n = 63 in the Cox models for HR, RMSSD and LF/HF ratio, respectively. The hazard ratios of complete recovery for HR in biophilic environments were all larger than 1, and significant in the indoor green and combination conditions. These correspond to a 70% and 72% chance of participants achieving complete recovery of HR first in the indoor green and combination conditions, respectively. We observed a similar trend for the RMSSD measure in the indoor green and combination conditions. However, we did not find significant hazard ratios for the LF/HF ratio in biophilic environments. These results suggest that, throughout the recovery period, participants in biophilic environments recovered faster. In this study, 100 participants were randomly assigned to explore one of four virtual indoor environments: one non-biophilic base office and three similar offices enhanced with different biophilic design elements, termed indoor green, outdoor view and combination, respectively. Overall, our results strongly support our first hypothesis that participants in biophilic environments had consistently better post-stress restorative responses in terms of physiological stress level and psychological anxiety level compared to those in the non-biophilic environment. Although not statistically significant, these restorative effects differed among the three types of indoor biophilic environment, with the indoor green condition facilitating physiological stress recovery more and the outdoor view condition affecting anxiety reduction more. For most physiological and psychological measures, the effects of the combination condition lay between those of the indoor green and outdoor view conditions, although the differences were not significant. Within the recovery period, we also found that the biophilic environments had the largest effect on reducing physiological stress in the first four minutes of the six-minute recovery process. Our physiological results from the linear model, mixed effect model and Cox model indicated that participants in biophilic environments had consistently better recovery from stress. This finding is consistent with previous studies, which found that physiological recovery was faster and more complete when people viewed natural rather than urban environments through videotapes. Our findings also indicate that the biophilic environments, especially the indoor green condition, improved participants' blood pressure, which is partially consistent with our previous finding that visual exposure to indoor biophilic environments could improve participants' blood pressure, especially diastolic blood pressure. Consistently, a systematic review also noted that outdoor greenspace exposure, rather than indoor biophilic environments, was associated with decreased diastolic and systolic blood pressure. Recently, a randomized controlled experiment testing the restorative impact of views of a school landscape suggested that window views of green landscapes significantly increased students' recovery from a stressful experience, as measured by their short-term HRV and SCL. Our findings on the physiological responses during the restoration process are in accordance with the stress recovery theory, which suggests that viewing natural environments can reduce physiological stress and aversive emotion because we evolved an innate preference for such environments.
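The time-to-complete-recovery analysis summarized above can be sketched with the lifelines package. The data frame below is simulated and the variable names are hypothetical (we do not have the study data); it simply illustrates fitting a Cox proportional hazards model in which the event is a participant's physiological measure returning to baseline within the six-minute window, with censoring for those who did not recover.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 70  # e.g. the participants retained for the HR analysis

# Hypothetical time-to-recovery data: seconds until HR returned to baseline,
# censored at 360 s (the end of the 6-minute recovery period).
levels = ["control", "indoor_green", "outdoor_view", "combination"]
condition = rng.choice(levels, n)
time_to_recovery = rng.exponential(240, n)

df = pd.DataFrame({
    "duration": np.minimum(time_to_recovery, 360.0),
    "recovered": (time_to_recovery <= 360.0).astype(int),
})
# Dummy-code the biophilic conditions with the non-biophilic control as reference.
dummies = pd.get_dummies(pd.Categorical(condition, categories=levels),
                         prefix="cond", drop_first=True).astype(float)
df = pd.concat([df, dummies], axis=1)

cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="recovered")
cph.print_summary()  # hazard ratios > 1 would indicate faster recovery than the control office
```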
Moreover, the four physiological measures reflect activity in different bodily systems, all related to the autonomic nervous system. The consistent trends across these physiological responses strengthen the SRT claim that biophilic environments can help reduce physiological stress levels. In addition, we can observe from Fig. 3 that the mean values of most physiological indicators of post-recovery stress returned to, or even fell below, their baseline measures for participants in biophilic environments, indicating complete recovery. One exception is SCL, which remained stable and did not recover to baseline in any of the four environments, indicating that sympathetic nervous system activity was still elevated during the 6-minute recovery period and that more time may be needed for its recovery. Further, the hazard ratios derived from the Cox model suggested that participants' HR and RMSSD recovered faster in the biophilic environments, which provides evidence from another perspective that biophilic environments can promote restoration. In this study, we found different restorative effects on physiological stress indicators and anxiety level among the three indoor biophilic environments. The indoor green condition had greater effects on reducing physiological stress than the other conditions. Previous studies found that indoor green plants in working environments reduced stress and increased overall well-being. Indoor spaces with plants can improve human attitudes, behaviors and physiological responses. The outdoor view condition had a better effect on reducing anxiety, followed by the combination condition. Views of landscapes have more complexity than indoor environments, and the glimpse of the world offered by a window view can quickly transport one's attention. This result agrees with many studies showing that views of green spaces improved work performance, increased students' recovery from stressful experiences and correlated with employees' satisfaction and stress reduction. The major difference between the indoor green and outdoor view conditions is the type of biophilic elements inside each environment. Specifically, the indoor green condition contained uniquely tangible items, such as green plants, wooden materials and a fish tank, while the outdoor view condition incorporated intangible items, such as large windows with natural light and views of trees and water. Our results indicate that indoor biophilic elements facilitate the recovery of physiological stress, and a window with an outdoor view and light facilitates the recovery of anxiety. The results from the combination condition strengthen this argument, since it had a moderate effect on improving both physiological stress level and psychological anxiety level. Physiological monitoring of participants to assess stress and anxiety while they experience three-dimensional simulated virtual environments in VR is an innovative approach. Compared to traditional 2-D video and pictures, a 3-D simulated virtual environment provides a more immersive experience. First, using VR simulations, we could control the design elements of the indoor environment. Secondly, the randomized between-subject design reduces confounding factors. Thirdly, the large sample size and balanced design allowed us to distinguish effects among the different biophilic designs. Fourthly, consistent results were obtained across the multiple statistical approaches applied. Our study has a few limitations.
There is always the criticism that VR simulations are not "real world" conditions in which other sensory stimulations are experienced. Studies have found that the stress recovery process is also related to auditory and olfactory stimuli, thermal comfort, and people's interaction with their surroundings. The VR simulations in this study did not include these factors, which would be present in reality. As a counterpoint, however, using VR allows us to isolate and study specific pathways that studies in the real world may not be able to isolate due to the complex mix and pattern of other sensory factors. In addition, our previous research showed consistent physiological and cognitive responses to biophilic interventions when participants experienced them in the real world as well as in VR. Second, we did not measure changes in affective state or mood, which may be an important mediator in the pathway between exposure to biophilic environments and reduced stress and anxiety. Third, our studies should be extended to indoor environments other than offices. It is our intention to apply VR simulations to other indoor settings including assisted living, health care, hospitals, classrooms, hospitality, and retail. In this between-subject experiment with 100 participants, we combined virtual reality and wearable biomonitoring sensors to test the restorative effect of biophilic elements on stress and anxiety. Generally, biophilic environments had larger restorative impacts than the non-biophilic environment in terms of reducing physiological stress and psychological anxiety levels. Additionally, the restorative effects differed among the three types of indoor biophilic environment, with indoor biophilic elements facilitating the recovery of physiological stress and a window with daylight and an outdoor view of natural environments facilitating the recovery of anxiety. This research demonstrates a tool for architects, interior designers and developers to better understand human-environment interaction in pre-occupancy building evaluation and to aid in selecting biophilic design features to reduce stress and anxiety. Additionally, it provides evidence on the restorative effects of biophilic design in indoor environments and demonstrates the potential of virtual reality as a way to bring nature and its therapeutic benefits to people who cannot get out to experience it firsthand, such as patients in hospitals. Jie Yin: Conceptualization, Methodology, Software, Formal analysis, Investigation, Data curation, Writing - original draft, Project administration, Funding acquisition. Jing Yuan: Methodology, Software, Formal analysis, Investigation, Data curation, Writing - original draft, Visualization. Nastaran Arfaei: Methodology, Software, Writing - review & editing, Visualization. Paul J. Catalano: Methodology, Writing - review & editing. Joseph G. Allen: Conceptualization, Writing - review & editing, Supervision. John D. Spengler: Conceptualization, Writing - review & editing, Supervision.
Previous research has demonstrated positive associations between outdoor nature contact and stress reduction. However, similar effects of incorporating natural elements into the indoor environment (i.e. biophilic design) have been less well studied. We hypothesize that exposure to biophilic indoor environments helps people recover from stress and anxiety and that those effects differ among different types of biophilic elements. To test these hypotheses, we conducted a between-subjects experiment with 100 participants using virtual reality (VR). Participants were randomly assigned to experience one of four virtual offices (i.e. one non-biophilic base office and three similar offices enhanced with different biophilic design elements) after stressor tasks. Their physiological indicators of stress reaction, including heart rate variability, heart rate, skin conductance level and blood pressure, were measured by bio-monitoring sensors. Their anxiety level was measured using the State-Trait Anxiety Inventory (short version). We found that participants in biophilic indoor environments had consistently better recovery responses after the stressor compared to those in the non-biophilic environment, in terms of reductions in stress and anxiety. Effects on physiological responses were immediate after exposure to biophilic environments, with the largest impacts in the first four minutes of the 6-minute recovery process. Additionally, these restorative effects differed among the three types of indoor biophilic environment. This research provides evidence that biophilic design elements impact stress recovery and anxiety. It also demonstrates the potential of virtual reality as a way to bring nature and its therapeutic benefits to patients in hospitals.
212
The distress thermometer as a prognostic tool for one-year survival among patients with lung cancer
Lung cancer is the second most common and the deadliest cancer worldwide. It constitutes approximately 14 percent of all cancer diagnoses and 27 percent of all cancer deaths. Most patients are diagnosed with either locally advanced or metastatic disease and are often faced with treatment-related toxicities and side-effects. These factors contribute to a poor prognosis, high levels of distress, and a lower quality of life among patients and their caregivers. Despite this poor prognosis and limited survival, many patients with lung cancer receive aggressive treatments near the end of their life. Discussions of the rationale for such treatments or of the patient's goals and values either happen late in the disease course or are of insufficient quality. Moreover, it may be difficult to accurately determine a patient's prognosis due to the unpredictability of the disease course. Indeed, previous work shows that current prognostic predictions by clinicians are frequently inadequate and largely based on disease-related characteristics. Recent studies have thus suggested that the addition of patient-reported outcome measures to such predictions can be useful to better approximate a patient's prognosis. Use and subsequent discussion of such measures also leads to better symptom control, increased use of supportive care facilities or measures, and enhanced patient satisfaction. A PROM has been defined as "a measurement of any aspect of a patient's health status that comes directly from the patient". International and consensus-based guidelines advocate the routine use of PROMs as an integral component of high-quality cancer care. To date, however, these measures are only sparsely incorporated in clinical care for patients with cancer. One example of a possibly useful rapid assessment tool is the Distress Thermometer. The DT is a single-item, visual analogue scale that can be immediately interpreted to rule out elevated levels of distress in patients with cancer. The prognostic value of this tool in terms of survival has not been confirmed among patients with lung cancer. To this end, we sought to investigate the prognostic value of the DT when combined with sociodemographic and clinical predictors to assess one-year survival in patients with lung cancer. We also compared this model to models that included quality of life or symptoms of anxiety and depression. This study represents a secondary analysis of data obtained from a randomised controlled trial evaluating the effects of screening for distress using the DT and the associated Problem List, with additional supportive care measures offered to those in need of such care. That study detailed the effects of this intervention on QoL, mood, patient satisfaction, and end-of-life care. The primary results of the trial are detailed elsewhere. The RCT was conducted at the University Medical Center Groningen among patients with newly diagnosed or recurrent lung cancer starting systemic treatment. Randomisation, data collection and management were performed by the Netherlands Comprehensive Cancer Organization. The study was approved by the institutional Medical Ethics Committee. In short, patients were included within a week after the start of systemic therapy and subsequently randomized in a 1:1 ratio to either the intervention group or the control group. Only patients assigned to the intervention group were invited to complete the DT and PL prior to their scheduled outpatient visit. Depending on the DT score, the type of problems identified, and/or the patient's referral wish, responses were discussed with a nurse practitioner specialized in psychosocial issues.
Patients were subsequently offered referral to an appropriate and licensed professional. Patients assigned to the control group were not routinely screened for distress and did not complete the DT and PL. They received care as usual, as determined by the treating clinician. The primary outcome was the mean change in the EORTC-QLQ-C30 global QoL score between 1 and 25 weeks. Between 1 January 2010 and 30 June 2013, 223 patients were enrolled in the trial. All patients had received a histological diagnosis of any type of lung cancer, had an Eastern Cooperative Oncology Group performance status of 0, 1 or 2, had to start a form of systemic treatment, were without cognitive impairment, and were able to complete questionnaires in Dutch. Systemic treatment was defined as treatment with chemotherapy, adjuvant chemotherapy, chemo-radiotherapy, or treatment with biologicals. Of the patients included, 110 were randomized to the intervention arm. These patients were asked to complete the DT and were therefore included in the current analyses. Sociodemographic characteristics were obtained from the hospital's electronic health record at study entry, as were clinical characteristics detailing histological tumour type, performance status, recurrent versus new diagnosis, disease stage, initial type of treatment, and the Charlson age-adjusted co-morbidity index. Date of death was recorded from the electronic health record up to one year after randomisation. The DT is an extensively validated measure to screen for distress. It consists of a single-item, visual analogue scale with a score ranging from 0 to 10 and is completed by the patient to quantify the level of distress experienced in the past week. A score on the DT below either four or five, depending on the country and setting, has been proposed as the optimal cut-off to rule out significant distress in patients with cancer. An optimal cut-off value of five was observed among Dutch patients with cancer and was therefore used in the current study. We did not use data obtained through the Problem List in these analyses. All patients also completed the EORTC-QLQ-C30 to assess health-related QoL and the Hospital Anxiety and Depression Scale to assess mood. Scores on the EORTC-QLQ-C30 may range from 0 to 100, with higher scores reflecting better QoL. We only used the global QoL subscale in the current study as a best approximation to generic QoL. The HADS assesses symptoms of anxiety and depression over the past week, with each item scored from 0 to 3. It consists of 14 questions, and the anxiety and depression subscale scores each range from 0 to 21, with higher scores indicating more symptoms of anxiety or depression. All PROMs were completed after patients were randomised but within a week after the start of systemic therapy. Candidate predictors for one-year survival were selected based on the literature as well as expert opinion and the availability of such predictors in clinical settings. We selected the following five clinical or demographic predictors to be included in the model: 1) gender; 2) performance status; 3) disease stage; 4) the Charlson age-adjusted comorbidity index; and 5) tumour histology. To characterize the study population, descriptive statistics were used to evaluate the frequencies, means, and standard deviations for all sociodemographic and clinical characteristics as well as the other study measures at study entry.
Patients with significant distress were compared with those without significant distress using independent t-tests and Chi-square tests. The one-year survival of patients with and without significant distress was compared with the log-rank test and illustrated with a Kaplan-Meier curve. Statistical tests were performed with two-sided alternatives and considered significant if P ≤ 0.05, using SPSS software version 25 and STATA/IC version 13. Univariable Cox proportional hazard models were used to determine the association of each of these predictors separately with one-year survival. We examined the proportional hazards assumption using log-minus-log plots. Regardless of statistical significance, all selected predictors were subsequently entered simultaneously into a Cox proportional hazard model. This constituted the basic model. Hereafter, we separately added three sets of PROMs to the basic model: 1) the DT-score; 2) the EORTC-QLQ-C30 global QoL score; and 3) the HADS total score. We report on the added value of these PROMs to the basic model by evaluating the change in -2 log likelihood, the statistical significance, and Harrell's C-statistic with a 95% CI. The -2LL is a measure of accuracy or overall performance of the model, whereas the C-statistic demonstrates the discriminatory value of a model, comparable to the area under the receiver operating characteristic curve. To provide better clinical insight regarding the added value of the DT, we constructed a reclassification table including all patients who completed the DT. This table depicts the shift in classification of cases of mortality and non-cases separately for the basic model and the model after addition of the DT-score. To obtain this table, the individual survival risk was calculated for each patient using the baseline survival and the regression coefficients of the selected predictors. We then defined two risk groups, primarily based on the net one-year survival rate of patients with lung cancer. We defined the high-risk group as patients having a one-year mortality risk of ≥85 percent. This reclassification was not performed for the models that included the EORTC-QLQ-C30 global QoL score or the HADS total score. Relevant demographic and clinical characteristics of the included patients are displayed in Table 1. Approximately half of these patients were female, 65% were diagnosed with stage IV lung cancer, and 81% were initially treated with a chemotherapy or chemo-radiotherapy regimen. A total of 97 patients accurately completed the DT. Patients not completing the DT were comparable in all sociodemographic and clinical characteristics to patients who completed the DT. Of the 97 patients who accurately completed the DT, 51 had a DT score ≥5 and 46 had a score <5. Patients with and without significant distress were comparable in terms of sociodemographic and illness-related characteristics. Patients with clinically relevant distress reported a significantly lower global QoL and had higher scores on the depression and anxiety subscales of the HADS as well as on the total HADS score. The median one-year survival time among patients with clinically relevant distress was significantly shorter: 7.6 months versus 10.0 months. Table 2 displays the univariable relationships of the five selected predictors and the three sets of PROMs with one-year survival. Performance status, disease stage, and the Charlson age-adjusted comorbidity index were all found to be significant predictors. Of the included PROMs, the global QoL score and the DT-score were identified as significant predictors, but not the HADS.
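The survival workflow described above (Kaplan-Meier curves, a log-rank test, a basic Cox model, and the same model with the DT score added, compared on -2 log likelihood and Harrell's C) can be sketched with the lifelines package. The file name, data frame and column names below are hypothetical placeholders rather than the study data, and the original analysis was run in SPSS and STATA, not Python.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Hypothetical analysis data set: one row per patient, numeric covariates.
# 'time' is follow-up in months (censored at 12), 'died' the event indicator,
# 'distress' a 0/1 flag for DT >= 5, 'dt_score' the raw DT score.
df = pd.read_csv("dt_cohort.csv")  # placeholder file name

# Kaplan-Meier curves and log-rank test by distress status.
high, low = df[df.distress == 1], df[df.distress == 0]
km_high = KaplanMeierFitter().fit(high["time"], high["died"], label="DT >= 5")
km_low = KaplanMeierFitter().fit(low["time"], low["died"], label="DT < 5")
print(logrank_test(high["time"], low["time"], high["died"], low["died"]).p_value)

predictors = ["gender", "performance_status", "stage", "charlson", "histology"]

# Basic Cox model versus the same model with the DT score added.
basic = CoxPHFitter().fit(df[predictors + ["time", "died"]],
                          duration_col="time", event_col="died")
with_dt = CoxPHFitter().fit(df[predictors + ["dt_score", "time", "died"]],
                            duration_col="time", event_col="died")

print("-2LL change:", -2 * basic.log_likelihood_ - (-2 * with_dt.log_likelihood_))
print("C-statistic:", basic.concordance_index_, "->", with_dt.concordance_index_)

# Predicted one-year mortality risk, used to flag the >=85% high-risk group.
surv_12m = with_dt.predict_survival_function(df[predictors + ["dt_score"]], times=[12])
df["high_risk"] = (1 - surv_12m.iloc[0]).ge(0.85).values
```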
Table 3 depicts the performance of the multivariable model as well as the performance of subsequent multivariable models combined separately with the three sets of PROMs. The -2LL, i.e. the accuracy of the model, significantly improved after addition of the global QoL score, the HADS total score, and the DT-score. The C-statistic, i.e. the discriminatory value, improved slightly from 0.69 in the model with clinical predictors to 0.71 after addition of the DT-score. Addition of the global QoL score and the HADS total score led to a C-statistic of 0.69 and 0.67, respectively. The reclassification table for the 97 patients, of whom 50 died within one year, is shown in Table 4. The proportion of correctly classified high-risk patients who died within one year increased from 8 percent to 28 percent after addition of the DT-score to the basic model. Moreover, addition of the DT-score did not considerably increase the proportion of patients incorrectly classified as high risk. To our knowledge, this is the first study to show that addition of a patient-reported distress score, as measured by the DT, to selected clinical predictors may hold prognostic value when estimating one-year survival. Similar results were obtained when combining the selected predictors with QoL and symptoms of anxiety or depression. Further, patients with clinically relevant distress had a significantly shorter median one-year survival time when compared to patients without clinically relevant distress, whilst being comparable in terms of clinical and sociodemographic characteristics. This finding was also supported by the improvement in the classification of patients at high risk of death after combining the DT-score with the selected predictors. This suggests that addition of a patient-centered outcome that can be rapidly interpreted may allow clinicians to more accurately determine which patients are at risk for a poor prognosis and possibly personalize care accordingly. When viewed in the light of current clinical practice, these findings are important for several reasons. First, we specifically opted to study the prognostic value of the DT since the prognosis of patients with lung cancer is often poor and the overall one-year net survival is only 30 percent. The DT was originally developed as a rapid screening and diagnostic tool to rule out clinically relevant distress in patients with cancer. Studying the prognostic value of the DT may thus move this tool beyond its originally intended purpose. Yet, other PROMs such as QoL, anxiety, and depression have previously been identified as important prognostic indicators in multiple large-scale studies. More importantly perhaps, these outcomes are associated with distress. Having a fast and efficient tool available that screens for distress and simultaneously conveys prognostic information is therefore promising in this patient population. Second, numerous studies conducted across different care settings have provided clear evidence to support the earlier integration of palliative care, sometimes even delivered concurrently with treatment. This has led to increased interest in earlier integration as well as official endorsement by clinical guidelines. Yet, many patients with advanced cancer receive such care at a late stage, and the quality of such care may not be optimal. Although the use of a short screening tool cannot substitute for careful clinical assessment and management, routine use of the DT may aid
clinicians in identifying those patients at risk for poor outcomes and provide a vantage point from which to engage patients and caregivers earlier in patient-centered conversations about advance care planning and palliative care options. In contrast to our findings, one previously conducted study did not identify prognostic value of the DT in patients with stage III lung cancer treated with carboplatin-containing chemotherapy. Notably, the observed median DT-score in that study was lower than in the current study, and the majority of patients refused to complete the DT and the associated Problem List. As described by the authors, this selection bias may account for the contrasting findings. Previous studies, although conducted among different cohorts of patients with advanced cancer, have shown that screening for distress has positive effects on the experience of physical as well as psychosocial problems. Moreover, these studies also observed that distress measures may convey important prognostic information in terms of survival. A recent systematic review concluded that more effort is needed towards ensuring patients' adherence when completing PROMs and that routine completion should be supplemented by clear guidelines to support clinicians when discussing responses with patients. Other PROMs such as QoL and anxiety or depression have been found to convey important prognostic information in patients with cancer. Yet, these instruments are often lengthy and require additional training and time investment. Also, healthcare professionals have cited practical concerns related to the length of questionnaires and the required time investment, disruption of workflow, costs, and a lack of training for accurate interpretation. In contrast, the DT allows for rapid assessment and may therefore be easier to integrate in clinical settings. Our findings should be viewed in light of certain limitations. The current study represents a secondary analysis of a previously conducted RCT at a single academic institution, and our sample size was small. Further, although we did include patients with any histological subtype of lung cancer and all patients started a form of systemic treatment, only patients with an ECOG performance status between 0 and 2 were eligible for inclusion in the trial. These observations limit the generalizability of our findings. Third, the current patient population does not include patients treated with immunotherapy. This recent treatment modality is likely to markedly shift the prognosis of patients with advanced lung cancer in the near future. It would therefore be interesting to investigate whether patients with increased levels of distress are also at risk of a poor prognosis among patients treated with immunotherapy. Next, we used the -2LL and the C-statistic as a best approximation to the general performance of the different multivariable models. The -2LL did show significant improvements after addition of the different PROMs, but we did not observe similar findings using the C-statistic. The C-statistic, however, has been criticized for a lack of sensitivity in recognizing the added value of a risk marker. It has therefore been recommended to additionally construct and report a reclassification table, since this conveys important complementary information. In line with this, we decided to use a cutoff of 85 percent to define patients at high risk of dying within one year. We specifically decided not to include the EORTC-QLQ-C30 or the HADS in this
reclassification table.Instead, we contrasted the performance of these PROMs in the outlined multivariable models to demonstrate similar performance of the DT when compared to other PROMs.Further, although this cutoff likely represents the futility of further tumor-targeted treatment in this patient population, it was arbitrarily chosen and should be further validated in future studies.Last, the response rate in the original trial was relatively low.This was most likely because of the high symptom burden these patients already face and was also stated as the most common reason for participation refusal.This should be taken into consideration when interpreting our current findings.In conclusion, this is the first study to provide evidence for added prognostic value of the DT-score in patients with lung cancer.The possible relationship between the DT-score and survival should be evaluated further in prospective, longitudinal studies across different settings and institutions.Yet, our findings are promising and may allow clinicians to identify those patients at risk for poor outcomes and prevent discordance between care received and personal patient preferences near the end of life.This may further improve the timely delivery of high quality, patient-centered care among patients with lung cancer.Fig. 1.Kaplan-Meier overall one-year survival curve stratified by significantly elevated distress as evaluated by the Distress Thermometer.Survival data were calculated from the date of randomization and date of death was recorded up to one year later.The authors declare that there are no conflicts of interest.
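The model-comparison approach described above (a Cox model with clinical predictors, extended with the DT-score and judged by the change in -2 log likelihood and by Harrell's C-statistic) can be sketched as follows. This is a minimal illustration only, not the authors' code: the file name and column names are hypothetical placeholders, categorical predictors are assumed to be numerically encoded, and the Python lifelines and scipy packages stand in for whatever software was used in the study.

```python
# Minimal sketch (not the authors' code): Cox model of one-year survival with
# clinical predictors, extended with the Distress Thermometer (DT) score.
# File and column names are hypothetical; categorical predictors are assumed
# to be numerically encoded already.
import pandas as pd
from scipy.stats import chi2
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("lung_cohort.csv")  # hypothetical dataset, one row per patient
base_cols = ["time_months", "died", "age", "sex", "ecog", "stage", "histology"]

# Basic model with clinical predictors only
cph_base = CoxPHFitter().fit(df[base_cols], duration_col="time_months", event_col="died")

# Extended model: clinical predictors plus the DT-score
cph_dt = CoxPHFitter().fit(df[base_cols + ["dt_score"]],
                           duration_col="time_months", event_col="died")

# Improvement in -2 log likelihood (likelihood-ratio test, 1 df for the added score)
lr = 2 * (cph_dt.log_likelihood_ - cph_base.log_likelihood_)
print("LR chi2 = %.2f, p = %.4f" % (lr, chi2.sf(lr, df=1)))

# Discrimination (Harrell's C-statistic) of each model
print("C base: %.2f, C base+DT: %.2f"
      % (cph_base.concordance_index_, cph_dt.concordance_index_))

# Survival comparison of DT >= 5 vs DT < 5 groups (cf. Fig. 1)
high = df["dt_score"] >= 5
res = logrank_test(df.loc[high, "time_months"], df.loc[~high, "time_months"],
                   event_observed_A=df.loc[high, "died"],
                   event_observed_B=df.loc[~high, "died"])
print("log-rank p =", res.p_value)
```

In practice the reclassification step would additionally bin each model's predicted one-year risk of death at the 85 percent cutoff and cross-tabulate the resulting risk classes against observed deaths, as reported in Table 4.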
Introduction: The use of patient-reported outcome measures is increasingly advocated to support high-quality cancer care. We therefore investigated the added value of the Distress Thermometer (DT) when combined with known predictors to assess one-year survival in patients with lung cancer. Methods: All patients had newly diagnosed or recurrent lung cancer, started systemic treatment, and participated in the intervention arm of a previously published randomised controlled trial. A Cox proportional hazards model was fitted based on five selected known predictors for survival. The DT-score was added to this model and contrasted to models including the EORTC-QLQ-C30 global QoL score (quality of life) or the HADS total score (symptoms of anxiety and depression). Model performance was evaluated through improvement in the -2 log likelihood, Harrell's C-statistic, and a risk classification. Results: In total, 110 patients were included in the analysis of whom 97 patients accurately completed the DT. Patients with a DT score ≥5 (N = 51) had a lower QoL, more symptoms of anxiety and depression, and a shorter median survival time (7.6 months vs 10.0 months; P = 0.02) than patients with a DT score <5 (N = 46). Addition of the DT resulted in a significant improvement in the accuracy of the model to predict one-year survival (P < 0.001) and the discriminatory value (C-statistic) marginally improved from 0.69 to 0.71. The proportion of patients correctly classified as high risk (≥85% risk of dying within one year) increased from 8% to 28%. Similar model performance was observed when combining the selected predictors with QoL and symptoms of anxiety or depression. Conclusions: Use of the DT allows clinicians to better identify patients with lung cancer at risk for poor outcomes, to further explore sources of distress, and subsequently personalize care accordingly.
213
Longitudinal telomere length shortening and cognitive and physical decline in later life: The Lothian Birth Cohorts 1936 and 1921
Determining the biological factors that influence both cognitive and physical decline in later life is an important challenge facing researchers today.Telomeres are nucleo-protein complexes at the end of eukaryotic chromosomes.They protect the ends of chromosomes, but shorten each time a somatic cell replicates.Environmental factors also contribute to accelerated decline in telomere length.These include low socio-economic status, smoking, oxidative stress, and psychological stress.Telomere length decreases with age and a systematic review determined that the correlation between telomere length and chronological age is about −0.3.Leukocyte telomere length has previously been associated with a number of traits and diseases in older age including cognitive abilities, dementia, physical health and obesity, and has been hypothesised to be a biological marker of ageing.However, a systematic review concluded that current results were equivocal and that more studies, including longitudinal studies, were required that assessed telomere length and ageing-related functional measures.Longitudinal studies have the potential to measure age-related decline in telomere length, and cognitive and physical abilities more accurately than cross-sectional studies and also allow the investigation of the change of multiple variables in parallel with each other.There are many studies that show lower childhood cognitive ability is associated with poorer health and more illness in adulthood and older age, and to earlier mortality from all causes and from several specific causes, such as cardiovascular disease.Early life IQ has previously been associated with telomere length in midlife.The mechanism of the childhood cognition-illness/death association is not understood, but it is possible that telomeres might provide a biomarker of how lifestyle has affected the body.We previously reported mostly-null cross-sectional associations between telomere length and cognitive function, walking speed, lung function, and grip strength in the Lothian Birth Cohorts of 1921 and 1936.More recently, we showed that the same cognitive and physical abilities decline on average between ages 70 and 76 years in LBC1936.Here, we report longitudinal analyses investigating whether decline in telomere length predicts cognitive and physical decline in the Lothian Birth Cohorts.We also investigate whether baseline telomere length influences subsequent decline in cognitive and physical abilities.Finally, we test whether cognitive ability measured in childhood is related to telomere length decline in later life.LBC1936 consists of 1091 surviving members of the Scottish Mental Survey of 1947.At approximately age 11 years most took a valid mental ability test, the Moray House Test version 12.At a mean age of 69.5 years they were recruited to a study to determine influences on cognitive ageing.They underwent a series of cognitive and physical tests.Two further waves of cognitive and physical tests have occurred at mean ages 73 and 76 years.DNA was extracted from peripheral blood leukocytes at ages 70, 73 and 76 years from which telomere length was measured.Cognitive tests taken at each of the three waves included six Wechsler Adult Intelligence Scale-IIIUK non-verbal subtests.From these six cognitive tests a general fluid cognitive factor was derived.The scores from the first unrotated component of a principal components analysis were extracted and labelled as gf.This component explained 52% of the variance, with individual test loadings ranging 
between 0.65 and 0.72.Physical trait measures included time taken to walk six metres at normal pace, grip strength measured with a Jamar Hydraulic Hand Dynamometer, and forced expiratory volume from the lungs in one second measured using a microspirometer.LBC1921 consists of 550 surviving members of the Scottish Mental Survey of 1932.At approximately age 11 years most took a valid mental ability test, the MHT.At a mean age of 79.1 years they were recruited to a study to determine influences on cognitive ageing.They underwent a series of cognitive and physical tests.Four further waves of cognitive and physical tests have occurred at mean ages 83, 87, 90 and 92 years.DNA was extracted from peripheral blood leukocytes at ages 79, 87, 90 and 92 years from which telomere length was measured.Cognitive tests taken at each of these four waves included Raven's Progressive Matrices, Verbal Fluency and Logical Memory.From these three cognitive tests a general fluid cognitive factor was derived using principal component analysis.The scores from the first unrotated component were extracted and labelled as gf.This component explained 53% of the variance, with individual test loadings ranging between 0.65 and 0.73.Physical trait measures included time taken to walk six metres at normal pace, grip strength measured with a Jamar Hydraulic Hand Dynamometer and forced expiratory volume from the lungs in one second measured using a microspirometer.Ethics permission for the LBC1936 was obtained from the Multi-Centre Research Ethics Committee for Scotland, the Lothian Research Ethics Committee, and the Scotland A Research Ethics Committee.Ethics permission for the LBC1921 was obtained from the Lothian Research Ethics Committee and the Scotland A Research Ethics Committee.All persons gave their informed consent prior to their inclusion in the study.DNA was extracted from whole blood by standard procedures at the Wellcome Trust Clinical Research Facility Genetics Core at the Western General Hospital, Edinburgh.Telomere length was measured using a quantitative real-time polymerase chain reaction assay.The intra-assay coefficient of variation was 2.7% and the inter-assay coefficient of variation was 5.1%.Four internal control DNA samples were run within each plate to generate absolute telomere lengths and to correct for plate to plate variation.These internal controls are cell lines of known absolute telomere length, 6.9 kb, 4.03 kb, 2.0 kb and 1.32 kb respectively, whose relative ratio values were used to generate a regression line by which values of relative telomere length for the actual samples were converted into absolute telomere lengths (a worked example of this conversion is sketched at the end of this article).The correlation between relative telomere length and absolute telomere length was 0.8.Measurements were performed in quadruplicate and the mean of the measurements was used.PCRs were performed on an Applied Biosystems 7900HT Fast Real Time PCR machine.Linear mixed models were used to determine if telomere length and cognitive and physical abilities changed over time.One individual with chronic lymphocytic leukaemia was removed from the LBC1936 analyses.Covariates included age as the time scale, sex, white blood cell counts (for the telomere length models) and height (for the physical ability models).Individual participant number was included as a random effect.Baseline telomere length was added as a fixed effect interaction with age to test if it predicted decline in cognitive and physical abilities.Age 11 MHT score was then added as a fixed effect interaction with age to test if it predicted decline in
telomere length.In LBC1921, linear regression was used to determine if age 11 MHT score was associated with telomere length at age 79 years.Linear mixed models were then used to investigate if telomere length change predicted change in cognitive and physical abilities.Again, covariates included age as the time scale, sex, white cell counts and, for physical abilities, height.Individual participant number was included as a random effect.Linear mixed models were performed in R using the lme4 and lmerTest packages (an analogous model specification is sketched below).Descriptive statistics for telomere length, general fluid cognitive ability, time taken to walk six metres, forced expiratory volume in one second and grip strength for LBC1936 waves 1, 2 and 3 are shown in Table 1, and for LBC1921 waves 1, 3, 4 and 5 are shown in Table 2.In LBC1936, mean telomere length decreased with age.In LBC1921, mean telomere length remained relatively stable between ages 79 and 87 years and then decreased with age.In both cohorts gf, FEV1 and grip strength all decreased with age and time taken to walk six metres increased.Mean age, telomere length and FEV1 did not differ between all individuals who participated in a particular wave of testing and those who returned for later waves of testing.Individuals who returned for further waves of testing generally had a slightly higher gf, a faster walk time and a stronger grip strength on the first occasion of testing.Mean trajectory plots for change in telomere length, gf, six metre walk time, FEV1, and grip strength for LBC1936 and LBC1921 are shown in Fig. 1.In LBC1936, a linear mixed model indicated that telomere length decreased by 64.8 base pairs per year, which is 1.5% of the mean telomere length at age 70 years.Telomeres were 177.9 bp longer in males than in females.Telomere length decreased with increasing lymphocyte cell count, but was not associated with any other white blood cell count.As previously shown, gf decreased by 0.05 standard deviations per year, 6 m walk time increased by 0.15 s per year, FEV1 decreased by 0.05 L per year, and grip strength decreased by 0.04 kg per year.There was no evidence to suggest that baseline telomere length was associated with trajectory of decline in cognitive and physical abilities.Age 11 Moray House Test score was not linked to differences in change in telomere length.In LBC1921, a linear mixed model indicated that telomere length decreased by 69.3 bp per year, which is 1.7% of the mean telomere length at age 79 years.Telomeres were 256.9 bp longer in males than in females.Telomere length was not associated with white blood cell counts.gf decreased by 0.05 standard deviations per year, 6 m walk time increased by 0.27 s per year, FEV1 decreased by 0.03 L per year, and grip strength decreased by 0.74 kg per year.Baseline telomere length was not associated with decline in cognitive or physical abilities.Age 11 Moray House Test score was linked to the amount of telomere length change such that, for a standard deviation increase in age 11 cognitive ability score, there was a 9.7 bp greater decrease in telomere length per year.Age 11 MHT score was not associated with telomere length at age 79 years.In LBC1936 and LBC1921 there was no evidence to suggest that differences in telomere length change correlated with differences in change in cognitive or physical abilities.This study indicates that, in both LBC1936 and LBC1921, mean telomere length decreased by ∼65 bp per year, which is just under 2% of the mean telomere length at baseline.This is slightly higher than that reported for
other longitudinal studies, which ranged from 32 to 46 bp per year.Cognitive and physical abilities also decreased during this period.Telomere length at baseline was not associated with decline in cognitive or physical abilities between the ages of 70 and 76, or 79 and 92 years.In LBC1921 childhood cognitive ability was linked to the amount of telomere length change such that individuals with a higher childhood cognitive ability underwent a greater decrease in telomere length per year in later life.The rate of decrease in telomere length did not correlate with the rate of decrease in cognitive and physical abilities in either cohort.As far as we are aware this is the first longitudinal study, measuring at least three time points, to investigate if telomere length decline is associated with cognitive and physical decline.A recent meta-analysis based on two time points also found little evidence for telomere length decline as a biomarker for physical decline.Our results largely agree with previously published cross-sectional findings that telomere length does not associate with cognitive and physical ability.The results confirm the conclusions from a number of previous papers that telomere length is not informative as a biomarker for multiple dimensions of age-related risks including cognitive decline, multi-morbidity and mortality.In LBC1936 and LBC1921, telomere length was longer in males than in females, which contradicts many previous studies.However, this may reflect the fact that the life expectancy of women is higher than that of men.Due to the older-age range of the Lothian Birth Cohorts, the men are typically much healthier than those of a similar age in the general population, whereas the women may be more representative of women of a similar age in the general population.Also, a meta-analysis study looking at different methods of measuring telomere length concluded that only the Southern blot method generates results where women have longer telomeres than men.Interestingly, mean telomere length at age 79 years in LBC1921 was longer than mean telomere length at age 76 years in LBC1936.This may be due to the selection of relatively healthier participants into a study at age 79 years compared to those aged 76 years who were already involved in a study.However, the physical ability data do not support this theory; e.g., mean grip strength at age 79 years in LBC1921 was less than mean grip strength in LBC1936 at age 76 years.Also, a recent study showed that although there is a negative correlation between age and telomere length up to age 75 years, after 75 years the correlation becomes positive.In LBC1936 higher lymphocyte count was associated with shorter telomeres, indicating that white blood cell distribution may be a predictor of telomere length, as shown previously.In LBC1921, age 11 cognitive ability was linked to telomere length change such that individuals with a higher Moray House Test score at age 11 years showed a greater decline in telomere length in later life.This was not due to individuals with higher age 11 cognitive ability scores having longer telomeres at age 79 years.Age 11 cognitive ability scores did not influence telomere length change in LBC1936 and the significant result in LBC1921 may be due to type 1 error.Therefore, this finding needs replicating in another study before being considered further.Strengths of this study include the longitudinal nature, with measurements at three and four time points of the telomeres and the cognitive and physical abilities in two narrow-age
cohorts whose combined age periods range from 70 to 92 years.A further strength is that our absolute values of telomere length were generated using four internal controls which are cell lines of known absolute telomere length, whose relative ratio values were used to generate a regression line by which values of relative telomere length for the actual samples were converted into absolute telomere lengths.This allowed us to accurately correct for plate to plate variations as it is well known that the quantitative real-time PCR assay method is sensitive to efficiency variations between very long or very short telomere amplifications.PCR efficiency is not the same for samples with long telomeres compared to samples with short telomeres.A disadvantage of the study is the relatively short time period between each wave of testing.As with all longitudinal studies, there was attrition, though the statistical method used all the available data.Selection bias due to differential mortality is a common limitation in longitudinal studies.However, in this study baseline telomere length and FEV1 did not differ between individuals who did and did not return for later waves of testing.Individuals who returned for further waves of testing generally had a slightly higher gf, a faster walk time and a stronger grip strength on the first occasion of testing, indicating some selection bias.This may reduce the power of the study to detect associations between telomere length shortening and cognitive and physical decline.A further limitation of the study is that the sample sizes of the cohorts, particularly at later waves, are perhaps not large enough to detect a correlation between telomere length shortening and decline in cognitive and physical abilities.The relative health of the cohorts also reduces the variance of the cognitive and physical phenotypes relative to the general population.We find that, although telomere length and cognitive and physical abilities all show mean decline with age in LBC1936 from age 70 to 76, and in LBC1921 from age 79 to 92, the shortening of telomeres is independent of the observed decline in cognitive and physical abilities.
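The longitudinal models described in the Methods were fitted in R with lme4 and lmerTest; the sketch below shows an analogous specification in Python's statsmodels, purely for illustration. The file name and column names (id, age, telomere_bp, gf, baseline_tl, and so on) are hypothetical placeholders, and the data frame is assumed to hold one row per participant per wave.

```python
# Illustrative analogue of the published lme4 models (not the authors' code).
# Column names are hypothetical; long_df holds one row per participant per wave.
import pandas as pd
import statsmodels.formula.api as smf

long_df = pd.read_csv("lbc1936_long.csv")
long_df["age_c"] = long_df["age"] - 70    # centre age so the fixed slope is change per year from baseline

# 1) Mean change in telomere length with age (random intercept and slope per person)
m_tl = smf.mixedlm("telomere_bp ~ age_c + sex + lymphocyte_count",
                   long_df, groups=long_df["id"], re_formula="~age_c").fit()
print(m_tl.params["age_c"])               # fixed-effect slope: bp change per year

# 2) Does baseline telomere length predict cognitive decline?
#    The age-by-baseline interaction on the general fluid factor gf is the term of interest.
m_gf = smf.mixedlm("gf ~ age_c * baseline_tl + sex",
                   long_df, groups=long_df["id"], re_formula="~age_c").fit()
print(m_gf.summary())                     # inspect the age_c:baseline_tl row
```

The same pattern, with height added as a covariate, would apply to the walk-time, FEV1 and grip-strength models.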
Telomere length is hypothesised to be a biological marker of both cognitive and physical ageing. Here we measure telomere length, and cognitive and physical abilities at mean ages 70, 73 and 76 years in the Lothian Birth Cohort 1936 (LBC1936), and at mean ages 79, 87, 90 and 92 years in the Lothian Birth Cohort 1921 (LBC1921). We investigate whether telomere length change predicts change in cognitive and physical abilities. In LBC1936 telomere length decreased by an average of 65 base pairs per year and in LBC1921 by 69 base pairs per year. However, change in telomere length did not predict change in cognitive or physical abilities. This study shows that, although cognitive ability, walking speed, lung function and grip strength all decline with age, they do so independently of telomere length shortening.
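The Methods above convert relative qPCR telomere ratios to absolute lengths through a regression line generated from four internal-control cell lines of known telomere length (6.9, 4.03, 2.0 and 1.32 kb). A toy version of that per-plate calibration, with invented ratio values, might look like this:

```python
# Toy per-plate calibration: regress the known control telomere lengths on their
# measured relative ratios, then convert study samples. All ratio values here
# are invented for illustration; only the control lengths come from the text.
import numpy as np
from scipy.stats import linregress

known_kb       = np.array([6.90, 4.03, 2.00, 1.32])   # control cell lines (kb)
control_ratios = np.array([1.60, 0.95, 0.48, 0.30])   # hypothetical measured ratios

cal = linregress(control_ratios, known_kb)             # calibration line for this plate

sample_ratios = np.array([1.10, 0.85, 0.62])           # hypothetical study samples
absolute_kb = cal.intercept + cal.slope * sample_ratios
print(absolute_kb)                                      # absolute telomere lengths in kb
```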
214
Cardiomyocyte behavior on biodegradable polyurethane/gold nanocomposite scaffolds under electrical stimulation
Cardiovascular diseases pose the highest risk of death in the world, according to the American Heart Association Statistics.Every 34 s one American dies by heart attack, stroke or other cardiovascular problems .Currently, treatment options following a myocardial infarction and subsequent congestive heart failure are still limited.Pharmacological agents increase the blood flow but limit ventricular remodeling events and increase cardiac output .Mechanical devices, such as the left ventricular assist device, can only be applied to a limited group of patients .The only successful treatment option for a severe MI to date is heart transplantation ; however, the lack of suitable donors significantly restricts this option.As cardiovascular diseases remain a major cause of morbidity and mortality, new strategies in cardiovascular treatments attract much attention.Among all cardiovascular diseases, MI is one of the key reasons for heart failure, resulting in heart dysfunction and progressive death of cardiomyocytes when normal heart function cannot be restored afterwards .Cell therapy has so far shown only little improvement of cell retention and long-term survival .Instead, biocompatible 3D scaffold materials might provide a feasible solution, as some structures may improve cell retention, survival and even cell differentiation .These kinds of scaffolds or patches can, in principle, be directly implanted on the infarcted tissue with or without cells after MI .Typically, tissue engineering for cardiovascular regeneration is based on producing biomimetic and biodegradable materials for scaffold fabrication that ideally integrate signaling molecules and induce cell migration into the scaffolds .A material suitable for a tissue engineering-based approach to treat myocardial infarction should provide an environment that is predisposed to improve electromechanical coupling of the cardiomyocytes with the host tissue , as well as cardiomyocyte adhesion .This adhesion is essential for the proliferation of cardiomyocytes and for ventricular function.Materials suitable for application in cardiac tissue engineering include natural polymers, such as decellularized myocardium , collagen , alginate , fibrin , as well as synthetic polymers such as polylactic acid, polyglycolic acid, their copolymers , and polyurethane .Among the above-mentioned materials, PUs are considered a major class of applicable elastomers because of their good biocompatibility and biodegradability, their high flexibility, and their excellent mechanical properties .The stiffness of heart muscle varies from 10 kPa in the beginning of the diastole to 500 kPa at the end of the diastole, therefore an elastic material having a stiffness in this range would be required for cardiac engineering ."Such Young's moduli are obtained with biodegradable PUs , which can be synthesized by using vegetable oils as polyol and aliphatic diisocyanate, resulting in typical degradation times of several months.Among different grades of PUs, castor oil-based PU shows no toxicity, is low in cost, and is available as a renewable agricultural resource .This grade of biodegradable PUs has already been widely applied in biomedical engineering, including materials for peripheral nerve regeneration, cardiovascular implants, cartilage and meniscus regeneration substrates, cancellous bone substitutes, drug delivery carriers and skin regeneration sheets .Furthermore, tissue engineering applications require that cells are embedded into the material.Much progress has recently 
been made in order to fabricate porous polymer scaffolds, in particular by using salt leaching techniques .The success of this method has been shown for a variety of soft and hard polymers , and we have recently established this procedure for PU .Although many PU-based materials have been developed for providing vascular grafts, only few PU scaffolds have so far been studied in the context of myocardial tissue engineering , even though PU is easy to implant into muscle tissue, because it is stiffer than typical hydrogels.An important goal for myocardial tissue engineering must be the fabrication of materials that allow for the synchronization of electrical signals, and thus enhance the contraction of cardiomyocytes in the scaffold material so that a homogeneous total contraction of the engineered patch is guaranteed.In the study presented here, we fabricated a biodegradable nanocomposite material by incorporating gold nanotubes/nanowires into PU scaffolds so that the wired material structure can mimic the electromechanical properties of the myocardium.To investigate the functionality of these materials as cardiac patches, H9C2 rat cardiomyocyte cells were seeded on different polyurethane-gold nanotube/nanowire composites.Eventually, electrical stimulation was applied to the cell-scaffold constructs in order to enhance the functional performance of cardiac scaffolds and to improve cell morphology and alignment.We used fluorescence and scanning electron microscopy as well as gene expression analysis to investigate the behavior of cardiomyocyte cells on the scaffolds.We demonstrate that the adhesion and proliferation of cells significantly depends on the amount of incorporated GNT/NW, and that an optimum concentration of 50 ppm of GNT/NW can provide the best environment for cells to achieve native cardiomyocyte function.Polyurethane-GNT/NW composites were synthesized according to our previous work .In brief, gold nanotubes/nanowires were made by using template-assisted electrodeposition and mixed with castor oil/polyethylene glycol-based polyurethane.Concentrations of 50 and 100 ppm of GNT/NW were used to synthesize two different composites types.For fabrication of porous scaffolds, 355–600 μm sieved table salt was added to the PU-GNT/NW solution, then the mixture of PU-GNT/NW and salt was cast in a Teflon mold of 10 mm diameter and 4 mm thickness.Afterwards, all samples were dried at room temperature for 48 h; then the porous scaffolds were placed in distilled deionized water for 2 more days to remove the salt.In the following, we refer to the scaffolds as PU-0 for pure PU scaffolds, PU-50 for scaffolds containing 50 ppm GNT/NW, and PU-100 for those containing 100 ppm GNT/NW.As it is experimentally difficult to obtain 3D information about pore interconnectivity based on 2D images, Li et al. 
suggested a simple method of soaking the samples in an ink solution and then imaging the colored sample.Accordingly, our scaffolds were soaked in a solution of common blue writing ink for 24 h and dried at room temperature.Then, a cross section of samples with a thickness of 1 mm was prepared by cutting with a surgical blade and then imaging the samples with a Nikon inverted microscope.This treatment provides information on the interconnectivity of pores as well as on their accessibility from neighboring pores.Porosity was calculated by ImageJ using a manually set intensity threshold."H9C2 rat cardiomyocytes were purchased from the European Collection of Cell Cultures and maintained in Dulbecco's Modified Eagle's medium, supplemented with 10% fetal bovine serum and 1% penicillin and streptomycin at 37°C and 90% humidity.H9C2 is a subclone of the original clonal cell line derived from embryonic rat heart tissue.Cells were sub-cultured regularly and used up to passage 6.Prior to the experiments, PU scaffolds were sterilized using ethylene oxide gas and placed in 10 ml of sterilized phosphate buffered saline for 2 h. Cells were seeded per cylindrical scaffold and incubated overnight to allow cell attachment.On the following day, cells were stimulated using a function generator with a square pulse of 1 V/mm amplitude, pulse duration of 2 ms, at a frequency of 1 Hz for 15 min.This procedure was repeated on three consecutive days, once per day .Stainless steel 304 was used as the electrode material for electrical stimulation.Compared to titanium electrodes and titanium electrodes coated with titanium nitrate, the electrical field was stable in stainless steel 304 electrodes over the whole time of stimulation .The cell-scaffold constructs were left in the incubator for one more day.Calcein was used for staining viable cells and Hoechst for staining cell nuclei.Five repeats of each scaffold group were stained with both Calcein AM and Hoechst after 1 day of cell culture before stimulation and another 5 repeats of each group were stained on the fourth day after cell seeding and electrical stimulation.For Calcein staining, the samples were rinsed once with DMEM and incubated with a 1 μg/ml solution of Calcein in DMEM for 10 min at 37 °C.Afterwards, the samples were washed with DMEM twice, stained with 10 μg/ml Hoechst 32258 in PBS and incubated for 20 min at 37 °C.Then, the samples were washed extensively with PBS and imaged using an Olympus BX43 fluorescence microscope with a 10 × objective.Cell confluency was measured as the ratio of the area stained with Calcein to the whole surface of a scaffold in 2D images using ImageJ .This test was performed in two independent experiments and at least 5 images were taken in each experiment."Cells were lysed in TriSure and RNA extraction was performed according to the manufacturer's protocol.In order to obtain enough RNA, cells grown on 3 scaffolds were pooled.After RNA extraction, aliquots of 200 ng total RNA from each group were reverse transcribed into cDNA, using a cDNA synthesis kit and the provided oligo dT-V primer.Subsequently the cDNAs were purified utilizing the spin columns and buffers provided with the cDNA synthesis kit.Gene expression was analyzed by qRT-PCR using a Rotorgene 3000.For each qRT-PCR analysis, 2.5 μl of the above-mentioned cDNA was used; total reaction volume was 25 μl each and cycling conditions were as follows: 10 min initial denaturation at 95 °C followed by 45 cycles of 20 s denaturation at 95 °C, 20 s annealing, for details see 
Table 1, and 20 s elongation at 72 °C.At the end of the cycling program a melt curve analysis was performed starting at the actual annealing temperature.All samples were run in duplicates.Gene-specific primers were obtained from TibMolBiol.Primers for atrial naturiuretic factor, Connexin 43, Cardiac troponin I, cardiac Troponin T type 2, NK2 homeobox 5, Myocyte enhancer factor 2C and glyceraldehyde-3-phosphate dehydrogenase were designed using the web-based “Primer 3” program.Primers for ß-cardiac myosin heavy chain, natriuretic peptide B and GATA binding protein 4 were published previously ; as were the primers for Beta-2-microtubulin, TATA box binding protein and those for 18S ribosomal protein mRNA .The SYBR Green based qPCR mix was purchased from Peqlab.Threshold levels for Ct-determination were chosen manually.Primer sequences and annealing temperatures are provided in Table 1.The morphology of the porous Polyurethane-GNT/NW nanocomposites was studied by field emission scanning electron microscopy.For observation of cell-scaffold constructs, the samples were fixed in 3% glutaraldehyde solution in PBS, and then dehydrated with a graded ethanol series, 20 min each.Dehydration was finished with 100% ethanol overnight.The samples were sectioned with the thickness of 7 μm from the top side, further dehydrated using a critical point dryer, and coated with gold before SEM imaging.Cell confluency is presented as mean value ± standard deviation."Differences between groups were analyzed by analysis of variance followed by Tukey's multiple comparison test.For gene expression analysis, the results are presented as mean value ± standard deviation.qRT-PCR data were analyzed according to the ∆∆Ct method using the mean Ct value of the housekeeping genes.Fold changes of expression levels were calculated as described previously and the obtained values were used for statistical analysis.An essential prerequisite for a cardiac patch material is to ensure porosity above the percolation threshold, so that the cells can grow deeply into the scaffold without undergoing hypoxia-induced cell death.In Fig. 1, we present images of scaffold cross-sections after incubation in ink.Our results show that all imaged scaffolds were homogeneously colored by the ink, regardless of their GNT/NW content.This confirmed that the pores were almost uniformly distributed and interconnected.Colored pores were found to be accessible either directly or via adjacent pores.The porosities of the scaffolds were above 90% in all samples.By increasing the amount of gold content in the polymer solution during polymerization, the interconnectivity of the pores was improved, presumably due to the presence of chloroform in the gold-containing samples, which leads to the activation of a solvent casting mechanism in addition to salt leaching.Furthermore, we observed that PU-100, having the highest gold content and smallest polymer concentration, was the most uniform in pore size and distribution and had the largest pores.SEM images of scaffolds also confirmed the largest pores in PU-100 compared to PU-0 or PU-50,Cells of the myocardium need to adhere and proliferate on the material patch in order to form a functional cell network before the scaffold material is degraded.To compare how cell adhesion and growth are influenced by the different scaffold types and by additional electrical stimulation, we investigated the morphology of H9C2 cells stained with Calcein and Hoechst on different scaffolds with and without electrical stimulation in Fig. 
2.Fig. 2a–c clearly show that cells after 1 day of incubation spread best on PU-50 compared to PU-0 or PU-100, and they are more homogeneously distributed within the scaffolds than cells on the other two scaffold types.In particular, on the PU-100 scaffold H9C2 cells preferred to attach to each other and formed large clumps rather than spreading on the sample.On samples that had undergone electrical stimulation, the results were distinctly different: whereas cell spreading was not significantly influenced by electrical stimulation on PU-0 scaffolds, it was significantly enhanced on the gold-containing PU-50 and PU-100 scaffolds.This observation is even more pronounced in the quantitative analysis of confluency.Furthermore, we checked if cell alignment after electrical stimulation was enhanced which would mimic the natural response of cells to electromechanical coupling in the heart.The representative images in Fig. 2 clearly show that the cells were aligned only in gold-containing scaffolds, whereas no alignment was observed in PU-0.Furthermore, no significant differences in cell alignment were observed when cells were seeded on PU-50 and PU-100 scaffolds.Fig. 3 summarizes our results for H9C2 cell confluency on scaffolds before and after stimulation.Confluency increased by 39% and 14% after stimulation for PU-50 and PU-100 scaffolds, respectively.However, at the same time cell confluency was not significantly influenced by electrical stimulation in the samples without gold.When the samples were incorporated with gold, a significant increase was found between PU-0 and PU-50 after stimulation.An even more marked increase was found for PU-50 before and after stimulation, however for PU-100, no significant difference was found.In order to investigate if the incorporation of gold into porous PU scaffolds in combination with electrical stimulation can facilitate the function of H9C2 cardiomyocyte on the scaffolds similarly to native myocardium, we investigated the expression of several relevant genes using qRT-PCR analysis.To this end, we evaluated gene expression levels of different cardiac transcription factors as well as gene expression levels of Con43, cTnl, Tnnt2, Nkx2.5 and Mef2c in the H9C2 cardiomyocytes on different scaffolds and as a function of electrical stimulation.The expression levels of the housekeeping genes GAPDH, B2M, TBP and 18 sr RNA were also examined.Expression levels of GAPDH, B2M, TBP and 18 sr RNA were not significantly affected by any of the treatments and were therefore used to normalize gene expression levels of the genes of interest, namely GATA4, NPPB, ANF, β-MHC, Con43, cTnl, Tnnt2, Nkx2.5 and Mef2c.Fig. 4 shows ANF, NPPB, Tnnt2, Nkx2.5 and Mef2c gene expression in H9C2 cardiomyocytes.In Fig. 4a and c, ∆ ct values obtained in cells grown on normal culture dishes were set as “1” and fold changes obtained in cells grown on PU-0, PU-50, PU-100 were calculated as described elsewhere .Similarly in Fig. 
4b and d, values obtained from cells grown on pure PU-0 scaffolds alone were set as "1" and fold changes obtained in cells grown on PU-50 or PU-100 were calculated as described elsewhere (a worked sketch of this ΔΔCt calculation is given at the end of this article).Dotted lines indicate 2-fold changes of gene expression as described previously and ± 2-fold changes in gene expression levels are considered statistically significant.Compared to tissue culture plastic surfaces, all our PU samples showed, regardless of their GNT/NW content, upregulated gene expression of some cardiac transcription factors in H9C2 cells.ANF, NPPB, Tnnt2, Nkx2.5 and Mef2c expression levels were already increased when H9C2 cells were grown on pure PU-0 scaffolds (111.3 ± 4.82 fold, 3.27 ± 0.22, 27.76 ± 2.29 and 6.49 ± 1.29) compared to cells grown in normal culture dishes.Growing the cells on PU-50 resulted in a distinct increase of ANF, NPPB and Nkx2.5 expression levels, which were 15.6 ± 0.73, 560.76 ± 3.58 and 79.34 ± 1.76 times higher, respectively, than those detected in cells grown in normal culture dishes.Gene expression levels of Mef2c and Tnnt2 were only marginally increased when cells were grown on PU-50.Expression changes of the cells growing on PU-100 were as follows: 8.2 ± 0.49-fold increase of ANF- and 240.1 ± 5.44-fold increase of NPPB-expression, when compared to the levels detected in cells grown in normal culture dishes; Mef2c gene expression was only 1.62 ± 0.11 times higher when cells were grown on PU-100, similarly Nkx2.5 was only 31.94 ± 1.19 times higher when compared to cells grown in normal culture dishes and Tnnt2 gene expression levels were 10.41 ± 1.16 higher.Interestingly, the fold changes in ANF- and NPPB-expression, when compared to levels detected in cells grown on PU-0, were rather similar: 4.78 ± 0.23 and 4.94 ± 0.31 in ANF- and NPPB-expression, respectively, in cells grown on PU-50 and 2.52 ± 0.11 and 2.12 ± 0.09 in ANF- and NPPB-expression, respectively, in cells grown on PU-100.This, however, was not the case for Mef2c, Nkx2.5 and Tnnt2: Mef2c gene expression showed a 2.01 ± 0.26-fold decrease when cells were grown on PU-100 and a 2.86 ± 0.26-fold increase when grown on PU-50, while all other conditions were not significantly influenced, and Tnnt2 expression was not at all affected when compared to PU-0 scaffolds.GATA 4, Con43 and cTnl expression was not affected by any of the different scaffolds, and β-MHC expression could not be detected in these cells, but was detectable in cDNA synthesized from total RNA of normal rat embryonic tissue, which was used as a positive amplification control.SEM images of H9C2 cells on different samples after 3 days of electrical stimulation are shown in Fig.
5.The images clearly support our findings from the cell staining experiments, as more cells are adhering to the PU-50 scaffold than to the other scaffolds.Furthermore, cells adhering to PU-100 had a morphology similar to cardiomyocytes in native tissue.The results are in agreement with our results on cell confluency, as there are more cells adhering to PU-50 than to the other scaffolds.In general, imaging cells inside porous scaffolds was very challenging due to the spatial conformation of pores, in which the cells can hide behind the pore walls.These results prove that our nanocomposite scaffolds indeed support cell attachment much better compared to gold-free PU-0 scaffolds.This is probably due to a larger number of interconnected pores in the gold-containing samples, providing higher probabilities for cells to grow through the scaffold pores, thus improving cell adhesion and proliferation.In the work presented here, we investigated a novel method using the combined effect of a polyurethane-gold nanotube/nanowire composite material and electrical stimulation of cardiomyocyte cells.This specific composite material of nano-sized gold incorporated into a porous biodegradable polyurethane matrix was chosen in order to improve the transmission and synchronization of electrical signals in the material and thus increase the natural functionality of cardiomyocytes.The feasibility of this approach of incorporating gold nanoparticles into scaffold materials for applications such as cardiac patches has recently been shown for an alginate matrix .Such alginate matrices have a very low elastic modulus of only a few kPa and are viscoelastic .An ideal material for cardiac tissue engineering would, however, be purely elastic in order to mimic the complicated mechanical properties of native heart tissue without tearing during systolic pressure or prohibiting contractile force.The compressive modulus of native heart tissue has been reported to be 425 kPa at the systole .We have recently shown that PU-GNT/NW composites can provide the mechanical properties required for this purpose, i.e. 
elasticity can be tuned between 200 kPa and 240 kPa.Incorporation of gold nanoparticles in PU substrates changed the physicochemical properties of PU and improved fibroblast cell attachment, and gold in the form of nanowires allowed the formation of conductive bridges between pores and enhanced cell communication.Addition of GNT/NW caused the formation of hydrogen bonding with the polyurethane matrix and improved the thermomechanical properties of nanocomposites.Higher crosslink density and better cell attachment and proliferation were reported in polyurethane containing 50 ppm GNT/NW.Additionally, PU and PU composites showed controllable degradation properties using different polyols during the synthesis process.The polymeric matrix in PU-GNT/NW composites can therefore be replaced by extracellular matrix due to the controlled degradation of PU.After degradation of the scaffold matrix, the gold nanoparticles would remain in the cardiac muscle ECM, which should not harm the cardiac tissue as the gold concentration is comparably low, thus cytotoxicity should be negligible.Additionally, the concentration of gold in most 3D structures varies from 0.0001 wt.% to 15 wt.% and low concentrations in the ppm range have been shown to affect cellular activity.Since intact myocardium tissue contains a high density of cardiomyocytes and is known for heavy oxygen consumption, pore interconnectivity and pore uniformity are essential properties of any tissue engineered cardiac patch material, as they guarantee nutrition and oxygen exchange.Both are, for example, necessary to facilitate cell migration.Additionally, the size and orientation of pores have been reported to affect cell alignment.We used 355–600 μm sieved table salt in scaffold fabrication by a porogen-leaching method so that a microscopic, interconnected, and homogeneous pore structure was formed.Nutrients should therefore easily be transported deeply into the scaffolds.In addition to the relevance of material selection for cardiac tissue engineering, signaling factors also play a major role in engineering a functional tissue patch.Proper signaling might be induced by mechanical stimulation or electrical stimulation, similar to the conditions found in intact myocardium.A recent study has shown that in heart-mimicking constructs, applying only mechanical stimulation was not a proper signaling factor to keep cardiomyocytes functional.Instead, it has been suggested that an excitation-contraction coupling in cell-scaffold constructs is required for the proper function of cardiomyocyte tissue.This can be achieved by electrical stimulation just as in the native heart, where the mechanical stretch of the myocardium is induced by electrical signals.Other studies have already shown that even small physiological fields can stimulate the orientation, elongation and also migration of endothelial cells.In this study, we investigated the orientation and adhesion of cardiomyocyte cells on different PU scaffolds after 3 days of consecutive electrical stimulation.Only on gold-containing scaffolds had cells changed their alignment after four days.Before stimulation, no significant difference in cell morphology was found, whether gold had been incorporated in the scaffolds or not.Furthermore, cell proliferation was not enhanced as a result of gold incorporation.On PU-0, no cell alignment was observed even after electrical stimulation; on both PU-50 and PU-100, cells were aligned on day 4 after electrical stimulation.It is interesting that after stimulation,
PU-50, not PU-100, showed the greatest number of cells, although cells on PU-100 showed a morphology that was most similar to their natural morphology.In our experiments, the alignment of cells was rearranged towards the direction of the applied electrical field.A similar cell alignment improvement was reported by Au et al. for fibroblasts and cardiomyocytes.Furthermore, the cells were re-oriented due to electrical stimulation only on PU-GNT/NW composites.Particularly for endothelial cells, it is well-known that electrical stimulation can change cell elongation, alignment, and migration.Here, we made use of this effect in order to electrically polarize the cardiomyocytes seeded on PU-GNT/NW scaffolds to provide a better microenvironment for their adhesion, elongation and function.It has previously been reported that a square, biphasic electrical pulse of 2 ms duration provided cell coupling similar to that present in in vivo environments after 8 days of stimulation and that a small electrical field of 200 mV/mm caused a fully-oriented cell network.Despite all of these reports on electrical stimulation, Tandon et al. showed that the alignment of cardiomyocyte cells was only affected by surface topography and not by applying an electrical field; however, our results demonstrated that electrical stimulation indeed facilitates the behavior of only those cell-scaffold constructs that contained gold.The morphology and distribution of cells investigated by SEM confirmed the essential role of pore size and distribution in the scaffolds.We observed a marked difference in terms of both cell number and cell morphology between pure PU and PU-GNT/NW composites.In PU-GNT/NW composites, where chloroform had been used during fabrication, the pores were bigger and more interconnected.Therefore, more cells could migrate into the scaffold and could easily be observed.However, we found that cells on PU-100 were closer to their native morphology.This is consistent with our previous result that 50 ppm gold provides optimum adhesion conditions for mesenchymal cell attachment, presumably by changes in surface energy in response to the incorporation of gold.Other studies have shown that an optimum amount of gold caused a microphase separation in the chemical composition of PU, hence improving hydrophilicity.Gold nanoparticles in a concentration of 43.5 ppm in a polyurethane matrix have been shown to cause minimal inflammatory response in vitro and in vivo, and to improve biocompatibility.The study presented here suggests that the PU-50 scaffolds provide optimum conditions for a cardiac tissue engineering material.Our gene expression analysis of specific markers in myocardium tissue clearly showed changes in the expression levels of functional cardiac genes, clarifying the role of gold nanoparticle incorporation into PU and the importance of electrical stimulation.Five different specific genes were investigated.The expression of both ANF and NPPB was significantly up-regulated; the highest up-regulation level was determined on PU-50.The ANF gene is highly expressed by cardiomyocytes when arteriosclerosis has occurred and a decrease has been reported during maturation of ventricular cells.ANF is in particular a marker of cardiomyocyte differentiation.Therefore, the marked increase of this gene's expression in PU-50 and PU-100 found here is assumed to be a positive response to atrial stretch due to the electrical stimulation.Accordingly, we conclude that PU-GNT/NW scaffolds can accelerate cardiomyocyte response to the stresses induced
by electrical stimulation, decreasing the progress of cardiac hypertrophy.NPPB marks any overstretching in myocardial tissue and acts similarly to ANF, but with lower affinity.As it has been shown that, in the native heart, mechanical stretch is initiated by electrical signals, increases in the expression levels of these genes reflect the overstretching of the cell-scaffold constructs, particularly in the PU-50 samples.Similarly, in our studies, incorporation of gold induced a significant increase in the gene expression levels of the early cardiac transcription factors Nkx2.5 and Mef2c.Mef2c plays a role in myogenesis, maintaining the differentiated state of muscle cells.Nkx2.5 also functions in heart formation and development.This implies that 50 ppm of GNT/NW is an optimum concentration for stimulating the expression levels of important cardiac differentiation markers and of myogenesis.In this study we investigated different properties of cardiomyocytes on porous nanocomposite scaffolds formed by a biodegradable polyurethane matrix with incorporated gold nanoparticles.Cardiomyocyte adhesion and proliferation were strongly increased in response to electrical stimulation on PU-GNT/NW composites within 4 days.After 4 days of incubation and electrical stimulation on the scaffolds, cardiomyocytes on PU-GNT/NW samples showed a more native morphology and enhanced proliferation compared to gold-free PU-0.Only small differences in cell behavior were observed between PU-50 and PU-100, where particularly PU-50 induced optimum cell distribution and spreading, as well as the largest up-regulated expression levels of genes relevant to cardiac differentiation and hypertrophy.Taken together, our data suggest that nanocomposites made from porous and biodegradable polyurethane scaffolds with an optimized content of gold nanowires/nanotubes in combination with electrical stimulation are promising materials for future applications in cardiac tissue engineering.
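The fold changes reported in the gene expression analysis above follow the ΔΔCt approach, normalising each target gene to the mean Ct of the housekeeping genes and then to a reference condition. The sketch below is a worked illustration only; all Ct values are invented and do not come from the study.

```python
# Minimal ΔΔCt sketch for one target gene (e.g. ANF) normalised to the mean Ct
# of the housekeeping genes; all Ct values here are invented for illustration.
import numpy as np

def fold_change(ct_target, ct_housekeeping, ct_target_ref, ct_housekeeping_ref):
    """2^-ΔΔCt of a target gene relative to a reference condition (e.g. culture plastic)."""
    d_ct_sample = ct_target - np.mean(ct_housekeeping)          # ΔCt, treated sample
    d_ct_ref    = ct_target_ref - np.mean(ct_housekeeping_ref)  # ΔCt, reference
    return 2.0 ** -(d_ct_sample - d_ct_ref)                     # ΔΔCt -> fold change

# Hypothetical duplicate-averaged Ct values
hk_ref   = [18.1, 22.4, 24.0, 9.8]   # GAPDH, B2M, TBP, 18S rRNA, cells on culture plastic
hk_pu50  = [18.0, 22.5, 24.1, 9.9]   # same genes, cells on PU-50
anf_ref  = 30.2
anf_pu50 = 26.0

print(fold_change(anf_pu50, hk_pu50, anf_ref, hk_ref))   # fold increase on PU-50
```

Fold changes relative to PU-0 (as in Fig. 4b and d) would use the PU-0 Ct values as the reference condition instead.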
Following a myocardial infarction (MI), cardiomyocytes are replaced by scar tissue, which decreases ventricular contractile function. Tissue engineering is a promising approach to regenerate such damaged cardiomyocyte tissue. Engineered cardiac patches can be fabricated by seeding a high density of cardiac cells onto a synthetic or natural porous polymer. In this study, nanocomposite scaffolds made of gold nanotubes/nanowires incorporated into biodegradable castor oil-based polyurethane were employed to make micro-porous scaffolds. H9C2 cardiomyocyte cells were cultured on the scaffolds for one day, and electrical stimulation was applied to improve cell communication and interaction in neighboring pores. Cells on scaffolds were examined by fluorescence microscopy and scanning electron microscopy, revealing that the combination of scaffold design and electrical stimulation significantly increased cell confluency of H9C2 cells on the scaffolds. Furthermore, we showed that the gene expression levels of Nkx2.5, atrial natriuretic peptide (ANF) and natriuretic peptide precursor B (NPPB), which are functional genes of the myocardium, were up-regulated by the incorporation of gold nanotubes/nanowires into the polyurethane scaffolds, in particular after electrical stimulation.
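The confluency metric used in the Methods of this article (Calcein-positive area relative to the imaged scaffold area, quantified in ImageJ with a manually set intensity threshold) can be approximated with the simple thresholding sketch below. The file name is a placeholder, and Otsu's method stands in for the manual threshold used in the study.

```python
# Rough analogue of the ImageJ confluency measurement: fraction of the imaged
# scaffold area covered by Calcein-positive signal. The file name is a
# placeholder and Otsu's method replaces the manually set threshold.
from skimage import io, filters

img = io.imread("calcein_scaffold.tif")            # hypothetical fluorescence image
green = img[..., 1] if img.ndim == 3 else img      # green channel if the image is RGB

mask = green > filters.threshold_otsu(green)       # Calcein-positive pixels
confluency = 100.0 * mask.mean()                   # percent of the imaged area
print("confluency = %.1f %%" % confluency)
```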
215
Development and validation of a bioassay to evaluate binding of adalimumab to cell membrane-anchored TNFα using flow cytometry detection
Biopharmaceuticals are heterogeneous molecules manufactured under stringent quality controls intended to ensure their batch-to-batch consistency.Accordingly, regulatory guidelines indicate the relevant characteristics or attributes that need to be evaluated to determine their quality.In this regard, different analytical methodologies are employed for the evaluation of critical quality attributes with respect to identity, structure, heterogeneity, purity and functionality.The evaluation of functionality in bioassays is a fundamental part of quality assessment of biopharmaceuticals because it provides confirmation of the appropriateness of other physicochemical and structural CQAs; additionally, functionality assays allow the mechanism of action of the biomolecule to be properly evaluated.However, the development, standardization and implementation of bioassays is a challenging task because they depend on the response of living organisms, the use of critical reagents, and other uncontrollable sources of variability that affect the system's performance.Bioassays must be capable of reproducing in vitro the mechanisms of action by which a biomolecule achieves its biological activity in patients.Additionally, bioassays should incorporate a reliable technique that reveals the interaction between the biomolecule and its target, commonly through colorimetric, luminescent or fluorometric signals.Accordingly, the development of bioassays requires a deep understanding of the mechanisms of action of the biomolecules under study; this is important during the design of the assay, because it allows defining the critical characteristics to be evaluated and helps to select the most appropriate analytical approaches for their assessment.Once the characteristics to evaluate and the approach for assessment have been defined, the experimental conditions of the bioassay must be standardized, and the assay should then be validated to demonstrate that it is suitable for its intended purpose.The validation exercise must be focused on the evaluation of characteristics that warrant that the assay is robust under the experimental conditions in which it will be performed.In general, the characteristics evaluated during a validation exercise include specificity, accuracy, precision, sensitivity and system suitability.However, the validation exercise of bioassays and the stringency to evaluate each characteristic will depend on the nature of each assay, the knowledge gained during its development and standardization, as well as its intended purpose.In this article, we present the development and validation of a bioassay designed to evaluate the interaction of adalimumab and its target, tumor necrosis factor alpha; for this purpose, we used recombinant CHO cells that express TNFα bound to their membrane.TNFα is a pleiotropic inflammatory cytokine and a central regulator of inflammation which is responsible for chronic diseases such as rheumatoid arthritis, ankylosing spondylitis and Crohn's disease.Adalimumab is a human anti-human TNFα therapeutic monoclonal antibody produced by phage display which has been successfully employed in the treatment of several TNFα-mediated diseases.Interactions between adalimumab and TNFα have been well described at a molecular level.It has been reported that adalimumab binds to TNFα with a relatively higher affinity than other anti-TNFα molecules such as Etanercept and Infliximab.In addition, crystallography studies showed that adalimumab prevents the ligand from binding to TNFR2, inhibits ligand-receptor binding, and blocks
TNFR activation.It is well known that native transmembrane TNFα of organisms is involved in the inflammatory response acting as a bipolar molecule that transmits signals, either as a ligand, or by mediating reverse signals into mTNF-expressing cells.On the other hand, it has been reported that anti-TNFα antibodies exert their main therapeutic effect by binding to mTNFα to form stable immune complexes.Although the interaction of adalimumab and TNFα has been widely studied from a clinical research perspective, the information on bioassays designed to evaluate binding in vitro as routine tests for quality control purposes in the pharmaceutical industry is more modest.The bioassay presented herein relies on flow cytometry to reveal the binding of adalimumab to mTNFα of a recombinant CHO cell line through a fluorometric signal.Flow cytometry has been widely employed to analyze phenotypic characteristics of cells based on the Coulter principle coupled to a detection system that measures how the cell or particle scatters incident laser light and emits fluorescence.Although this technology has been applied as an identity method for cells, it has been little used for obtaining data to build dose-response curves.In order to determine the suitability of the bioassay we evaluated its performance through a validation exercise that included the assessment of specificity, accuracy, precision and system suitability.The validation exercise was designed and conducted according to the USP 〈1033〉 Biological Assay Validation Chapter and the ICH Validation Guideline Q2.The validation of the assay ensures its robustness to be used as an in vitro pre-clinical test of biological activity of adalimumab for batch release, functional characterization or biocomparability purposes.Engineered CHO-K1 cells expressing membrane-bound TNFα were acquired from PROMEGA.RPMI-1640 medium, Fetal Bovine Serum, PBS buffer, Trypsin TryPLE Select 1X, EDTA, BD Cytofix solution, IgG anti-human PE, and IgG anti-TNFα APC were also used.Adalimumab, 40 mg/0.8 mL solution for injection in pre-filled pen, batch 51073LX01, was acquired from Abbvie Inc.mTNFα cells were routinely grown in RPMI medium containing 10% Fetal Bovine Serum at 37 °C, 5% CO2.This is a stable cell line developed to be ready to use; it does not require selective pressure or further passages to express mTNFα consistently, which ensures reproducibility of assays.The cells were harvested at 80% confluence.The harvested cells were centrifuged, washed and resuspended in PBS.Viable cells were counted by the trypan blue exclusion method prior to being prepared with BD Cytofix™ Fixation Buffer for subsequent staining and cytometric analysis.Finally, the concentration of cells was adjusted to 4 × 10⁶ cells/mL with PBS + 1% horse serum.50 μL of the cell suspension prepared in the previous step was dispensed into 1.5 mL Eppendorf tubes.Different concentrations of either adalimumab or an anti-TNFα antibody were added to the tubes containing the cells as separate treatments, which were then incubated for 1 h at 4 °C.The cells were washed with wash buffer and the pellet was recovered by centrifugation.Then, the cells exposed to adalimumab were resuspended, 50 μL of PE anti-human IgG Fc was added, and the cells were incubated for 45 min at 4 °C.Finally, the cells were washed twice with wash buffer and stored at 4 °C until they were analyzed in an Aria III flow cytometer.mTNFα cells were gated based on single events and homogeneous morphology according to size and granularity by the Forward Scatter and Side Scatter
Unstained cells were regarded as controls to determine the autofluorescence of the cells. Cells incubated only in the presence of PE anti-human IgG Fc, or in the presence of adalimumab with an isotype control antibody, were used as controls to evaluate non-specific binding. The stained cells were analyzed in a FACSAria III flow cytometer. The binding of adalimumab to mTNFα was evaluated by the increase in the median fluorescence intensity (MFI) of the fluorochrome conjugated to the anti-human Fc antibody. The fold increase was then calculated as the MFI at each concentration of adalimumab divided by the MFI of the control without adalimumab. The parameters to be fulfilled for validation were established according to the ICH guideline Q2 and the USP 〈1033〉 Biological Assay Validation Chapter, considering the parameters below. The acceptance criterion for each evaluated parameter of the validation exercise was established according to the method capabilities observed during the standardization stage and its intended purpose, which is to evaluate in vitro the binding of adalimumab to mTNFα expressed on recombinant CHO cells for QC purposes. Specific recognition and binding of adalimumab to mTNFα was tested against matrix components. A dose-response curve was constructed from the fold increase in MFI values generated from independent triplicates at nine dilution levels of adalimumab, with respect to the matrix prepared under the same dilution scheme and number of replicates. In this context, specificity is given by the fitting of the model curve to the assay data through a non-linear regression model. Curve fitting was tested under four- and five-parameter logistic models using the software GraphPad Prism. With regard to accuracy, several factors such as buffer components and the sample matrix can impact the binding of antibodies in bioassays and hence influence the accuracy of the method. Accordingly, accuracy is usually measured through dilutional linearity, which accounts for such variations. We evaluated accuracy at all the dilution levels of the dose-response curve over a concentration range from 60% to 140%. The pre-defined acceptance criteria for acceptable linearity were r² ≥ 0.90 and a slope in the range from 0.80 to 1.25. Precision describes how well the method is capable of reproducing independent results, with variations within an acceptable distribution of single measurements. We estimated precision through the coefficient of variation (CV) from three independent replicates at the nine concentration levels of the dose-response curve. The pre-defined acceptance criterion was CV ≤ 20% among replicates at all the evaluated levels. According to the manufacturer's guides, before setting up an experiment we ran a performance check using the Cytometer Setup and Tracking application. This performance check ensures that the cytometer is performing consistently for automatic characterization, tracking and measurement acquisition. Additionally, system suitability should be determined by specificity using dye controls capable of unambiguously identifying specific populations. In this work the system suitability was determined by considering the capability of the flow cytometer to detect a differential dose-response between the samples containing the adalimumab-mTNFα complex and the negative controls. The parameters evaluated were the CV and r² of the dose-response curve within the range of concentration levels.
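The dose-response analysis and acceptance criteria described above lend themselves to a short numerical illustration. The following Python sketch uses hypothetical fold-increase MFI values (not data from this study) and SciPy's curve_fit in place of the GraphPad Prism fitting used here: it fits a four-parameter logistic (4PL) curve and then applies the stated acceptance criteria for precision (CV ≤ 20% among replicates) and dilutional linearity (r² ≥ 0.90, slope between 0.80 and 1.25). All numbers and variable names are illustrative assumptions.

```python
# Minimal sketch (hypothetical data): 4PL fit plus precision and
# dilutional-linearity checks mirroring the acceptance criteria above.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import linregress

def four_pl(x, a, d, c, b):
    """4PL: a = lower asymptote, d = upper asymptote, c = EC50, b = slope."""
    return a + (d - a) / (1.0 + (c / x) ** b)

# Nine adalimumab levels (ng/mL) with three hypothetical replicate
# fold-increase MFI values per level (illustrative numbers only).
conc = np.array([3, 6, 12, 25, 50, 100, 200, 400, 700], dtype=float)
replicates = np.array([
    [1.1, 1.0, 1.2], [1.3, 1.4, 1.3], [1.9, 2.0, 1.8],
    [2.9, 3.1, 3.0], [4.6, 4.4, 4.7], [6.2, 6.0, 6.3],
    [7.1, 7.3, 7.0], [7.6, 7.5, 7.7], [7.8, 7.9, 7.7],
])
mean_fold = replicates.mean(axis=1)

# Precision: coefficient of variation per level, acceptance CV <= 20 %.
cv = 100 * replicates.std(axis=1, ddof=1) / mean_fold
print("max CV (%):", cv.max().round(2), "pass:", bool((cv <= 20).all()))

# Specificity / curve fit: non-linear regression of the 4PL model.
popt, _ = curve_fit(four_pl, conc, mean_fold,
                    p0=[mean_fold.min(), mean_fold.max(), 50.0, 1.0],
                    maxfev=10000)
fitted = four_pl(conc, *popt)
ss_res = np.sum((mean_fold - fitted) ** 2)
ss_tot = np.sum((mean_fold - mean_fold.mean()) ** 2)
print("4PL r^2:", round(1 - ss_res / ss_tot, 4))

# Accuracy via dilutional linearity: regress measured relative potency
# against nominal potency (60-140 %); acceptance r^2 >= 0.90 and slope
# within 0.80-1.25 (nominal/measured values below are assumptions).
nominal = np.array([0.6, 0.8, 1.0, 1.2, 1.4])
measured = np.array([0.63, 0.82, 0.98, 1.18, 1.43])   # hypothetical
fit = linregress(np.log(nominal), np.log(measured))
print("slope:", round(fit.slope, 3), "r^2:", round(fit.rvalue ** 2, 3))
```

The log-log regression for dilutional linearity is one common convention; a regression on untransformed potencies would work equally well for this illustration.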
The first step was to demonstrate that the recombinant cell line was capable of expressing TNFα on the cell membrane. For this purpose we analyzed the cell line to determine size and granularity using forward scatter and side scatter, respectively. Our results confirmed that the detected events corresponded to CHO-K1 cells. Additionally, the expression of TNFα on the cell membrane was determined through the median fluorescence intensity from APC. These results confirmed that, under our experimental conditions, the recombinant CHO-K1 cells were capable of expressing TNFα and binding it to their membrane. We selected an anti-human Fc antibody conjugated to PE in order to determine the establishment of the adalimumab-mTNFα complex. The anti-Fc antibody conjugated to PE exhibited a displacement in MFI with respect to the controls, which is indicative of specific recognition of the Fc portion of adalimumab. These results confirmed that the biological model, along with the detection method by flow cytometry, was appropriate to evaluate the binding of adalimumab to its target in vitro. Based on our previous experience developing cell-based assays, we adjusted the biological system to 2 × 10⁵ cells to evaluate the dose-response relationship of adalimumab binding to mTNFα through the MFI. We observed a dose-response behavior, denoted by displacements of the MFIs obtained from cells exposed to different dilution levels of adalimumab in a range from 3 to 700 ng/mL. The curve constructed from these fold increases in MFI values showed an asymmetric dose-response shape with a shorter tail at the upper end. Samples prepared at different concentrations of adalimumab exhibited a concentration-dependent MFI response, while the negative controls did not exhibit any signal different from cell autofluorescence. These results showed that there was no interference from the matrix and that the signal from the samples is the result of the specific interaction between adalimumab and mTNFα. In most bioassays, the effect of the concentration of the analyte on the biological response traditionally results in a sigmoidal shape that fits the four-parameter logistic (4PL) model. However, in some cases the data may not result in sigmoidal dose-response curves. In such circumstances, it is recommended to incorporate asymmetry into the mathematical model for curve fitting. The five-parameter logistic (5PL) model invokes the same parameters as the 4PL equation, plus an asymmetry factor that provides a better fit when the response curve is not symmetrical. Although in various experiments the 5PL exhibits a better fit to asymmetrical curves, in this work we observed that both the 4PL and 5PL models gave similar results in the goodness-of-fit test. This means that, for this particular case, asymmetry is negligible for fitting the model curve to the true curve. Accordingly, the validation was carried out using 4PL fitting and the results complied with the pre-defined acceptance criteria. Accuracy, estimated by dilutional linearity of the effect of adalimumab concentration on the MFI at five dilution levels, showed a correspondence between nominal and measured potency with a correlation coefficient r² = 0.9. The coefficient of variation among replicates was < 20% at all the evaluated levels, which complies with the pre-defined acceptance criteria. It was observed that the variation increases as the samples are diluted; however, this variation did not affect precision within the evaluated range. The samples containing adalimumab-mTNFα complexes exhibited a dose-response behavior that fitted a non-linear mathematical model, while the response of the negative controls was mainly flat. The CV among samples was < 0.20, while the r² for the dose-response curve was > 0.9.
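To make the 4PL-versus-5PL comparison above concrete, the short sketch below fits both models to the same hypothetical fold-increase data and compares their residual sums of squares; the 5PL simply adds an asymmetry exponent g to the 4PL form. The values and starting parameters are illustrative assumptions, not the study's data or its Prism settings.

```python
# Minimal sketch (hypothetical data): compare 4PL and 5PL goodness of fit.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, d, c, b):
    return a + (d - a) / (1.0 + (c / x) ** b)

def five_pl(x, a, d, c, b, g):
    # Same parameters as the 4PL plus an asymmetry factor g.
    return a + (d - a) / (1.0 + (c / x) ** b) ** g

conc = np.array([3, 6, 12, 25, 50, 100, 200, 400, 700], dtype=float)
fold = np.array([1.1, 1.3, 1.9, 3.0, 4.6, 6.2, 7.1, 7.6, 7.8])

def rss(model, p0):
    popt, _ = curve_fit(model, conc, fold, p0=p0, maxfev=20000)
    return np.sum((fold - model(conc, *popt)) ** 2)

rss4 = rss(four_pl, [1.0, 8.0, 50.0, 1.0])
rss5 = rss(five_pl, [1.0, 8.0, 50.0, 1.0, 1.0])
# Comparable residuals suggest the extra asymmetry term adds little,
# which is the rationale given above for retaining the 4PL model.
print(f"RSS 4PL = {rss4:.4f}, RSS 5PL = {rss5:.4f}")
```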
These system suitability results indicate, on the one hand, that the bioassay is capable of eliciting a differential behavior of the positive samples with respect to the negative controls and, on the other hand, that the flow cytometry instrument is capable of detecting such differences accurately. Flow cytometry has been employed for different purposes related to cell characterization and functionality. The results obtained here show that flow cytometry is also an appropriate alternative as a detection method in bioassays. The advantages of using flow cytometry in bioassays rely on the automation, sensitivity, dye controls and self-calibration that allow accurate and consistent measurements to be obtained. In addition, the validation exercise demonstrated that the developed bioassay is suitable for the evaluation of the binding of adalimumab to mTNFα expressed by rCHO cells. We also observed that, although the 5PL model is highly recommended for fitting non-sigmoidal curves, one must be cautious when choosing this model, since unexpected results could arise from assumptions or constraints of the model and software. The collective results from the development and validation of this bioassay suggest that it could be implemented as a routine methodology in QC laboratories for batch release, or as a test for characterization and biocomparability of adalimumab. Moreover, the experimental conditions could be standardized for the evaluation of other biomolecules acting through the same mechanism of action. The information from this validation exercise contributes to expanding the applications of flow cytometry for the characterization of biodrugs and for the demonstration of comparability of biosimilars. The authors declared no conflict of interest.
Physicochemical and structural properties of proteins used as active pharmaceutical ingredients of biopharmaceuticals are determinant to carry out their biological activity. In this regard, the assays intended to evaluate functionality of biopharmaceuticals provide confirmatory evidence that they contain the appropriate physicochemical properties and structural conformation. The validation of the methodologies used for the assessment of critical quality attributes of biopharmaceuticals is a key requirement for manufacturing under GMP environments. Herein we present the development and validation of a flow cytometry-based methodology for the evaluation of adalimumab's affinity towards membrane-bound TNFα (mTNFα) on recombinant CHO cells. This in vitro methodology measures the interaction between an in-solution antibody and its target molecule onto the cell surface through a fluorescent signal. The characteristics evaluated during the validation exercise showed that this methodology is suitable for its intended purpose. The assay demonstrated to be accurate (r2 = 0.92, slope = 1.20), precise (%CV ≤ 18.31) and specific (curve fitting, r2 = 0.986–0.997) to evaluate binding of adalimumab to mTNFα. The results obtained here provide evidence that detection by flow cytometry is a viable alternative for bioassays used in the pharmaceutical industry. In addition, this methodology could be standardized for the evaluation of other biomolecules acting through the same mechanism of action.
216
Prevalence of sweetpotato viruses in Acholi sub-region, northern Uganda
Sweetpotato is an important crop for smallholder farmers in resource-limited rural settings of Africa. It requires few inputs to grow, yields relatively well in poor soils and is drought tolerant. It is a good carbohydrate source and the cheapest food security crop for subsistence farmers in Africa. In addition, sweetpotato tubers and leaves are regarded as the cheapest source of vitamins, micro-nutrients, protein, fat and dietary fibre. The importance of sweetpotato is constantly increasing, but its production is greatly constrained by viruses, among other biotic factors. Up to seven sweetpotato viruses have been reported to infect and constrain sweetpotato production in East Africa. Six of these have been particularly reported in Uganda, where they can cause up to 98% yield losses. Propagation of sweetpotato plants using vine cuttings remains the most important mechanism for the spread, survival and transmission of sweetpotato viruses from generation to generation. In addition, traditional agricultural practices such as piecemeal harvest allow the viruses to be maintained for long periods within infected plants, so that these act as potential sources of inoculum for future infection. Sharing of sweetpotato vines amongst farmers, or buying vines from the market during times of shortage, are some of the farming practices that promote the spread of sweetpotato viruses among different farms. SPCSV is transmitted by the common whitefly species Bemisia tabaci, while SPFMV is transmitted by aphids. Some of the viruses are transmitted through sap inoculation from infected plants via the use of contaminated tools during vine cutting among local farmers. Most sweetpotato viruses do not produce severe symptoms as single infections but have devastating co-infection effects. Synergistic interaction among sweetpotato feathery mottle virus (SPFMV), sweetpotato mild mottle virus (SPMMV) and sweetpotato chlorotic stunt virus (SPCSV) causes a very severe sweetpotato condition – sweetpotato chlorotic dwarf disease. Co-infections involving SPFMV and SPCSV produce a severe disease syndrome known as sweetpotato virus disease (SPVD) that is associated with severe yield losses in a number of sweetpotato production systems. Currently SPVD is widespread in the major sweetpotato growing region of Uganda and has been implicated in the elimination of some early maturing and high yielding cultivars. In addition, high incidences of SPFMV and SPCSV have been reported in central Uganda. However, reports on the incidence of these viruses and their effect on sweetpotato production in the former war zone of northern Uganda are limited. Such information is essential in guiding control strategies toward managing the spread of these diseases. This study was therefore carried out to determine the prevalence of different sweetpotato viruses in northern Uganda. A cross-sectional survey was carried out in Gulu, Kitgum and Lamwo districts of the Acholi sub-region in northern Uganda from January to February 2016. These districts were chosen because they represent the major sweetpotato growing districts in Acholi. A total of 380 samples were collected from 38 fields across six sub-counties randomly selected from the three districts. Sweetpotato fields were sampled using systematic random sampling along roads. The distance between a sampled field and the subsequently sampled field was at least 2 km. Only fields with vines aged two months or more were sampled because they had developed many leaves for symptom observation.
Field observations were made to identify vines with symptoms related to virus infection. Pictures of plants showing symptoms of viral infection were taken in the field and vines were cut to a length of at least 15 cm. Leaves were removed from the vines and the vines were subsequently wrapped in moist tissue paper to avoid withering. The sampled vines were potted in a screen-house at Gulu University a day after their collection from the fields. New leaves were monitored for any development of virus-like symptoms similar to those manifested by the plants when in the field, in order to differentiate them from symptoms induced by heat stress or insect bites while the plants were in the field. The vines were watered regularly every two days and also sprayed with insecticide to avoid cross-infection by insect vectors. Leaves of the plants were harvested within three to four weeks after potting for virus testing. Viral RNA or DNA was extracted using TRIzol LS reagent from fresh leaves of the sweetpotato plants established in the screen-house. The RNA quality was checked by denaturation in highly deionised Hi-Di™ formamide and electrophoresis in 1.2% agarose dissolved in 1% TAE buffer. The cDNA was generated using an RT-PCR kit. The reaction volume contained 0.5 μl of each reverse primer (10 μM), 0.5 μl of M-MuLV reverse transcriptase (200,000 U/ml), 5 μl of RT buffer (1X), 4 μl of dNTP mix (1 mM), 1 μl of RNase inhibitor (40,000 U/ml), 2 μl of RNA template, 0.5 μl of BSA (10 μg/ml) and water to bring the total reaction volume to 20 μl. The reactions were then incubated in a SimpliAmp Thermal Cycler under the following conditions: 22 °C for 10 min, 42 °C for 40 min and 95 °C for 4 min. Multiplex PCR was completed in a 25 μl reaction volume using a Taq PCR kit. The reaction mix contained 5 μl of PCR buffer (1X), 2 μl of dNTP solution mix (1 mM), 0.2 μl of Taq polymerase (5000 U/ml), 4 μl of MgCl2 (2.5 mg/μl), 2 μl of cDNA template, 0.5 μl of forward primers (10 μM), 0.5 μl of reverse primers (10 μM) and PCR water to make the volume up to 25 μl. Amplification was performed in a SimpliAmp Thermal Cycler as follows: initial denaturation at 94 °C for 5 min and 35 thermal cycles of denaturation at 94 °C for 30 s, annealing at 50 °C for 30 s and extension at 72 °C for 1 min. Final extension was at 72 °C for 5 min. The PCR products were electrophoresed in 1% Agarose Basic, stained with SYBR Safe DNA gel stain and visualised on a UVIDOC HD5 ultraviolet trans-illuminator. Infections were determined when one or more bands corresponding to the expected amplicon sizes of the viruses appeared in the lane of the agarose gel after electrophoresis. The gel electrophoresis bands were used to summarise the virus infection status of the samples after PCR. Samples with a gel electrophoresis band corresponding to the expected amplicon size were recorded as positive in an Excel spreadsheet, while those with no bands were recorded as negative for virus infection. The data recorded in the Excel spreadsheet were uploaded to Epi Info 7. The frequency of all infections was calculated and expressed as a percentage. Similarly, the frequency of infection of sweetpotato fields was computed and expressed as a percentage. The 95% confidence interval for percentage infection was calculated. The spatial distribution of the different viruses across the three districts within Acholi was presented in an ArcGIS map.
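As a worked illustration of the prevalence and confidence-interval calculations described above, the short Python sketch below computes the percentage infection and a 95% confidence interval from sample counts. The counts follow the totals reported in this study, but the choice of the Wilson score interval is an assumption, since the paper does not state which interval formula was used.

```python
# Minimal sketch: prevalence (%) with 95% Wilson score intervals.
# The interval formula is an assumption; the source does not specify one.
import math

def wilson_ci(k, n, z=1.96):
    """95% Wilson score interval for k positives out of n samples."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

counts = {"SPFMV": 65, "SPCFV": 17, "SPMMV": 8, "SPCSV": 2}
n = 380  # total sampled plants
for virus, k in counts.items():
    lo, hi = wilson_ci(k, n)
    print(f"{virus}: {100 * k / n:.2f}% "
          f"(95% CI {100 * lo:.2f}-{100 * hi:.2f}%)")
```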
Only four viruses were detected in Acholi. A total of 92/380 samples were infected with any of the four viruses. Of these 92 infections, 65/92 were SPFMV, which represented the major virus infecting sweetpotato in the region. There were 17/92 infections with SPCFV, 8/92 with SPMMV and only 2/92 with SPCSV. In total, 17.11% of samples were infected with SPFMV, 4.47% with SPCFV, 2.11% with SPMMV and only 0.5% with SPCSV. The highest number of virus-infected samples was from Kitgum, followed by Gulu and then Lamwo. Total SPFMV infection in Kitgum was 30/44, SPCFV was 10/44, SPMMV was 3/44 and SPCSV was 1/44. Similarly, in Gulu, 21/34 samples were infected with SPFMV, 7/34 with SPCFV, 5/34 with SPMMV and 1/34 with SPCSV. Only SPFMV was detected in Lamwo. No virus was detected in 26% of surveyed fields; SPFMV occurred in 57.9% of surveyed fields, SPMMV occurred in 18.4%, six fields had SPCFV and only two had SPCSV. Five fields had both SPFMV and SPMMV, four had infection with both SPFMV and SPCFV, and two fields had both SPCSV and SPFMV. Only one field had SPFMV, SPCSV and SPMMV together. Only three samples out of 380 showed infection by more than one virus. Only two kinds of co-infection were identified: SPFMV + SPCFV and SPCSV + SPCFV. Co-infection of SPCSV and SPFMV was not detected in any of the three districts. Furthermore, multiple infection involving three or more viruses was not detected using multiplex PCR. The most widespread virus identified in the study was SPFMV, with a higher frequency of occurrence than the other viruses. This finding is consistent with studies done in central, western and eastern Uganda that indicated wide distribution of SPFMV among farmers' fields. In different parts of East Africa, SPFMV remains the most widely distributed virus. Reports also indicate that SPFMV occurs in almost every country where sweetpotato is grown. The widespread distribution of SPFMV in different parts of the world is attributed to its ability to cause mild or no symptoms in sweetpotato plants, making it difficult for farmers to detect SPFMV infection during vine propagation and so promoting its spread through the sharing of infected vines among local farmers. In addition, the mild or absent symptoms manifested by SPFMV make it hard for farmers to rogue infected sweetpotato plants from their fields. Such sweetpotato plants may be propagated and reused for many seasons by different farmers through vine sharing, which is a common practice of vine acquisition among local farmers. Single infection by SPFMV is estimated to cause yield losses of about 1.6% in sweetpotato. Some strains of the virus cause corkiness in root tubers, making them unpalatable. SPCSV was the least detected virus and had a limited distribution. However, reports indicate that it is the major virus causing significant yield losses in central and western Uganda, where it often occurs in combination with SPFMV. Limited prevalence of SPCSV has been reported in some parts of Uganda and Tanzania. SPCSV-infected plants manifest symptoms clearly, which makes some farmers select against such vines when choosing vines for propagation. Phytosanitary measures, such as roguing of sweetpotato plants with virus-like symptoms by sweetpotato farmers, could contribute to the low prevalence of SPCSV reported in this study, since the virus manifests symptoms that are easily detected by farmers. The prevalence of SPCSV in different agro-ecological areas also depends on the abundance and distribution of whitefly. However, the hot dry season in northern Uganda from December to April every year scorches most sweetpotato vines in the area and provides a break in the lifecycle of the whitefly, which is a key vector of SPCSV. Despite its low prevalence, the detection of SPCSV in this area poses a threat to sweetpotato production since it can cause an estimated yield
loss of about 40% alone.However, up to 98% yield losses have occurred when SPCSV and SPFMV co-infect sweetpotato .It is the major sweetpotato virus responsible for degeneration and extinction of sweetpotato cultivars .Currently, local farmers often reuse and share planting materials and this will most likely increase the frequency of occurrence of SPCSV and other viruses.The SPCFV was the second most detected virus in our study.A previous study ranked it as the fourth most important virus in central and western Uganda and an earlier study indicated that SPCFV had comparatively higher prevalence in a surveyed district in northern Uganda .No vector has yet been identified as responsible for transmission of SPCFV, making it impossible to correlate spread of SPCFV with a vector.The spread of SPCFV through sap from infected sweetpotato is the only known mode of transmission .The high frequency of detection is possibly due to the sharing of infected planting material among local farmers and spread through sap inoculation from unsterilised tools during cutting of planting materials.The SPMMV, a potyvirus, was the third most important virus detected in our study and had a lower frequency of detection compared with other studies.It is the third most distributed virus in Uganda and East Africa .It can interact with SPFMV and SPCSV, producing the severe disease syndrome known as sweetpotato chlorotic dwarf disease as first reported in Argentina .The three-virus combination has been reported in Uganda and shows severe disease syndromes with significant yield losses .Infection involving two or more viruses was rare in our study and single infection by SPFMV was the most common, in contrast to findings in central and western Uganda where most infection was mixed .Our findings were consistent with previous results in which SPFMV was the major virus of sweetpotato in northern Uganda.Similarly, single infection by SPFMV was more common than multiple infection in Tanzania .However, we failed to detect infection of SPCSV and SPFMV, which together form the common and devastating disease of SPVD in East Africa .This is not the first case of unusually low occurrence of SPVD – a similar case was reported in the coastal district of Bagamoyo in Tanzania .The possible explanation for the failure to detect SPVD is the rare occurrence of SPCSV in this region, as found in our study and previously .Previous studies attributed the low prevalence of sweetpotato viruses in northern Uganda relative to central Uganda to the differences in rainfall distribution pattern .Northern Uganda experiences longer dry periods with a unimodal pattern of rainfall compared to an even distribution of rainfall throughout the year for central Uganda .In the three East African countries of Kenya, Uganda and Tanzania, cases of high sweetpotato virus incidence have consistently been reported in areas around Lake Victoria, which receive abundant rainfall throughout the year .Such areas with regular rainfall have a continuous sweetpotato production pattern which maintains infected plants in the production system for long periods and favours continuous proliferation of whitefly vectors .In contrast, the northern region has a prolonged dry spell during December–April, which scorches most vines and reduces their reuse and the multiplication of vectors of sweetpotato viruses.Reports indicate that uniform rainfall distribution within the Lake Victoria basin supports proliferation, abundance and distribution of the whitefly vector throughout the year 
.Stable whitefly populations are maintained by continuous sweetpotato production and even distribution of rainfall throughout the year, which is not the case in northern Uganda.The four sweetpotato viruses detected in the region are the major viruses reported to infect sweetpotato in other parts of Uganda and East Africa.The most frequently detected virus was SPFMV and least detected was SPCSV.The two viruses SPCSV and SPFMV are the most significant viruses of sweetpotato worldwide because co-infection of a plant results in a devastating disease syndrome, with associated yield losses in the range of 65–98%.Overall the study found low frequency of occurrence of the viruses in the Acholi sub-region, indicating a lower burden to sweetpotato production within this sub-region compared to previous studies conducted in central, western and eastern Uganda.This work was supported by PEARL grant from Bill and Melinda Gates Foundation.The authors declare that they have no competing interests
The purpose of the study was to identify the different viruses infecting sweetpotato and the level of co-infection and spatial distribution of the viruses within the Acholi sub-region of northern Uganda. Multiplex PCR was used to screen 380 sweetpotato plants and determine the level of co-infection. The PCR scores were computed to give the overall frequency of occurrence of the different viruses. The spatial distribution of viruses was represented on an ArcGIS map. Of all screened samples, 24% (92/380) were infected with at least one virus. Sweetpotato feathery mottle virus (65/92), sweetpotato chlorotic fleck virus (17/92) and sweetpotato mild mottle virus (8/92) were the most frequently detected viruses. Of the sampled fields, 74% (28/38) had at least one virus-infected sweetpotato plant. The four viruses detected are the major viruses causing significant yield losses in the major sweetpotato growing regions of Uganda and East Africa. The findings of limited distribution and low prevalence of the viruses in the region indicate that they pose less of a burden to sweetpotato production in the sub-region compared with other parts of Uganda.
217
Re-evaluating the resource potential of lomas fog oasis environments for Preceramic hunter-gatherers under past ENSO modes on the south coast of Peru
The coast of Peru is one of the world's driest deserts and yet for much of the year its air is pregnant with water vapour, a paradox due to the rigid stratification of its tropical air masses above the cold Humboldt Current offshore.Along the seaward Andean foot-slopes this creates the unique ecological phenomenon of lomas – ‘oases born of mists’ – in which vegetation flourishes during the austral winter, before fading back to barren desert in the summer."The role that these lomas fog oases played in the long history of human ecology that was to lay the foundations here for one of humanity's few independent hearths of agriculture and cradles of “pristine” civilisation, has been much debated. "Over the long trajectory of the Middle Preceramic Period hunter–gatherers here exploited a variety of ecological niches, juxtaposed along this otherwise extremely arid littoral, including: the riparian oases along the floodplains of rivers arising in the Andean hinterlands; estuarine wetlands; and not least, one of the world's richest marine ecosystems sustained by deep upwelling offshore.Explaining how, and why, human exploitation shifted between these different ecologies through time is key to understanding increasing sedentism and the emergence of agriculture here.And there is a dichotomy in interpretation of precisely how important lomas fog oases were to these changes: illuminated by only a few detailed, well-reported investigations along almost 2500 km of Pacific coastline, most notably in Chilca, and Quebrada de los Burros in the far south.This remains a fundamental issue, not least because lomas environments – highly sensitive both to climate shifts and to human impact – were seemingly abandoned at the end of the Middle Preceramic Period: a critical time period during which intensified exploitation of marine resources, alongside the cultivation of cotton and food plants on river floodplains, laid the foundations for larger scale populations and the first emergence of Andean social complexity.Early investigators, particularly struck by the density of lithic artefacts encountered in the lomas, asserted that these environments were the prime resource for Early Holocene hunter–gatherers on the Peruvian coast, and moreover, that they had formerly been far more extensive.Lanning and Engel believed that the lomas had provided rich resources for transhumant hunter–gatherers living there during winter months: what Engel called the ‘fog oasis situation’."They further interpreted the occurrence of large expanses of desiccated vegetation and snail shells outside the limits of today's lomas formations as evidence of their former extent.Indeed, Lanning believed that ever since their first occupation in the Terminal Pleistocene the lomas had been in constant retreat, as the fog belt lifted due to changes in the Humboldt Current, contracting to a tenth of their original extent by the time of their abandonment around 4450 BP.Soon, however, this model of lomas ecology in the early prehistory of Peru came to be challenged as a ‘transparent adventure in environmental determinism’.Its critics argued, as Lynch summarises, that ‘lomas vegetation and animal life could never have been important to man, that the greater previous extent of lomas vegetation has not been established and that … there is no reason to believe that the climate of coastal Peru has been substantially different from the present during postglacial times’.Relict extensions of desiccated vegetation and snails were, it was suggested, the product of 
occasional El Niño Southern Oscillation (ENSO) climatic perturbations, rather than of permanently more extensive lomas. Indeed, some of these extraordinarily preserved formations of desiccated Tillandsia date far back into the Pleistocene. Many see little evidence for any significant long-term climatic change throughout the Holocene on the Peruvian coast. Instead, other factors were now invoked to explain the changes in archaeological patterning observed to occur at the end of the Middle Preceramic, including over-exploitation of fragile lomas ecologies driven by population growth, and indeed technological change. Moseley, for his part, argued that rather than a diminution of lomas resources driving people to exploit marine resources, it was an enhanced ability to do so, thanks to the introduction of cotton agriculture. Moreover, Lanning's model had failed to take into account the effects of rising sea levels on the archaeological patterning observed on the central Peruvian coast, and new findings suggested far greater time depths for maritime adaptations. Such refinements, however, still leave the ultimate cause of the abandonment of the central Peruvian lomas to be fully explained. Meanwhile, there have been significant advances in palaeoclimatic research. Broad-scale climatic changes are thought to have occurred in Peru throughout the retreat of the glaciers that mark the Pleistocene–Holocene transition and the stabilisation of sea levels that followed. These may have significantly affected monsoonal circulation and tropical rainfall of Amazonian origin over the Andes. Most palaeoclimatic reconstruction, however, using proxy records from both onshore and offshore locations across the Andes, has focused on the history of ENSO variation, and its effect on the Humboldt Current, along the Pacific coast of Peru during the Holocene. On the Peruvian coast itself, where 'standard palaeoclimatic records are absent or underdeveloped', these often entail proxy data recovered from archaeological deposits. In this paper we revisit the 'lomas dichotomy' in the light of new investigations of the lomas formations of the Peruvian south coast: defined herein as the 250 km of Pacific coastline encompassed by the Pisco, Ica, Río Grande de Nazca and Acarí river valleys. These valleys have a geomorphology, climate and hydrology distinct from those of Peru's north and central coasts, and from those further south to the Chilean border. Consequently, for much of Peru's prehistory, they also have a shared and distinctive cultural trajectory, commonly labelled 'south coast' by archaeologists. Our findings arise out of collaborations between Cambridge University's One River Project and the Royal Botanical Gardens Kew, as follows. First, new investigations of the botany and ecology of the highly endemic lomas formations of this region: to date these are largely unstudied, and indeed the few references to them betray inaccuracies that underplay their resource potential. Second, new archaeological investigations of these south coast lomas and their associated littoral to reveal an entire Preceramic landscape. Although the south coast archaeological record comprises some of Peru's most famous cultural manifestations, not least Paracas and Nasca, their Preceramic predecessors have not been much investigated since the pioneering work of Engel in the 1950s.
"The paucity of previous scientific investigation here is, in part, because the lomas of the south coast are relatively remote from modern settlement as the course of the Pan-American highway is diverted much further inland than elsewhere along Peru's littoral.Yet this has also better preserved their fragile biodiversity and their Preceramic archaeological record, and, as we will see, conveys certain clarities to the interpretations of its ancient human ecology.We will use these findings to reassess the importance of lomas resources to Preceramic human ecology on this previously understudied part of the Peruvian coast, revisiting the question of why changes took place through time in the context of the latest, detailed Holocene palaeoclimatic reconstructions specific to the south coast.There are two sources of water along the arid Peruvian coast, each arising in distinct seasons.The first arises from the small portion of intense precipitation over the Amazon Basin during the austral summer that overcomes the formidable rain shadow of the Andean cordillera, to drain their western flanks in seasonal streams and rivers.The second is more ephemeral – at least in non-ENSO years – arising in the austral winter, between around August and November, as a deepening inversion layer saturated with moisture over cold seas is blown inland by trade winds.Where this marine stratus inversion layer encounters the coastal Andean flanks between 300 and 1000 m asl, orographic cooling causes fogs to coalesce, resulting in fine, horizontally moving drizzles.This moisture is sustained by reduced evaporation due to stratus cloud cover producing ‘occult precipitation’ and fostering formations of fog-drip vegetation known as lomas.Since vegetation creates the interception surface area on which fog condenses and is trapped, this growth acts greatly to increase the amount of water that would be deposited on barren slopes.As this occurs, lomas formations blossom across the desert with intense green meadows and pasture.The term ‘lomas’ is used colloquially in Peru to describe the Andean foot slope and coastal cordillera all along the Pacific, so that ecologists, geographers and archaeologists have come to use the term idiosyncratically, as Craig notes, for various distinct ecological niches, including desert scrub found at higher altitudes and vegetation along erratic quebrada watercourses.Here, we use the term sensu stricto to mean those plant communities which rely exclusively on fog: depauperate for much of the year, dominated by Tillandsia, but flourishing with lush, ephemeral herbaceous growth during the winter months.Such fog oases are to be found along parts of the western coast of South America between ∼6°S and 30°S.In Peru there are at least 70 discrete localities supporting lomas, occupying a total of some 8000 km2.Their vegetation communities are clearly delimited ecosystems, unique within the context of South American ecology and floristic composition.They consist of varying mixtures of annuals, short-lived perennial and woody vegetation, represented by a total of some 850 species in 385 genera and 83 families.They are home also to a variety of animal life.Around 40% of the plant species in the lomas communities of the south coast can be classed as perennials and maintain themselves through the dry season by a variety of vegetative, starch-rich, underground storage organs such as roots, corms, tubers and bulbs.Indeed, we find that many plants previously classed here as annuals turn out, on excavation, to be 
perennials, arising from deep corms on contractile roots, or simply re-sprouting from apparently dead leafless stems, supplied by enlarged roots or stems. When sustained winter fogs arrive, and depending upon the intensity and duration of the fog season, these perennials re-sprout, thereby increasing moisture capture and acting as 'nurse plants' for annuals to germinate anew from the previous year's seed production. Once winter gives way to spring and increasing amounts of sunshine, all the lomas comes into flower simultaneously to maximise rapid pollination and set seed, many of which are adapted to resist high summer temperatures and lie strewn across the desert surface, awaiting the return of winter fogs. Lomas vegetation is highly sensitive to ENSO climatic perturbations and indeed, certain lomas plants depend on the periodic recurrence of El Niño to propagate themselves. Importantly, however, our research shows that the relationship between lomas ecosystem productivity and ENSO is rather complex: while some parts of the lomas ecosystem wax under particular ENSO variations, others wane. We return to consider the implications of this later. Within lomas formations vegetation occurs in distinct belts whose composition is determined by variations in moisture availability, which are predominantly a function of altitude, but also of slope, aspect and type of ground surface in relation to the intensity, direction and velocity of coastal fog. The most diverse herbaceous fog meadow flourishes with winter fogs in a belt immediately beneath the altitude at which the inversion layer intercepts the land. Along its margins, particularly further inland where conditions are more arid, the lomas of the south coast are characterised by transitional belts composed of low-density areas of cacti, and high-density formations of Tillandsia species, which can extend over vast tracts of desert. Unlike most lomas, which occur along the foothills of the Andes themselves, those of the south coast are separated from the main Andean cordillera by an expanse of uplifted desert sedimentary tableland around 60 km wide. Here, lomas formations – from north to south: Morro Quemado, Asma, Amara-Ullujaya and San Fernando – occur along the seaward flanks of the Coastal Cordillera. These fragments of granite batholith run along the Pacific coast for almost 150 km, and rise abruptly out of the ocean to attain heights of around 800 m, 1000 m and 1791 m asl in the Lomas of Morro Quemado, Amara-Ullujaya, and San Fernando, respectively. These south coast lomas formations thus lie almost on the littoral, and quite far from the parallel inland courses of the north-south flowing Río Ica, and the most westerly of the Río Grande de Nazca's several tributaries. From 2012 to 2014 Cambridge University's One River Project and the Royal Botanical Gardens Kew collaborated on new investigations of the lomas of the south coast through joint and several fieldwork, as follows. RBG Kew undertook a spatial analysis of vegetation and floristic composition in the lomas of San Fernando and Amara by systematic collection of herbarium vouchers in plots across transects, to identify species and develop understanding of the ecology and biodiversity across the south coast lomas. An important purpose of this work is to produce baseline species and vegetation maps to support conservation delimitation by Peru's Servicio Nacional de Áreas Naturales Protegidas por el Estado. Figs. 1 and 2 show vegetation maps derived from two sources: Landsat 8 imagery from March 2014 to March 2015, supplemented with GeoEye imagery for the San Fernando region. The Landsat imagery was processed to give the Normalised Difference Vegetation Index (NDVI) for each month; these monthly layers were stacked and the maximum NDVI over this yearly period calculated. This maximum NDVI image was thresholded to pull through the herbaceous lomas and the Tillandsia vegetation. Vegetation in the San Fernando region did not show up in the Landsat imagery, probably because the seasonal lomas growth in 2014–2015 was very poor. Thus lomas vegetation for the San Fernando region was derived from GeoEye imagery in the same way described for Landsat. These two vegetation maps were overlaid to give Figs. 1 and 2.
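The maximum-NDVI compositing and thresholding workflow just described can be sketched in a few lines of Python. The example below uses randomly generated red and near-infrared bands as stand-ins for the monthly Landsat 8 scenes (which would normally be read with a raster library), and the threshold values separating herbaceous lomas from Tillandsia are purely illustrative assumptions rather than the values used to produce Figs. 1 and 2.

```python
# Minimal sketch (assumed inputs): monthly NDVI stack -> max-NDVI composite
# -> threshold into herbaceous lomas and Tillandsia classes.
import numpy as np

rng = np.random.default_rng(0)
months, rows, cols = 12, 200, 200

# Stand-ins for monthly red and near-infrared reflectance bands; in practice
# these would be read from Landsat 8 scenes (bands 4 and 5) with a raster
# library such as rasterio.
red = rng.uniform(0.02, 0.35, size=(months, rows, cols))
nir = rng.uniform(0.02, 0.55, size=(months, rows, cols))

# NDVI per month, then the maximum value reached over the year per pixel.
ndvi = (nir - red) / (nir + red + 1e-9)
max_ndvi = ndvi.max(axis=0)

# Hypothetical thresholds: dense seasonal herbaceous lomas vs. sparser
# Tillandsia stands vs. barren desert (values are assumptions only).
classes = np.zeros_like(max_ndvi, dtype=np.uint8)    # 0 = barren
classes[(max_ndvi >= 0.15) & (max_ndvi < 0.35)] = 1  # 1 = Tillandsia belt
classes[max_ndvi >= 0.35] = 2                        # 2 = herbaceous lomas

for label, name in enumerate(["barren", "Tillandsia", "herbaceous lomas"]):
    share = 100 * np.mean(classes == label)
    print(f"{name}: {share:.1f}% of pixels")
```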
Fig. 2B shows a detail of the Amara lomas from a near-infrared false colour composite image, showing high densities of seasonal herbaceous lomas vegetation as dark grey, using GeoEye imagery from 10 February 2010, during an ENSO event entailing moderately increased sea surface temperature anomalies off the coast of Peru. Fig. 3 shows the vegetation associations recorded across transect A of the lomas de Amara. The relative extent of the vegetation units, bands or overlapping zones, is indicated with bars corresponding to the vegetation characterisation derived from herbarium voucher collections, drone flights, satellite imagery and other fieldwork including walked transects, seed analysis and root test pits. The figure also shows aggregated lomas resource values under a normal fog season and under El Niño and La Niña climatic modes. Cambridge University's One River Project carried out new investigations of the Preceramic archaeology associated with the lomas of the south coast. We follow in the footsteps of Frédéric Engel, whose seminal work here in the 1950s identified 14 Preceramic sites along the littoral between Morro Quemado and the Bahía de San Nícolas, eight of which he radiocarbon dated. As well as investigating many of these in much more detail and subjecting them to modern sampling, we turned our attention also to the lomas hinterlands, in which we identify another seven previously unknown Preceramic sites. At many of these sites we carried out investigations of exposed profiles, new excavations and surface collections, during which we sampled for flotation to extract organic remains and heavy fraction components, geochemistry, monoliths for micromorphology, radiocarbon dating, etc. Figs. 1 and 3 and Table 1 present the results of this updated and augmented archaeological survey of the Preceramic landscape along the south coast between Morro Quemado and the Bahía de San Nícolas, in which all radiocarbon dates are now presented calibrated using the ShCal13 curve in OxCal version 4.2. The Preceramic archaeological patterning recorded on the south coast naturally reflects the differential distribution of many food, fuel and raw material resources. Yet the first essential that dictates where people settle and move in such an extremely arid landscape is, of course, fresh water. What is most striking about the archaeological site distribution shown on Fig.
1 is that, while the largest and most visible Preceramic archaeological remains are, unsurprisingly, to be found at the mouths of the Ica and Nazca Rivers, many Preceramic sites lie scattered far beyond the river courses: and yet always intimately associated with the coastal lomas formations.Indeed, as we will see, all of these Preceramic sites, regardless of their locations, contain significant lomas-derived resources.The largest and most visible Middle Preceramic archaeological remains between Morro Quemado and Bahía de San Nícolas lie at the mouths of the Ica and Nazca Rivers.A number of sites on the Río Ica estuary date to early in the Middle Preceramic Period, the largest and most visible of which are La Yerba II and La Yerba III.Their midden deposits are dominated by evidence of hunted and gathered marine resources: sea mammals, birds, fish and molluscs, particularly Mesodesma donacium surf clams that, until the 1997/98 El Niño, proliferated in vast beds just at the surf lines of the adjacent beaches.They also include many other organic components enjoying excellent preservation through desiccation, not least remains of plants: including, inter alia, seaweeds and Cyperaceae rhizomes; woven reeds and wooden posts in the vestiges of windshelters and houses; plant fibres spun and plied into lines and nets; and bottle gourd and both wild, and domesticated beans: all presumably cultivated on the river floodplain.We interpret these sites on the estuary to be the logistical basecamps of seasonally mobile hunter–gatherers, although there is significant evidence of increasing sedentism at La Yerba III.They are located at the confluence of different habitats: riparian woodland, river estuaries, sandy beach, rocky headlands and an ecologically diverse lomas mosaic.Each could provide seasonally abundant and predictable resources, accessed at different times of the year by different members of society, according to varying levels of energy expenditure and hazard.Upstream of the river estuaries, few Middle Preceramic sites are reported on the south coast, though doubtless other remains have been obscured by subsequent agricultural and geomorphological changes.Far beyond the river courses, however, our investigations reveal other component parts of the Middle Preceramic landscape: all of which are directly dependent on lomas to provide key resources, not least water and fuel.These are of two general types.The first group of Preceramic sites such as Arroyo de Lomitas, Abrigo I and Bahía de San Nícolas, all identified originally by Engel, are located along the lomas foreshore, thereby simultaneously granting immediate access to marine and lomas resources.Our investigations of the dense, stratified midden deposits at Arroyo de Lomitas, including structured hearths suggest that ground water sources here were sufficient to sustain small groups of hunter–gatherers for stretches of time.These middens are dominated by marine resources of the adjacent rocky littoral, particularly gastropods, mussels and seaweeds.Today itinerant fishermen call the site ‘Agua Dulce’, and affirm that water here can always be obtained by excavating above the tidal margin where the arroyo emerges onto the beach.Indeed, all these Preceramic sites along the coast are associated with watercourses arising from in the lomas.The abrupt topography and impermeable granite massif of the Coastal Cordillera, rising out of the sea to over 1000 m, forms a vast and ideal surface for fog condensation and consequent run-off.Emerging from the 
seaward bluffs of the Amara and San Fernando lomas are multiple arroyos, deeply incised into the uplifted Holocene marine terrace, that stand testament to periodically significant surface flows.The range of clastic materials within the discrete water-lain deposits of these arroyos are evidence of erratic, high-energy flows perhaps during ENSO events.Yet layers of finer alluviums attest to water flows even during normal years and their courses are lined with perennial vegetation in the form of Atriplex rotundifolia, Croton alnifolius and Tiquilia spp., a variety of small herbs such as Alternanthera sp. and Parietaria debilis, and large relict stands of the cactus Corryocactus aff. brachypetalus, whose size is evidence of significant, if sporadic, water availability.Unlike most coastal lomas contexts, wherein the two sources of water in this coastal desert, described in Section 2 above, are inevitably conflated, here, the only possible sources of fresh water beyond the river courses derive from condensing winter sea fogs.These fog-fed arroyo courses enabled Preceramic human habitation all along this arid littoral, far from its river courses, but in immediate proximity to both lomas and particular marine resources.While Engel successfully located those Preceramic sites along the coast, he failed to find a ‘single human settlement’ within the south coast lomas.Yet such sites do exist, conspicuous largely as large accumulations of land snail shells.Preservation of other organic remains here is poor because of winter humidity.In lomas elsewhere Craig interprets similar accumulations of land snails to be natural death assemblages.Here, however, the Middle Preceramic anthropogenic origin of Amara Norte I is unequivocally revealed by stratigraphic contexts also including remains of marine shell and sea urchin, grinding stones and hearth charcoals radiocarbon dated to 4963–4728 cal BP.Moreover, these middens are surrounded by scatters of obsidian projectile points and flaking debitage from the later stages of lithic reduction such as the retouching of preforms, left lying exposed on the surface by wind deflation of these upland landscapes.We identify three clusters of such sites deep within the lomas of Amara — at Amara Norte, Valle de Amara and Abra Sur de Amara — which we interpret to be logistical field camps: the vestiges of multiple, short-term forays in winter to hunt game and gather lomas specific resources, particularly plant geophytes and land snails.Elsewhere in the archaeological record such sites are rare but not unknown.Occult precipitation is sometimes sufficient to form seasonal standing ponds of water in the San Fernando and Amara lomas.In historical times atmospheric moisture has been ‘harvested’ within lomas by means of fabrics or nets and collecting vessels, yielding up to 10 L per day per m2 of fabric.There is no evidence that Middle Preceramic fabric technology allowed such water harvesting, but, during the foggy winter months – also the time when the lomas ecology is at its most productive – even fog drip collected from overhanging rocks eventually yields useful quantities of water.Doubtless drinking water was transported during the Middle Preceramic using bottle gourd and/or animal skins, both in evidence at the logistical basecamps on the river estuaries."Nonetheless, our observations of today's lomas hydrology, together with a pattern of substantial Middle Preceramic archaeological deposits, at locations very widely dispersed from the river courses, strongly suggest that, 
fresh water derived entirely from coastal fog ecosystems was sufficient, at least seasonally, to sustain small groups of hunter–gatherers.We next assess the potential of other lomas resources to Preceramic hunter–gatherers, in the light of our ecological and archaeological investigations of the south coast lomas.Today, despite the depauperate or ephemeral nature of much of its vegetation, the south coast lomas formations sustain a remarkably rich animal life including larger mammals such as wild guanaco camelids, two species of desert fox, many rodents, lizards and birds, and invertebrates including insects and snails.Indeed, snails are the single most conspicuous and ubiquitous element of lomas resources found in the Middle Preceramic archaeological contexts of the south coast: evidence of their importance to human diet."There appear to be two species of snail today found on Ica's lomas formations, the most common of which, we tentatively identify as Bostryx reentsi.These snails are only active during the lomas growing season, aestivating in groups during the dry months.We observe live snails in numbers grazing on the lush herby lomas vegetation during the fog season, but also in transition belts, eating algae that grows on Tillandsia and aestivating on their stems or beneath their roots.Archaeological sites such as Amara Norte I and Abra Sur de Amara I, amidst these Tillandsia transition belts in the lomas de Amara, between 550 and 700 m asl, have middens dominated by snail shells.Meanwhile, sites along the littoral such as Arroyo de Lomitas, Abra I and La Yerba II, otherwise full of the evidence of marine resources, also all contain lomas snails, in some contexts in considerable quantities.In Mesolithic contexts worldwide snails have made a significant contribution to daily calorific and protein requirements.Clearly this was the case during the Middle Preceramic in the lomas of the south coast of Peru.Relative to marine molluscs individual Bostryx yielded little, difficult to extract, meat; yet they were easily gathered in great quantities during winter months and could then – unlike marine molluscs – be stored fresh for months because of their ability to aestivate through sealing their shells with a mucus epiphragm.The significance of snails to Preceramic diets may well have varied therefore according to the seasonal availability of other resources, particularly during periodic collapses in marine resources induced by El Niño.Most of the land snail assemblages in these contexts are whole and show little evidence of heat alteration.But many shells in the Amara Norte I middens show fine puncture holes suggesting that their meat was extracted using cactus thorn inserted through the epiphragm, or through the thin shell itself.Since the taphonomies of terrestrial and marine molluscs in such ancient archaeological contexts should be similar – relative to those of other classes of organic remains – we use the proportion one to the other, measured by minimum number of individuals, as a proxy measure of the relative importance of lomas resources in particular contexts.Thus averages of such measures offers a useful, albeit crude, way to compare the use of lomas resources between individual site locations during the Middle Preceramic.Meanwhile, the importance of other lomas faunal resources to south coast Middle Preceramic hunter–gatherers can be inferred from direct evidence such as animal bone – though with considerable taphonomic bias between different sites – but also from indirect evidence 
in the form, for instance, of the stone tools used to hunt and process animals. Today tiny relict populations of guanaco still live in the south coast lomas of San Fernando and Amara. They are shy and more or less constrained to particularly remote parts. In the past these animals, catholic feeders on either browse or graze, may have moved more freely between the winter lomas and the summer river valley oases, which may, in turn, have determined rounds of hunter–gatherer transhumance. Alongside predators such as puma, guanaco and grey deer were once far more numerous in the lomas and the formerly wooded valleys of the Peruvian coast. Today there are no deer on the south coast of Peru outside of captivity. Yet the Middle Preceramic archaeological record here includes evidence of both deer and camelids, together with many other vestiges of the attendant hunting activities in the lomas. Before the development of woven textile technologies in the Late Preceramic, these lomas mammals would have provided skins and tendons vital for clothing and bindings. Guanaco hides were much esteemed for clothing by the Yaghan hunter–gatherers of Tierra del Fuego because, unlike the hides of marine mammals – the more common prey of both Yaghan and Middle Preceramic hunters – guanaco skin is thin and flexible, so that it conforms readily to the contours of the human body, and has heavy hair, making it far better for shedding water and keeping out the cold. The La Yerba II contexts contain good evidence for hide preparation in the form of numerous scraper tools made of Choromytilus shell and plants potentially used for curing leather. Preservation conditions at sites within the lomas itself are poor, though a few fragments of ungulate bone were recovered from Amara Norte I. At La Yerba II, 17% of an animal bone assemblage otherwise dominated by marine mammals and birds is camelid and deer. Whereas all parts of deer were recorded, camelid bones were mostly of the meatier hind leg parts, suggesting that these large animals were butchered off-site, perhaps near the kill sites within the lomas. Obsidian and other stone tools, dominated by projectile points prepared for hafting, are found throughout the lomas of the south coast. Small stone structures high on bluffs in the lomas and near guanaco trails today, such as Abra Sur de Amara II, may represent hunting stands. Indeed, sites deep within the lomas associated with the gathering of snails, such as Amara Norte I, are surrounded by scatters of obsidian lithics including many projectile points, presumably for hunting game. The variety of obsidian points found in the lomas is likely a palimpsest of hunting activities over time. Different lithic morphologies may also represent different target species, or hunting strategies. We suggest, however, that the majority of this lithic activity dates to late in the Middle Preceramic because those sites on the coast that date to that period, such as La Yerba III and Amara Norte I, have abundant, comparable obsidian lithic remains within stratified deposits, whereas earlier sites such as La Yerba II contain only a few small obsidian flakes. The importance of plant foods to hunter–gatherer diets outside extreme latitudes has long been recognised. As noted above, many lomas perennials have evolved to cope with cycles of long aridity followed by intensive growing seasons through underground storage organs, several of which would have provided a valuable source of edible starch for the hunter–gatherers of the Middle Preceramic. Significant examples we have recorded
in the lomas of the south coast include: the tuberous roots of Aa weddelliana, only recently recorded elsewhere on the Peruvian coast; the roots of Alstroemeria sp., several closely related species of which are recorded as providing food for humans in the lomas of Chile, where they are known as ‘chuño’; and the corms of various Oxalis species including O. lomana. Several varieties of true and wild potatoes are found in lomas vegetation elsewhere in Peru, although only Solanum montanum, a highly variable species distantly related to potatoes, is recorded today in the lomas of the south coast. Like all Solanaceae tubers, these are bitter and must be processed, but they are recorded as a minor food source in historical times. Edible lomas geophytes are at their most nutritious both before and after periods of active growth but remain available all year round, and are easily collected because plants such as S. montanum occur in concentrated patches and are not deeply rooted. Edible seeds, greens and fruits of lomas plants likely provided seasonal sources of important vitamins and micronutrients. In particular, almost all lomas cacti produce palatable fruit and likely contributed significantly to diet for parts of the year. Corryocactus brachypetalus, which, uniquely, grows on the rocky seaward bluffs of the lomas, has the largest fruit, with a pale pulp tasting not unlike apple or gooseberry. Two species of Haageocereus cactus, generally found in the transitional vegetation belts of the inter lomas plain in Amara, both produce crops of sweet juicy fruit in January once the winter fogs have receded, whilst the fruits of Cumulopuntia sphaerica and Eriosyce islayensis are edible but somewhat less palatable. In the desiccated conditions of the La Yerba II Middle Preceramic occupation contexts at the Río Ica estuary, evidence for the use and consumption of lomas plants includes Oxalis sp. tubers, and cactus fruit and seeds. Indirect evidence for the processing of starchy plant foods includes stone grinding mortars both at La Yerba II and III, and also at Amara Norte I in the heart of the lomas, where plant remains themselves are only preserved through charring. Undifferentiated charred parenchyma tissue suggestive of starchy plant foods is evident in the contexts of all these Middle Preceramic sites. Haageocereus spines found at La Yerba III in carefully tied bundles were presumably used for making fishhooks. A range of lomas plants are also known today for their antibacterial or other medicinal properties, including Ephedra spp. and Plantago spp.
and the astringent Krameria lappacea, used for treating inflammation and gastrointestinal illnesses, and for other medicinal purposes. Krameria also has a very high tannin content, so that it is sometimes used today for curing animal hides. Its remains are identified in the Middle Preceramic contexts of La Yerba II. Most of these archaeological contexts also contain many pale, finely-spun fishing yarns and net fragments. Engel suggests that, before cotton, such yarns may have been spun from the downy seed head fibres of Tillandsia sp. Many of these lomas plants are also important to the diet of the animals hunted here during the Preceramic. Guanaco graze on leguminous herbaceous lomas plants such as Astragalus, as well as on the roots and tubers of perennial plants that they dig up, grasses, cacti and cacti fruit. In both winter and summer they can also persist on lichens and the Tillandsia that dominate such large parts of the south coast lomas. Fuel would have been required at these Preceramic sites for cooking, opening clam shells in bulk, curing animal skins, roasting to make certain lomas tubers edible and indeed for warmth because, though in the tropics, the south coast of Peru is remarkably chilly in winter, when temperatures fall into single figures and strong winds blow cold, damp fogs inshore off the Pacific. Our data loggers record a minimum temperature during winter in the south coast lomas of 7 °C. The evidence from Middle Preceramic sites such as La Yerba II and III, adjacent to the river, suggests the use of estuary and woodland species as fuel. But lomas vegetation must have been the primary source of fuel at those sites that are distant from the vegetated river courses, because ethnographic studies suggest that heavy expendable resources like fuel wood are almost always gathered within a radius of not more than two hours' walking. Substantial fragments of wood charcoal are evident in the structured hearths at Arroyo de Lomitas. Although driftwood might have contributed to the overall fuel economy, today it is scarce along this rocky shore. More exposed sites within the lomas such as Amara Norte I have poorly preserved microcharcoal remains, but blackened and fire-cracked hearthstones here are evidence of repeated fires. Today there are no trees and few woody perennial shrubs on the south coast lomas. Those few that do persist, such as Ephedra americana, A. rotundifolia and C.
alnifolius, are, like all lomas woody species, slow growing; they yield high calorific value fuel but are prone to overexploitation. Elsewhere, Preceramic charcoal assemblages have been taken as suggesting such overexploitation. Certainly, the charcoals in the hearths of Preceramic sites such as Arroyo de Lomitas suggest that the south coast lomas once supported more woody vegetation. Large areas of these lomas are today dominated by Tillandsia spp., which, elsewhere, has been noted as a fuel in the Preceramic. Its downy seed head fibres would also have made effective tinder. Yet most Tillandsia too are slow-growing, and their removal may have promoted destabilisation of dune systems, evident in landscape deflation about sites such as Amara Norte I. Although the story of Middle Preceramic human ecology in the lomas of the south coast is one of continuity over millennia, change through time is also evident in this archaeological record. We turn next to what that evidence might contribute to the vexed question of what caused such changes. Lomas environments, like other arid area ecologies, are fragile and peculiarly sensitive to even mild climatic perturbations, and to human impact. As discussed, the climatic factor of proximate relevance to lomas is the periodic perturbation along the Pacific coast of Peru of the El Niño Southern Oscillation phenomenon. ENSO is defined by sea surface temperature anomalies, described recently according to two spatial modes of variability: maximum SST anomalies localized in the central Pacific, entailing negatively skewed SST anomalies off the coast of Peru and strong ‘La Niña’ events; or maximum SST anomalies localized in the eastern Pacific, entailing strong ‘El Niño’ events. A long-standing model of ENSO history based on proxy data from archaeological sites on the north coast of Peru suggests, for north of 12°S, some four millennia of ENSO suppression during the Early Holocene, between c.
9000 and 5800 BP; after which there was a low frequency of El Niño events until 3000 BP, when the modern, higher-frequency El Niño regime became established. Recently, however, refinements specific to the south coast have been proposed to this model based on δ18O as a proxy for sea surface temperature recorded in surf clam shells in archaeological deposits, including from La Yerba II, central Pacific corals and Galapagos foraminifera. Together these suggest that for over five millennia, between 9600 and 4500 BP, mean annual SSTs were significantly lower than today, especially in southern Peru; that before around 8000 BP ENSO variance was skewed towards El Niño, whereas between 7500 and 6700 BP variation was skewed towards CP mode with more frequent and intense La Niña events; and that there was a period of substantial reduction of ENSO variance between 5000 and 4000 BP. How then might this revised model of ENSO history have impacted upon the south coast lomas, and is it reflected in its archaeological record? Variances in ENSO spatial mode and amplitude affect relative land and sea temperatures that drive convection wind regimes and govern other sea–atmosphere interactions, such as the production of aerosols and condensation nuclei, and thereby the production, intensity and duration of occult precipitation in lomas. In northern and central Peru El Niño years prompt ‘enormous blooming events’ and population explosions of snails in lomas. Further south too they bring increased lomas vegetation, due to ‘advection fog with a higher moisture content’ over warmer waters. Dillon and Manrique record increases in the primary productivity of southern Peru's lomas, measured by plant density and cover, of thirteen times during the very strong 1997–98 El Niño event and of three times during the moderate 2010 event. Moreover, El Niño effects usually fall in the austral summer, so producing continuity between the phases of normal lomas winter florescence. The effects of La Niña events on Peruvian lomas formations are less straightforward. Inland, colder oceanic conditions during La Niña produce drier conditions. Yet, immediately along the coast, La Niña creates more persistent fogs, because the cooler humid air masses that it brings are more prone to condensation. Muenchow et al. record ‘abundant blooming events, high species diversity and high species coverage’, albeit during a single La Niña year, in Casma. Nonetheless, the relationship between increased fog and lomas vegetation is far from simple, not least because it is governed also by changes in the altitude and temperature of the fog layer. Consequently lomas plant species have evolved into these moisture regimes and topographically delimited niches to form a complex mosaic of vegetation. Persistent La Niña conditions may, for instance, promote xerophytic species and widen transitional vegetation belts at the expense of herbaceous fog meadow. Figs.
3 and 9 show that for the lomas of the south coast the impact of ENSO on lomas resources is bimodal: El Niño tends to produce a higher above-ground biomass in ephemeral zones, while La Niña tends to produce a higher below-ground biomass in fog zones in the form of geophytes. In sum, we infer that past multi-centennial periods of increased ENSO variance and intensity – of either La Niña or El Niño – would have bolstered the extent, volume and fog-trapping height of lomas vegetation, thereby positively influencing lomas hydrology and producing a self-sustaining microclimate. This, in turn, would have greatly augmented the biomass of both the floral and faunal resources exploited therein by Preceramic hunter–gatherers. Different ENSO modes would promote different parts of the lomas ecosystem and so entail different subsistence strategies: for instance, a focus on hunting guanaco and gathering snails as their populations soared due to expanded herbaceous lomas vegetation during periods of increased El Niño variation, or a focus on gathering the starchy geophytes of lomas perennials when they thrived thanks to the more persistent fogs in epochs of increased La Niña variation. By contrast, long durations of suppressed ENSO variance would be inimical to lomas ecosystems. Indeed, without regeneration through periodic ENSO events, certain lomas species may disappear completely over time. Suppressed variance would also increase the vulnerability of lomas ecosystems to human impact. Vegetation in lomas acts overall not as a water-consuming but as a water-producing factor, because of the greatly enhanced surface area it provides for the condensation of fog water. This is particularly true of taller woody shrubs and trees, which increase fog condensation in lomas by up to six times, so that their removal, say for fuel, acts in a self-enhancing feedback to reduce lomas hydrology and soil humidity, and thus the growth and germination of other herbaceous plants and the entire lomas biomass. Fig. 10 brings together the archaeological record for the south coast lomas and the latest ENSO climate history data for the same region, as synthesised by Carré et al. We make five observations from this combination of data, in light of the ecological information already presented. Firstly, Fig. 10 shows that, over the broadest scale, all the Preceramic archaeological sites1 intimately associated with the south coast lomas were occupied during five millennia in which, as Carré et al.
summarise, mean annual SST was ‘significantly lower … than today, especially in southern Peru’, around 3 °C cooler, implying an increase in the intensity of coastal upwelling. It was this that underpinned conditions of higher oceanic productivity, and augmented lomas hydrology and biomass due to more persistent fogs, which sustained hunter–gatherers in Engel's ‘fog oasis situation’ for this enormous stretch of time on the south coast. Secondly, there was, seemingly, a millennium of increased El Niño before 8000 BP and the start of the Middle Preceramic Period, which would have expanded lomas vegetation extent and biomass, but simultaneously inflicted persistent shocks upon certain marine resources critical to Preceramic subsistence. Only one site at the estuary, La Yerba I, is dated by Engel to this epoch, and its date, following calibration, entails a margin of uncertainty of almost a millennium. Engel provides few details and our own investigations of the same site – so far as that can be ascertained – suggest that both its date and archaeological assemblage are, in fact, similar to the adjacent site of La Yerba II. If so, there is little visible archaeological evidence of human occupation of the south coast during this period, though that also entails factors of eustatic sea-level change and stabilisation, germane to our next observation. Thirdly, within the broad sweep of five millennia of colder seas, the start of the Middle Preceramic period is marked by the founding of several highly visible archaeological sites, most notably La Yerba II on the Río Ica estuary. This coincides with a multi-centennial period of enhanced La Niña activity, entailing even colder seas and foggier lomas: conditions reflected in the particularly cold-water ecology of parts of the La Yerba II mollusc assemblage, such as Tegula atra and Choromytilus chorus. Also at this time eustatic sea levels stabilised, after which shoreline progradation began forming the sandy beach habitat of the easily gathered Mesodesma surf clams that so characterise the La Yerba II middens. It is surely no coincidence, then, that the establishment of the Middle Preceramic way of life here apparently takes place during an epoch of abundant and predictable oceanic and lomas resources. Fourthly, over the five hundred years to around 6000 BP, the archaeological sites on the river estuaries show evidence of increased sedentism and a broadening of the resource base along a spectrum of mixed hunting–farming subsistence. Whereas sites like La Yerba II are characterised by temporary areesh wind-shelters made of reeds, later-dated sites at the river estuaries, such as La Yerba III and Santa Ana, have evidence of more permanent and substantial villages and structured mortuary deposition. These later sites have much greater quantities of obsidian, indicating far wider spheres of interaction since the nearest sources are 250 km away in the highlands, notably at Quispisisa. They also have the first evidence here for food agriculture, grown in the seasonally humid silts of the adjacent river floodplains. We recover fully domesticated lima beans, while Engel reports both Phaseolus beans and jícama, in the contexts of La Yerba III. Last, but not least, Fig.
10 suggests that as the long epoch of cooler sea temperatures ended around 4500 BP, synchronous with a Holocene minimum of ENSO variance lasting a thousand years to 4000 BP, both inimical to lomas ecosystems and to the water sources they fed along the littoral, so too did a Middle Preceramic way of life that had turned for millennia here about the lomas seasons and their rich ocean littoral – the ‘fog oasis situation’ – draw to a close. Indeed, the significance of lomas resources to human settlement here is made stark by the observation that there are no Preceramic archaeological sites along the coast between Morro Quemado and Bahía de San Nícolas dated to after 4450 BP, the date more widely construed to mark the end of the Middle Preceramic Period elsewhere in Peru. For there seems little reason to suppose that a shift to average sea temperatures akin to modern conditions, together with suppressed ENSO activity, would have been greatly detrimental to many marine resources, not least Mesodesma clams, long a key dietary component here. And yet, even at the river estuaries, the archaeological record appears silent for the subsequent Late Preceramic Period2. Following the demise of the ‘fog oasis situation’, therefore, we speculate that increased reliance upon agriculture in the Late Preceramic necessitated relocation of settlement inland, into the riparian basins of the south coast rivers. Elsewhere, on the north and central coasts of Peru, this critical change heralded the florescence of greater population densities and monumental civilization after 4500 BP. This did not happen on the south coast, probably because of its distinctive geomorphological configuration. For unlike the river valleys to the north, with their broad alluvial deltas and wide ocean frontages granting easy access simultaneously to rich marine and agricultural resources, the river systems of the south coast comprise scattered riparian basins down long river courses, diverted and separated from the sea by the same coastal lomas formations that had been the prime theatre of human ecology during the Middle Preceramic. Those lomas environments continued to be exploited seasonally for snails, plants and game and for grazing domesticated animals throughout later archaeological time periods and indeed, well into historical times. Paths through the south coast lomas were still traversed for access to marine resources along the littoral. Yet it also seems clear that for those later times, lomas and marine resources were strictly supplementary, never again dictating the rounds of human existence as they had during the Middle Preceramic Period. Plainly it was the cold, upwelling ocean, with its bounty of almost inexhaustible protein sources, that chiefly sustained the Middle Preceramic hunter–gatherers of the south coast of Peru. Yet because those marine resources were distributed in hotspots all along this littoral, it was in fact the lomas formations on the granite coastal massif, and their fresh water sources, that defined the setting of human ecology at this time and thereby the patterning of human occupation and corridors of movement along the south coast between Morro Quemado and Bahía de San Nícolas: Engel's ‘fog oasis situation’. The south coast lomas offer unique insight into the capacity of past lomas ecosystems to support seasonal hunter–gatherer occupation because of their separation from the resources and water sources of the Andean foothills. Lomas fauna and flora provided significant, and seasonally critical, components of Middle
Preceramic diet, most notably plant tubers, ungulates and storable land snails. Moreover, lomas provided critical fuel, medicinal and raw material resources. Our findings refute any view that lomas environments ‘could never have been important to man’, or that they have not altered much through the course of the Holocene. Indeed, given that lomas ecosystems include wild relatives of the Andean potato and tomato and papaya, together with guanaco, the wild relative of llama and alpaca camelids, we suggest that the role of lomas in key Andean domestication processes merits more consideration than has hitherto been the case, not least because those processes for camelids and tubers have often been seen as going hand in hand. Setting the latest model for ENSO variance based upon δ18O isotope records alongside the archaeological patterning here, now defined with greater chronological precision, shows striking correlation between the Middle Preceramic occupation throughout the south coast lomas and a long epoch of significantly colder seas, with implications for increased intensity of coastal upwelling and more persistent lomas fogs. That patterning includes logistical field camps spread far along the lomas littoral alongside fog-fed arroyo watercourses, and previously undiscovered sites targeting gathered and hunted resources deep within the lomas itself. Within those five millennia of colder seas, the first Middle Preceramic occupations were founded at the river estuaries during a multi-centennial period of enhanced La Niña activity, entailing even more abundant ocean and lomas resources, and coinciding too with the time at which eustatic sea-levels stabilised and shoreline progradation began to form the beach habitat necessary for the abundant, easily collected Mesodesma clam resources that dominate the middens of these sites. These logistical basecamps at the river estuary offered complex hunter–gatherers access to a mosaic of diverse, highly productive environments3. Eventually, the millennia of Middle Preceramic existence defined by the ‘fog oasis situation’ and the epoch of colder seas drew to a close, during a Holocene minimum of ENSO variance inimical to lomas ecosystems and the water sources they sustained. Thus this latest data from the south coast upholds Lanning's perspicacious assertion, made half a century ago now, that climate change – specifically linked to alterations in oceanic circulation – accounts for significant change in the resource potential of lomas ecosystems through the Holocene, and indeed, for why human ecology shifted out of the ‘fog oasis situation’. Yet compelling though this vision of climate-induced change in human ecology during the Middle Preceramic is, it does not preclude other factors from explanations of why the ‘fog oasis situation’ came to be abandoned, not least human impact, to which such climate changes would have exposed lomas ecologies. Indeed, since vegetation in lomas environments, and in particular its slow-growing, easily over-exploited woody vegetation, acts to catalyse fog condensation, such perturbations and human impact each precipitate the effects of the other. Today there are no trees and few woody species in the south coast lomas. Yet charcoal in Middle Preceramic hearths throughout this fog oasis situation attests to this not having been the case in the distant past. More importantly still, this model of climatically-induced lomas resource depression at the close of the Middle Preceramic does not seem to explain the emergence of agriculture in the
Andes, as Lanning and others had also proposed. Over the five hundred years between La Yerba II and III, the archaeological record of the south coast shows evidence for many of those changes widely recognised to precede the emergence of agriculture in many parts of the world, for which Flannery coined the term ‘Broad Spectrum Revolution’. These include more permanent architecture, structured mortuary deposition perhaps denoting territoriality, much more extensive trade or exchange networks implied by obsidian quantities, and a widening use of resources to include floodplain farming of high-protein Phaseolus beans. Yet throughout this time, the ‘fog oasis situation’ still prevailed. We conclude therefore that on the south coast of Peru a Broad Spectrum Revolution unfolded, not through population pressure in deteriorating environments, but rather as an outcome of resource abundance prevailing in the ‘fog oasis situation’ throughout the Middle Preceramic Period. Just as is now envisaged for many parts of the world, it was a combination of abundance and seasonal predictability that enabled increasingly complex Middle Preceramic hunter–gatherers here to reduce mobility by settling in logistically optimal locations at the confluence of multiple eco-zones at the river estuaries.
Lomas - ephemeral seasonal oases sustained by ocean fogs - were critical to ancient human ecology on the desert Pacific coast of Peru: one of humanity's few independent hearths of agriculture and "pristine" civilisation. The role of climate change since the Late Pleistocene in determining the productivity and extent of past lomas ecosystems has been much debated. Here we reassess the resource potential of the poorly studied lomas of the south coast of Peru during the long Middle Preceramic period (c. 8000-4500 BP): a period critical in the transition to agriculture, the onset of modern El Niño Southern Oscillation ('ENSO') conditions, and eustatic sea-level rise, stabilisation and beach progradation. Our method combines vegetation survey and herbarium collection with archaeological survey and excavation to make inferences about both Preceramic hunter-gatherer ecology and the changed palaeoenvironments in which it took place. Our analysis of newly discovered archaeological sites - and their resource context - shows how lomas formations defined human ecology until the end of the Middle Preceramic Period, thereby corroborating recent reconstructions of ENSO history based on other data. Together, these suggest that a five-millennium period of significantly colder seas on the south coast induced conditions of abundance and seasonal predictability in lomas and maritime ecosystems that enabled Middle Preceramic hunter-gatherers to reduce mobility by settling in strategic locations at the confluence of multiple eco-zones at the river estuaries. Here the foundations of agriculture lay in a Broad Spectrum Revolution that unfolded, not through population pressure in deteriorating environments, but rather as an outcome of resource abundance.
218
Energy policy regime change and advanced energy storage: A comparative analysis
The large-scale electrification of transportation and other energy-based services is widely seen as an important element of efforts to reduce greenhouse gas (GHG) emissions from the combustion of fossil fuels. Major reductions in GHG emissions will be essential to meeting the requirements of the 2015 Paris climate change agreement. The focus on electrification has emerged at a time of three major technological developments in the electricity industry. First, the past decade has seen declines in the costs of renewable energy technologies, particularly wind and photovoltaic and thermal solar systems, while the performance of these technologies has been improving. Secondly, the emergence of smart electricity grids, through the digitization of grid communications and control systems, has the potential to lead to more adaptive and resilient electricity systems. Such systems will be better able to coordinate intermittent, smaller-scale, and geographically distributed energy sources into reliable resources. Finally, major developments have been occurring around energy storage technologies. Conventional energy storage technologies, including pumped or reservoir-based hydro-electric facilities and lead-acid batteries, have existed for more than a century. The past decade has been marked by growing interest in both conventional and advanced energy storage technologies. Attention has been given to new mechanical systems based on compressed air and flywheels, advanced batteries, and thermal and gas-based storage technologies. These technologies are summarized in Fig. 1. They have become the focus of substantial government and private sector investments in technology development. These investments are expected to result in significant improvements in cost and performance. In addition to their potential role in managing the growing presence in electricity systems of intermittent renewable energy sources like wind and solar energy, energy storage technologies could also provide grid services as operating and ramping reserves, demand response resources, and ancillary service providers for frequency response and regulation. Storage resources may offer means of deferring transmission and distribution upgrades as well. Finally, storage technologies may facilitate the integration of distributed energy sources into grid-scale resources. These applications are summarized in Fig.
2. Taken together, the developments in renewable energy technologies, smart grids, and energy storage are seen to offer the potential to make energy systems more environmentally and economically sustainable than is currently the case. Specifically, they are expected to be able to make better use of renewable low-carbon energy sources; be more reliable and resilient through expanded roles for distributed and technologically diverse energy sources; have improved ability to adapt to changing circumstances and needs; and have the potential to offer more control to consumers. This paper is focussed specifically on the new energy storage technology dimensions of these developments. Employing a multi-level perspective (MLP) approach, it examines the development of new energy storage technologies as an encounter between existing social, technological, regulatory, and institutional regimes in electricity systems in Canada, the United States, and the European Union, and the niche-level development of new energy storage technologies. The outcomes of these encounters are unknown at this stage. It is uncertain whether new energy storage technologies will remain relatively niche-level developments, or if they will contribute to the transformation or even reconfiguration or realignment of energy systems in the direction of larger-scale deployment of intermittent renewable energy sources and significantly expanded roles for distributed generation. Energy storage is not a substitute for existing energy generation technologies per se. Rather, it is a potentially enabling technology for other new technologies, such as large-scale employment of distributed generation and the expansion of behind-the-meter activity, which may disrupt conventional utility and generation models. These possibilities may prompt resistance from established actors within current regimes. This may be especially the case in the current context of growing concerns about the stranding of conventional centralized generating, transmission and distribution assets in the reconfiguration or realignment of electricity systems. The MLP literature on socio-technical transitions is potentially helpful in understanding the processes of the development and adoption of new technologies and their impacts on existing institutional, regulatory, and technological systems. The MLP literature links three scales of analysis. The “socio-technical landscape” is defined as the exogenous environment of air quality, resource prices, lifestyles, and political, cultural and economic structures. The “socio-technical regime” consists of infrastructures, regulations, markets, and established technical knowledge. “Socio-technical niches” are smaller-scale focal points of activity. The regimes are nested within and structured by landscapes, and niches are nested within and structured by regimes. The niche level is understood to be the key center for innovation in technology, practice, and policy. The MLP literature focuses on the transition processes that occur when landscape pressures on the regime create windows of opportunity for the adoption of niche-level innovations. Three major variables are generally identified in socio-technical transitions. These are actors and social groups; rules and institutions; and changes in technologies and wider socio-technical and economic systems. Within the category of rules and institutions, Geels includes normative and cognitive rules as well as formal legislative, regulatory and policy regimes. Other authors have treated underlying ideas,
norms and assumptions about energy systems, sustainability and the role of the state and markets in energy policy formulation and transitions as a separate category of variable. Transitions are seen to follow one of four potential pathways. In the case of technological substitutions, existing regimes are overthrown by the deliberate introduction of new actors and technologies, through initiatives like feed-in tariff (FIT) programs for renewable energy sources. In a transformation, incumbent regimes are gradually reoriented through adjustments by existing actors in the context of changing landscape conditions. The incorporation of smart grid technologies into electricity transmission and distribution systems by existing grid operators is an example of such a transition. In reconfigurations, the emergence of new technologies leads to more structural adjustments in regimes as a result of landscape pressures. The widespread replacement of coal-fired generation by combined cycle natural gas-fired technologies as intermediate and seasonal supply in North American electricity systems, facilitated in part by the availability of new low-cost natural gas supplies and the scalability and operational flexibility of gas-fired generation, illustrates such an outcome. De-alignments and re-alignments, where existing regimes are disrupted by external developments, and new niche-level innovations and actors emerge and reconfigure the regime, are rare. The emerging convergence of smart grids, and the improving economic and technological performance of renewable energy sources and energy storage, around the expansion of distributed generation and behind-the-meter activities, may indicate the potential direction for future re-alignments in the electricity sector. As new technologies may not fit well with existing socio-technical regimes, niches are understood as spaces where developing technologies are protected from the normal selection pressures embodied in dominant regimes. Niches provide a means of shielding, nurturing and eventually empowering new technologies. Shielding involves holding off selection pressures like industry structures, established technologies, infrastructures and knowledge bases, market structures and dominant practices, existing public policies, and the political power of established actors. Nurturing entails supporting the development of new innovations within shielded spaces through the development of shared, positive expectations, social learning and actor network and constituency building. Empowering can involve processes that make niche innovations competitive within existing external selection environments. Alternatively, empowering can mean changing the existing selection environment in directions favourable to new innovations. Much of the literature on socio-technical niches takes their existence for granted. Niches may be protected from the selection pressures of the regime either by design or by circumstance, although the specific understandings of their creation are less well developed. Where such research exists, it has tended to focus on the creation of niches through deliberate policy interventions, and less on other, more circumstantial, mechanisms through which they may emerge. The energy storage case offers examples of deliberate niche creation, but also opportunities to examine situations where niches may be more emergent, particularly in liberalized market electricity systems. The empowerment stage of niche to regime transformations is generally considered the least developed aspect of the niche
literature, even though it is the key location for niche to regime transitions. Energy storage, which is at a niche to regime cusp, provides opportunities to study this stage in monopoly utility and liberalized market electricity systems. The principal methodology for the study is a comparative public policy approach. Specifically, energy storage policy development was examined in Canada, the United States and selected US states (including California, New York, Hawaii, and Massachusetts), and member states of the European Union. The jurisdictions reviewed were identified, through a preliminary scan and then follow-up inquiries, as being active in energy storage policy or technology development. The existing secondary literature on public policies around new energy storage technologies is very limited. As a result, the findings are principally based on the review of primary documents from governments, grid operators, regulators, and energy storage developers. The review of primary and secondary literatures was supplemented by attendance at energy storage technology and policy development conferences in North America and Europe. Follow-up inquiries with conference presenters were conducted as needed. A review of each jurisdiction identified as a location for significant activity around energy storage development was conducted in terms of the following factors: articulated policies and goals around energy storage; key institutional and societal actors around energy storage; electricity system structure; specific policy initiatives intended to facilitate energy storage technology development; and initiatives intended to facilitate commercial or grid-scale employment of energy storage technologies. In the following sections, the landscape-level drivers of a potential niche to regime transition for energy storage technologies are outlined, and the differences in transition pathways between monopoly utility and liberalized electricity market systems are examined. The key barriers found across multiple jurisdictions in niche to regime transitions for energy storage are identified, and potential future policy directions discussed. In an MLP context, the current status of advanced energy storage technologies is largely that of niche-level technological developments in the form of pilot projects, or relatively marginal operational roles in electricity systems, such as contributions to some categories of ancillary or demand response services. A range of landscape-level developments is creating the potential for a greatly expanded role for energy storage technologies in electricity systems, with the potential to propel energy storage technologies from the niche to regime levels. These developments are examined within the four categories of key variables in socio-technical transitions identified earlier: rules and institutions; technological developments and changes in wider socio-economic structures; actors and social groups; and shifts in energy system discourses. The major developments are summarized as follows. The landscape with respect to energy storage is defined by two major developments of the past two decades. The first has been the pursuit by governments of a variety of strategies intended to prompt the large-scale development of renewable energy sources, such as FIT programs, renewables obligations and portfolio standards. These initiatives have been driven by a combination of falling costs and improved technical performance for renewables, climate change policies focussed on de-carbonizing energy systems,
and the high costs, technological challenges, and accident risks around nuclear energy. The second development has been the movement, beginning in the 1990s, of jurisdictions in North America and Western Europe away from monopoly or regulated utility models towards liberalized market models for their electricity systems. In monopoly systems a single vertically integrated entity provides generation, transmission and distribution services. In liberalized systems, in contrast, electricity generation and related services are provided through a market, supported by independent market and transmission grid operators, into which third parties can bid their services. Electricity prices are set through the bidding process, rather than by a regulatory body overseeing a monopoly utility. As such, liberalized markets are theoretically more open to new entrants than non-liberalized systems dominated by monopoly utilities. Liberalized markets have been created principally for electricity generation and supply, although in some jurisdictions markets have been established for other electricity system services as well, such as capacity, ancillary services, or demand response and conservation and demand management activities. Liberalized systems are also expected to be neutral in terms of the technologies included in their bidding processes. Examples of liberalized market systems include Federal Energy Regulatory Commission (FERC) regulated interstate markets, like the Pennsylvania New Jersey Maryland Interconnection LLC and the Midcontinent Independent System Operator, and some individual state markets, like California and New York, in the United States. Germany, Denmark, the United Kingdom, and the Canadian provinces of Ontario and Alberta also operate under liberalized market structures. However, in some cases, like Ontario, liberalization has only been partial, and the resulting systems incorporate mixtures of market, planning and politically directed elements. These developments have major implications for the development of new technologies and their pathways from niches to incorporation into regimes. In MLP terms, monopoly utilities and liberalized markets are effectively different models for creating niches for technology development. The monopoly model largely relies on deliberate decisions by the monopoly utility to create, shield and nurture niches from regime-level selective pressures until the targeted technology is developed and ready for deployment or commercialization. The liberalized market model, in contrast, relies principally on the de facto creation of multiple niches by private investors and technology developers, where they see opportunities for new innovations to provide services on a for-profit basis. Governments may also make specific policy interventions to create niches in both monopoly and liberalized market systems. Liberalized and monopoly utility models embed different pathways for niche to regime transitions as well. Under the liberalized market model, the expectation is that the market will determine whether a technology or service moves from the niche to regime levels. Such outcomes would be indicated by successful commercialization resulting, for example, in sustained revenue streams for services off the rate base. Given the diverse range of services energy storage technologies can provide, commercialization may take the form of the acceptance and rate base funding of a series of different applications, as opposed to one dominant function. Market operators and regulators in liberalized markets are intended to play a
facilitative role around market participation by actors and new technologies, rather than acting as gatekeepers in favour of established technologies and participants. Under the utility-shielded niche model, in contrast, the niche to regime transition is mainly in the hands of the monopoly utility. It can choose to initiate, support or terminate the development of a niche at any time. Such decisions may be functions of many factors – determinations of the usefulness of the technology to the utility itself, preferences for existing technologies, a desire to maintain existing business models and avoid the risks of an unwanted reconfiguration or de-alignment/realignment, and politically influenced economic development considerations. The large-scale deployment of intermittent renewable technologies, such as wind and solar PV, is expected to continue to accelerate. The role of these technologies is likely to be reinforced by increased demand for low-carbon electricity as transportation and other energy-based services are electrified in response to commitments to reduce GHG emissions. These developments have the potential to require substantial balancing resources to manage the intermittency of these technologies, as well as increased requirements for ancillary services such as voltage control and frequency regulation. The storage requirements needed to balance intermittent renewables in Germany, for example, are estimated to reach 3.5 TWh by 2025 and 40 TWh by 2040. In addition to requiring major expansions in the supply of high-performance vehicle batteries, the large-scale electrification of transport will also change electricity consumption patterns, potentially presenting significant challenges at the distribution level in terms of charging load management. Storage resources may play a substantial role in managing these challenges. In the longer term, the growing prevalence of electric vehicles may make large supplies of high-performance second-use batteries, which are potentially still useful in electricity grid applications, available at low cost. Finally, rapid developments are taking place in energy storage technologies themselves, with expectations of continued improvements in performance and decreases in costs. Transition pathways can be shaped by struggles among interests. The capacity of supporters of new technologies to undertake socio-political advocacy work is, therefore, an important factor in the outcome of niche to regime transitions. The past five years have seen the emergence and maturation of an interest community around energy storage. Energy storage industry associations have been established in Canada and the United States: Energy Storage Canada and the Energy Storage Association, respectively. At the subnational level, energy storage associations have formed advocacy coalitions/alliances with governments, utilities, and other non-state actors. Examples include the Alberta Storage Alliance, the Massachusetts Energy Storage Initiative, and the New York Battery and Energy Storage Technology Consortium. Similar developments have been occurring in the European Union, with the emergence of the European Association for Storage of Energy and the Association of European Manufacturers of Automotive, Industrial and Energy Storage Batteries. These developments have occurred in the context of wider shifts in policy discourses regarding the structure of energy systems, especially among Organisation for Economic Co-operation and Development (OECD) countries. Energy policy
discourses, particularly around electricity, have shifted from a focus on the development of large non-renewable generation towards systems based on renewable generation technologies. These shifts have been driven by a combination of concerns over climate change and other environmental impacts associated with non-renewable electricity sources; the geopolitical risks associated with fossil fuel supply chains and nuclear energy; and the cost, legacy and catastrophic accident risks associated with nuclear power. There has been a parallel shift in focus from centralized power systems towards more distributed systems. These are seen to be potentially more resilient and adaptive, and more amenable to local control. The development of renewable energy sources, along with smart grids and new energy storage technologies, is seen to carry the potential for the development of new industries and services, and to form part of the foundation for an ecological modernist vision of economic, social and environmental transitions. At the same time, there are growing concerns over how transmission and distribution infrastructure will be maintained as the traditional rate bases of utilities, rooted in the consumption of electricity from the grid, may be eroded by distributed generation and behind-the-meter activities. The state of energy storage policy development among the jurisdictions studied is summarized in Table 1 below. The changes in landscape conditions outlined in Section 2.3, including the need to integrate a growing portion of intermittent renewable energy sources into electricity systems, the maturation of the interest community around energy storage, and expanding interest in distributed generation, have created the conditions for a potential shift from niche-level developments and deployment of energy storage technologies in the direction of deeper integration into electricity regimes. These landscape-level developments have so far prompted investments in technology development and other forms of what can be seen as niche creation around energy storage by governments and some utilities. These have taken the form of one-off pilot/demonstration projects like Ontario distribution utility Alectra's PowerHouse project – a local energy network aggregating household-level renewable energy generation and storage resources – the establishment of developmental or special markets, and mandated procurements such as those in California and Ontario. New storage technologies are generally not being funded as regular services off the electricity rate bases, the form of funding that would imply acceptance as part of the regime. The exceptions tend to be relatively marginal functions, like ancillary services, deferrals of transmission and distribution system upgrades, and certain types of demand response services. The movement of energy storage technologies from these niche-level functions towards transformations or reconfigurations of the socio-technical regime is at this stage uncertain, with the implication that the potential contributions of storage technologies to energy systems may not be fully developed. Private capital is increasingly interested in storage technology development and commercial-scale investments, but is waiting for regulatory and policy frameworks which clarify how the services storage can provide will be remunerated through the energy services rate base or firm long-term contracts. This view has been consistently reflected in comments from venture capital providers at energy storage conferences in Canada (Panel 4 – “Follow the
Money”), the United States (including comments by FERC Chair Norman Bay) and Europe (Panel 3 – “Business and Finance”). Several of the jurisdictions studied have published energy storage development strategies or roadmaps over the past three years. These include California, Massachusetts, Germany and Canada. These typically have been developed through industry-government collaborations and attempt to lay out institutional roles, identify key barriers to storage technology deployment and outline technology development strategies. In other cases, such as Ontario, storage is embedded in wider energy strategies, or is more emergent, as is the case at the federal level in the US. The goals around the development and deployment of energy storage technologies vary considerably among the different jurisdictions studied. In some cases overall jurisdictional goals with respect to energy storage have yet to be fully articulated. There is considerable public discussion of the potential role of energy storage as a disruptive technology, with the potential to lead to de-alignments and realignments in the energy sector, displacing existing actors and technologies and leading to the creation of new regimes. However, formal policy statements around energy storage generally avoid such framings. In some jurisdictions, like Germany and Ontario, this is a departure from the approaches taken with other new energy technologies, particularly renewable energy sources, where explicit strategies of technological substitution, designed to displace existing institutions and technologies with new entrants, were pursued through FIT programs and similar initiatives. Rather, some jurisdictions, such as Ontario, and the US Federal Energy Regulatory Commission, have framed energy storage as a useful technology to improve grid reliability, provide ancillary services, avoid or defer transmission and distribution system upgrades, and strengthen demand response strategies. These jurisdictions frame the entry of energy storage technologies from the niche to regime level as transformative. The development of their full potential may require incremental adjustments to existing regulatory and institutional arrangements, but they are unlikely to disrupt existing regimes. Indeed, in some cases, energy storage may be seen as a way to maintain existing technological regimes, particularly around the management of surplus baseload generation from large and inflexible generating facilities. Other jurisdictions see energy storage technologies as facilitating a larger reconfiguration of energy systems in a manner consistent with an overall structural adjustment towards low-carbon energy sources, particularly renewable energy. The US states of California and Hawaii provide examples of such approaches. Germany also regards the development and deployment of energy storage technologies as an important element of its energiewende, or energy transformation. In some cases, economic development through the commercialization and export of energy storage services and technologies emerges as an important sub-theme in energy storage strategies. Examples of such strategies are found in Massachusetts, New York, the Canadian federal government, and Québec. A defining consideration in the pathways for energy storage technology development and deployment is the underlying structure of jurisdictional electricity systems as utility monopolies or liberalized or semi-liberalized market regimes. In monopoly systems, where a single vertically integrated entity provides
generation, transmission, and distribution services, like the Canadian provinces of British Columbia, Manitoba or Québec, technology development tends to be sponsored directly by the utility, either through an in-house research arm or through direct funding of outside research where the utility sees the potential for useful developments. In some cases, technologies that are perceived to have long-term economic development potential may also be sponsored. Hydro-Québec's research institute has, for example, maintained a longstanding research program on electric vehicles (EVs) and batteries. Manitoba Hydro has sponsored research on the use of secondary EV batteries for grid balancing purposes. Although in some cases governments may make policy interventions to prompt the adoption of specific technologies, the determination regarding whether a technology or service moves beyond the niche level and is incorporated into the regime is predominantly in the hands of the monopoly utility. In theory, organized markets offer a more open landscape for technology developers than a simple monopoly utility model, where niche to regime transitions are almost entirely at the discretion of the monopoly operator. This is particularly the case where an organized market incorporates multiple sub-markets, for example for energy, capacity/reserves/balancing, demand response/peak shaving services, conservation and demand management, and ancillary services. The initial model for storage service providers entering liberalized markets has been one of simple arbitrage - charging storage resources when demand, and therefore market prices, are low, and then discharging when demand, and therefore electricity prices, are higher. Although some storage service providers consider this approach economically viable in the short term, the arbitrage model is increasingly regarded as inadequate for several reasons. The model is seen as potentially self-limiting: the more successful storage service providers are - increasing demand during periods of normally low demand, and increasing supply when demand is high - the more they will reduce the difference between electricity prices at peak versus low demand. This problem may be reinforced as other demand response strategies are implemented by consumers and system operators, also reducing prices at peak demand. More broadly, the simple arbitrage model is seen to fail to make full use of the potential contributions of storage technologies to electricity systems. Participation by storage resources in short-term energy markets may only make limited and incidental contributions, for example, to balancing intermittent renewable energy sources or to capacity or reserve requirements more generally, or to helping manage the impact of behind-the-meter activities on transmission and distribution systems. Rather, the owner/operator of a storage resource is managing its operation to maximize its own revenues, and any contribution to other system needs is an ancillary benefit.
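The economics of the simple arbitrage model, and its self-limiting character, can be illustrated with a minimal calculation. The sketch below is purely illustrative: the storage size, prices and round-trip efficiency are hypothetical values chosen for the example, not figures drawn from any of the markets discussed here.

```python
# Minimal, illustrative sketch of the simple arbitrage model described above.
# All numbers are hypothetical; they are not drawn from any actual market.

def daily_arbitrage_revenue(capacity_mwh, off_peak_price, peak_price,
                            round_trip_efficiency=0.85):
    """Net revenue from one charge/discharge cycle per day ($).

    Energy is bought at the off-peak price and resold at the peak price,
    less round-trip losses in the storage system.
    """
    cost_to_charge = capacity_mwh * off_peak_price
    energy_delivered = capacity_mwh * round_trip_efficiency
    revenue_from_discharge = energy_delivered * peak_price
    return revenue_from_discharge - cost_to_charge

# A 10 MWh facility facing a wide peak/off-peak spread ($20 vs $80 per MWh).
print(daily_arbitrage_revenue(10, 20, 80))   # 480.0

# The self-limiting effect: as storage (and demand response) flattens prices,
# the spread narrows and the same facility earns far less.
print(daily_arbitrage_revenue(10, 35, 55))   # 117.5
```

Even before capital and operating costs are considered, the narrowing of the peak/off-peak spread in the second case shows why arbitrage revenue alone is increasingly regarded as an insufficient basis for merchant storage investment.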
The situation has prompted storage-focussed policy development initiatives in several jurisdictions in North America and the European Union with liberalized or semi-liberalized electricity markets. In some cases, like the rule-making proposal published by the US FERC in November 2016, these initiatives are intended to enable better use of storage resources in liberalized markets - transformations in MLP terms. The Canadian provinces of Ontario and Alberta - the only provinces with liberalized or semi-liberalized electricity markets - are following similar paths. In other jurisdictions, such as California and Germany, storage is seen as an important element of larger reconfigurations of their energy systems. Reflecting these directions, Germany and California encourage the embedding of storage resources with household-level solar PV systems to support self-generation and consumption, and thereby reduce the stresses on grid resources involved in managing intermittent distributed resources. Notwithstanding differences in landscape conditions in terms of the mixes of generation sources and long-term system orientations, several common themes emerge across the different jurisdictions reviewed around the barriers to the development of energy storage technologies and services in liberalized markets. These themes are outlined in the following sections. In each case, the problem is summarized, supported by examples from the jurisdictions examined, and a brief discussion of responses that have been proposed within jurisdictions is provided. In many cases market rules incorporate technical requirements which restrict the ability of storage resources to participate in markets. These may include such factors as minimum capacity requirements. In the Canadian province of Alberta, for example, only projects of over 5 MW, 10 MW, and 15 MW, respectively, can participate in the supplemental, spinning and regulating reserve ancillary service markets. For participation in the regulating reserve market, the continuous real power requirement is 60 min. Other examples identified by FERC among RTOs and ISOs in the United States include issues related to minimum and maximum charge and run times, and charging and discharge rates. In response, FERC has recommended that bidding parameters “reflect and account for the physical and operational characteristics of electric storage resources”. A key feature of energy storage resources is their ability to provide a much wider range of services than conventional generation resources. The possibility that a single facility or market participant might be able to provide services in multiple markets – energy, capacity, ancillary, and demand response, for example – was generally not contemplated when electricity systems were liberalized. In addition to making full use of the potential contributions of energy storage resources to electricity systems, the ability to participate in multiple markets is seen as essential to the economic viability of storage services on a merchant or commercial, as opposed to pilot or one-off, basis. The ability to offer “bundles” of services is seen as important in attracting private capital investment in storage technologies. These types of limitations take different forms. In Alberta, for example, wholesale market participants are not allowed to be both generators and consumers. In other cases, storage resources are limited to specific sub-markets, such as ancillary services or even sub-components of such markets, or certain types of demand response markets. Storage resources are excluded from some capacity, energy, ramp capability and contingency reserve markets. In other jurisdictions storage providers would have to participate in different markets virtually as separate entities, applying for status as market participants and paying licensing and other fees in each market they want to participate in. In some cases, markets do not exist for services storage resources can offer. The absence of capacity markets in Ontario and Alberta is an example of such a situation.
seen to offer potentially large grid-scale applications for storage resources. Similar problems exist in Germany. As storage is considered final consumption and withdrawal/output is considered energy generation, in principle a double EEG surcharge is due. Proposed amendments to the legislation would reduce the surcharge on the amount paid for electricity for storage. Although there are concerns about the fairness of storage providers being paid to provide different services simultaneously, FERC has recommended that storage resources be able to supply any capacity, energy, or ancillary services that they are able to provide in liberalized wholesale electricity markets. One of the potential functions of storage resources that has garnered a great deal of attention has been their role in facilitating the management and integration of distributed energy resources. Generally, these types of resources, like household level solar PV systems, are too small to participate individually in wholesale electricity markets on a stand-alone basis. They may also present grid management challenges at the distribution level given the intermittency of their output. The integration of these resources with distributed storage capacity could facilitate the use of these types of behind-the-meter resources for demand response purposes, and the aggregation of their output into useful and more manageable resources at the distribution or transmission grid levels. A major challenge around the aggregation of these types of distributed energy resources is the lack of clearly defined rules for aggregation services, or significant limitations where such rules do exist. The need for distributed generation aggregation services was generally not contemplated in the original design of wholesale electricity markets, with the result that there are no established models regarding who can provide aggregation services, and on what basis they should be paid for these services. To the extent that distribution-level electric storage and other distributed energy sources participate in wholesale electricity markets, they tend to do so as behind-the-meter demand response resources. These demand response programs have reduced barriers to load curtailment resources. However, they can constrain the operation of other types of distributed energy resources, such as electric storage or distributed generation, and the services that such resources are able to offer. In Ontario, for example, a market for behind-the-meter demand response resource aggregation exists but is very limited and is fragmented into a series of distinct niches or silos, some of which may be mutually exclusive. Potential aggregators of distributed resources may be limited in other ways. The municipally owned local distribution companies that provide distribution services in most of the province's towns and cities - potentially logical candidates to act as distributed resource aggregators - are not permitted to act as generators in the wholesale market above 10 MW. In Germany, aggregators of distributed resources are only permitted to participate in the tertiary balancing market. The situation has led to proposals for the recognition of aggregators of behind-the-meter storage and generation resources as a new form of market participant. FERC, for example, has proposed that RTOs/ISOs permit distributed energy resource aggregators to participate in wholesale energy, capacity, and ancillary services markets in a way that "best accommodates the physical and operational characteristics of its
distributed energy resource aggregation." This would include setting appropriate rules regarding location, bidding parameters, information, and data and metering requirements for distributed energy resource aggregators, as well as coordination mechanisms between aggregators and grid and distribution system operators. In Germany, the Federal Ministry of Economic Affairs and Energy's White Paper on the future of the electricity system proposed to expand the range of services that aggregators of small and medium-sized consumers can provide, particularly participation in secondary balancing markets. The intention is to enable the aggregation of small-scale battery systems such as the household level systems being incentivized through loan programs to accompany household level solar PV systems. In Ontario, the province's LDCs have proposed that they function as aggregators of behind-the-meter generation and storage resources, under the concept of being Fully Integrated Network Operators. As such they would enable diverse distributed energy resource integration, and facilitate and potentially operate distributed energy resource markets within their distribution systems. Recently issued rules around licensing energy storage providers as market participants in Ontario seem to limit such status to non-utility third parties, specifically excluding transmission and distribution grid owners, like LDCs, from such status. In many jurisdictions, a significant feature of the discussions about how to incorporate energy storage resources into liberalized electricity markets has been debates about the need to maintain the "technological neutrality" of markets. However, the concept of technological neutrality means different things to different constituencies. For renewable energy advocates and storage developers, the existing market regimes are not regarded as technologically neutral, as the current market rules are seen to present barriers to new technologies. Technological neutrality is therefore understood to mean that the current market rules need to be adjusted to enable the participation of new technologies on a "level playing field" with existing technologies. Established network operators and suppliers, on the other hand, tend to interpret "technological neutrality" as meaning that deliberate technological substitution strategies in favour of new technologies, like the FIT program under Germany's EEG, should not be pursued. These concerns may flow from the disruptive impacts of such strategies on the economic viability of established utilities, and the risks that such strategies may introduce price distortions and cross-subsidization. There are also ongoing debates about the extent to which storage resources should be owned directly by utilities and grid operators versus provided through third-party providers on a market basis. Developers tend to prefer market-based models, as they are theoretically more open to new entrants, and offer the potential for ongoing revenue streams as opposed to one-time sales of technologies. In practice, many jurisdictions with liberalized wholesale markets place limits on transmission or distribution utility ownership of distributed energy resources. There is an underlying issue of the extent to which jurisdictions wish to run system elements like ancillary services and balancing on a market basis, as opposed to having these services provided directly by utilities and grid operators. There are concerns over potential conflicts of interest in utilities owning
the enabling platforms for potentially competing services and technologies as well. Utility control would also return the niche to regime transition question to the hands of the utility rather than the market. The appearance of advanced energy storage technologies, and the resurgence of interest in existing technologies like pumped hydro, over the past decade, present an important opportunity to study niche formation and niche to regime socio-technical transitions. Niches for energy storage technology development have emerged through multiple mechanisms. In some cases, they have been deliberately created by governments through initiatives like research and development funding, and procurement mandates. In other cases, utilities have consciously created and sheltered niches for their own technology development purposes. Finally, in liberalized market systems third-party investors have been creating niches for the development of new services and technologies that might be offered on a commercial, for-profit basis. The latter pathway for niche creation has been relatively less theorized or studied than deliberate efforts by utilities and governments. While multiple mechanisms for niche creation for energy storage have emerged in monopoly and liberalized market electricity systems, the niche to regime transition stages emerge as more complex and uncertain. Monopoly utility and liberalized market systems offer different niche to regime transition pathways. In monopoly utility regimes, the niche to regime transition lies chiefly in the hands of the utility. This may make movement beyond relatively niche-level applications, like ancillary services support and infrastructure investment deferral, challenging. Although such regimes may undertake internally initiated transitions, monopoly utilities are unlikely to be interested in enabling technologies that may result in the reconfiguration or realignment of their systems unless propelled there by overwhelming landscape-level developments. The US State of Hawaii provides an example of a deliberate reconfiguration of a monopoly utility, mandated by the state legislature. These considerations explain in part the concentration of private sector interest in energy storage development in liberalized or semi-liberalized market systems. In theory, within such systems, the empowerment stage of niche to regime transitions for new technologies depends on choices made by the market. In practice, energy storage developers are finding that the empowerment pathways in liberalized and semi-liberalized market systems are much more complicated than the underlying theory, grounded in assumptions of free entry and technological neutrality, would suggest. Numerous barriers to new technologies turn out to be embedded in market rules largely designed before new storage technologies existed. The result has been to push storage resources towards sub-optimal applications like arbitrage in energy markets, and marginal applications in other markets, where they exist. In effect, the key strengths of liberalized markets around the empowerment stage of niche to regime transitions for new technologies turn out to be their key weaknesses as well. The complexity of liberalized markets offers the potential for the emergence of multiple niches. This complexity also provides many potential pathways from niches to incorporation into regimes. In practice, however, these empowerment pathways turn out to be very complicated. They are subject to highly complex sets of rules which, consciously or unconsciously,
favour or were designed around existing technologies and institutional arrangements. Consequently, the final empowerment stages of the niche to regime transition turn out to be very challenging. Empowerment may require adjustments to regime rules around which new actors or entrants may or may not be able to assemble the necessary support. The utility monopoly model, in contrast, is potentially less creative and offers fewer opportunities for niches to emerge, but its pathways from niche to regime are simpler and clearer, if more arbitrary in terms of the interests of existing institutions and actors. These trade-offs lie at the core of the differences between monopoly utility and liberalized market models as structures for the development and adoption of new technologies and practices. In the storage case, the attention of private sector investors and technology developers is strongly focussed on liberalized or semi-liberalized market systems. Within these types of systems, the need for adjustments to existing market rules and structures to address the barriers they present to the full utilization of energy storage resources is now the focus of major discussions in the United States, Canada, and the EU. The specific issues that have been identified as needing attention include: the removal of technical barriers to market participation by storage resources; the facilitation of the simultaneous participation of storage resources in multiple markets; and the establishment of new categories of market participants, like aggregators of behind-the-meter resources, including energy storage, that were not anticipated when organized markets were originally designed. It remains an open question whether the maturing interest community of storage developers and advocates has the capacity to advance these types of changes to existing regulatory regimes. The sensitivity of established actors to risks of further reconfigurations or realignments, which may present additional challenges to existing business models and technologies, is particularly important in this regard. The prospects for the implementation of significant policy changes are likely to be strongest in jurisdictions, like California and Germany, that are engaged in wider deliberate reconfigurations of their energy systems towards low-carbon energy sources. Storage resources are expected to play central roles in these processes. A further factor influencing the likelihood of changes to market rules and structures to make better use of energy storage resources relates to the jurisdictional complexity of those markets. The most active jurisdictions around energy storage policy development tend to operate on a liberalized or semi-liberalized market system model and have a principally single-jurisdiction grid operator or ISO. Examples of the combination of a liberalized market and a single-jurisdiction system operator include California, Texas, New York, Ontario, and Alberta. Institutional coordination tends to be much simpler where the system involves organizations from the same jurisdiction, all operating under mandates from a single legislature. The next steps for energy storage policy among the jurisdictions examined are likely to be determined by the combination of the existence of liberalized or semi-liberalized electricity markets, the presence of a single-jurisdiction system operator, and a jurisdictional commitment to the low-carbon reconfiguration of electricity systems. At the federal level in the United States, the direction of the Trump administration on the
regulatory issues raised by FERC in its November 2016 proposed rule-making, and energy storage more broadly, is unknown. It is unlikely, however, to pursue deliberate low-carbon reconfigurations of electricity systems. Activity at the individual state level, in contrast, is far more likely to continue to move forward. Jurisdictions like California and Hawaii, which by virtue of their single-jurisdiction system operators are less affected by developments at the federal level, and which are committed to the low-carbon reconfiguration of their electricity systems, seem positioned to continue to lead regulatory and policy development around energy storage. In Germany, the landscape-level need to manage the low-carbon reconfiguration of the electricity system towards the large-scale integration of intermittent renewable energy sources will continue to propel the development of storage resources and their integration into the existing energy regime. That said, there are ongoing debates about regime technological neutrality and the desirability of further realignments of electricity systems. In Canada, the absence of a significant federal institutional and regulatory role around electricity, and the dominance of single-jurisdiction system operators, mean that determinations of niche to regime transitions for energy storage technologies will take place at the provincial level. Among the provinces with liberalized electricity markets, further significant reconfigurations of Ontario's electricity system, and an accompanying growth in the grid-scale deployment of intermittent renewables or distributed generation activities, are not anticipated. This may limit energy storage applications to their current, relatively niche-level applications, such as ancillary services. In Alberta, the need for a significantly expanded role for storage resources depends in large part on whether the province's planned wider low-carbon reconfiguration of its electricity system, including a coal phase-out and a transition towards an expanded role for intermittent renewable energy sources, survives the next provincial election, expected in 2019. The emergence of advanced energy storage technologies, and the revival of interest in existing technologies, provides the opportunity to study a niche to regime transition in progress. The creation of niches emerges as a relatively straightforward process. A variety of mechanisms for niche creation have been employed or have emerged in the storage case, in both monopoly and liberalized market systems. The niche to regime transition is much more difficult. Here the monopoly and liberalized market system models offer significantly different pathways for the empowerment of niche-level developments. Both have the potential to provide routes for niche to regime transitions, but there are substantial trade-offs between the two. In monopoly regimes, the pathway to adoption into a regime is relatively direct, but largely at the discretion of the monopoly utility. Such utilities may be unenthusiastic about the adoption of technologies which may disrupt their existing operational models. In liberalized systems, there may be multiple transition pathways, but these pathways are grounded in complex rules, which have largely been designed around existing technologies and actors, who may be resistant to change to accommodate new technologies and entrants. The ability of the maturing interest community of energy storage developers and advocates to advance significant regime change in favour of the full utilization of the
potential of energy storage technologies will be strongly influenced by the landscape-level features of the availability of liberalized or semi-liberalized market system configurations, the simplifying presence of a single-jurisdiction system operator, and, most importantly, a jurisdictional commitment to a low-carbon reconfiguration of electricity and energy systems. These factors are well established in some of the jurisdictions studied, such as California. In other jurisdictions they remain unresolved, particularly with respect to commitments to low-carbon system reconfigurations. As a result, the path forward for energy policy regime change around energy storage will remain jurisdictionally uneven until commitments to energy system de-carbonization are deepened and become more prevalent.
This paper employs a multi-level perspective approach to examine the development of policy frameworks around energy storage technologies. The paper focuses on the emerging encounter between existing social, technological, regulatory, and institutional regimes in electricity systems in Canada, the United States, and the European Union, and the niche level development of advanced energy storage technologies. The structure of electricity systems as vertically integrated monopolies, or liberalized or semi-liberalized markets, is found to provide different mechanisms for niche formation and niche to regime transition pathways for energy storage. Significant trade-offs among these pathways are identified. The overwhelming bulk of energy storage policy development activities are found to be taking place in liberalized or semi-liberalized markets. The key policy debates in these markets relate to technical barriers to market participation by storage resources, the ability of storage technologies to offer multiple services in markets simultaneously, the lack of clear rules related to the aggregation of distributed energy resources, and issues related to the meaning of “technological neutrality” in liberalized market systems. Landscape conditions, particularly jurisdictional commitments to pursue deliberate reconfigurations of their energy systems towards low-carbon energy sources, emerge as the most significant factor in the implementation of policy reforms in these areas.
219
Optical Molecular Imaging Frontiers in Oncology: The Pursuit of Accuracy and Sensitivity
Kun Wang, Chongwei Chi, Zhenhua Hu, Muhan Liu, Hui Hui, Wenting Shang, Dong Peng, Shuang Zhang, Jinzuo Ye, Haixiao Liu, and Jie Tian declare that they have no conflict of interest or financial conflicts to disclose.Imaging has become an unprecedentedly powerful tool in preclinical cancer research and clinical practice.In the past 15 years, there has been a significant increase in the number of imaging technologies and their applications in the field of oncology , but perhaps the biggest breakthroughs are in the new developments in optical molecular imaging.With recent advances in optical multimodality imaging, Cerenkov luminescence imaging, and intraoperative optical image-guided surgery, the sensitivity and accuracy of tumor diagnoses and therapeutic interventions have moved to a whole new level.Researchers and clinicians are now on the verge of being able to address some of the important questions in oncology that were once impossible to conclusively answer.How do we shift from conventional in vitro assay-based findings to non-invasive in vivo imaging-based detection?,Is it possible to obtain accurate and quantitative biological information on a three-dimensional cellular or sub-cellular level?,How can we better delineate tumor boundaries and guide tumor resection?,Can we exceed the sensitivity limitation of conventional imaging methods for effective small tumor foci detection both preoperatively and intraoperatively?,How do we translate preclinical OMI into clinical applications for better tumor treatment outcomes?,In this article, we highlight some recent advances of OMI in three categories: optical multimodality imaging, CLI, and optical image-guided surgery.We review cutting-edge optical imaging instruments, the development of optical tomographic imaging models and reconstruction algorithms, and promising optical imaging strategies with smart utilization of multiple molecular probes in both breadth and depth.We also demonstrate specific applications and state-of-the-art in vivo imaging examples of OMI in biomedical research and recent clinical translations.Different imaging modalities have their inherent advantages and disadvantages, and they are complementary.For example, radionuclide imaging has a superb sensitivity to molecular targets but a limited spatial resolution, whereas computed tomography and magnetic resonance imaging can offer good spatial resolution but suffer from low sensitivity in detecting molecular events .Planar optical imaging that adopts photographic principles is the simplest technique for capturing visible and/or near-infrared light emitting from optical reporter molecules in vivo .This planar technique can offer good superficial resolution, high sensitivity, and high-throughput imaging ability, and it is technically easy to implement preclinically .However, it also has two major limitations.The first limitation is the difficulty in quantification of the in vivo distribution of optical probes due to the nonlinear relationship in spatial position and signal strength between the detected surface flux and the light source .The second is the relatively shallow imaging depth due to the significant light scattering and absorption inside the tissue and organs of imaged animals .These features result in the application of this approach primarily to qualitative superficial observation.Although various efforts have been made to develop different algorithms for tomographic reconstruction solely using planar optical images, the process inevitably involves erroneous 
interpretation of the data collected unless the nonlinear effects are explicitly corrected or accounted for. Therefore, combining planar optical imaging with other modalities is recommended in order to compensate for these limitations while building on its strengths to capitalize on its great potential. Tomographic reconstruction of the biodistribution of optical molecular probes can be traced back to the early 1990s. The first theoretical frameworks were proposed as a way to spatially resolve intrinsic tissue contrast in the context of studying hemodynamics or organelle concentration. Visible and NIR photons are highly scattered in tissue and start to diffuse within a millimeter of propagation. However, a portion of the light can still penetrate several centimeters and reach a small-animal skin surface because of the low photon absorption in this spectral window, which is known as the first NIR window. At wavelengths shorter than 650 nm, there is an increased absorption by blood and skin, whereas at wavelengths longer than 950 nm, water and lipids demonstrate stronger absorption. In recent optical multimodality tomography, the diffuse light patterns are collected from the small-animal surface at one or multiple angles using photodetector sets or various charge-coupled-device cameras. Meanwhile, the anatomical structure of the animal is acquired using CT or MRI as a priori information to help the optical reconstruction. Based on the type of optical molecular probe applied, this 3D non-invasive whole-body small-animal imaging technology is subdivided into three categories: fluorescence molecular tomography, bioluminescence tomography, and Cerenkov luminescence tomography. With the presence of the extra dimension, the combination of sufficient imaging information from different modalities, and appropriate optical molecular probes with specificity to cellular and sub-cellular processes, OMT is able to overcome the first limitation of the conventional planar optical imaging technique mentioned earlier, and offer more accurate and robust quantitative imaging on a cellular and molecular level. The hardware setup of these multimodality imaging methods may vary in different biomedical applications or when prepared by different research groups. Examples include the hybrid optical-CT imaging system shown in Figure 1, the hybrid optical-MRI system in Figure 1, and the triple-modality optical-CT-MRI system also shown in Figure 1. However, there are two generic factors that have a significant influence on tomographic performance. First, it is crucial to develop appropriate mathematical imaging models describing photon propagation in tissues—an issue that is known as the forward problem; and second, it is equally important to develop sophisticated algorithms for tomographic reconstruction—an issue that is known as the inverse problem. Typical forward problems proposed for OMT are based on numerical or analytical solutions of the diffusion equation with the assumption that the imaging subjects are homogeneous in optical properties. To further improve the accuracy, different forward models based on approximate solutions to the radiative transport equation, on diffusion equation solutions merged with radiosity principles, or on higher-order spherical harmonic approximations were also proposed for different tissues and organs where optical properties were assumed to be heterogeneous inside living small animals. There is an evolution of the optical imaging model toward better accuracy with an acceptable increase in computational cost for in vivo applications.
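For readers less familiar with these formulations, the following is a schematic summary rather than a description of any particular system cited here. In the continuous-wave case, the diffusion approximation used as a forward model is commonly written as

\[
-\nabla \cdot \left[ D(\mathbf{r}) \nabla \Phi(\mathbf{r}) \right] + \mu_a(\mathbf{r}) \Phi(\mathbf{r}) = S(\mathbf{r}), \qquad D(\mathbf{r}) = \frac{1}{3\left[ \mu_a(\mathbf{r}) + \mu_s'(\mathbf{r}) \right]},
\]

where \(\Phi\) is the photon fluence rate, \(\mu_a\) the absorption coefficient, \(\mu_s'\) the reduced scattering coefficient, and \(S\) the internal source term. After discretization, the boundary measurements \(\mathbf{y}\) are related to the unknown probe distribution \(\mathbf{x}\) through a sensitivity matrix \(\mathbf{A}\), and reconstruction amounts to solving a regularized least-squares problem such as \(\min_{\mathbf{x} \ge 0} \|\mathbf{A}\mathbf{x}-\mathbf{y}\|_2^2 + \lambda \|\mathbf{x}\|_2^2\). A minimal numerical sketch of such a Tikhonov-regularized inversion is given below; the matrix A and measurement vector y are hypothetical placeholders standing in for the output of a finite-element forward solver, not data from any of the studies discussed in this review.

```python
# Illustrative Tikhonov-regularized reconstruction for a linearized optical
# tomography problem (toy dimensions; not any cited group's actual pipeline).
import numpy as np

def reconstruct(A: np.ndarray, y: np.ndarray, lam: float) -> np.ndarray:
    """Solve min_x ||A x - y||^2 + lam ||x||^2 via the normal equations."""
    n = A.shape[1]
    x = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
    return np.clip(x, 0.0, None)  # probe concentrations cannot be negative

# Toy example: 200 boundary measurements, 500 voxel unknowns.
rng = np.random.default_rng(0)
A = rng.random((200, 500))      # stand-in for a diffusion-based sensitivity matrix
x_true = np.zeros(500)
x_true[240:260] = 1.0           # a small simulated inclusion of probe
y = A @ x_true + 0.01 * rng.standard_normal(200)
x_hat = reconstruct(A, y, lam=1.0)
```

The same least-squares core can be exchanged for the sparsity or total variation penalties discussed in the next paragraph when the probe distribution is expected to be spatially compact.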
Due to high photon scattering in tissues, the system matrix of OMT is ill-conditioned and the inverse reconstruction is ill-posed. In contrast to single-modality optical tomography, OMT can utilize prior information or guidance obtained from other imaging modalities in order to minimize these problems. Furthermore, various regularization methods, such as Tikhonov regularization, sparsity regularization, total variation regularization, and reweighted L2 and L1 regularizations, can be employed to achieve computationally fast and robust reconstruction. These methods can be used with fast analytical solvers or numerical solutions in order to further accelerate the reconstruction speed. Overall, there is a consistent demand for faster and more robust inverse algorithms, as the acquired data sets increase in size due to the application of more complicated multimodality imaging systems. With the rapid development of hardware systems, optical imaging models, and tomographic reconstruction algorithms, OMT has become more and more practical and easy to implement. In the last decade, there has been a significant shift from mathematical simulation or artificial phantom studies to in vivo small-animal studies in the field of OMT. A wide range of unique biomedical applications in small-animal tumor model imaging was investigated by several pioneering groups all over the world. Here, we summarize some of the most recent breakthrough studies in order to demonstrate the superiority of the hybrid FMT and BLT. As CLT is reviewed in more detail in Section 3, we do not introduce its in vivo applications in this section in order to avoid repetition. The Vasilis Ntziachristos group has successfully achieved highly accurate hybrid FMT-CT performance on a subcutaneous 4T1 tumor mouse model, an Aga2 osteogenesis imperfecta model, a Kras lung cancer mouse model, and a pancreatic ductal adenocarcinoma model, as shown in Figure 2. The in vivo imaging results were compared to single-modal FMT and CT, respectively, and were also validated against post-mortem planar fluorescence images of cryoslices and histology data. The study vividly demonstrated that OMT can provide much more accurate 3D information on tumor lesions than either stand-alone FMT or CT can achieve, or than can be obtained using the planar optical imaging method. The Jing Bai group and the Brian Pogue group have separately developed FMT-CT and FMT-MRI techniques for dynamic fluorescence molecular imaging, both of which are powerful new four-dimensional tools for cancer research. They are especially useful as better ways of studying receptor-targeted drug delivery and cancer progression non-invasively. These groups employed their OMT approaches and successfully reported the binding kinetics of different optical molecular probes in the blood pool and in tumor xenografts. Furthermore, the Brian Pogue group applied this technique to image the uptake kinetics of two optical probes in U251 brain tumor mouse models simultaneously. One probe targeted the receptor of interest, and the other acted as a non-targeted reference.
These dynamic data were then fit to a dual-tracer compartmental model in order to achieve accurate quantification of the receptor density available for binding therapeutic drugs in tumor tissues. The Pogue group also applied a similar imaging technique with dual optical probes for the accurate quantification of tumor burden in lymph nodes in breast cancer mouse models. This non-invasive imaging method reached an ultimate sensitivity of approximately 200 cells in detecting breast cancer metastasis. In contrast to the above studies, the Jie Tian group developed a BLT-CT technique that was sufficient for in vivo imaging and applied it to evaluate the therapeutic interventions of newly designed anti-tumor drugs. For in vivo planar bioluminescence imaging or 3D BLT, biological entities are tagged with a reporter gene that encodes one of a number of light-generating enzymes. In the presence of oxygen and other factors, the enzymes convert unique substrates into light. The propagation of emitted photons in living animals can be simulated as a diffusion process that is similar to fluorescence photon propagation. However, it is more difficult to achieve practical in vivo imaging using BLT than using multimodality FMT approaches, because BLT does not use external illumination sources. Although this feature carries the major advantage of a high tumor-to-normal-tissue contrast due to the absence of inherent background noise, the fewer source-detector pairs available complicate the tomographic problem mathematically. By integrating multi-angle imaging with a priori information on much more accurately defined tissue heterogeneity, the Jie Tian group improved the performance of the inverse problem of BLT. Their recent studies demonstrate that, with well-developed reconstruction algorithms, the technique can provide accurate 3D information on orthotopic liver tumors, and the anti-tumor efficacy of newly developed therapeutic drugs or other interventions can be monitored quantitatively without sacrificing the tumor-bearing mice, as shown in Figure 2. The Sanjiv S.
Gambhir group designed a unique triple-modality imaging nanoprobe .Because of the better tissue-penetrating ability of photoacoustic imaging and the better spatial resolution of optical Raman imaging, this triple-modality imaging strategy allowed for non-invasive accurate brain tumor delineation through an intact skull, shown in Figure 3 and, and even more accurate intraoperative image guidance for tumor resection, shown in Figure 3–.With the group’s in-house-modified imaging systems, the probes were detectable with at least picomolar sensitivity in living mice.These impressive features of this optical multimodality approach hold great promise for enabling more accurate and sensitive brain tumor imaging and resection than ever before.In Gambhir’s study, PAI is the key to overcoming the limitation in optical imaging depth.The reason why this emerging hybrid imaging technique achieves better tissue penetration than conventional fluorescence and bioluminescence imaging is because of its unique utilization of the photoacoustic effect .It employs a pulsed nanosecond-long laser beam instead of a continuous wave to illuminate the targets of interest, causing a slightly localized heating and resulting in thermoelastic expansion.This transient thermoelastic tissue expansion generates pressure waves with high frequency that can be detected by ultrasonic transducers .Therefore, PAI combines the advantages of optical excitation and acoustic detection.Since an acoustic wave has a much lower scattering coefficient in biological tissue than light does, PAI offers imaging at a greater depth than conventional optical imaging methods, as shown in Figure 3 and, thereby ultrasonically breaking through the optical diffusion limit .Recent advances in PAI have made it a powerful imaging tool in both biological and clinical applications.In recent years, the Lihong Wang group and the Vasilis Ntziachristos group have been developing various PAI instruments and extending the applications of such instruments, respectively .Their efforts are likely to accelerate OMI from preclinical studies toward clinical translations in cancer diagnoses.In addition to PAI, techniques using the second NIR window have also been developed in order to overcome the limitation in optical imaging depth.Compared with the first NIR window, NIR-II can offer deep penetration depth in tissues and a higher signal-to-noise ratio for fluorescence imaging .However, the synthesis of biocompatible and bright NIR-II fluorescent probes and the application of suitable CCD cameras with high quantum efficiency in this longer wavelength window play critical roles for achieving practical in vivo NIR-II imaging.The Hongjie Dai group reported the use of biocompatible, bright single-walled carbon nanotubes as NIR-II imaging contrast agents for the imaging of blood velocity in both normal and ischemic femoral arteries .In this study, an indium gallium arsenide camera and a conventional silicon camera were applied for NIR-II and NIR-I imaging, respectively, in a hybrid optical imaging system, as shown in Figure 4.A thorough comparison of the imaging performances in the two NIR windows demonstrated the superiority of NIR-II in obtaining more information from deeper tissues, as shown in Figure 4 and.The group also performed imaging of mouse arterial blood flow in deep tissue at the ultrafast video-rate of>25 fps, with a high quantum yield of synthesized polymers as fluorescent agents, as shown in Figure 4– .Furthermore, Dai’s group has reported the non-invasive 
through-scalp and through-skull brain imaging of mouse cerebral vasculature, without using craniotomy or skull-thinning techniques, as shown in Figure 4– .In this work, the imaging depth was 2 mm deeper than in previous efforts and the imaging rate of 5.3 fps permitted the dynamic monitoring of blood perfusion in the cerebral vessels of the mouse brain.Recent advances in optical multimodality molecular imaging are embodied in every aspect of the technology, including optical molecular probes, and especially in the nanoscale.An enormous development has occurred in the form of a variety of new nanomaterials that are modified for in vivo OMI, such as polymers , liposomes, micelles , metallic nanoparticles , inorganic particles , and carbon structures .Many groups are dedicated to synthesizing molecular probes with better optical properties that are applicable for multi-targets as well as for multimodality imaging in order to enhance the overall sensitivity and accuracy of the imaging performance.However, even though there have been significant achievements in preclinical applications with nanoprobes, and the future of OMI-based nanotechnology seems promising, the progress of clinical translation in OMI-based nanotechnology has still been slower than expected over the last decade.Major concerns regarding the metabolic rate and toxicity of nanoscale imaging agents inside human systems are still preventing their application in clinical practice.To the best of our knowledge, Doxils, Abraxanes, and Feridexs are the only three nanoscale imaging agents that have been approved by the Food and Drug Administration for clinical use.All three are composed of simple formulations without tumor specificity.However, the emergence of CLI and intraoperative fluorescence image-guided surgery may facilitate the clinical translation of OMI with clinically approved radioactive probes and operating-room-fitted imaging systems.Cerenkov luminescence is the light generated when charged particles—usually electrons or positrons emitted from radioisotopes upon radioactive decay—exceed the speed of light in a dielectric medium.As an emerging OMI technology, CLI was first reported much more recently than fluorescence and bioluminescence planar or tomographic imaging technologies.Compared with nuclear imaging, which also employs radioactive tracers, CLI has several advantages that are inherent in optical imaging: higher throughput, cost savings, and greater surface resolution .Furthermore, a variety of tracers approved by the FDA enable clinical applications , giving CLI a unique inborn advantage in clinical translation.However, the low intensity of CL and its violet-blue-dominated spectrum limit its depth of tissue penetration.Hence, significant signal amplification and alternative imaging techniques are required to enhance CLI sensitivity.CLI suffers from tissue depth-dependent signal weakening, strict restriction of background light, and a lack of additional information, compared with positron emission tomography.To enhance the intensity of CLI signals and to complement PET, radiopharmaceutical-excited fluorescence imaging , secondary Cerenkov-induced fluorescence imaging , Cerenkov radiance energy transfer , radioluminescence imaging , radioisotope energy transfer , and enhanced CLI have been explored.Self-illuminating 64Cu-doped nanoparticles, such as gold nanocages , nanoclusters , CdSe/ZnS , and CuInS/ZnS QD , shift Cerenkov radiation toward longer wavelengths.Rare earth nanophosphors doped with Eu3+, Tb3+, Er3+, and 
Yb3+ enable inherent multimodality and increase the signal-to-noise ratio of optical imaging .Among these studies, the Jie Tian group utilized europium oxide nanoparticles and radioactive tracers in order to convert γ and Cerenkov radiation into signal-enhanced red emission .This unique internal dual-radiation excitation mechanism combined the advantages of nuclear and OMI and provided non-invasive yet highly sensitive detection of tiny early-tumor lesions.In 2010, Liu et al. demonstrated multispectral, deep-tissue, and potentially targeted imaging by the in vivo excitation of quantum dots contained in Matrigel pseudotumors using a radiotracer source .The three quantum dots can be multiplexed using corresponding filters in order to acquire information for each channel.In 2013, Thorek et al. proposed SCIFI as a new CL imaging strategy that enables activatable imaging via the biologically specific fluorescent conversion of Cerenkov radiance.The multiparameter imaging of tumor markers with a high signal-to-background ratio is shown by using HER2/neu-targeted 89Zr-DFO-trastuzumab to excite αvβ3-targeted cRGD-QD605.The SCIFI signal indicates the co-expression of HER2/neu and αvβ3 signatures of tumor cells.Matrix metallopeptidase-2 enzymatic activity in vivo imaging is presented via gold nanoparticles conjugated with carboxyfluorescein-labeled peptides.The peptide sequence IPVSLRSG can be cleaved specifically in the microenvironment of an MMP-2 positive tumor.Specific cleavage of the peptide dissociates fluorescence-quenching AuNPs from FAM; hence, the SCIFI signal indicates MMP-2 enzymatic activity in vivo, which correlates with the ex vivo quantitative Western blotting assay.Since Robertson et al. first detected luminescent signals from positron-emitting radionuclides using Xenogen IVIS , CLI featured as a low-cost imaging technology among radionuclide imaging modalities.To break the limit on penetration depth due to the strong scattering of Cerenkov light and to increase the sensitivity of CLI, Kothapalli et al. proposed the transmittal of Cerenkov light through different clinical endoscopes and conventional optical fibers.They constructed an optical endoscopy imaging system, shown in Figure 6, in which a CCD camera was uncoupled with a 6 mm fiber-optic bundle, shown in Figure 6, and used their system to successfully detect as low as 1 μCi of radioactivity emitted from 18F-FDG, as shown in Figure 6 .With the development of endoscopic CI, Liu et al. built an optical endoscopy imaging system , shown in Figure 7, using a CCD camera coupled with an optical imaging fiber bundle that was 108 mm long, shown in Figure 7.The distal end of the fiber, shown in Figure 7, was coupled with a micro-imaging lens.Intraoperative surgical guiding systems based on CLI have been demonstrated by Holland et al. and Thorek et al. .Holland et al. targeted HER2/neu-positive expressed BT-474 subcutaneous tumors with 89Zr-DFO-trastuzumab and performed a dissection of the BT-474 tumor under the guidance of CLI.Thorek et al. demonstrated the utilization of CLI to aid in the resection of sentinel lymph nodes .Liu et al. 
demonstrated the feasibility of utilizing a Cerenkov luminescence endoscopy system to guide the resection of tumor tissues with an in vivo tumor imaging study, depicted in Figure 7 and.With further improvements in sensitivity and spatial resolution, clinical applications of CLE systems may arise in the near future.CLI and CLE acquire only superficial scattered and attenuated CL; therefore, they lose more accuracy as the depth of the radiotracer source increases.In 2010, two groups of researchers developed Cerenkov luminescence tomography systems independently.Li et al. measured the Cerenkov optical luminescence with a commercial optical imaging system and acquired a micro-PET scan for the validation of radiotracer distribution and a micro-CT scan for anatomic reference.As shown in Figure 8, due to the placement of two side mirrors, the CCD camera simultaneously captured images of the emitted photons from the top and from two side surfaces.Hu et al. used a dual-modality ZKKS-Direct3D molecular imaging system, including a scientific liquid-cooled back-illuminated CCD camera and a micro-CT system consisting of a micro-focus X-ray source, to localize the implanted radioactive sources by 3D reconstruction; they achieved distance errors in the low millimeter range, compared with CLT and SPECT .The anesthetized mouse was affixed to the animal-imaging holder and placed on the rotation stage.The early CLT reconstruction method adopted the radiative transfer equation to describe the photon transport problem.The mouse was assumed to be homogeneous and all the optical parameters were set by the same value over the whole body .Hu et al. introduced the CT data as the prior information in order to establish the heterogeneous model .As shown in Figure 10, the heterogeneous model fuses the CT data and optical data successfully and improves the reconstruction accuracy.Spinelli et al. introduced multispectral information to reconstruct the radiotracers; this method not only improves the reconstruction accuracy but also simplifies the detecting devices .All of the methods mentioned above adopt the diffusion equation in order to approximate the RTE.The diffusion equation loses its accuracy in spectrums with short wavelengths, such as the blue spectrum.As most emitting photons in Cerenkov radiance are in the blue-light spectrum, and the light intensity is inversely proportional to the square of the wavelength, the diffusion equation is not very appropriate for CLT reconstruction.Therefore, a third-order simplified spherical harmonics approximation of RTE was employed to model Cerenkov photon propagation by Zhong et al. 
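As background to the spectral issue raised above, the standard Cerenkov relations (textbook physics, not specific to any of the reconstruction methods cited) are worth stating. A charged particle of velocity \(\beta c\) emits Cerenkov light in a medium of refractive index \(n\) only if \(\beta n > 1\); for electrons or positrons this corresponds to a threshold kinetic energy

\[
E_{\mathrm{th}} = m_e c^2 \left( \frac{1}{\sqrt{1 - 1/n^2}} - 1 \right),
\]

which is approximately 0.26 MeV in water (\(n \approx 1.33\)). The Frank–Tamm result gives the photon yield per unit path length and wavelength as

\[
\frac{d^2 N}{dx\, d\lambda} = \frac{2\pi \alpha z^2}{\lambda^2} \left( 1 - \frac{1}{\beta^2 n^2(\lambda)} \right),
\]

where \(\alpha\) is the fine-structure constant and \(z\) is the particle charge; the \(1/\lambda^2\) factor is the reason the emission is weighted toward the blue end of the spectrum, where the diffusion approximation is least reliable.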
More studies were done to reduce the reconstruction time. Current medical imaging techniques play an important role in the field of preoperative diagnosis and post-operative evaluation. However, with regard to intraoperative imaging technology, the most frequently used imaging modalities have been surgeons' eyes and hands. Novel techniques that can extend a surgeon's vision and sense of touch are desired. Existing medical imaging techniques such as CT, MRI, and PET can image a tumor that is larger than 5 mm in human applications. However, more sensitive and accurate imaging technologies are urgently required. The rising technology of OMI shows great superiority in clinical translations. Fluorescence molecular imaging technology, a branch of OMI, has been applied to several clinical applications such as ovarian cancer surgery, early-stage esophageal cancer diagnosis, and sentinel lymph node detection. There are three principal advantages in using FMI for complex intraoperative applications: ① high sensitivity and specificity in the detection of micro lesions; ② real-time imaging at the detection area during the dissection process; and ③ no radiation or contact with tissues, resulting in no influence on traditional surgery procedures. Current studies have already proved the high sensitivity and accuracy of micro-tumor dissection using FMI systems in clinical applications. To achieve superior sensitivity and accuracy in clinical translations and to fulfill clinical demands, novel medical instruments should include three important features in addition to obtaining FDA certification: ① multispectral visualization including white light and lesion-specific fluorescence; ② optimized fluorescence detection and sufficient light sources for excitation; and ③ convenient manipulation for surgeons. Among all other imaging challenges, detection depth is the major issue for the FMI technique. In order to achieve a high signal-to-background ratio during surgery, the first NIR window is commonly used, in which light absorption and scattering are relatively low and the light cannot be seen by the naked eye. Many academic and commercial systems based on the principle of fluorescence imaging are available for clinical applications. Milestone studies, such as precise cancer imaging and nerve-damage protection surgery, have been performed that aimed at increasing precision in intraoperative applications. Several academic and commercial intraoperative FMI systems are currently available for clinical applications. Depending on the surgical application, these systems can be classified as either open-surgery or endoscopic FMI systems. Furthermore, different types of FMI systems have different advantages due to a focus on different features. Multispectral intraoperative FMI systems have advantages in image acquisition and processing. The FLARE™ system was developed by the Frangioni Laboratory, Department of Radiology, Beth Israel Deaconess Medical Center, Harvard Medical School. This system uses dual NIR channels and a white-light channel to simultaneously collect images. Three CCD cameras are used to collect images in different spectral bands: NIR channel 1 is used to collect light above 800 nm, NIR channel 2 is used to collect light between 700−800 nm, and the white-light camera is used to collect light between 400−650 nm. The excitation light typically involves two components: halogen for white light at 40 000 lux and LEDs for NIR light. For the image results, the different NIR images are pseudocolored red and green, respectively, and merged onto
the white-light images.In terms of real-time imaging, the FLARE and mini FLARE systems acquire and display images at a rate of 15 fps.This academic system has been used in several clinical applications such as cancer surgery and SLN mapping .Another multispectral FMI system was developed by Technische Universität München and Helmholtz Zentrum, and is similar to the FLARE system.This system has the advantage of imaging processing that corrects for attenuation from the excitation light, and has been remarkably applied to human ovarian cancer surgery .Another FMI system was developed in the Key Laboratory of Molecular Imaging, Chinese Academy of Sciences, and has the advantage of convenient operation.This system also improves the quality of image results by using feature point algorithms in order to ensure rapid and precise imaging fusion for the two cameras in the system .The system has been successfully applied in breast cancer SLN mapping and liver cancer detection in clinics.As intraoperative FMI systems aim to assist surgeons in precision surgery, convenient designs were considered in many commercial products.The first FDA-approved FMI product was the SPY™ system developed by Novadaq Technologies, Inc.It showed previous application in the field of vessel bypass surgery.Recently, the SPY™ system has been applied to accurately identify mastectomy-flap necrosis with the fluorescence imaging agent indocyanine green in 62 breast-reconstruction cases during surgery .The Photodynamic Eye, produced by Hamamatsu Photonics, is approved by the FDA for clinical applications including SLN detection in breast cancer and liver cancer surgery .This handheld imaging system emits annular 760 nm LED NIR light and uses a single CCD camera to detect the reflected fluorescence light.As LED light energy is limited, the image quality has great potential for improvement for further operation-room applications.Another handheld product designed by Fluoptics, named Fluobeam®, has similar functions as the PDE™ and has been used in clinical trials aiming to prove its feasibility during surgery.In addition, ArtemisTM simultaneously shows the color image and the fluorescent overlay, which provides excellent utility for nerve surgery .Nerve preservation is an important issue during most surgery because accidental transection or injury results in significant morbidity.NIR window I fluorescent light has the potential to provide high resolution, high sensitivity, and real-time avoidance of nerve damage.By using a fluorescent probe, FMI imaging can image the nerves in human tissue samples.Fluorescence highlighting is independent of axonal integrity, suggesting that the probe could facilitate the surgical repair of injured nerves and help prevent accidental transection .Recently, novel fluorescence endoscopic and laparoscopic systems were designed to realize minimal invasiveness and to solve the detection-depth problem during surgery .The most commercial endoscope systems use 400−700 nm visible-light-spectrum fluorescence imaging, which is similar to an ordinary white-light image.However, the lack of contrast between lesions and normal tissue makes it difficult for surgeons to decide where to dissect.In order to detect early micro-tumor lesions and improve the effects of treatment, several endoscopic techniques such as image-enhanced endoscopy, endoscopic microscopy, and NIR fluorescence endoscopy were used to improve diagnosis accuracy.Because the NIR FMI technique showed great superiority in intraoperative high-SBR 
imaging, some novel NIR endoscopic FMI systems were designed for intraoperative image-guided surgery.The challenging problem of how to design the endoscopic optical path to simultaneously achieve white-light and NIR fluorescent images has limited most fluorescence endoscopic imaging systems.An NIR fluorescence endoscopic system should balance the sensitivity and size of the image detector, the optical coupling efficiency, and the excitation energy.Glatz et al. used an electron multiplying charge-coupled device to increase the detection sensitivity of significant contrast between tumors and normal tissues and evaluated their system for the identification of colorectal tumor margins .Venugopal et al. reported a unique optical path design to simultaneously achieve color and NIR fluorescence images .This prism-based 2-CCD camera endoscopic imaging system, which is compatible with two light sources in color and in the NIR range, realized one hand operation and real-time imaging registration during surgery.Furthermore, thoracic SLN mapping in a porcine model validated the feasibility of the system performance.For clinical translations, Hide et al. addressed a recurrent intracavernous sinus dermoid cyst using an NIR endoscopic system in order to confirm the patency of the internal carotid artery and the cavernous sinus.This NIR endoscope was superior for real-time imaging and a high SBR for the lesions, and showed great value for successful endonasal transsphenoidal surgery .Plante et al. reported a pilot study using a novel endoscopic FMI system for SLN mapping in cervical and endometrial cancer with ICG .The results showed that with the help of an NIR FMI system, SLN mapping detection can achieve a very high overall and bilateral rate.Pan et al. used fluorescence imaging systems, confocal endomicroscopy, and blue light cystoscopy in fresh surgically removed human bladders with fluorescently labeled CD47 antibody as molecular imaging agent for accurate diagnosis and image-guided surgery.The results showed 82.9% sensitivity and 90.5% specificity, which improved diagnosis and resection thoroughness for bladder cancer .In summary, intraoperative surgical navigation systems using FMI technology can extend the vision of surgeons, enabling them to precisely distinguish lesions from normal tissues, and improving surgery sensitivity and accuracy for many important biological applications and clinical translations .In the past decade, as more and more researchers and clinicians realize the significance of precise medicine and personalized cancer-patient treatment based on a fundamental understanding of cancer biology at the cellular and molecular level, enormous strides have been made in the field of in vivo OMI and its clinical translations.With this strong motivation, many new optical imaging systems, algorithms, agents, and reporters have been applied to cancer research, new therapy development, and clinical patient care.Optical multimodality imaging has transformed the optical observation of molecular processes and events from a crude qualitative planar approach to a quantitative 3D imaging technique.Small-animal whole-body biodistribution, the targeting specificity and pharmacokinetics of optical molecular probes, and the tumor-cell response to therapeutic interventions can all be imaged accurately and dynamically in intact host environments.Important biological information, such as tumor marker expression, available reporter density, and lymph node tumor burden quantification, which in the past 
could only be accessed by analyzing select tissue specimens extracted by biopsy or tumor tissue resection, can now be analyzed in vivo with unprecedented accuracy and sensitivity. CLI has provided a new pathway to imaging clinically approved radioactive tracers with optical-based technologies, and exhibits many advantages. This technique has the potential to accelerate the clinical translation of OMI, as the majority of fluorescence probes are still limited to preclinical applications. The combination of nuclear and optical imaging, the two most sensitive imaging modalities, may break through the current sensitivity limitation of in vivo early-tumor imaging and may provide a way to observe tumor progression at a much earlier stage. The clinical translation of OMI has been slower than was initially hoped for. The reasons behind this slower translation are complex, but one of the biggest problems involves the regulatory hurdles for optical molecular probes. However, imaging systems and strategies have moved forward by exploring the potential clinical applications of FDA-approved optical imaging agents such as ICG. Intraoperative fluorescent image-guided surgery has become well accepted in clinical trials, with promising tumor-resection accuracy in tumor-margin definition and superb sensitivity in detecting tiny tumor foci or residual tumor. The outdated perception of lower market profit margins for OMI than for therapeutic drugs is gradually changing. With recent advances in OMI technologies, we believe that more and more unverified biomedical hypotheses can be investigated using more powerful imaging strategies, and significant new discoveries in oncology can be achieved through imaging observation. The more accurate and sensitive imaging studies performed using OMI have enabled researchers to obtain a deeper and better understanding of the molecular processes and events of cancer—an understanding that is likely to motivate even faster development of OMI in the next decade. The clinical translation of these novel imaging technologies will continue and accelerate. Applying these new optical imaging technologies to human healthcare will lead to a fundamental improvement in diagnostic and therapeutic cancer interventions. In the era of molecular imaging, optical technologies hold great promise to facilitate the development of highly accurate and sensitive cancer diagnoses as well as personalized patient treatment—one of the ultimate goals of precision medicine.
Cutting-edge technologies in optical molecular imaging have ushered in new frontiers in cancer research, clinical translation, and medical practice, as evidenced by recent advances in optical multimodality imaging, Cerenkov luminescence imaging (CLI), and optical image-guided surgeries. New abilities allow in vivo cancer imaging with sensitivity and accuracy that are unprecedented in conventional imaging approaches. The visualization of cellular and molecular behaviors and events within tumors in living subjects is improving our deeper understanding of tumors at a systems level. These advances are being rapidly used to acquire tumor-to-tumor molecular heterogeneity, both dynamically and quantitatively, as well as to achieve more effective therapeutic interventions with the assistance of real-time imaging. In the era of molecular imaging, optical technologies hold great promise to facilitate the development of highly sensitive cancer diagnoses as well as personalized patient treatment—one of the ultimate goals of precision medicine.
220
The impact of Toxoplasma gondii on the mammalian genome
The mammalian genome has clearly been influenced by infection.The extraordinary genomic complexity of the rearranging receptors of lymphocytes and the complex array of immune functions assembled in the mammalian MHC are testimony to millions of years of pathogen pressure.Less straightforward is to document where and how specific pathogens have triggered specific genomic effects.Recent fatal pandemics have left their marks on the human genome, for example in the shape of a number of more or less dysgenic alleles of α-globin and β-globin for malaria, witnessing the urgency and intensity of selection by novel pathogens.In mice the superantigenic ORF proteins of endogenous mammary tumor viruses appear to have taken a toll of T cell receptor Vβ families as the selective priorities for the mouse seem to have favoured a sub-optimal T cell repertoire over the risk of inflammatory death.The strongest evidence for a definite recent causal relationship between specific features of pathogen and host genomes is reciprocal polymorphism with an experimentally demonstrable causal chain.Apart from the classic examples noted above, this level of analysis has only occasionally been achieved, and most notably in plant disease resistance .When experimental data has been substantiated by ecological evidence, one may fairly describe such a scenario as co-evolution.The pattern of host–pathogen co-evolution depends on the extent to which host resistance reduces pathogen transmission.Fast-evolving pathogens counter this cost by rapid evasive evolution.These familiar ‘Red Queen’-like processes can result in polymorphic variation in host and pathogen as each attempts to sidestep the other.Toxoplasma gondii is an extremely promiscuous pathogen, generating recombinational diversity through gametogenesis in all species of true cats, and all warm-blooded animals are potentially intermediate hosts.Evolutionary significance of hosts for T. gondii is therefore not yes or no, but a quantitative parameter.Some species are certainly in this sense important hosts for transmission of the pathogen, others probably not, a distinction applying equally to definitive and intermediate hosts.In a comparison between important and less important hosts we might identify genomic signatures of the immune resistance machinery that reflect selective pressure from the parasite on the ESH.The evolution of T. gondii will be driven in the foreseeable future by its relationship with the domestic cat as definitive host, but the absolute dominance of the domestic cat is recent and it is unknown whether any genomic coadaptation has already occurred.The limited genotypic diversity of T. gondii in the Old World compared with S. America may reflect an ancient S. American origin for the species although there are arguments against this view .In any case, the original genetic diversity of Old World T. gondii may have been larger and the recent expansion of the domestic cat, an Old World species until the sixteenth century, may have favoured a specific subset of pre-adapted genotypes.The identification of dominant ESH species as intermediate hosts is more complex, but mammal or bird species that are rare or inaccessible as prey for domestic cats must be low down on the hierarchy, while species that are abundant and accessible are high up.Humans, on the other hand, while abundant and globally infected by T. 
gondii at a rate over 1% per year of age, are inaccessible as prey for domestic cats and can be eliminated as an ESH.The parasite is completely uninterested in defeating, or being defeated by, human immunity.In the event, while human immunity is normally sufficient to reduce morbidity from T. gondii infection to very low levels, the parasite's exceptional ability to use host immunity in general as a trigger for bradyzoite conversion means that infected humans do carry cysts, and so far no immunity sufficient for parasite elimination has yet been recorded in man.What we may fairly say is that no components of human immunity seem to be specifically dedicated to resistance against T. gondii.The human genome thus seems to provide a reasonably reliable negative control.What about the strong ESH candidates?Cat and mouse are global species and sympatric.Furthermore, foraging mice should have a significant chance of ingesting oocysts spread in cat feces.Infection rates in urban Mus musculus above 50% have been reported in the UK, but much lower rates are more general.In US studies, infection of wild M. musculus is reported to be in the low range, and values for the US native mouse, Peromyscus, are similar.Since unconfined domestic cats defecate and hunt in the natural environment adjacent to their homes, rather than at home, the ecology of Apodemus and other local wildlife may be more relevant to the evolution of modern T. gondii in Europe than that of M. musculus.Significant infection rates have been reported in the European field mouse, Apodemus, as well as in voles and shrews, abundant Eurasian small mammals often found near human habitation but scarcely overlapping in range with the domestic mouse.Likewise, domestic cats regrettably catch the common wild songbirds that live with us, as well as unloved but abundant urban feral rock pigeons.These may also be important ESH species, but it is certain that a significant proportion of T. gondii passes through M. musculus during the generational cycle, and the mouse is certainly the best-known candidate for a species with significant ESH credentials.Two striking differences between mouse and man have been highlighted: a recognition mechanism and an effector mechanism.In mice, innate recognition of T. gondii infection depends on two members of the TLR family, TLR11 and TLR12, probably forming a heterodimer, and the trigger was identified as T. gondii-profilin.Both TLR proteins are absent in humans.Without them, mice are susceptible to normally avirulent T. gondii strains.Secondly, members of a family of 10–20 interferon-γ-inducible GTPases, the IRG proteins, assemble on and disrupt the parasitophorous vacuole membrane.IRG proteins are essential for mouse survival from normally avirulent T. gondii infection.Humans express only one non-inducible IRG fragment, IRGM, of uncertain function.A further family of interferon-inducible GTPases, the 65 kDa guanylate binding proteins, is present in both species.In the mouse, GBPs assemble on a proportion of IRG-loaded parasitophorous vacuoles and contribute to the strength of IFNγ-inducible resistance.In the human, GBPs do not assemble on parasitophorous vacuoles, although a resistance function distant from the vacuole has been proposed.Much of the immune machinery involved in resistance against T.
gondii is, however, common to man and mouse, forming the general innate-adaptive response axis: macrophages, dendritic cells, IL-12, IFNγ, CD4 and CD8 T cells, CD40, the MHC, as well as NO and active oxygen radicals are all implicated in resistance against T. gondii .It was shown recently that polyubiquitin is deposited on the vacuolar membrane in both mouse and human cells .Human resistance against T. gondii is remarkably effective despite the absence of TLR11/12 and IRG proteins.Tryptophan depletion by the catabolic action of an IFNγ-inducible indoleamine dioxygenase has been implicated in restricting T. gondii growth in human cells , but this has not been generalizable over cell types and culture conditions .The human NLRP1 inflammasome has also been implicated as an initiator of some cell-autonomous immunity in human macrophages in the absence of IFNγ , but the effector mechanism is unknown.Human TLR5 has recently been shown to be triggered by T. gondii-profilin , arguably replacing TLR11/12.Perhaps immunity of humans against T. gondii is the sum of small effects.Certainly the human mechanism in its entirety does not exist in mice since loss of IRG or TLR11/12 proteins is fatal.Thus we have a clear dichotomy: mice have the essential TLR11/12 and IRG mechanisms but not the human mechanisms, while humans have their mechanisms, whatever they may be, but not the TLR11/12 and IRG mechanism.The known specializations of the mouse accompany its ESH status .It was further recently found that IL-12 production in mice is triggered by live parasite invasion, in human by phagocytosis and the authors state: ‘possibly reflecting a direct involvement of rodents and not humans in the parasite life cycle.’,However while mouse is an ESH and human not, no causal connection has been offered.Gazzinelli and colleagues tried to strengthen the link, both for TLR11/12 and for IRG proteins, by looking at a wider range of species, but no convincing correlation emerged.IRG genes were certainly most abundant in small rodents, but nearly absent in rabbits.Horses, an unlikely prey for small cats, have no recorded IRG genes but their relative abundance in elephants and manatees stretches the correlative argument too thin.The same problem afflicts the TLR11/12 distribution, present in rodents and lagomorphs, but also in horses, rhinos, elephants and manatees.Absent or pseudogenised in humans, orcas, dogs and cats, the expression of TLR11/12 seems to associate inversely with carnivory, but this correlation too is destroyed by their absence also from the obligate herbivore, the giant panda.The correlative argument contains no causal link.If, however, host and pathogen show reciprocal polymorphism in virulence and resistance, it would suggest that the system is under selection.In the TLR11/12 recognition/response mechanism against T. gondii-profilin, no relevant polymorphism was found correlating with infection status in Apodemus .However in the IRG system in mice, there is functional reciprocal polymorphism with T. gondii virulence proteins .Eurasian T. gondii strains designated type I are virulent for laboratory mice, for example, C57BL/6.Differential virulence of T. 
gondii strains is due to allelic variation in two homologous, polymorphic secreted proteins, a kinase and a pseudokinase.Together, these phosphorylate two conserved threonines on effector IRG proteins and inactivate them.Type I strains are, however, resisted by a wild-derived mouse strain, CIM, from South India.In crosses between CIM and C57BL/6, all the resistance maps to a highly polymorphic IRG gene cluster on chromosome 11.The polymorphic surface of the pseudokinase ROP5 binds the nucleotide-binding domain of Irga6 adjacent to the target threonines.This surface region, on the homologous Irgb2-b1 protein encoded on chromosome 11, is the same region that shows evidence of recent directional selection.Irgb2-b1 from the CIM mouse transfected into C57BL/6 cells blocks phosphorylation of Irga6 by a virulent type I strain of T. gondii.In this analysis, the argument favouring a causal chain from virulence to resistance is complete.The polymorphic variation of Irgb2-b1 ‘matches’ the polymorphic variation of parasite ROP5 and the results have biological meaning.Type I strains that are virulent in mice carrying the laboratory mouse allele of Irgb2-b1 kill their host within a few days and thereby essentially eliminate the chance of their own transmission.In mice carrying the CIM allele of Irgb2-b1, however, both parasite and host profit; the parasite can encyst in a resistant host, while the host lives out a normal life.These results leave us with a number of questions.It has recently been shown that much of the virulence of S. American T. gondii strains for laboratory mice is also due to alleles expressed at ROP5.Can we conclude that the allelic variation in ROP5 across multiple parasite strains globally is all directed at allelic variants of IRG proteins?Or are different ROP5 alleles directed at entirely different target proteins relevant to different ESH species?The house mouse is a Eurasian species, yet most of the polymorphic variation in ROP5 is found among the enormous diversity of S. American strains.Which species are ESHs in S. America and what, if any, IRG proteins do they have?Since the resistant allele of Irgb2-b1 is advantageous to both host and parasite at least in Eurasia, why is it not fixed?What selection pressure has led to the evolution of the susceptible Irgb2-b1 allele of the laboratory mouse strains and to its greatly reduced expression level?Just as the polymorphic virulence factors of T. gondii may have different molecular targets in different ESH species, so the IRG resistance system is certainly not directed exclusively at T. gondii.Polymorphic variants of the IRG system found among laboratory mouse strains also regulate resistance to Chlamydia trachomatis and Chlamydia psittaci, while IRG proteins are also essential for resistance of mouse cells against the microsporidian fungus, Encephalitozoon cuniculi, although differential resistance has not been shown for IRG alleles.Both Chlamydiales and Microsporidia are ubiquitous and abundant pathogen classes and may well be more important for the evolutionary dynamics of the IRG system than Toxoplasma.The strategy of T. gondii as a parasite is based on a quest for avirulence, a capacity to attenuate but not to destroy the immune resistance of the host, thus securing the permanent residence required to await transmission.How the parasite achieves this ideal state in thousands of potential hosts, with strikingly different immune systems, is the major unknown in T.
gondii biology.This power is analogous to the ability of the adaptive immune system of vertebrates to resist thousands of different pathogens.The adaptive immune system shows little co-adaptation at a genomic level to different pathogens; it is a general anti-pathogen machine.Likewise, T. gondii has a general anti-host machine, not perfect, but able to titrate host immunities of many different kinds against the self-destructive potential of its own replicative powers.Armed with this instrument, whatever it consists of, it is presumably irrelevant whether a specific host species is an ESH or not.Polymorphic variation in ROP5 and ROP18 is essential in mice, that use the IRG system, but irrelevant in humans, that do not .Presumably the polymorphism and regulation of other genetic systems are essential against different immune resistance mechanisms favoured by other host species.T. gondii sometimes fails to achieve its goal of avirulence in geographically incoherent infections; some strains of S. American T. gondii are highly virulent in humans , a non-native species, and many S. American strains are highly virulent for laboratory mice, which represent W. European M. m. domesticus >90% genetically .Likewise, the type I Eurasian strains relatively frequent in the far East are highly virulent for laboratory mouse strains but avirulent for M. m. castaneus strains from the East Asian region.These instances hint at further co-evolution between T. gondii and its intermediate hosts.For the moment the proven relationship between polymorphic virulence alleles of T. gondii and the proven resistance alleles of the IRG system of mice presents the strongest evidence that this host pathogen-pair are now or have recently been in a dynamic co-evolutionary relationship of sufficient intensity to contribute to genome modification through allelic diversification by both partners.The weak correlation in species distribution of the TLR11/12 pair possibly suggests that this recognition system also helps or has recently helped several mammals in immunity against T. gondii, but does not tell us that selection by this organism has brought it into existence, any more than that it is likely that H-2Ld, for example, which is known to present several T. gondii peptides to T cells owes its existence to the parasite, and there is no evidence yet in either case of a dynamic co-evolutionary process at work.T. gondii has evolved a complex orchestra of actions that play on vertebrate pathogen resistance machinery and except in the case of the IRG system there is little reason to believe that host resistance machinery is anything other than beneficial to both host and parasite in enabling the avirulent state and encystment.The polymorphism of Irgb2-b1 and its intimate association with the virulence polymorphism of ROP5 and ROP18 raises the question, what selection generates type I virulence in T. gondii strains where M. 
musculus is an ESH?Avirulent types II and III strains can also encyst in the highly resistant CIM mice, so there is no ‘need’ for extra virulence.Arguably, the type I virulent strains are preferentially adapted to another important ESH species, perhaps a rat, whose IRG system is capable of enforcing sterile immunity on strains lacking the virulent alleles of ROP5 and ROP18.The Irgb2-b1 allele of mice would then be accounted for as an essential adaptation for mice living within the range of such virulent strains, perhaps typically in the Far East.Polymorphic variation in the IRG system is probably also driven by other parasites, for example Chlamydia or Microsporidia.Allelic frequencies will depend on the ratio of the intensity of selection pressures from these parasites.Much work, at many different analytical levels (genetic, biochemical, structural, ecological and immunological), will be required to clarify these issues.
Nobody doubts that infections have imposed specialisations on the mammalian genome. However sufficient information is usually missing to attribute a specific genomic modification to pressure from a specific pathogen. Recent studies on mechanisms of mammalian resistance against the ubiquitous protozoan parasite, Toxoplasma gondii, have shown that the small rodents presumed to be largely responsible for transmission of the parasite to its definitive host, the domestic cat, possess distinctive recognition proteins, and interferon-inducible effector proteins (IRG proteins) that limit the potential virulence of the parasite. The phylogenetic association of the recognition proteins, TLR11 and TLR12, with T. gondii resistance is weak, but there is evidence for reciprocal polymorphism between parasite virulence proteins and host IRG proteins that strongly suggests current or recent coevolution.
221
Gene co-expression networks shed light into diseases of brain iron accumulation
Aberrant brain iron deposition occurs in common neurodegenerative disorders, and more prominently in rare inherited diseases categorized as Neurodegeneration with Brain Iron Accumulation.Iron is essential for normal brain function and is heterogeneously and dynamically distributed in the brain.The basal ganglia are among the regions with the highest iron levels, and the highest concentrations are observed in oligodendrocytes.Our understanding of brain iron metabolism and how it relates to neurodegeneration and disease is limited due to the inability to distinguish brain cell types via non-invasive techniques and poor understanding of how iron traffics in the brain to adequately supply neurons, astrocytes, oligodendrocytes and microglia.NBIA disorders are clinically characterized by a progressive movement disorder with complicating symptoms that can vary significantly in terms of range and severity, and frequently include neuropsychiatric disturbances, such as cognitive deficits, personality changes with impulsivity and violent outbursts, depression, emotional lability, and obsessive compulsive disorder.This clinically heterogeneous picture is unified by focal brain iron accumulation, predominantly in the basal ganglia.Ten NBIA genetic diseases have already been defined, yet many cases remain genetically undiagnosed.Two NBIA genes are directly involved in iron metabolism, but it remains elusive whether other NBIA genes also regulate iron-related processes in the human brain.We analyzed whole-transcriptome gene expression data from normal human brain and used weighted gene co-expression network analysis to group NBIA genes into modules in an unsupervised manner.This systems-biology approach enables the identification of modules of biologically related genes that are co-expressed and co-regulated, and can give insights into cell-specific molecular signatures.The main goal of this study was to expand our understanding of these iron overload diseases by identifying relationships and shared molecular pathways between known NBIA genes, and unraveling transcriptionally linked novel candidates to facilitate discovery of new genes associated with these diseases and of possible entry points to therapeutic intervention.Brain samples from 101 adult individuals were collected by the Medical Research Council Sudden Death Brain and Tissue Bank.All brain samples were neuropathologically normal, had fully informed consent and were authorized for ethically approved scientific investigation.Within the frame of the UK Brain Expression Consortium, total RNA was isolated and processed for analysis using Affymetrix Exon 1.0 ST Arrays as described elsewhere.Using whole-transcriptome gene expression data, NBIA genes/transcripts were assigned to co-expression modules identified through WGCNA.For the adult brain network analysis, a total of 15,409 transcripts passing quality control were used to identify modules, and 3743 additional transcripts were assigned to modules based on their highest module membership, as previously described.Briefly, the WGCNA network was constructed for each tissue using a signed network with a power of 12 to achieve a scale-free topology.A dissimilarity matrix based on the topological overlap measure was used to identify gene modules, through a dynamic tree-cutting algorithm.More details are given by Forabosco et al.Module preservation statistics were calculated as previously described to assess how well modules from one tissue are preserved in another tissue.Based on the empirical thresholds proposed by Langfelder et al., Z summary scores above 10 indicate strong evidence for module preservation across brain regions.To determine the relevance of each gene in a module, we estimated the module membership, also known as eigengene-based connectivity.
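To make the network-construction step concrete, the following is a minimal, illustrative numpy sketch of the signed adjacency, topological overlap and eigengene-based module membership computations described above; it is not the WGCNA R implementation used in the study, and the expression-matrix and module-label variable names are assumptions.

```python
import numpy as np

def signed_adjacency(expr, beta=12):
    """Signed WGCNA-style adjacency from a samples x genes expression matrix.
    a_ij = ((1 + cor(i, j)) / 2) ** beta, so negatively correlated gene pairs
    receive near-zero weights; beta = 12 mirrors the power used for the adult networks."""
    cor = np.corrcoef(expr, rowvar=False)          # genes x genes correlation
    return ((1.0 + cor) / 2.0) ** beta

def tom_dissimilarity(adj):
    """1 - topological overlap matrix: the dissimilarity that would be fed to
    hierarchical clustering and dynamic tree cutting (clustering itself omitted)."""
    a = adj.copy()
    np.fill_diagonal(a, 0.0)
    k = a.sum(axis=1)                               # node connectivity
    shared = a @ a                                  # sum_u a_iu * a_uj
    tom = (shared + a) / (np.minimum.outer(k, k) + 1.0 - a)
    np.fill_diagonal(tom, 1.0)
    return 1.0 - tom

def module_membership(expr, module_idx):
    """Eigengene-based connectivity (kME): correlation of every gene's profile
    with the module eigengene, i.e. the first principal component of the module."""
    sub = expr[:, module_idx]
    sub = (sub - sub.mean(0)) / sub.std(0)
    u, s, _ = np.linalg.svd(sub, full_matrices=False)
    eigengene = u[:, 0] * s[0]                      # eigengene across samples
    z_expr = (expr - expr.mean(0)) / expr.std(0)
    z_eig = (eigengene - eigengene.mean()) / eigengene.std()
    return z_expr.T @ z_eig / len(z_eig)            # kME per gene
```

Hierarchical clustering of the TOM dissimilarity followed by a dynamic tree cut, as in the WGCNA R package, would then yield the module labels used downstream.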
Gene interconnections within NBIA transcript-enriched modules were further investigated using VisANT.The hypergeometric distribution was used to evaluate the overrepresentation of NBIA and iron-related gene transcripts in the gene co-expression modules.To further assess the statistical significance of the enrichment of NBIA genes in given putamen modules, we developed a permutation test to estimate the probability that g genes will be found together by chance within a module of size equal to or less than m, for a given partition of genes G = {g1, …, gn} arranged into k modules P = {M1, …, Mk}, such that each gene gi belongs only to a single module.To estimate the probability of finding g genes in a module of size m or less in partition P, we randomly permuted the genes in G in a list and annotated each gene in that list with the module in P to which the gene belongs.Then we repeated the following procedure 10⁶ times, randomly choosing g positions from the list and checking whether the corresponding genes were annotated with the same module and whether that module had size m or less.Finally, the probability of finding by chance g genes in a module of size m or less was estimated by dividing the number of times g genes were found together in such modules by 10⁶.We used independent and publicly available basal ganglia gene expression networks, from 27 adult caudate nucleus samples, to investigate whether our NBIA-containing modules overlap with modules in those previously published networks.We also used the only publicly available basal ganglia pediatric whole-transcriptome gene expression data set to perform WGCNA.We generated pediatric signed networks using a power of 33 and a height of 0.2.A total of 15,285 genes passing quality control were used to identify modules.Fisher's exact test was used to determine the significance of the overlap between distinct networks.Additional validation studies investigated whether the NBIA-containing modules overlap with differentially expressed genes in human NBIA disorders.We used post-mortem basal ganglia tissue from two adults, one male and one female, with a confirmed clinicopathological diagnosis of NBIA, and two age- and gender-matched adults with no diagnosed neurological conditions.All brain tissue was obtained with fully informed consent and the study was approved by the Human Research Ethics Committee of the University of Newcastle, Australia.Total RNA was obtained as previously described, and arrays were performed using the Illumina HumanHT-12 v4 Expression BeadChip.Following Cubic Spline normalization in the GenomeStudio Gene Expression Module, genes were considered differentially expressed if the fold-change of the mean NBIA signal relative to the mean control signal for each brain region was at least 1.5.The small sample size prevented statistical comparison of means and further analysis.Chi-square testing determined the significance of the overlap between differentially expressed genes in NBIA brain and NBIA-enriched co-expression modules in normal human brain.The Hfe−/− xTfr2mut mouse model of iron overload was generated previously by crossing mice with deletion of the Hfe gene with mice harboring the p.Y245X nonsense mutation in the transferrin receptor 2 gene.Mice were on an AKR genetic background, which manifests a strong iron loading phenotype.
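A minimal Python sketch of the two enrichment statistics described above (the hypergeometric over-representation test and the 10⁶-iteration permutation test) follows; the variable names and the module-partition format are illustrative assumptions rather than the study's actual code.

```python
import random
from scipy.stats import hypergeom

def hypergeom_enrichment(module_genes, target_genes, background_genes):
    """Over-representation of a target gene set (e.g. NBIA or iron-related genes)
    in one co-expression module, as a hypergeometric tail probability."""
    background = set(background_genes)
    module = set(module_genes) & background
    targets = set(target_genes) & background
    overlap = len(module & targets)
    # P(X >= overlap) when drawing len(module) genes from the background
    return hypergeom.sf(overlap - 1, len(background), len(targets), len(module))

def permutation_cluster_test(partition, g, m, n_iter=10**6, seed=0):
    """Estimate the probability that g genes fall by chance into the same module
    of size <= m, given a partition {gene: module_label} over all genes."""
    rng = random.Random(seed)
    genes = list(partition)
    module_size = {}
    for label in partition.values():
        module_size[label] = module_size.get(label, 0) + 1
    hits = 0
    for _ in range(n_iter):
        picked = rng.sample(genes, g)               # g randomly chosen positions
        labels = {partition[gene] for gene in picked}
        if len(labels) == 1 and module_size[next(iter(labels))] <= m:
            hits += 1
    return hits / n_iter                            # empirical p-value
```

With 10⁶ iterations this runs in seconds and mirrors the procedure described above, while the hypergeometric call gives the analytic counterpart used for the NBIA and iron-related gene enrichments.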
Further details of the Hfe−/− xTfr2mut model are provided elsewhere.All protocols were approved by the Animal Ethics Committees of the University of Western Australia.Male wild-type and Hfe−/− xTfr2mut mice were fed standard mouse chow from weaning.At 10 weeks of age, Hfe−/− xTfr2mut mice were switched to an iron-supplemented diet containing 2% carbonyl iron for 3 weeks.At 13 weeks of age, mice were anesthetized and perfused transcardially with isotonic saline.Brain tissue was collected and snap frozen in liquid nitrogen.RNA isolation and microarray analysis were performed as described in the previous section, except that samples were hybridized to Illumina Sentrix MouseRef-8 v2.0 BeadChip microarrays, as previously described.Microarray data were subjected to Average or Cubic Spline normalization in the GenomeStudio Gene Expression Module and differential expression was determined using either GenomeStudio or GeneSpring GX 7.3 as described elsewhere, generating four lists of differentially expressed genes.Analyses considered both the union of these four lists to minimize false negatives, and the intersection to minimize false positives, as detailed elsewhere.A subset of differentially expressed genes was selected for further investigation in additional biological replicates using real-time RT-PCR.To evaluate the biological and functional relevance of the NBIA gene expression networks, g:Profiler was used.Only genes present in the adult brain networks were used as background for this analysis.Overrepresentation of Gene Ontology categories, KEGG pathways, and Human Phenotype Ontology terms was examined.The g:SCS algorithm was used to correct for multiple testing, and corrected p < 0.05 was considered significant.Our data and data from the Human Brain Transcriptome project show that NBIA genes are highly expressed in the human brain throughout development and aging.Analysis of 10 brain regions demonstrates that the basal ganglia are usually not among the regions with the highest expression levels of NBIA genes.Strikingly, NBIA genes typically associated with white matter changes exhibit the highest expression levels in the white matter, while those not associated with white matter involvement exhibit their lowest levels in this region, suggesting a relationship between the pattern of gene expression and the observed pathology.To better understand the functional relationship between NBIA genes, we applied WGCNA to whole-transcriptome data, and focused on basal ganglia modules.The substantia nigra shows no significant clustering of NBIA genes.The putamen, however, shows the highest clustering of all 10 brain regions analyzed, consistent with clinicopathological features found in NBIAs converging on basal ganglia involvement.Only 4/20 putamen gene co-expression modules contain NBIA transcripts, with a statistically significant clustering in the brown and green modules.Fig.
2 shows interconnections of genes within these NBIA-enriched modules.All putamen NBIA-containing modules are well preserved across all 10 brain regions, which is in line with the clinicopathological complexity of NBIA disorders involving other brain regions.The putamen along with the caudate nucleus forms the striatum, the primary recipient of inputs to the basal ganglia system.To validate our networks, we determined whether our NBIA-containing modules are composed of the same genes as those in publicly available caudate nucleus networks and identified a highly significant overlap between these two basal ganglia networks.Furthermore, as NBIA disorders often have a childhood onset, we constructed pediatric gene co-expression networks.The greatest overlap between the adult NBIA-enriched brown and green modules, and the pediatric striatum modules occurs with the striatum aliceblue and lightyellow modules, respectively.Five out of 10 NBIA genes cluster in the aliceblue module, 3 of which belong to the adult brown module.We next asked whether the putamen NBIA networks can provide information on specific cell types that may be involved in the origin of NBIA disorders.A statistically significant enrichment for neuronal markers, including GRIN2B and SYT1, was found in the putamen brown module.GAD2 gene, a marker for GABAergic neurons, is also present.Conversely, an overrepresentation of oligodendrocyte markers, including MAG, MOG, and OLIG2, was found in the putamen green module.This module contains the NBIA gene FA2H, also previously described as an oligodendrocyte-enriched gene.The other two NBIA-containing modules, including the one with CP, resemble astrocytic signatures.Overlapping modules from the caudate nucleus networks are reported by Oldham et al. 
to be enriched for the same cell types.The pediatric striatum aliceblue and burlywood modules also resemble neuronal and oligodendrocytic signatures, respectively.We further investigated expression patterns of NBIA genes using mouse brain data from the Allen Brain Atlas.NBIA genes in the brown module seem to differentiate gray from white matter and behave similarly to the neuronal markers, while genes in the green module behave similarly to oligodendrocytic markers.This is also in agreement with the relatively higher expression of green module genes in the white matter.In line with the cell types described above, in the putamen brown module, which includes PANK2, ATP13A2, C19orf12, and COASY, several GO terms related with synaptic transmission, neuron projection development, protein modification by small protein conjugation or removal, modification-dependent protein catabolic process, and synapse are overrepresented.Synaptic vesicle cycle KEGG pathway is also enriched.We further investigated this module for the presence of genes encoding for well-known synaptic vesicle and synaptic plasma membrane proteins.Synaptic vesicle genes VAMP2, SYT1, and RAB3A have high module memberships in the brown module, as do pre- and post-synaptic plasma membrane genes, including SNAP25, ATP1A3, DLG2, DLG3, and DLG4.For the green module, which includes FTL, DCAF17, and FA2H, enrichment analysis shows a significant overrepresentation of multiple GO terms, such as ensheathment of neurons, lipid biosynthetic process, membrane organization, myelin sheath, dipeptidase activity, and protein binding.The Sphingolipid metabolism KEGG pathway, which is essential for proper myelination, is also overrepresented.We observed that our NBIA-containing modules comprise at least 21/29 myelination-related genes), and the green module alone contains 12, including MAG, MOG, PLP1, and CNP.As the unifying feature in NBIA disorders is focal basal ganglia iron accumulation, we investigated whether iron metabolism-related genes are present in the putamen NBIA-enriched modules.The brown module contains the iron-responsive element binding protein 2 gene, a key gene in the regulation of intracellular iron homeostasis.It also includes SLC25A37, which encodes for the mitochondrial iron importer mitoferrin 1, as well as EXOC6, hub gene – 91st quantile), HMOX2, and PGRMC1.The green module, apart from the NBIA FTL gene, also includes the ferritin heavy polypeptide 1, which are both important for iron sequestration.Additionally, it contains transferrin, transferrin receptor, and solute carrier family 11 member 2 genes, all involved in iron uptake.An overrepresentation of key iron-related genes is observed in these two gene expression modules, suggesting that they play a central role in brain iron metabolism.NBIA genes and iron-related genes are interconnected with the same hub genes, suggesting that perturbations of these networks may underlie the iron metabolism dysregulation seen in NBIA disorders.We observed a significant overlap between genes differentially expressed in post-mortem basal ganglia tissue of NBIA cases compared to matched controls and genes in the green and brown modules, suggesting that these networks are indeed disturbed in NBIA brains.To investigate whether the accumulation of iron itself triggers disturbances in NBIA networks, we investigated gene expression changes in an iron overload mouse model without mutations in NBIA genes.This mouse model shows increased brain iron and ferritin levels at 12 weeks of age.In both 
mutant and wild-type mice brains, iron levels increase with age, and iron localizes predominantly in the basal ganglia and the choroid plexus, and overlaps with myelin-rich areas.At age 13 weeks, brain gene expression analysis in male mice reveals differentially expressed genes in the mutants when compared to wild-type.The mutant mice show downregulation of five NBIA genes, and there is excessive overlap between genes in the human NBIA-enriched modules and genes differentially expressed in the mutant compared to wild-type mice.Many of these genes are highly interconnected within the NBIA networks, including for example CNP from the green module, and ATP6V0A1 from the brown module.This suggests that increased brain iron load disturbs the NBIA networks.Functional enrichment analysis for dysregulated genes in the mutant mice overlapping with green or brown module genes reveals an overrepresentation of GO terms related with the endomembrane system.Phagosome and Synaptic vesicle cycle pathways are also enriched for genes overlapping with the green and brown modules, respectively.Overall, brain iron overload seems to compromise membrane trafficking by disrupting basal ganglia NBIA gene networks.We further investigated whether genes in our basal ganglia NBIA networks are associated with additional human neurological disorders with abnormal brain iron content."This is the case for genes associated with Mucolipidosis type IV, X-linked sideroblastic anemia with ataxia, Parkinson's disease, Alzheimer's disease, and amyotrophic lateral sclerosis.Furthermore, as a proof of principle that these networks provide pools of candidate genes, there are recent reports on NBIA compatible phenotypes associated with mutations in RAB39B and UBQLN2 – two genes belonging to our NBIA-enriched networks.We also explored Human Phenotype Ontology terms associated with the 10 NBIA genes and inferred whether other genes in our networks are associated with the same HPO terms.In the top 15 most significantly enriched terms for NBIAs are core NBIA symptoms, such as Dystonia, Dysarthria, Cognitive impairment, Parkinsonism, and Spasticity."Crossing our putamen NBIA-enriched modules with genes associated with at least two of those core HPO terms and an OMIM entry, we found genes associated with Parkinson's disease, spastic paraplegia, Lesch-Nyhan syndrome, and other neurological diseases/syndromes, among highly interconnected genes of the brown module.In the top 5 quantiles of the green module, we found genes associated with spastic paraplegia, Niemann-Pick disease, and Canavan disease.NBIA disorders share a core set of clinicopathological features, including neurodegeneration, but not much is known about the originating cell type.According to our data and in line with NBIA histopathological features, multiple cell types are likely to be involved.Neuronally-derived eosinophilic spheroid bodies, thought to represent degenerating neurons and accumulation of protein and lipid storage material as well as damaged organelles, are a pathologic hallmark of several NBIA disorders, including those caused by mutations in PANK2 and C19orf12 — genes that belong to a co-expression module that reflects neuronal signatures.Myelin loss has been associated with FTL mutations and FA2H deficiency — genes of a module that reflects oligodendrocytic signatures and is associated with myelination.Indeed, factors involved in myelination, namely FA2H, are gaining relevance in brain disease.Enlarged and distorted iron-overloaded astrocytes are a core 
pathological feature in NBIA patients with CP mutations, and this gene is in an astrocytic-like module.This multitude of cellular origins suggests that neuronal death in NBIA disorders can result from direct insults to neurons or as secondary events caused by the loss of support normally provided by astrocytes and/or oligodendrocytes.Dysfunction of membrane trafficking is a hallmark of many neurological and psychiatric diseases, with a decreased degradation capacity of pre- and post-synaptic trafficking compartments leading to the accumulation of dysfunctional intracellular machineries.Our data shows the involvement of the synapse and the endomembrane system in the NBIA networks.It is possible that the characteristic NBIA spheroids are a reflection of these events, with consequent neurodegeneration due to the inherent toxicity of the cargo overload or a toxic cellular response to such overload."NBIAs share the variable accumulation of α-synuclein-positive Lewy bodies and/or tau pathology and brain iron deposition with common neurodegenerative diseases).A better understanding of the synaptic pathology in NBIAs raises the hope for the development of therapeutic strategies that will improve synaptic maintenance, which is essential for neuronal health, and help to therapeutically tackle NBIAs and more common neurological, psychiatric and neurodevelopmental diseases sharing underlying pathology.Iron is essential for normal neurological function, as it is required for the production of high levels of ATP needed to maintain membrane ionic gradients, synaptic transmission, axonal transport, neurotransmitter synthesis, myelination, etc.The brain tends to accumulate iron with age, and the globus palllidus, red nucleus, substantia nigra, and caudate-putamen have higher concentrations of iron throughout life.Pronounced and premature iron accumulation in the basal ganglia is a hallmark of NBIA disorders, which probably involves loss of concordant regulation between iron uptake, storage and transport within the brain.Only two NBIA genes have been so far directly implicated in iron metabolism.While mutations in FTL disrupt the structure of ferritin and modify its capacity to incorporate iron, mutations in CP lead to defective export of iron from cells.We showed important connections of iron-related genes within the basal ganglia NBIA networks, indicating a broader involvement of NBIA genes in iron-related processes.Deficiency of the IREB2, a gene present in our networks and a key regulator of intracellular iron homeostasis, is enough to cause progressive neurodegeneration with prominent caudate-putamen iron accumulation in mice.Genes involved in iron uptake and storage are present as well.Therefore, disruptions in these networks likely dysregulate iron-related processes.We have also shown that brain iron overload can be associated with dysregulated expression of genes present in the NBIA networks, including downregulation of several NBIA genes, even in the absence of mutations in NBIA genes.Altogether, this raises the hypothesis that disturbances in NBIA gene networks contribute to dysregulation of iron metabolism and, in turn, progressive increase in brain iron levels aggravates the disruption of these gene networks.According to this hypothesis, iron accumulation is not mandatory for the onset of the symptoms, but it seems essential in determining the fate of disease progression.This is consistent with the fact that not all patients with mutations in NBIA genes show significant brain iron overload in 
early stages of the disease.Whether this finding merely reflects the incapacity of MRI methods to detect subtle iron level changes remains debatable.A recent report with promising results on the stabilization of the disease upon treatment with an iron-chelating agent lends further support to that hypothesis.In conclusion, our human brain gene co-expression network analysis suggests that multiple cell types act in the origin of the clinically heterogeneous group of NBIA disorders, and reveals strong links with iron-related processes.Overall, our results show convergent pathways connecting groups of NBIA genes and other neurological diseases genes, providing possible points for therapeutic intervention.Given the enrichment of these networks for genes associated with NBIA and overlapping phenotypes, they provide reservoirs of candidate genes useful for prioritizing genetic variants and boosting gene discovery in ongoing collaborative sequencing initiatives.The authors declare no conflict of interest.
Aberrant brain iron deposition is observed in both common and rare neurodegenerative disorders, including those categorized as Neurodegeneration with Brain Iron Accumulation (NBIA), which are characterized by focal iron accumulation in the basal ganglia. Two NBIA genes are directly involved in iron metabolism, but whether other NBIA-related genes also regulate iron homeostasis in the human brain, and whether aberrant iron deposition contributes to neurodegenerative processes remains largely unknown. This study aims to expand our understanding of these iron overload diseases and identify relationships between known NBIA genes and their main interacting partners by using a systems biology approach.We used whole-transcriptome gene expression data from human brain samples originating from 101 neuropathologically normal individuals (10 brain regions) to generate weighted gene co-expression networks and cluster the 10 known NBIA genes in an unsupervised manner. We investigated NBIA-enriched networks for relevant cell types and pathways, and whether they are disrupted by iron loading in NBIA diseased tissue and in an in vivo mouse model.We identified two basal ganglia gene co-expression modules significantly enriched for NBIA genes, which resemble neuronal and oligodendrocytic signatures. These NBIA gene networks are enriched for iron-related genes, and implicate synapse and lipid metabolism related pathways. Our data also indicates that these networks are disrupted by excessive brain iron loading.We identified multiple cell types in the origin of NBIA disorders. We also found unforeseen links between NBIA networks and iron-related processes, and demonstrate convergent pathways connecting NBIAs and phenotypically overlapping diseases. Our results are of further relevance for these diseases by providing candidates for new causative genes and possible points for therapeutic intervention.
222
Multi-echo fMRI, resting-state connectivity, and high psychometric schizotypy
Over the last two decades, the dysconnection hypothesis of schizophrenia has gained growing neurobiological support due to technical advances in structural and functional magnetic resonance imaging.The dysconnection hypothesis suggests that the hallmark symptoms of schizophrenia arise from abnormal functional integration between distributed brain regions due to altered neuromodulation of synaptic plasticity, particularly in regions with dopaminergic afferents.A key area is the striatum, which receives prominent innervations from dopaminergic neurons in the midbrain, and is central to the orchestration of activity of limbic, associative and motor brain regions through interconnected cortico-striatal loops, thereby supporting a range of neural computations necessary for normal cognitive function.Striatal dysregulation may therefore be involved in widespread disruption of these circuits and the emergence of positive symptoms.Indeed, a number of studies in patients with schizophrenia and individuals at clinical high risk of psychosis reported increased presynaptic dopamine synthesis capacity and release in the striatum, and a direct correlation between the extent of striatal dysfunction and the severity of positive psychotic symptoms in patients.Moreover, there is evidence to suggest that positive symptoms may be associated with disrupted task-related striatal activation and connectivity during the attribution of aberrant salience to otherwise irrelevant stimuli in healthy individuals, CHR subjects, and patients with a full-blown psychotic disorder.Such evidence aligns well with predictions based on animal models of psychosis which show that striatal dysfunction may result from increased hippocampal activity, which may in turn be related to prefrontal cortex abnormalities, and propose that disrupted interactions within this corticostriatal circuit contribute to the development of aberrant salience processing and positive symptoms.In this context, resting-state functional magnetic resonance imaging provides a powerful tool to examine patterns of altered functional connectivity and their relationship to symptomatology in patients with established psychosis as well as in individuals at genetic or CHR of psychosis.rs-fMRI studies focusing on striatal connectivity in patients with schizophrenia and in their relatives have reported altered functional integration of this region with a number of cortical areas, including mainly the prefrontal and temporal cortices.Dimensional views of psychosis postulate that there is continuity between subclinical psychotic-like experiences which can be detected in healthy people using validated self-report questionnaires and psychotic symptoms in patients with schizophrenia.Consistent with this psychosis continuum view, a recent rs-fMRI study reported that scores on the positive dimension of schizotypy were positively associated with ventral striatal–PFC connectivity, and negatively associated with dorsal striatal–posterior cingulate connectivity.Similarly, another recent study by Rössler and colleagues reported ventral striatal dysconnectivity in a schizotypy sample and provided preliminary evidence that this might indeed result from dopaminergic alterations, supporting the dysconnection hypothesis.In particular, the authors found lower ventral striatal connectivity in participants who scored higher on a schizotypy scale regardless of whether they had received an L-Dopa or placebo challenge, whereas participants with lower schizotypy scores showed striatal 
dysconnectivity following L-Dopa administration.However, the samples used in both studies above were largely composed of individuals with scores in the low to moderate range.It thus remains unclear whether corticostriatal dysconnectivity extends to individuals with high positive schizotypy scores.This is an important question, as previous studies in schizophrenia and CHR subjects indicate that the greater the rs-fMRI dysconnectivity, the higher the severity of positive symptoms, and that high scores on positive schizotypy scales are associated with higher severity of positive symptoms in patients with schizophrenia.The high schizotypy paradigm is a widely used strategy to examine neurobiological factors related to the expression of psychotic symptoms in the absence of possibly confounding disease-related effects such as antipsychotic medication exposure and illness chronicity, which can affect rs-fMRI data.However, additional confounders in imaging studies may arise from technical limitations: for example, rs-fMRI data tend to be noisy, which may result in indeterminacy of the sources of blood oxygenation level dependent (BOLD) signals, particularly within subcortical regions.Previous studies in psychosis and schizotypy used standard rs-fMRI, which is based on single-echo echo-planar imaging sequences employing echo times designed to roughly correspond to the average tissue T2*, in order to optimize contrast.However, because T2* varies regionally, so does the contrast-to-noise ratio, resulting in signal loss in parts of the brain where T2* is particularly short or long.This compromises the quality of the data, especially in low-T2* regions such as the inferior temporal cortices, or indeed the orbitofrontal cortex and ventral striatum.Adding to this issue, rs-fMRI connectivity findings are highly vulnerable to spurious effects: because they are often based on correlational analyses, any factor that simultaneously influences signal in more than one region of the brain will increase observed connectivity, while factors that influence signal in a single region will decrease observed connectivity; such factors include head motion, cardiac and respiratory rates, arterial CO₂ concentration and blood pressure.Typically, this type of so-called physiological noise is dealt with using band-pass-filtering for the BOLD signal frequency band and removal of the variance explained by separately acquired physiological nuisance recordings using linear regression.However, significant noise remains even after data clean-up, and nuisance variation that has not been modelled will inevitably remain.These limitations can be addressed by using an fMRI sequence that collects multiple echoes after each pulse.Firstly, the collection of multiple echoes allows for the relaxometric estimation of region-specific T2* values, and hence for the voxel-wise computation of a contrast-optimized signal from appropriately weighted echoes, which drastically improves the overall contrast-to-noise ratio.Secondly, the collection of multiple echoes allows for blind separation of BOLD-like from non-BOLD-like signal components: while the observed percent signal change (ΔS/S) always depends on both changes in the initial signal intensity (S0) and changes in T2*, BOLD effects modulate T2* much more than S0, and non-BOLD effects modulate S0 much more than T2*.Because T2*-driven signal changes scale linearly with TE whereas S0-driven changes are independent of TE, the signal components identified using independent component analysis can be regressed against these two models of TE-dependence to differentiate between BOLD- and non-BOLD-like components.
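A compact way to state this separation logic, following the standard multi-echo gradient-echo signal model (the notation here is ours, not the authors'):

```latex
% Monoexponential decay of the signal at echo time TE:
S(TE) = S_0 \, e^{-TE \cdot R_2^*}, \qquad R_2^* = 1/T_2^*
% Linearizing a small fluctuation around the mean signal gives
\frac{\Delta S(TE)}{\bar{S}(TE)} \;\approx\; \frac{\Delta S_0}{S_0} \;-\; \Delta R_2^* \cdot TE
% R2*-driven (BOLD-like) fluctuations therefore scale linearly with TE,
% whereas S0-driven (non-BOLD) fluctuations are TE-independent; fitting each
% ICA component's echo-wise amplitudes to these two models is what allows
% BOLD-like and non-BOLD-like components to be told apart.
```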
Thus, nuisance contributions can be reliably removed even if their source is unknown.To date, these technical limitations have not been addressed in investigations of rs-fMRI connectivity in the psychosis spectrum.Hence, this study sought to investigate rs-fMRI corticostriatal connectivity in a sample of healthy adults with high scores on a psychometric measure of positive schizotypy, using contrast-optimized, independent components analysis-denoised multi-echo EPI data.Based on findings implicating corticostriatal dysconnectivity in the emergence of positive psychotic symptoms, we hypothesized that individuals with high positive schizotypy would show altered corticostriatal functional connectivity compared to a group of similar individuals with low positive schizotypy scores as a control group.Two hundred and fifty potential participants who had responded to online advertising via the Research Volunteer Recruitment Webpage of King's College London were pre-screened using the short version of the Oxford-Liverpool Inventory of Feelings and Experiences.Subjects were invited to participate in the study if they scored <2 or >7 on the Unusual Experiences subscale of the O-LIFE, as in a previous imaging study in our center.The UE subscale of the O-LIFE questionnaire reflects positive schizotypy and is associated with positive symptoms in schizophrenia patients.Participants were excluded if they had a history of neurologic/psychiatric disorders as assessed using the Mini International Neuropsychiatric Inventory and the Psychosis Screening Questionnaire.Other exclusion criteria included contraindications to MRI scanning, having a first-degree relative with a present/past history of psychotic disorder, a present/past history of use of psychotropic medications, and use of recreational drugs in the two weeks prior to scanning or meeting criteria for substance abuse/dependence by self-report.The final sample included 20 participants in both the HS and LS groups.Three studies have reported previous findings from overlapping sub-samples of this cohort with other imaging modalities.Ethical approval for the study was obtained from the KCL College Research Ethics Committee and all participants provided written informed consent before initiating any study procedures.On the day of scanning, before scanning commenced, participants completed a semi-structured interview adapted from the Early Psychosis Prevention and Intervention Centre Drug and Alcohol Assessment Schedule to assess current/past medication use and current/past use of alcohol, tobacco and cannabis; the Social Function Questionnaire to measure social functioning; and a validated short version of the Wechsler Adult Intelligence Scale-III to measure intellectual ability.Analysis of demographic and questionnaire data was performed in SPSS 24, with the effect of group being tested using independent-sample t-tests for parametric data and χ²-tests for non-parametric data.Scanning was performed on a General Electric Discovery MR750 3 T system at the Institute of Psychiatry, Psychology & Neuroscience, King's College London.For the rs-fMRI, participants were asked to lie still with their eyes open, and to think of nothing in particular while a fixation cross was displayed in the center of a screen which they viewed through a mirror system.Scanning time for the rs-fMRI was 12 min.During this time, ME-EPI images sensitive to BOLD contrast were acquired to measure hemodynamic responses (acquisition with a 1-mm interslice gap).A structural scan was
acquired for co-registration of the ME-EPI data by means of a three-dimensional T1-weighted inversion recovery-prepared gradient echo sequence.After resetting of the origins for both T1-weighted and ME-EP images, a study-specific template was created using Advanced Normalization Tools for later normalization, in order to reduce localization error and improve sensitivity.One subject from the HS group had to be excluded because of atypical anatomy which undermined the template quality, resulting in a final sample of 20 LS and 19 HS subjects.The ME-EPI echoes were separated into four distinct time series, which were then de-spiked using 3dDespike in the Analysis of Functional NeuroImages framework, and slice-time corrected using SPM12.Parameters for motion correction were estimated from the first echo, and applied to all four echoes using FSL's mcFLIRT.Subjects' ME-EP images were then co-registered to the T1 scan using boundary-based registration as implemented in FLIRT.Again, parameters were estimated for the first echo, and subsequently applied to all four echoes.All echoes were spatially normalized to the study-specific template, and from there to Montreal Neurological Institute space.Finally, the images from all echoes were z-concatenated for further processing, i.e. the space-by-time matrices from each echo were appended to one another in the z-direction to form a single matrix using the 3dZcat function in AFNI.TEDANA, a Python script that forms part of the Multi Echo Independent Component Analysis package, was called to perform TE-dependent ICA-based denoising and T2*-weighted averaging of echoes as described above.The denoised, optimally combined images were subsequently taken forward for motion correction, removal of white matter and cerebrospinal fluid signal via regression, and band-pass-filtering.A comparison of the mean framewise displacement between HS and LS subjects revealed no significant difference in head motion between groups.No individual subject showed mean FD in excess of 0.12 mm.We defined six bilateral striatal seeds based on previously validated work on striatal connectivity: ventral striatum inferior/nucleus accumbens; ventral striatum superior; dorsal caudate; dorsal caudal putamen; dorsal rostral putamen; and ventral rostral putamen, with the radius set at 3.5 mm.The mean signal was extracted from the seed regions using the REST toolbox, and Pearson's correlation coefficients were computed between these seed regressors and the rest of the brain; the resulting coefficients were subsequently Fisher's Z transformed.The resulting Z-maps were then taken to group-level whole-brain analysis using the General Linear Model as implemented in SPM12.Connectivity differences between groups were examined using t-contrasts.We used a cluster-forming threshold of p < .001 uncorrected, and then enforced cluster-wise correction for multiple testing at a p < .05 family-wise error (FWE) rate, based on previous studies.Finally, associations between symptom scores in HS subjects and Z-scores averaged across clusters showing group differences were analyzed using linear regression in SPSS.Table 1 summarizes the sociodemographic characteristics of each group.HS and LS differed, by design, only on the schizotypy measures.Specifically, HS had higher scores on the O-LIFE subscales measuring unusual experiences, cognitive disorganization, and impulsive non-conformity.To test our hypothesis that individuals with HS would show altered functional connectivity of the striatum relative to subjects with LS, we compared Fisher's Z-values of whole-brain connectivity for each striatal seed between groups.
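As an illustration of two of the processing steps described above (the T2*-weighted combination of echoes and the seed-based Fisher's Z connectivity maps), the following is a minimal numpy sketch; it is not the TEDANA/REST/SPM code actually used, and the array names, shapes and the voxel-wise T2* map are assumptions.

```python
import numpy as np

def optimally_combine(echo_data, tes, t2star):
    """Weighted average of multi-echo data (echoes x voxels x time) using the
    standard T2*-based weights w_n proportional to TE_n * exp(-TE_n / T2*)."""
    tes = np.asarray(tes, dtype=float)                          # echo times
    w = tes[:, None] * np.exp(-tes[:, None] / t2star[None, :])  # echoes x voxels
    w = w / w.sum(axis=0, keepdims=True)
    return np.einsum('ev,evt->vt', w, echo_data)                # voxels x time

def seed_fisher_z(optcom, seed_voxels):
    """Seed-based connectivity: correlate the mean seed time course with every
    voxel's time course, then apply Fisher's Z (arctanh) transform."""
    seed_ts = optcom[seed_voxels].mean(axis=0)
    x = optcom - optcom.mean(axis=1, keepdims=True)
    s = seed_ts - seed_ts.mean()
    r = (x @ s) / (np.linalg.norm(x, axis=1) * np.linalg.norm(s) + 1e-12)
    return np.arctanh(np.clip(r, -0.999999, 0.999999))          # voxel-wise Z map
```

Group differences would then be tested on the resulting Z-maps with t-contrasts in a GLM, as described in the Methods above.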
Table 1 summarizes the sociodemographic characteristics of each group.HS and LS differed, by design, only on the schizotypy measures.Specifically, HS had higher scores on the O-LIFE subscales measuring unusual experiences, cognitive disorganization, and impulsive non-conformity.To test our hypothesis that individuals with HS would show altered functional connectivity of the striatum relative to subjects with LS, we compared Fisher's Z-values of whole-brain connectivity for each striatal seed between groups.This analysis revealed hypoconnectivity in HS compared to LS individuals between ventral striatal regions and the ventromedial PFC. Specifically, hypoconnectivity was observed between the VSi and a cluster including the bilateral gyrus rectus and right medial orbital gyrus, and between the VRP and a cluster including the right medial orbital gyrus, left gyrus rectus and right anterior cingulate cortex.Furthermore, we found hypoconnectivity between dorsal striatal regions and temporo-occipital areas in HS compared to LS subjects.More specifically, HS subjects showed hypoconnectivity between the DRP and a cluster centered on the right hippocampus extending into occipital regions, left middle occipital gyrus, and calcarine sulcus; and between the DCP and the right middle occipital gyrus/calcarine sulcus, the left hippocampus, and cerebellar areas.There were no other regions showing significant differences in rs-fMRI.For completeness, within-group rs-fMRI connectivity results for each striatal seed and the rest of the brain are shown in Fig. 2B and C. For a full list of significant clusters within each group, see Supplementary Table 1.Groups were matched in demographic variables, and including age as a covariate of no interest in the imaging analysis did not change the results.The hypoconnectivity between DRP – calcarine sulcus, DCP – hippocampus, and DCP – middle occipital gyrus remained apparent at cluster-wise pFWE < 0.05 when alcohol, cigarette and cannabis use were added to the statistical model as covariates of no interest.However, adding these variables as covariates of no interest rendered the reductions in VSi – vmPFC, VRP – vmPFC, DCP – cerebellum and DRP – middle occipital gyrus connectivity no longer significant at cluster-wise pFWE < 0.05.Within HS subjects, linear regression of Z-scores averaged across significant clusters against O-LIFE UE scores revealed no significant associations with either ventral or dorsal striatal connectivity.However, there were trend-level positive associations between positive schizotypy scores as assessed by O-LIFE UE and VSi-vmPFC and VRP-vmPFC connectivity, respectively (F = 4.046, p = .061, R2 = 0.152; and F = 4.403, p = .052, R2 = 0.167).In an exploratory analysis, we further assessed the association between significant clusters and the O-LIFE subscores Impulsive Non-conformity and Cognitive Disorganization in HS subjects.Linear regression revealed no significant associations with either ventral or dorsal striatal connectivity.Because the O-LIFE subscore Introvertive Anhedonia did not differ between groups, we tested the association between this subscore and connectivity indices across both HS and LS.Again, linear regression revealed no significant associations with either ventral or dorsal striatal connectivity.
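The within-group regressions reported above (cluster-averaged connectivity Z-scores regressed on O-LIFE Unusual Experiences scores) can be outlined with scipy as follows; the score and connectivity values are invented placeholders and do not reproduce the study data:

```python
import numpy as np
from scipy import stats

# Hypothetical values for the 19 HS subjects: O-LIFE UE scores and
# VSi-vmPFC connectivity averaged across the significant cluster (Fisher Z).
ue_scores = np.array([8, 9, 9, 10, 10, 10, 11, 11, 11, 12,
                      12, 12, 12, 13, 13, 14, 14, 15, 16], dtype=float)
vsi_vmpfc_z = 0.02 * ue_scores + np.random.default_rng(2).normal(0, 0.1, ue_scores.size)

# Simple linear regression of connectivity on symptom scores.
result = stats.linregress(ue_scores, vsi_vmpfc_z)
r_squared = result.rvalue ** 2
print(f"slope={result.slope:.3f}, R2={r_squared:.3f}, p={result.pvalue:.3f}")
```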
Using novel multi-echo rs-fMRI methodology, the present study found lower corticostriatal resting-state functional connectivity in healthy individuals with high levels of psychotic-like experiences compared to those without such experiences.These findings support the notion that corticostriatal dysconnectivity is involved in the expression of psychotic-like experiences across the extended psychosis phenotype, including healthy individuals, people at CHR of psychosis, and patients with an established psychotic disorder.Interestingly, rs-fMRI striatal connectivity in HS subjects was characterized by both a lower level of positive coupling and a lower level of negative coupling compared to LS subjects.Specifically, distinct functional connectivity patterns were detected along the ventral-dorsal striatal axis; HS subjects showed lower positive coupling between the ventral striatum and vmPFC, and lower negative coupling between the dorsal striatum and temporo-occipital regions including the hippocampus, middle occipital gyrus and calcarine sulcus.Previous studies have shown that striatal dysconnectivity in patients with schizophrenia, CHR subjects and individuals with low-to-moderate levels of positive schizotypy may be characterized by both higher and lower connectivity of different striatal regions.This pattern has been proposed to indicate a potential risk biomarker for psychosis onset in vulnerable individuals.However, the directionality of the changes in previous studies has shown some inconsistency; the above studies reported a dorsal-to-ventral gradient of hypoconnectivity to hyperconnectivity with frontal regions.In contrast, reduced connectivity between ventral striatal and ventral prefrontal areas has been reported in unmedicated patients with schizophrenia, in line with the results of the present study in high positive schizotypy.To our knowledge, there are no previous reports of hypoconnectivity between dorsal striatal areas and temporo-parietal regions in patients that would mirror our findings in high schizotypes, suggesting a lack of continuity of this phenotype across the psychosis spectrum.Further, we found no evidence of an association between dorsal system alterations and symptom scores in our data.We are therefore cautious about interpreting this finding as a reflection of psychotic symptoms.A possible explanation is that, rather than a pathological mechanism, the group difference might reflect a resilience mechanism that prevents the psychotic experiences of healthy individuals with high psychometric schizotypy from becoming clinically relevant.Previous work on schizotypy has reported striatal hypoconnectivity with posterior regions, but lower connectivity with the vmPFC has not been reported.These discrepancies may relate to the nature of the sample, as we used a group of subjects comprising high scorers in positive schizotypy as identified using a measure of subclinical psychotic experiences such as the O-LIFE, while Rössler et al. and Wang et al.
studied low-to-moderate scorers as identified using the Schizotypal Personality Disorder Questionnaire.The differences may also relate to the application of a multi-echo rs-fMRI sequence in our study but not in Rössler et al.'s and Wang et al.'s studies, such that our finding of lower vmPFC-ventral striatal connectivity in the HS group may have been masked in those studies since these regions have low CNR in standard EPI sequences.While we did not detect significant associations between connectivity differences and positive schizotypy scores in the HS group, there was trend-level evidence for a positive relationship between ventral striatum – vmPFC connectivity and the Unusual Experiences scale of the O-LIFE.Due to the limited sample size and low variability along the schizotypy dimension within the HS group, the possibility that there was insufficient power to detect significant associations with UE scores cannot be ruled out.Nevertheless, our findings highlight the promising applicability of multi-echo rs-fMRI methods for the detection of dysconnectivity patterns in a cross-sectional study design.Further research assessing functional connectivity across different groups along the psychosis spectrum with larger sample sizes will help clarify whether rs-fMRI connectivity changes vary according to the degree of vulnerability and of severity of psychotic experiences.Mechanistically, the dysconnection hypothesis of psychosis proposes that the likely neurobiological basis for dysconnectivity would be aberrant neuromodulation.Animal work suggests that altered striatal dopaminergic signaling may disrupt the ventral aspects of frontostriatal connectivity in relation to psychosis phenotypes: for example, in the rodent nucleus accumbens, mPFC afferents are modulated by dopamine via D2 receptors, such that increases in tonic dopamine levels attenuate mPFC inputs.In this context, ventral striatal-vmPFC hypoconnectivity in positive schizotypy could be driven by increased tonic striatal dopamine.Indeed, there is some evidence of an association between schizotypy scores, disrupted dopaminergic neurotransmission, and striatal dysconnectivity, although findings have been less consistent than in frank psychosis, possibly due to high heterogeneity in the experimental designs and methods used.An interesting corollary of the hypothesis that a dopaminergic dysfunction of the striatum in the psychosis spectrum compromises the functional integrity of the limbic cortico-striatal loop is that it might also account for reduced vmPFC connectivity with the default mode network in psychosis patients.Striatal dopamine function has been directly associated with vmPFC-DMN connectivity in a study of the effects of antipsychotics on the DMN, as well as in an investigation of a single nucleotide polymorphism of the D2 receptor gene.Consistent with a disruption of coordinated activity of the vmPFC driven by putatively dopaminergic striatal abnormalities, Wang et al.
report a breakdown of the reciprocal interaction between the striatum and DMN nodes, including the vmPFC, in schizophrenia.An alternative mechanistic explanation could be that the connectivity alterations relate to GABAergic or glutamatergic alterations in the striatum, or indeed that primary vmPFC dysfunction may lead to dysconnectivity, as preclinical evidence suggests that striatal hyperdopaminergia may be a downstream effect of a failure of the mPFC to regulate hippocampal hyperresponsivity.Consistent with this notion, our recent positive schizotypy work found increased resting-state perfusion of the hippocampus in a largely overlapping sample, in line with previous CHR studies.In this study, we extend the existing knowledge by investigating resting-state connectivity in otherwise healthy individuals on the high end of the schizotypy spectrum using multi-echo fMRI data.Our approach is advantageous for two reasons: First, the acquisition of multiple echoes is thought to afford superior noise removal and contrast optimization compared to traditional techniques, yielding better quality data.Second, variation due to illness chronicity, medication status, impaired global function and other illness-associated factors is curtailed as in other studies on schizotypy, but high schizotypes are arguably phenotypically closer to psychosis patients than those in the low-to-moderate range that were previously examined.Thus, our results are an important addition to the literature on the role of striatal dysconnectivity in the development and maintenance of psychotic traits.One limitation is that the cross-sectional nature of the study prevents elucidating whether the observed findings reflect a risk or a resilience phenotype, in particular given reports that high schizotypes have a lower likelihood of developing psychosis than a CHR group.Further, while participants were asked to refrain from taking recreational drugs for two weeks prior to scanning, obtaining a biological measure of drug use on the day of scanning would have helped to verify compliance.The ventral striatal – vmPFC and dorsal striatal-occipital hypoconnectivity were no longer significant after including substance use in the statistical analysis.Finally, our study aimed at elucidating the role of striatal connectivity in the expression of psychotic-like experiences of the positive dimension based on previous literature in patients with psychosis and animal models, but further studies including schizotypal individuals based on negative and disorganized dimensions would clarify whether differences in functional connectivity are also related to negative and disorganized traits.Future work with multi-echo rs-fMRI should directly investigate how striatal connectivity differences relate to markers of striatal dopaminergic function across the extended psychosis phenotype in patients, high-risk individuals, and high schizotypes.This would provide crucial evidence regarding the similarities and differences in striatal connectivity between individuals across the spectrum, yielding clues as to the potential determinants of psychosis risk, the occurrence of symptoms, illness status and severity.Additionally, longitudinal studies should investigate which connectivity changes accompany transition to frank psychosis, as well as potential protective factors.Further, the functional and behavioral consequences of aberrant connectivity in psychosis spectrum disorders should be ascertained in studies combining resting state
and functional measures of, for example, salience attribution.Using a multi-echo rs-fMRI sequence and independent component analysis, we found that high compared to low positive schizotypy was associated with lower functional connectivity between ventral aspects of the striatum and the vmPFC and between dorsal striatal regions and temporo-occipital areas.Given that aberrant functional integration has been implicated in the pathophysiology of psychosis, the present results offer some support to the notion of a central role of striatal dysconnectivity in the extended psychosis spectrum.The following is the supplementary data related to this article.Resting-state fMRI striatal connectivity in high and low schizotypy.Supplementary data to this article can be found online at https://doi.org/10.1016/j.nicl.2018.11.013.GJB received honoraria for teaching from General Electric Healthcare, and acted as a consultant for IXICO, at the time of this study.The other authors declare no competing financial interests.
Disrupted striatal functional connectivity is proposed to play a critical role in the development of psychotic symptoms. Previous resting-state functional magnetic resonance imaging (rs-fMRI) studies typically reported disrupted striatal connectivity in patients with psychosis and in individuals at clinical and genetic high risk of the disorder relative to healthy controls. This has not been widely studied in healthy individuals with subclinical psychotic-like experiences (schizotypy). Here we applied the emerging technology of multi-echo rs-fMRI, which is thought to substantially improve physiological noise removal and increase BOLD contrast-to-noise ratio, to examine corticostriatal connectivity in this group. Multi-echo rs-fMRI data (echo times 12, 28, 44, 60 ms) were acquired from healthy individuals with low (LS, n = 20) and high (HS, n = 19) positive schizotypy as determined with the Oxford-Liverpool Inventory of Feelings and Experiences (O-LIFE). After preprocessing to ensure optimal contrast and removal of non-BOLD signal components, whole-brain functional connectivity from six striatal seeds was compared between the HS and LS groups. Effects were considered significant at cluster-level p < .05 family-wise error correction. Compared to LS, HS subjects showed lower rs-fMRI connectivity between ventromedial prefrontal regions and ventral striatal regions. Lower connectivity was also observed between the dorsal putamen and the hippocampus, occipital regions, and the cerebellum. These results demonstrate that subclinical positive psychotic-like experiences in healthy individuals are associated with striatal hypoconnectivity as detected using multi-echo rs-fMRI. Further application of this approach may aid in characterizing functional connectivity abnormalities across the extended psychosis phenotype.
223
Persona-centred information security awareness
Information Security issues are now prevalent concerns for organisations, specifically where they directly impact upon regulatory, risk-based or reputational concerns resulting from intrusions and losses of data.Industry reports such as the PwC 2015 Data Breach report highlight that a large number of internal data breaches are still directly attributed to human factor issues, whether intentional, accidental or malicious.Businesses can no longer rely solely on process and technology for risk reduction of security issues, and need a greater consideration towards integrating people with process and technology.Although mandated for some, security education, awareness and training can support general understanding of issues through mandatory or annual refresher content.Several approaches exist for addressing security awareness; however, their focus is generally towards achieving compliance aspects, for example applying and maintaining risk-reducing controls for data confidentiality, integrity and availability.In most cases, a blanket approach would be applied without tailoring to the actual human factors involved.Human interaction, which is central to business, processes, and system interaction, therefore needs to be understood if security awareness needs are to be effectively addressed.Research suggests current security awareness approaches do not entirely meet this requirement of designing for the user.As a means to bridge this gap, an opportunity is presented to explore Human Computer Interaction (HCI) techniques that could be incorporated into a security awareness approach.We illustrate the application of such techniques through the use of personas.Personas are archetypical descriptions of users that embody their goals.By representing archetypes of business users, personas offer insights into users that may otherwise be overlooked.Personas as user models can also be useful for identifying threats, vulnerabilities and likely areas of risk in their given environment.The output of the personas could, therefore, be used to tailor security awareness needs using relevant topics and content addressing current business and people risks.Personas may also be incorporated into the awareness content itself, or potentially used for other process and procedure modification or security and risk assessment purposes.To explore the potential of adopting a user-centred approach to security awareness, this paper illustrates an approach where the creation and application of personas was used to address business-specific human factors within awareness activities.Our approach uses personas as a means of identifying audience needs and goals for security awareness requirements.These aim to address relevant human factors to reduce risk and improve a security-minded culture.To demonstrate how personas may be integrated into an on-going cycle of security awareness, the steps taken leading to the design and implementation are incorporated within six on-going awareness programme steps.These build on positive features of other awareness approaches where they apply, making it relevant to persona integration and business-tailored security awareness output.This can be embedded into business-as-usual activities with 90-day cycles of awareness themes to ensure a more frequent, up-to-date approach towards addressing relevant security risks through security awareness.To provide an overview in support of our approach, we begin by first considering existing frameworks and communication approaches for security awareness in Section 2.We
consider current challenges, benefits and drawbacks, and how the use of personas may be integrated.Based on the research findings, we address the research gap by presenting a process for a persona-centred methodology in Section 3, integrating personas to help identify and reduce risk through tailored security awareness.Findings from testing elements of the approach with a case-study business, referred to as Company X, are detailed throughout Section 4.In Section 5 we discuss observations from the application of one approach towards combining HCI techniques into security awareness design requirements, aiming to reduce or mitigate Information Security risks.We then conclude in Section 6 and detail directions for future work.Raising awareness and changing security behaviour can be challenging, given the audience must be engaged with the reality of threats, and understand the process for identifying and addressing issues or concerns.The audience must then be motivated into applying positive behaviours, changing risk perceptions and engrained behaviours, supported by relevant topics that are not overly information-heavy.Bada et al. identified that annual awareness and compliance-orientated programmes were often treated as tick-box exercises and did not always lead to desired behaviours.Some approaches rely on invocations of fear to change behaviours, or result in a lack of motivation and ability to meet unrealistic expectations, which may derive from poorly designed security systems and policies.In some cases, security awareness goals were clearly identified and communicated.However, on a cultural level, people did not feel a need to browse internal security guidance as users did not believe they had security concerns.Some felt a lack of reward or recognition for applying positive behaviours, or did not feel empowered to make information or technology security decisions.Awareness programmes were more likely to be successful when receiving top-level buy-in, business-wide support engaging with awareness, commitment and co-operation towards a security culture, using a participative creative design process tailored to business needs.Awareness should be communicated by a variety of means relevant to the business, its people and culture, and is best reinforced using an on-going 90-day programme.Delivery of awareness content should be engaging, appropriate and on-going, with a range of relevant topics that are targeted, actionable and doable, and that provide feedback to help sustain people's willingness to change.Communications must reinforce each other with a consistent message delivered across a number of channels to a culture addressed in a synchronous manner that supports the goals of the awareness programme.Consideration should also be given to effectiveness across genders, generations or roles, and to communicating how to achieve something rather than dictating what should not be done.Baseline measures should be taken to establish needs and metrics relevant to the target audience and programme output.Measuring the level of awareness is, however, more complicated, given that questionnaires can indicate a level of knowledge, but may not imply levels of motivation to improve behaviours.Awareness evaluation may be built in using evaluation cycles embedded into the programme's awareness activities, and be considered at the levels of Management, Audience, and Effectiveness against measurable performance objectives of awareness topics.Breach notifications or queries may also increase, therefore clarity may be
required as to whether more issues are occurring or the increases are due to raised awareness.An awareness programme should use simple, consistent rules of behaviour for employees, offering an increased perception of control and better acceptance of suggested behaviours.Cultural differences in risk perceptions should be considered when embedding positive security behaviours with support, knowledge and awareness.Responsibility, trust, communication, and co-operation are said to be the four cornerstones of an engaging security culture.Using an approach that motivates and empowers employees to play an active role in security is important towards achieving awareness and positive behaviours.Awareness output should be tailored to employees' organisational context, addressing specific security needs on an on-going basis to reinforce awareness, embedding security practices into the normal routine of a security-minded culture.ENISA awareness initiatives use a three-phase approach of Plan and Assess, Execute and Manage, Evaluate and Adjust.This includes sub-steps covering resources, budget, the project team and programme materials; defining goals and objectives; programme implementation; evaluation of effectiveness; and updates ready for the next cycle to begin when required.The NIST special publication by Wilson and Hash offered a comprehensive guide towards designing, developing, implementing and maintaining a framework using a life-cycle approach to address Information Security risks and awareness.A needs assessment would first be conducted establishing business needs, risks, resource, geography, roles, responsibilities, budget and other project-related dependencies.Following implementation, feedback mechanisms with manual and automated monitoring and tracking should assist measurement of the programme's effectiveness.Furthermore, new approaches, policies, procedures and technology should be considered and incorporated into revisions of the programme, ensuring it remains effective and up-to-date.Other similar life-cycle approaches include The Security Culture Framework, which offers a generic approach towards promoting security awareness.This aims to set organisation goals and measures, involve the right people in the project cycle, understand the audience and build trust, choose relevant activities and topics, then plan, execute, measure and revise the programme.Alternatively, the Security Awareness Cycle establishes baseline metrics, identifies the relevant audience, desired behaviours and high risks, and solutions to facilitate a behavioural change mitigating risks.The framework by Maqousi et al.
had similar steps, but gave a specific focus towards methods of delivering computer- and web-based security awareness.An awareness programme can also be approached as a branded marketing activity promoting Information Security to employees as a product.This incorporates techniques such as surveys and focus groups to understand required design content and context, with added branding addressing elements of emotions, values, impressions and expectations, supported by relevant security metrics.Awareness content and communications may cover a range of delivery methods and topics applicable to business requirements and available resources.This should incorporate ease of use, scalability, interactivity, and accountability, with continued improvement being a main goal.Awareness material should be developed towards relevant roles, behaviours and required skill-sets applicable to functions.Another approach uses an on-going life-cycle for the awareness communications, avoiding ineffective repetitive general advice, and tailored to business needs and behaviours.Communication methods could include participatory methods such as games, quizzes, short video clips, or short topic briefs included as part of team meetings, which may incorporate rewards or recognition for positive behaviours.Other methods include face-to-face training sessions, e-mail messages, presentations by speakers, guidelines, booklets, posters, and awareness training workshops.Gundu and Flowerday note that security campaigns may require additional budget in terms of direct and indirect costs included in production and maintenance of the programme, although it is suggested that use of e-learning can reduce distribution costs.Online tools could also enable greater user interaction, such as online forums, news sections, alerts, and surveys.This could be delivered by a range of web-based tools, and maintained with appropriate up-to-date content.A review of relevant awareness metrics could then be conducted by a specific reviewing team, or by administrative and technical staff.Many of the awareness frameworks and approaches reviewed relied on some form of data collection to understand the environment, and in varying degrees, its people and culture.Personas – archetypical descriptions of users that embody their goals – could instead be used as a tool primarily within the design stage to address these areas.Within focus groups or team meetings, considering how personas might behave is also a useful technique for on-going awareness building.Rather than modelling inaccurate general stereotypes of users, personas representing archetypes of business users can be used.This approach was arguably first popularised by Cooper and has since grown in usage across different domains, such as marketing, websites and interface designs using varied design approaches.Nielsen identifies four common design approaches used for personas: (1) a goal-directed perspective that considers psychological aspects of the design process; (2) a role-based approach focusing on specific target roles and collecting a range of data through qualitative and quantitative methods; (3) a fiction-based perspective usually designed using intuition or assumptions to formulate the personas; and (4) an engaging perspective created with the use of data.According to Norman, fiction-based or assumptive perspectives are used to create an empathetic focus in the design process, whereas engaging perspectives provide a story-oriented approach towards visualising character descriptions using a narrative
building the story's beginning, middle and end, supported by story and scenario contexts.When creating personas, the design team is likely to include a range of roles who may begin by obtaining and analysing background information and data from various sources, leading to a view towards areas of user focus.This view may be debated, agreed and refined, leading to representative personas that can be built upon with relevant supporting scenarios.Personas should be generative and engaging, using scenarios to apply them to relevant situations.This approach is similarly used in marketing, where persona-focused storytelling is essential to branding.Atzeni et al. provide an approach that aims to develop Attacker Personas using a process of data collection, reference elicitation, and affinity diagramming to graphicalise the problem space, which helps with characteristic development, and concludes with the creation of the persona.Faily and Fléchais adopt a user-centred design approach resulting in the creation of personas that may be used for a number of analysis purposes.Coupling personas with relevant scenarios and expected behaviours can be an effective means of validating the assumptions made.Acceptance is an important feature of the design process, and is based on review, opinions or feedback from the design team's participatory interaction.If acceptance becomes an issue, it is necessary for designers to argue the characteristics of personas.Faily and Fléchais illustrate an approach for doing this using Toulmin's Model of Argumentation to justify a claim about a characteristic, strengthening the foundations of the persona, while guiding the elicitation and analysis process.Personas may then be disseminated using programme-specific scenarios, and should be reviewed annually to confirm the relevance of certain personas, carry out updates to the descriptions, or create new versions when required.A more general approach taken by Stewart is based upon the Pareto principle, whereby in the context of security and awareness, 80% of the risk to be addressed derives from 20% of the topics to be covered.Therefore, the challenge of identifying relevant topics is addressed using personas.Interviews are undertaken with a target sample of the audience, which may equate to between 8 and 12 personas covering a range of departments.Relevant risks and behaviours towards security are assessed, leading to the creation of targeted awareness materials, which may incorporate personas enabling their characters to become embedded within the organisation.The output incorporating the persona characters and communication of the persona-based awareness is suited towards internal or online means of standard communication, posters, or flyers.Hand-outs or novelty giveaway promotional items may also act as future guides or reminders towards awareness.Research carried out by Hochleitner et al.
gave a specific focus towards giveaway promotional items that integrated personas.This considered seven different marketing-styled items providing information relating to each persona.When comparing the effectiveness of long-living marketing materials against consumables, the consumables such as birthday cake, QR code cookies, or bottled drinks offered a fun and quirky interaction towards the personas, yet offered the least amount of information.Long-living materials such as a persona savings box or posters presented more information about the persona, but had the least interactivity.This suggested technological applications such as online quizzes could be incorporated to improve the efficiency and interactivity of long-living materials.Previous work by Pruitt and Grudin identified potential issues when using personas.For example, incorrect construction of personas without using relevant data led to unbelievable character types being created.Other issues include a lack of budget or resource towards the design, implementation, and suitable delivery methods.Or, the potential use of personas was not maximised, contributing to a lack of understanding towards how personas could be applied across the development and implementation cycles.Personas may however be maximised by introducing initiatives such as a persona “Fact of the Week” campaign that could utilise email as a delivery medium.In summary, for personas to be successful they should be grounded in data relevant to the business and their employees, and support focused requirements using participatory or cooperative design methods that give focus towards users.Integrating the personas with story-based scenarios, and considering how each scenario would apply to the persona, can achieve engagement and anticipation towards user behaviours.Moreover, maximising the use of personas increases the likelihood of success within a programme.Personas also have the potential to be used within the Social Engineering card game as an awareness activity in group or team meeting environments, or be integrated throughout the business, embedding them into the culture.When considering the related work, despite a wealth of security awareness approaches, many focus on standard compliance-related awareness topics.Very few, however, really consider relevant business-specific human factors identifying the actual security awareness needs of people interacting with process and technology to support business goals.None offer a consistent HCI method of integrating human factors into security awareness using personas.When designing for the user, the integration of HCI tools such as personas was found to offer requirements engineers or user-experience designers an important and useful means of understanding the user audience's behaviours, needs and goals.Personas also offer potential towards security requirements for identifying risks that may otherwise have not been considered.The concept of using personas for awareness was discussed along with approaches towards incorporating personas in awareness materials.To align HCI concepts with security awareness, we considered areas of benefit for integrating personas to identify business-specific needs and goals of users.This would aim to provide a means of addressing human factor related security risks, leading to a tailored approach to security awareness activities.From the review of current awareness approaches, we identified challenges and strengths of programme steps and communication approaches.The need for interaction,
participation and co-operation was presented as a consistent theme adding greater success factors to awareness programmes.We then considered how personas could integrate into a cycle of steps and activities for design and implementation of security awareness.Most widely used awareness approaches reviewed contained between three and seven steps to establish needs and goals, design and development, implementation, and a review and measure of effectiveness.Methods from these approaches best suited to assist the integration of personas were identified.This enabled general steps and considerations to form the basis of our proposed methodology, some of which were tested with Company X.Fig. 1 shows the main steps of the life-cycle, beginning with a preliminary step and continuing with six on-going steps inheriting many of the positive consistent features discussed within Section 2.However, the key feature of this particular approach is the use of personas at the design stage, with options for further use within the campaign communications.Personas could also be utilised outside of awareness activities, addressing other security risks.By following the steps of the methodology, we aimed to provide a framework offering specific, measurable, attainable, realistic and timely objectives towards meeting the goals.This allows flexibility of integration into business-as-usual activities, while adapting the process to suit business type and needs.The process can be maintained by a working group providing oversight and review on a quarterly or annual basis.Personas may then be refreshed and updated, ensuring they continue to accurately reflect business user needs and security risks related to the current threat landscape.Before a programme can begin, it must be driven by a business need and supported requirement for implementation.To achieve this goal, a business decision to commit to on-going Information Security awareness should be given, with senior management buy-in, support and commitment.A stakeholder team of representatives from key departments across the business should be initiated, conducting introductory meetings to set up the initial project team and objectives.Inter-departmental co-operation towards engaging with awareness activities and promoting a security culture is paramount to its success.To focus and prioritise the programme output, activities are used to elicit business needs and goals towards Information Security awareness.These include assessments, surveys and focus groups to establish business needs and current security culture, locations, risks, roles, responsibilities, resource, budget, and any other identified project-related dependencies.Current awareness threat trends should also be considered, together with issues or breaches where human behaviour was the likely root cause.A current snap-shot of metrics relevant to Information Security issues should also be established.Finally, general requirements and the target audience should be confirmed before choosing a theme, and conducting topic-specific research.This step is perhaps the most crucial given the dependency on integrating personas as a tool to identify human factors and security risk.This should begin by organising interviews with a random selection of users from across the business.The interviews are used as a basis to gain necessary data for constructing personas.Interview questions should be prepared before conducting interviews for persona data collection to ensure context-specific questions are incorporated.Interviews
can be recorded and the user responses transcribed to enable elicitation of relevant behaviours from the data.Alternatively, good note-taking during interviews could be used, although it is then harder to elicit data in detail.The identified characteristics are written on post-it notes as factoids, then grouped based on behavioural clusters.Relevant information to support each persona is drawn from the grouped factoids, then presented as affinity diagrams leading to the creation of persona templates.Based on findings from the empirical data, these templates are elaborated into detailed archetypes of typical users suitably tailored to the business.Throughout the creation of personas, it is important to maintain traceability from the data to the finished persona to ensure the credibility of each persona and their characteristics.Artistic licence may be permitted when bringing to life the archetypical character, such as choosing a representative photograph to help humanise the persona, or describing their back story.However, to ensure they are archetypes, the core of the persona, e.g. their attitudes, motivations and business context, must be derived from the data.Each persona can then be presented in such a way as to begin to reap the benefits of business-specific user models brought to life as recognisable employee archetypes.When repeating the awareness cycle, personas may be reviewed during the following cycle, but may be replaced with new versions annually, or as required.This is to ensure that as threats evolve, or security culture and awareness improves, personas continue to accurately reflect current business user needs and security risks, which we can then address through tailored security awareness.A critical analysis is undertaken against the identified behaviours and characteristics of each persona.This is considered against business needs, risks and requirements to establish and prioritise awareness needs.Using simple, consistent rules of behaviour, desired behaviours can be considered with realistic expectations to integrate persona needs using scenario-based contexts.Business risks, issues, needs and goals are considered against persona roles, together with behaviours, cultural contexts, differences in risk perceptions, and the applicable skill-sets required to apply the learning.Stakeholders are encouraged to participate in the creative design process.Any other relevant points and observations can then be considered before target behaviours and areas for relevant awareness content delivery are agreed.Design requirements to address identified security risks are derived from the critical analysis and topic-specific research.This provides direction for design and development of tailored content, whilst utilising available resources and means of communication.Updates to future iterations that may currently be out-of-scope are considered for planning at a later stage, e.g.
system-based awareness.Delivery methods and resources should be agreed considering ease of use, scalability, interactivity, and an element of accountability.Awareness that engages people on a personal level should be applied, which can motivate and empower employees to play an active role in Information Security.Content requirements are specified for a consistent message delivered across various channels.Branding can be incorporated, making the material relevant and personalised to the organisation, its goals and people, promoting Information Security to employees as a product.Other required styling may be incorporated to address elements of emotions, values, impressions and expectations leading to relevant design content and context.How the personas can be incorporated into awareness material output can be considered, thus maximising their potential where possible.Participatory design methods for communication may incorporate games, quizzes, short video clips, or short topic briefs included in team meetings.Also, posters, novelty or promotional items, Web and system-based development tools.Or, guidelines, booklets, on-going news sections, online and Social Media alerts, surveys, or forums.Industry proven security best practices could also be used to provide content of awareness material.There are a number of options available, however these should be tailored to the context of persona and business needs, using available resource and budget.To ensure a timely implementation, a roll-out strategy should be prepared taking into account the required staff availability and other business priorities.The delivery of tailored content and communication methods around other priorities should be planned and implemented.Baseline metrics relevant to the audience and awareness cycle should be established, and the effectiveness of certain elements of the awareness activities is measured where required.The Review stage provides feedback-loops to identify the effectiveness, benefits, drawbacks, and improvements for the awareness cycle.Evaluation metrics might vary in type dependent upon the nature of the business, but may measure effectiveness of the awareness cycle against baseline metrics.On-going effectiveness may use feedback mechanisms, such as manual and automated logging, system and internet logging, monitoring and tracking.Also, issue reporting, root-cause analysis, Helpdesk trends, e.g. 
password resets, desk and environment spot-checks, surveys or questionnaires, and internal Phishing or Social Engineering campaigns.Review the findings to understand and agree required updates or modifications to policies, procedures, or the awareness process.Consider new technology or threats to be incorporated into subsequent revisions of the cycle, ensuring it remains up-to-date and effective against current business and persona needs.Continue back to Step 1, repeating the on-going 90-day cycle by first establishing the current business risks, threats and goals for the next cycle.To support design, implementation and testing of certain elements of the persona-centred awareness methodology, we applied our approach to a case-study business referred to as Company X.This specifically helped us validate the notion of personas being used as a means of addressing human factors security risks, which guides the selection and design of tailored security awareness.Also tested were specific parts of the proposed process steps that could be used to integrate the use of personas within a security awareness cycle.Due to limitations of the available time-frame for testing, combined with other business priorities for Company X, some task outputs from the preliminary step and Step 1 of the methodology were assumed to some degree.This was supported by interviews and observations confirming senior management and the business as a whole were already very security focused and committed.The willingness to participate in trying something new to improve their security awareness was another positive indication.Testing of the project therefore begins with the relevant theme and topic-specific research at the end of Step 1, findings of which are discussed in Section 4.1.Preliminary discussions with Company X took place to manage requirements and expectations, and to understand the culture, business needs and goals.It was determined that topics relating to Social Engineering would be the chosen theme for the current instalment of the awareness programme.Social Engineering can be described as a means of manipulating people by deception into performing an action or giving out information, which can bypass or undermine other technological security controls.Card games such as Ctrl-Alt-Hack are designed to help improve security awareness in group-based environments.A similar type of game developed by Beckers and Pape acts as a training activity on Social Engineering techniques with the aim of identifying possible weaknesses.As part of our topic-specific research, we evaluated this Social Engineering card game, where a number of attack types were carried out by players against a pre-designed persona, their system or workstation.The game-play was at first a little slow whilst understanding the game mechanics, but improved with repetition, allowing focus to be on the awareness of attack types and styles.Participants had a good level of understanding towards the subject matter, so it was unclear how less technically experienced people may understand the game or terminology.This nevertheless suggested the game may be an ideal candidate for implementation.A number of approaches towards the design and use of personas were considered.Given the nature of the application and context of the business, a goal-based approach was largely adopted with some role-based elements.The process for persona creation can be viewed as similar to that of Atzeni et al.In preparation to begin the process, interview questions were created for data collection, leaving scope for
additional questions where required.These would aim to elicit relevant information from employees describing behaviours and perceptions relating to the business and Information Security.Nine interviews were conducted with randomly selected employees based on their availability during the interview day.According to Yin, randomisation is also said to assist with data validity.The interviews offered insights into day-to-day life covering a range of roles and experience at Company X, demonstrating a generally security-minded culture with a positive attitude towards Company X.The audio-recorded interviews were transcribed to give text-based accounts, then reviewed to identify relevant behaviours, characteristics and perceptions that could be extracted.The identified information was written onto post-it notes and placed on a wall into categories based on the approach used by Cooper et al. to broadly identify: Aptitudes – what education and training the user has, and ability to learn; Skills – user abilities related to the product domain and technology; Activities – what the user does, frequency and volume; Motivations – why the user is engaged in the product domain; and Attitudes – how the user thinks about the product domain and technology.Through analysis, 281 relevant pieces of data were identified corresponding to various activities, behaviours and perceptions.These were sub-grouped into variable types such as internal and external motivations, or differing attitudes towards awareness, risks or challenges.The data were duplicated into the affinity diagrams for further visual analysis.The Activities grouping, for example, and its six distinct sub-groupings demonstrated a likely split in persona roles.It should however be noted that although the design approach for the personas lends itself towards a goal-based approach, it was useful to determine the likely roles to help with the representative split of the other data categories.As a means of comparing the persona role data, Company X recommended the use of a Radar diagram to visually demonstrate the most common roles or activities from the data listed; these were often used within Company X when creating user-experience personas for system design purposes.After a number of tests, the best approach to comparing data in the diagram was to divide the generally perceived percentage of workload for each person interviewed, then re-order the rows of data relating to interviewees so that the data visually flowed and activities crossed over various role types.For example, a typical person from IT may devote 100% of their day to technical activities, whereas a person at Manager level would split their time between client and team management or other activities.Based on the output of data specific to the business, roles, users, and behaviours, it was determined that three personas could be derived, each of which presented relevant characteristics and behaviours to address.To keep within the context of Company X, gender and age of the personas were derived based on the interviewees.For example, persona one was based on two males and one female, generally in their 20s.The remaining behaviour data were colour-coded consistent with the Radar diagram.This provided visual validation of relevant behaviours and perceptions applicable to each persona, enabling traceability from the persona to the affinity diagrams and relevant factoids applied, originating from business users.
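A radar (polar) chart of the kind recommended by Company X for comparing the perceived workload split across activity types can be sketched with matplotlib as below; the activity categories and percentages are invented for illustration and are not the study's data:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical activity categories and perceived workload split (%) for three interviewees.
categories = ["Technical", "Client work", "Team management", "Admin", "Research", "Support"]
workloads = {
    "Interviewee A": [70, 5, 0, 10, 10, 5],
    "Interviewee B": [20, 40, 25, 10, 0, 5],
    "Interviewee C": [10, 30, 40, 15, 5, 0],
}

# One angle per category; repeat the first point so each polygon closes.
angles = np.linspace(0, 2 * np.pi, len(categories), endpoint=False).tolist()
angles += angles[:1]

fig, ax = plt.subplots(subplot_kw={"polar": True})
for name, values in workloads.items():
    values = values + values[:1]
    ax.plot(angles, values, label=name)
    ax.fill(angles, values, alpha=0.1)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(categories)
ax.set_ylim(0, 100)
ax.legend(loc="upper right", bbox_to_anchor=(1.3, 1.1))
plt.savefig("persona_role_radar.png", dpi=150, bbox_inches="tight")
```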
Images representative of each persona were sourced online under a Creative Commons License, enabling each persona to be viewed as a fictional person, bringing the three archetypes to life.The images were also helpful towards visualising and reflecting on the persona as an archetype, not a stereotype, when applying a story-based narrative aligned with the main behaviours and perceptions.When validating the design of the personas with Company X, although the persona descriptions were deemed appropriate representations of business users, the wordiness of the full personas was noted.Therefore, it was suggested that the descriptions could be broken into bullet points presenting an overview for each persona.This prompted development of a further Lite version of the personas, using another approach by Cooper et al. to identify each persona's Life, Experience and End Goals.The full personas were reviewed to identify descriptions within these categories and refined to a one-line summarised statement, resulting in a more concise overview for each persona.It could be argued this summarising approach may have been used with the post-it-note wall; however, it was important to maintain a high degree of fact-finding and original context as closely as possible.By doing this, the intention was to strengthen data validity by providing a true account within the main persona templates, allowing for further refinement if required.This reduces the risk of persona characteristics becoming overly diluted or deviating from the data they were based on.With the personas designed and validated, Company X agreed relevant awareness needs and learning styles applying to each persona could be analysed.However, the lite-personas offered less applicable detail in comparison to the full personas, suggesting they may be better incorporated into outputs and communications.The full personas were therefore used to elicit relevant needs, points and observations indicating learning styles and the level of detail required.This was achieved by considering how each persona would act or respond in a given business scenario.Their motivations, attitudes and understanding could then be analysed to identify any weaknesses towards security.The personas did, however, demonstrate a good level of understanding or existing awareness, and showed that some mechanisms were already successfully used within Company X.During analysis, this information was further considered in the context of the company, its culture, current processes and procedures, workloads and the type of clients they work with, and how the theme of Social Engineering applies.Also of consideration were the Cyber Security Essentials and ISO 27001 accreditations in place, with the risks, needs and requirements for maintaining these.ISO 27001 audits were frequently carried out.Therefore, practising security was essential to the business given their types of clients, ranging from small businesses to government organisations, whereby security clearances were required for applicable employees.This was also reflected within interviews and observations, where certain information could not be discussed or revealed.Findings from interviews and observations determined Company X operated primarily in one location with a secondary London office.The culture was fast-flowing, energetic, technology orientated and generally security minded.A positive ethos was evident with a desire to continually improve, being the best at what they do.Security needs were balanced, for example, with internet or Wi-Fi connections, where employees need web access for business and personal purposes to
social media or streaming services, using varied systems or devices.Technological controls were, however, in place to integrate security seamlessly, many of which were beyond the users' knowledge or control, supported by a number of relevant policies and procedures addressing human interactions.The calibre of new and existing employees, covering varied age ranges and backgrounds, appeared to be maintained to a high standard.New employees were given a detailed and well-presented company handbook, and follow a staggered induction programme to introduce the new starter to company life, expectations, policies and procedures, including general security awareness.Company stand-up meetings take place on a weekly basis for general updates, and other team meetings may be carried out when required.People matter to the business, so there was a good support structure in place and usage of the internal online system as the central portal for information was encouraged.As with some employees, persona three occasionally travels to meet clients.External security awareness is a consideration for the business, as secure external access to systems for permitted employees was important for business operations.Clients and other visitors may also arrive at Company X for meetings, so there were layered security procedures and awareness on how this should be managed, along with procedures for discussing information with third parties.Most employees observed had a good level of understanding towards technological subjects and terminology.However, despite all three personas appearing technically minded, albeit at differing levels, some consideration should still be given towards less technical areas of the business or new starters.The next stage of analysis reconsidered each of the persona needs from the context of the business overview.This established the most relevant approach towards design and implementation of awareness content tailored to business and audience needs.For example, workloads were high and time management was maximised where possible to meet business, client and security needs.To be most effective, it was identified that methods should therefore cause the least interruption to daily activities, be interactive or participatory, and be incorporated into existing team meetings.This could include the Social Engineering card game considered in Step 1, quizzes, short video clips, topic briefs, or guidelines, preferably bite-size and to-the-point with user-friendly language.This would likely suit all persona needs.However, for persona three, there was an appetite towards deeper and broader content for certain topics, and the use of industry-based social media security updates for threat awareness.Awareness needs may be further supported by displaying relevant awareness posters and utilising the internal online system implementing awareness material, alerts and on-going news.Use of desktop backgrounds and screensavers was discussed with Company X.
However, based on their usability experience, for desk-based employees background images are often obscured and overlooked, and screensavers become annoying after continual display over time.It was therefore concluded these may be counter-productive and would not be incorporated.If budget permits, awareness styled novelty or promotional items maximising use of the lite-persona identities within the designs could provide opportunities for company branded take-away items.This was, however, out of scope for testing, as was the development and introduction of a system-based awareness tool.It was acknowledged such a system could offer a number of benefits incorporating refresher training and awareness exercises integrating the personas, and be supported by record keeping functionality.This would likely help reinforce awareness needs, thus preventing or reducing the likelihood and impact of security-based risks, and was a likely consideration for the future.A cost-benefit-analysis may wish to be considered to examine advantages and disadvantages of developing a system in-house compared to that of existing third-party systems or services.As with many awareness approaches, this may be difficult to quantify financially.However, when compared with the potential of costs, losses and reputational damage resulting from a data breach, the on-going system costs for an awareness tool or other recommended activities may be minimal in the long-term by comparison.After concluding the analysis in Step 3, recommended primary awareness communications were selected.These included the Social Engineering card game, short topic briefs, quizzes, factsheets, and short video clips, incorporated into existing team meetings.Secondary communications included relevant awareness posters, use of the internal online system to publish on-going updates, short guidelines, booklets or factsheets, integrating personas where possible.The thematic needs of Social Engineering awareness were considered against business-specific persona needs and behaviours.Findings indicated applicable topics should include visitor and ID badge requirements, prevention of unauthorised access to systems or buildings, shoulder surfing, reverse Social Engineering, insider targeting, and types of Phishing.Web users would benefit from awareness of online attacks such as water-holing and pop-up windows, and risks towards themselves or the business when using social media – and what to do in the event of these occurring.In all cases, there should however be an acceptance towards areas of human factors that may be vulnerable and could therefore be improved upon.For example, persona one would specifically benefit from awareness of Voice of Authority or Third-party authorisation attacks.Persona three would benefit from awareness of preventing in-person Social Engineering attacks whilst working externally.A number of these relevant topics were addressed by the Social Engineering card game.This provides for interactive and participatory awareness building towards risks of Social Engineering, where the purpose is to explore attack scenarios and approaches applicable to Company X.When designing the game board, it is suggested this should duplicate the actual business layout to benefit from its familiarity towards locations of systems, devices or other attack vectors.However, in agreement with Company X to reduce security risk towards the business, a generic floor plan was instead created with some basic similarities, meaning it was still fit-for-purpose meeting 
the needs of Company X.Within the game, each player would carry out an attack on one of the personas using the cards selected.The persona would be allocated their workstation within the game board.The attacker has to determine whether they are an insider or outsider, and how they would gain access to the building, systems or data relevant to the persona.As a group, the likelihood of applicability and success of the attack type is discussed, then concludes a score for the player, before moving on to the next player.Secondary communications may incorporate posters from online awareness resources, a selection of which were printed and discussed with Company X.These covered Social Engineering topics ranging from simple and to-the-point, detailed, graphical, movie style and comical cat posters.When considering the personas and culture, Company X determined the comical cat posters would likely be most accepted.Continuing to support the theme, quizzes could offer fun bite-sized awareness.One option was to use free and reputable online quizzes or a downloadable Phishing e-learning module, used individually or as a fun group or team meeting awareness building exercise.An introduction to the team meeting and the theme of Social Engineering could utilise short videos, such as an interview with insights from professional Social Engineer Jenny Radcliffe, who touches on many key points relevant to businesses and Social Engineering.Company X determined future iterations of the cycle could include awareness videos made with in-house technology, or demonstrations could be arranged to show the ease of system exploits.Internal Phishing campaigns could be developed, or a Social Engineer Penetration test could be arranged.However, before activities of this nature would be implemented, it was confirmed considerable discussions should take place to identify advantages and disadvantages of such a test.For example, where this may create a risk of distrust between the business and its employees.The focus therefore prioritised the implementation approach agreed with Company X based on the personas.As time required for full implementation and review would fall outside of the testing time-frame with Company X, it was agreed a primary and secondary communication method would be tested.Testing of the secondary supports validation of the targeted awareness material and design based on persona needs.Aspects of this are considered when testing the primary method that supports validation towards incorporating personas within awareness output.Two simulated team meetings were used whereby the card game and a review of preselected awareness posters could be tested.Each meeting lasted up to an hour, with the first group consisting of four employees from more technical roles.The second group had three employees from less technical roles, supported by the security manager with the game-play.Both groups were introduced to the purpose of the meeting, the design and use of personas, and how they applied to company awareness, the game and floor plan.The more technical group were able to understand game terminology, scenarios and principles with ease.Much discussion time was spent debating in technical terms how attacks may be improved at the company using particular approaches or technology.The less technical group were slower by comparison to understand the game, and were appreciative of the manager support helping describe attack scenarios and principles by relating them to the business using analogies.This promoted further 
discussion where they identified techniques that may be used in the public domain. Both groups engaged in fun discussions throughout each round of the game, demonstrating to some degree that the game was creating awareness through discussion based on the game content. In both cases, the floor plan became almost redundant and the groups instead relied on discussion and visualisation to walk through attacks within the company. The concept of using personas to identify and tailor awareness needs and their integration into the game output was generally understood. However, for game-play, group members did not have time to fully absorb the persona templates, and were more distracted by the need to understand the concept of the game and Social Engineering. Participants found the use of personas as victims useful, as most group members could imagine working with them or saw similarities to other employees. For example, both groups identified that persona one was susceptible to Voice of Authority attacks. Interestingly, this part validated the findings of Step 4, where this had previously been identified. Some group members felt a focus on how they themselves may be a victim would have been more useful. That said, throughout the game there were constant reflections on how the scenarios may apply to themselves, colleagues or even their family. Therefore, it could be argued this need was still met, thus promoting awareness. After game completion, group members individually provided feedback towards the game and its integration with personas. The remainder of the session then turned focus towards the preselected awareness posters. Ten posters were sourced online and two posters were created to represent a theme that was basic and to-the-point. Group members individually reviewed the posters, considering which posters they believed would be appealing and effective in raising awareness within the company culture. This offered a sense of empowerment and participation towards internal awareness activities. The findings of the game and poster reviews are discussed in Section 4.6. Given the project time-frame and other business priorities, it was not possible to obtain a baseline metric towards measuring the effectiveness of the awareness cycle. Furthermore, at the conclusion of the activities, it was not appropriate to apply and review the manual or automated feedback mechanisms suggested in previous sections. These were considered long-term measures that would be reviewed after the completion of the cycle, which would end outside of the time-frame of working with Company X. However, in addition to the verbal feedback from management and staff regarding the integration of personas, two opportunities were used to gather feedback towards the effectiveness of the awareness activities. This feedback process assisted in validating certain parts of the approach. A game review form was used to gain feedback towards the Social Engineering card game that integrated personas to assist in raising awareness of Social Engineering techniques. A poster review form and guide sheet were used to gain feedback towards the likely appeal and effectiveness of the awareness posters previously selected based on the personas. Poster findings demonstrated a varied appeal in both groups, as individuals had differing likes and dislikes towards the aesthetics, wording or overall appeal and use in the company culture. This suggested that regardless of gender, age or technical expertise, each person had different tastes, meaning a range of posters should be considered. Feedback
indicated which posters were likely to have a positive impact, although the long-term impact would need to be considered at a later stage, including whether they would be overlooked after the 90-day cycle. Interestingly, the cute comical cat posters were considered fun and humorous, which suited the culture, although other posters could still be an option. This finding in particular validates the determination Company X made in Step 4 that, based on the personas, the comical cat posters would likely be accepted. Game findings suggested the card game was well received and did promote awareness. The more technical group members felt they were more familiar with the topic and gained the least awareness, whereas, once the game-play was understood, the less technical group gained the most security awareness. When combining these findings, we found that participants were at first unsure or unaware of the persona concept. They were more used to stereotypes found online or in magazines. Once they understood these were instead archetypes based on real data from their colleagues, they were better able to appreciate the benefits. This was evident throughout the game, where participants could identify with the personas as other employees. At an employee level, the personas were more visible within the output, but less attention was given to the awareness being based on personas. At management level, by contrast, the benefit of using personas as a means for identifying human factor security risks was accepted as an approach towards tailoring awareness, although integrating them within output was secondary. The aim of our work was to develop a user-centred approach integrating the use of personas to identify business-specific human factors and security risks to be addressed within awareness activities. To enable the integration of personas with awareness activities, a persona-centred on-going Information Security awareness solution was proposed. This aimed to reduce or mitigate related Information Security risks through business-tailored security awareness output. However, to test these concepts, the project required a case-study business for data elicitation, to build personas based on their employees and to tailor subsequent activities to its business needs. Despite approaching companies in good time, some difficulties were experienced in securing a company to work with. Although candidates provided positive feedback, the reality was that companies were unable to support the project, whether due to time constraints or an inability to provide supervisory support, resource or budget, even though much of the project would be delivered at no cost compared to the use of external consultant services. From a different perspective, this demonstrates the reality businesses face when considering, planning or implementing such a programme, when other priorities, budgets and resource are already stretched. Company X kindly offered their assistance to test and validate our work. However, as there was insufficient time for Company X to plan or build this project into weekly activities, some limitations would apply. Working with Company X enabled testing of many features from the proposed methodology, providing the business a means for implementing on-going awareness. Company X were consulted at each stage of the process, considering research findings leading to the general approach and application of the persona-centred methodology, creating a cycle of activities tailored to business and persona needs. Considering threats to validity described by Yin, e.g. 
Internal, External, Construct and Reliability validity, we first consider the notion that personas can be used as a tool to elicit behaviours and characteristics from a given audience. Once developed from empirical data, the personas were specifically validated with Company X, who agreed on their appropriateness. The elicitation of human factor security risks from the personas, leading to tailored output, was part validated with Company X at the selection stage, and again from participant feedback at implementation and testing. Data quality from persona findings is likely to be subject to the ability and understanding of those applying the process and the resources available. Business type and context may impact on external validity with differing results, yet will still provide empirical data. To assist the integration of personas into an awareness process, we identified consistent process steps from other common frameworks and approaches to create a structured persona-centred methodology. This also helps with construct validity, by defining the problem space and the steps to address it, and with reliability validity, by ensuring the process can be repeated. The full validity of the process was more difficult to confirm due to shortened time-scales for testing. The elements that were tested enabled a useful and structured means of applying the personas in each step. For example, once elicited, tailored awareness needs could then be matched to the business context with optimal communication methods using available resource. Although full testing of the process as an on-going cycle could not be completed, feedback gained from the activities tested suggested an approach using personas has potential benefits towards addressing security-related human factors. The communication methods of a card game and posters generally worked for Company X. In combination, this helped raise most individuals' awareness, albeit at differing levels, and had positive effects on creating participatory discussion promoting a security-minded culture. We presented the development and application of a persona-centred on-going Information Security awareness solution for the workplace. Specifically, we tested the concept of designing for the user by integrating personas. This HCI approach is used to bridge the gap between standard awareness approaches by incorporating business-specific human factor security risks, leading to tailored security awareness output. A review of related work, personas, approaches and frameworks was undertaken to understand how such an approach could be combined. From this, a persona-centred methodology was devised and largely tested with a case-study business. Personas were constructed based on empirical data relevant to the business, providing a useful means to identify audience awareness needs, communicated with a predefined security theme for the programme cycle. However, the personas generated were generally based on more technical roles. Collection of data from less technical roles, providing a balanced spread of the business audience, would be more appropriate when fully applying this methodology in a real-world scenario. Individual persona roles could also be used to identify needs at a team or department level, or for other related purposes, for example, application to security risk assessment, or related control and procedure modifications. Despite test environment limitations, personas appeared to offer a good level of value towards the design process, demonstrating potential for their overall effectiveness as a persona-centred tool for addressing 
human factors in security awareness.In both workload and analysis, work conducted with personas took time to produce and validate, yet provided a useful and relevant method for tailoring security awareness needs.It was, however, unclear how effective selected activities would be over time towards changing and improving behaviours.Further consideration may need to be given towards embedding personas into the business and awareness programme output, such as promotional take-away items.Having carefully and methodically considered appropriate steps and tasks for the persona-centred methodology, application of the process with Company X appeared to work well and offered value towards an area they wished to improve upon.Although Company X were already very security minded, the test subjects appeared to benefit from activities that could, for example, be participative and incorporated into a team meeting, providing an indication the application of the methodology was positive.By continuing to embed this process into business-as-usual activities, it is likely this process could be adapted to suit business needs, whilst providing the flexibility to evolve.This also gave Company X ideas of how other updates may be delivered.The inclusion of a system-based tool for computer based training and awareness was considered a future advantage for extending awareness."Further work relating to the programme's long-term effectiveness of improving behaviours, reducing risks and embedding security into an unconscious routine, would also be of interest to validate its long-term effects.To further enhance its validity, this process may also be trialled in a smaller less security orientated business, or indeed as part of a larger national organisation to observe any differences in the approach required.That said, the process is presented at a level whereby the steps could be followed in most scenarios, or integrated with other risk or awareness approaches, retaining the main feature or novelty of our approach using personas; archetypes based on real business users, needs and behaviours, as a means for identifying workplace security awareness needs.
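As a concrete illustration of the approach, the following minimal sketch shows one way the Lite personas described earlier, each reduced to one-line Life, Experience and End Goal statements, might be represented and matched to candidate awareness methods. All field values, names and mappings here are hypothetical and are not drawn from Company X's actual personas.

```python
from dataclasses import dataclass

@dataclass
class LitePersona:
    """One-line summary of a full persona, using Cooper-style goal categories."""
    name: str
    role: str
    life_goal: str        # long-term aspiration
    experience_goal: str  # how they want to feel while working
    end_goal: str         # what they want to accomplish day to day
    travels_externally: bool = False
    technical_level: str = "medium"   # low / medium / high

def suggest_methods(persona: LitePersona) -> list[str]:
    """Map persona traits to candidate awareness communication methods."""
    methods = ["card game in team meeting", "short quiz", "bite-size topic brief"]
    if persona.travels_externally:
        methods.append("guidance on in-person social engineering when off-site")
    if persona.technical_level == "high":
        methods.append("deeper-dive briefings and industry threat-feed updates")
    return methods

# Hypothetical example, loosely shaped after persona three as described above
persona_three = LitePersona(
    name="Persona Three",
    role="Client-facing consultant",
    life_goal="Progress towards a senior client-facing role",
    experience_goal="Work efficiently without security getting in the way",
    end_goal="Deliver client work securely and on time",
    travels_externally=True,
    technical_level="high",
)
print(suggest_methods(persona_three))
```

In practice the mapping would come from the Step 3 analysis rather than hard-coded rules; the value of the structure is simply that each persona's awareness needs can be reviewed and traced back to the data they were built from.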
Maintaining Information Security and protecting data assets remains a principal concern for businesses. Many data breaches continue to result from accidental, intentional or malicious human factors, leading to financial or reputational loss. One approach towards improving behaviours and culture is with the application of on-going awareness activities. This paper presents an approach for identifying security related human factors by incorporating personas into information security awareness design and implementation. The personas, which are grounded in empirical data, offer a useful method for identifying audience needs and security risks, enabling a tailored approach to business-specific awareness activities. As a means for integrating personas, we present six on-going steps that can be embedded into business-as-usual activities with 90-day cycles of awareness themes, and evaluate our approach with a case study business. Our findings suggest a persona-centred information security awareness approach has the capacity to adapt to the time and resource required for its implementation within the business, and offer a positive contribution towards reducing or mitigating Information Security risks through security awareness.
224
Omega-3 polyunsaturated fatty acid supplementation during the pre and post-natal period: A meta-analysis and systematic review of randomized and semi-randomized controlled trials
Infant development during the last trimester of pregnancy and the first year of life is considered critical for many clinical outcomes like behavior, cognition as well as immunity.During this time of gestation and infancy, there is a large demand for n-3 polyunsaturated fatty acids, like docosahexaenoic acid, as well as the n-6 PUFA arachidonic acid .Several scientific and clinical studies show a deficiency in n-3 PUFA negatively impacts physiological outcomes in infants including visual development , behavior , and cognition .Indeed, numerous studies have found positive correlations between n-3 PUFA consumption and cognition or visual outcomes, although randomized controlled clinical trials are not conclusive .In addition, several studies have linked fish consumption, rich in n-3 PUFA, with infant anthropometric measures although the long term biological effect of this is unknown.DHA has also been proposed to exert a suppressive effect in cardiovascular and other chronic inflammatory-related diseases suggesting that DHA modulates immunity.Accordingly, n-3 PUFA intake, in particular DHA, has been expected to provide beneficial effects on infant health and development.Dietary lipid intake during the pre and post-natal period alters the biochemistry and physiology of the developing infant."Lipids are supplied to infants through placental transfer and breastmilk and a mother's diet influences breastmilk lipid composition .Indeed, DHA in breastmilk is lower in mothers who do not supplement .Similarly, term infants fed standard formula, not fortified with n-3 PUFAs, have lower DHA and AA in their erythrocytes and lower n-3 PUFAs in their cerebral cortex than breastfed infants.Yet, previous meta-analyses on breastfed infants supplemented with n-3 PUFA and fortified formula conclude that n-3 PUFA supplementation cannot be supported or refuted in infants.Still, the American Pregnancy Association recommends 300 mg of DHA daily supplements to pregnant mothers since DHA is deemed essential for neurological and visual development in infants with similar claims made by several infant formula manufacturers.The aim of this study, was to update previous meta-analyses thoroughly evaluating all clinical trials assessing the effects of n-3 PUFA supplementation, taken maternally or through formula/directly, on infant health and development.Our selection criteria identified 32 infant formula/directly supplemented and 37 breastmilk studies providing data on 2443 infants on n-3 PUFA supplemented formula/direct and 4553 infants feeding from mothers supplemented with n-3 PUFA.This meta-analysis shows that n-3 PUFA supplementation in infants, delivered either maternally or in formula/directly, does not improve visual acuity, language development, or cognition.However, some aspects of growth, motor development, behavior and cardiovascular health are differentially altered in the summary effects with the more desired effect occurring often in breastfed infants compared to formula fed infants.Moreover, this meta-analysis suggests that n-3 PUFA supplementation may affect infant immune development reducing normal pro-inflammatory responses in both breastfed and formula fed infants.Overall, the evidence does not support the continued supplementation of infant formula with n-3 PUFA and more studies are required to understand the effects of maternal n-3 PUFA supplementation on infant immunity.The inclusion and exclusion criteria were determined prior to our literature search."Inclusion criteria for the breastfeeding group 
included trials that examined infants born to breastfeeding women receiving n-3 PUFA supplementation during gestation and/or during lactation with breastmilk as the infant's primary dietary source.Mothers must have started supplementing either during gestation or within 2 weeks of lactation.In the formula feeding group we included infants that were fed milk based n-3 PUFA supplemented formula ± AA or given an n-3 PUFA supplement directly within 2 weeks of birth.In both groups, infants were born at term or ≥37 weeks gestation.We excluded studies that were: not clinical trials, did not report on clinical infant immunity or development, the infants were born pre-mature, or the reports were abstracts or unpublished.Without language restrictions, we searched according to the standard search strategy of the Cochrane Neonatal Review Group including electronic searches of the Cochrane Central Register of Controlled Trials, EMBASE, CINAHL via EBSCO, MEDLINE, Web of Science, PubMed, as well as reference lists of published narrative and systematic reviews using the search terms listed in the supplementary materials.Randomized and semi-randomized controlled trials evaluating the effects of maternal long chain n-3 PUFA supplementation during gestation and/or lactation or the effects of long chain n-3 PUFA supplementation orally or to infant formula fulfilled the selection criteria.A trial was defined as semi-random if the method used to allocate study mothers to a n-3 PUFA group was either not statistically random or was not clearly stated.The titles and abstracts of the identified studies were screened independently by two authors.The full articles of relevant trials were assessed to determine their eligibility for inclusion in this meta-analysis with any disagreement resolved by a discussion between all authors.We included one study where the control groups did not supplement with any placebo on religious grounds.We evaluated long chain n-3 PUFA supplementation taken maternally during gestation, gestation and lactation, or lactation only compared to a control group, and long chain n-3 PUFA supplemented milk based formula or capsule compared to a non-supplemented control.The source of n-3 PUFA supplements were from fungal oils, fish oils, single-cell sources, or egg triglycerides.Studies were sub-grouped according to the outcomes in each paper.Prior to our literature search we determined that we wanted to see if the addition of AA had any effect on the DHA supplements, thus, DHA + AA supplements were analyzed separately from an n-3 PUFA supplement alone.Tables S1-S22C indicates where a bigger value is considered better or a smaller value is better for all the following outcome measures.This allows us to assign a positive score to the treatment group when they have the desirable outcome.The particular outcome as specified in the original trials is reported.Our primary outcomes assessed included: visual acuity, cardiovascular health, immunity, growth and neurodevelopment.Growth related to head circumference, height, weight and additional growth parameters such as body mass index and fat distribution.During Teller acuity card tests, infants are presented with cards of increasing special frequencies; the highest special frequency distinguished by the infant determines visual acuity.Visual evoked potentials examine corneal reflection.Flash VEPs give overall information on visual pathways; the responses include mean peak latencies of the major negative and positive components in the response waveform, which 
are numbered in a timed sequence. In transient VEPs, the brain's response returns to the resting state before the next stimulus, and as a result produces a waveform with distinct VEP components. In contrast, steady state VEPs do not return to the resting state between stimuli. Binocular visual acuity is assessed using the sweep VEP method, which measures the amplitude of the electrical response in the occipital cortex of an infant as visual stimuli, such as bars or square wave gratings, are rapidly presented. In contrast, early treatment diabetic retinopathy study/Bailey-Lovie charts and the HOTV test screen for monocular visual acuity. Electroretinography (ERG) tests for any abnormalities in the retina, and the Beery visual motor index tests for visual spatial skills. Finally, the developmental test of visual motor integration is designed to test for deficits in visual perception. A fixed-effect model is used when there is only one true effect. While many of our studies look at similar interventions, the impact of the intervention will vary between studies due to differences in location, dosage, length of follow-up and so forth. Therefore, a random-effects model was used to incorporate heterogeneity into the meta-analysis. When using a random-effects model heterogeneity is expected, and we ran a variance components test to ensure inappropriate pooling had not occurred. Cochran's Q is a test of significance derived from weighted squared deviations and is greatly dependent on the number of studies analyzed. I2 is simply a ratio derived from Q; thus, when numerous effects are random we can expect higher values of heterogeneity. Heterogeneity measures are presented in Table S24. For continuous data, we measured the standardized mean difference (SMD) using an inverse variance method at a 95% confidence interval. Dichotomous event counts were measured using an odds ratio effect measure at a 95% confidence interval.
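Purely as an illustration of the pooling just described (the analyses themselves were run in the Comprehensive Meta-analysis V3 program, as noted in the discussion), the minimal sketch below combines per-study standardized mean differences under a DerSimonian-Laird random-effects model and reports Cochran's Q and I2. The study values and function names are hypothetical.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Pool per-study effect sizes (e.g. SMDs) with a DerSimonian-Laird
    random-effects model; returns the pooled effect, its 95% CI, Q and I2 (%)."""
    effects = np.asarray(effects, dtype=float)
    variances = np.asarray(variances, dtype=float)

    # Inverse-variance (fixed-effect) weights, used to compute Cochran's Q
    w = 1.0 / variances
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)                 # Cochran's Q
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0  # I2 as a percentage

    # Between-study variance (tau^2) and random-effects weights
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)
    w_star = 1.0 / (variances + tau2)

    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), q, i2

# Hypothetical SMDs and their variances from three trials of a single outcome
smd, ci, q, i2 = dersimonian_laird([0.08, -0.15, 0.20], [0.055, 0.040, 0.070])
print(f"Pooled SMD {smd:.2f}, 95% CI {ci[0]:.2f} to {ci[1]:.2f}, Q = {q:.2f}, I2 = {i2:.1f}%")
```

The same weighting logic applies to the dichotomous outcomes, with log odds ratios and their variances taking the place of the SMDs.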
Motor development included psychomotor development, motor development and general movements. These were assessed using the following methods: gross motor skills were assessed with the leg coordination subtest of the McCarthy scales of children's abilities (MSCA) and the hand movement subtest of the Kaufman assessment battery for children. The MSCA is a standardized test with predictive validity comprising verbal, perceptual-performance, quantitative, memory and motor scales; the motor scales were included in the motor development category. Fine motor skills were assessed with the Purdue pegboard test and the motor component of the developmental test of visual motor integration. The PPT test examines both gross and fine motor skills by requiring the participant to place pins in the holes of a pegboard as quickly as possible. In addition, measurements of early and late gestures, general movement, and neonatal neurological classification were used. The Bayley scale of infant development is a standardized test in which scores from developmental play tasks are converted to scale and composite scores, which are used to compare the participant's performance to the norms of other children their age; the psychomotor index and motor components were used. The Gesell gross motor developmental quotient examines various gross and fine motor functions such as lying, rolling, and sitting and gives an overall score. The Knobloch, Passamanik and Sherrards test is comprised of five scales: adaptive, gross motor, fine motor, language and personal-social skills; the fine and gross motor scores were used to determine motor development. Similar to the KPST, the Griffiths mental development scales uses a standardized kit to test for gross motor skills, personal-social skills, language skills, hand-eye co-ordination, performance, and practical reasoning; outcomes from the gross and fine motor skills were likewise included in the assessment of motor development. The Brunet-Lézine psychomotor developmental quotient is a test for psychomotor development in early childhood, which examines posture control, hand-eye coordination, language, and personal/social relations. The Peabody picture vocabulary test provides a quick estimate of verbal skills, where an examiner says a word and the individual being tested is asked to point to a picture that corresponds to that word. The clinical linguistic and auditory milestone scale tests linguistic and auditory milestones in infants. The child development inventory is a questionnaire that measures child development in social, self-help, gross motor, fine motor, expressive language, language comprehension, and letters and numbers; we included the expressive language and language comprehension scales in this category. The MacArthur communicative development inventory is based on the idea that language and gestures are tightly coupled and that early and late gestures can be used to predict future language skills. A strengths and difficulties questionnaire tests behavior by asking questions on 25 different positive and negative attributes. The infant behavior questionnaire tests six different domains of infant temperament including smiling and laughing, activity level, the ability to be soothed, fear, distress to limitations, and duration of orienting. The Bracken basic concept scale-revised evaluates basic concepts essential for academic success. The child behavior checklist tests emotional and behavioral problems. Cognitive and neurodevelopment outcomes were measured using the following tests: the mental composite portions of the K-ABC, the BSID mental development index, the GMDS, the information and block design subtests of the Wechsler primary and preschool scale of intelligence-revised, the Fagan test, the Leiter international performance scale, the clinical adaptive test, the Stroop test, the Woodcock Johnson test of cognitive abilities, the Stanford-Binet IQ test, the matching familiar figures test, the cognitive portion of the NNC, and, in a number of studies, a 2 or 3 step-means end problem solving test. The Fagan test is founded on the idea that there is a link between preferences for visual novelty and future intelligence; infants will look longer at a new target than at one that they have previously seen. The Leiter test is an intelligence test which looks at reasoning, visualization, memory and attention, while the CAT tests for problem solving abilities. The Stroop test examines the reaction time to say "sun" or "moon" when children see a picture of a sun or moon, respectively, as well as the reaction time to say the opposite image. The WJ test is an intelligence test consisting of 10 standard battery and 10 extended battery tests ranging over abilities such as fluid reasoning, processing speed, and short term memory. The Stanford-Binet IQ is used to detect intellectual deficiencies using both verbal and nonverbal subtests. The matching familiar figures test is used to test measures of the reflection-impulsivity dimension, where a child is asked to identify, as quickly as possible, the drawing from six possible variants which is identical to the standard drawing. The two or three step-means end problem 
solving test involves a support step where infants have to pull an object within reach, and search step where infants have to uncover the object that is usually under a blanket.Weight, length, head circumference and other additional growth measures were reported on.Growth Z-scores, which convert raw scores into standardized scores relative to a population mean, were also reported.The effect direction in the weight and additional growth parameters have been segregated by age.Weight and growth parameters in an infant are considered beneficial between birth and two years of age.However, as the toddler ages there is a period of adiposity rebound where it is difficult to define the desired effect direction and thus we ran both scenarios between the ages of 2 and 5.We considered increased weight and additional growth parameters to be detrimental over the age of 5.Pulse wave velocity, the mean arterial, systolic and diastolic blood pressures, and heart rate were taken according to the oscillometric principle with an automatic device during cuff inflation, and using non-invasive optical methods.HR variability outcomes included: the mean of all normal intervals during recording, the mean of standard deviations of all RR intervals in 5 min segments, the SD of RR intervals measured in successive 5 min intervals, the percentage of differences between adjacent RR intervals ≥50 ms, and the square root of the mean of the sum of the squares of the differences between adjacent intervals, were measured using electrocardiograms and illustrate regulation of the heart by the autonomic nervous system.A decreased HR variability, particularly the SDNNi, is a predictor of cardiovascular death and sudden cardiac death .Immune outcomes were assessed through clinical and parental observations, biochemical measurements of blood for immune cells and cytokines, and skin prick tests in which sensitization to various allergens were measured.Ovalbumin stimulation is used to quantify immune responses.The SPT, measures the presence of IgE antibodies to allergens indicative of allergies.SCORing Atopic Dermatitis is a clinical tool used to assess the extent and severity of eczema.For immune outcomes, we considered any pro-inflammatory effects to be beneficial since normal priming of the immune system occurs during infancy.The methodological quality of the trials was assessed independently by two reviewers following the criteria outlined in the Cochrane Handbook for Systematic Reviews of Interventions , and the risk of bias graph presented in Fig. 
1 and summarized in the supplementary materials.Bias was ranked according to sequence generation, allocation concealment, blinding, outcome data, selective reporting, as well as other sources of bias such as study design.When reporting bias was suspected, we attempted to contact the study authors for missing outcome data.Where this was not possible, the missing data was thought as serious bias.Any disagreement was resolved by a discussion between all authors.We did not use rank correlation tests or funnel plots to assess publication bias because these analyses are invalidated by the underlying heterogeneity among effect sizes .We identified 39 infant formula and 39 breastmilk publications evaluating the effects of n-3 PUFA supplementation on infant development which fit our inclusion criteria,Out of these studies, 6 were not included in the analysis because the data was presented in a way that could not be analyzed and we were unable to obtain the raw data .One study was excluded because reported exclusively on the mother .One study was excluded because it did not report on clinical outcomes .One study was excluded because it did not have a randomized or blinded process .This resulted in a total of 32 infant formula and 37 breastmilk publications evaluating 2443 infants on n-3 PUFA enriched formula and 4553 infants that were fed from mothers that were supplemented with n-3 PUFA during gestation and/or lactation.All the trials included used a randomized or semi-randomized clinical trial design from both high and low-income countries with the experimental interventions summarized and a reference list of pooled studies.The visual acuity of infants was not affected by n-3 PUFA supplementation in either breastfed or formula fed/directly supplemented infants.In contrast to the hypothesis that n-3 PUFA would improve visual acuity, in both breastfed and formula fed infants we predominantly see insignificant results and those that are significant are inconsistently fluctuating with age.In the infants born to mothers supplementing with n-3 PUFA, the VEP amplitude at 4 and 8 months was higher the control group compared to the n-3 PUFA supplemented group; however, there was no effect on the VEP latency at either of these ages.Furthermore, while there was a significant difference in the mean peak latency in the N3 component of the flash VEP waveform at 50 weeks post-conceptual age, the other 14 flash VEP waveforms were not significantly different between the experimental and control groups.In the formula fed infants, the VEP acuity initially favored the experimental group at 2 months but switched to favor the control group at 4 and 9 months.Furthermore, between 6 and 8 months and again at 1 year, there was no significant effect observed.In contrast to the VEP test for visual acuity, the Sweep VEP method shows a better visual acuity at approximately 4 months and 1 year.With regards to the Teller acuity method of testing visual acuity, breastfed infants in the n-3 PUFA group had increased visual acuity at 4 months but this disappeared by 6 months, and by at 8 months the control group was favored.While there were no short term teller acuity outcomes in the formula group, at 39 months the control group showed significantly better outcomes than the experimental group with or without the addition of AA.In both the formula and the breastfed groups, the short term ERG outcomes were not significantly affected by n-3 PUFA supplements, but the addition of AA with n-3 PUFA did improve one parameter of ERG tests in 
formula fed infants. Overall, the meta-analyses revealed that n-3 PUFA supplements consumed maternally during gestation and/or lactation or through formula/direct supplements have no effect on visual acuity. The addition of AA does not change these results. The motor development of infants was not affected by n-3 PUFA supplementation in the categorical data on breastfed infants (Fig. 5B; SMD 0.08, 95% CI -0.381 to 0.54; one trial), but supplementation did significantly decrease the infants' linear motor results. For example, the quality of general movements of the infants in the DHA group was lower than in the control group, and the DHA group exhibited higher counts of mildly abnormal movements than the placebo group did at 12 weeks of age, whereas the control group had more normal suboptimal general movements, which were classified as normal. However, the BSID PDI at 30 months and the hand-eye coordination part of the GMDS at 2.5 years favored the experimental group, but conferred no other advantages. The addition of AA prevented the effect of the DHA supplement on the infants' motor development in the study summary effect. The motor development of infants supplemented with n-3 PUFA in the formula fed group showed no effect, either with or without the addition of AA. The n-3 PUFA supplemented group did develop more quickly in some aspects, as they were able to reach out to touch an object, bring a toy to their mouth, sit without support, and walk alone at a younger age than the control group. The language development of infants was not affected by n-3 PUFA supplementation in either breastfed or formula fed infants. Specific to formula fed infants, the experimental group had higher percentile ranks on the MCDI's late gestures and total gestures at 1 year and 18 months. Likewise, the experimental group was able to articulate the first comprehensible word at a younger age than the control group and scored higher on the BSID's language facet at 18 months. 
"In contrast, the control group obtained higher scores in the MCDI's vocabulary comprehension and vocabulary production scales at 14 months.In addition the control group scored higher on the PPVT-III test at 2 and 3.5 years.However, the meta-analyses determined that n-3 PUFA supplements taken maternally or in formula/directly had no significant effect on language development.The behavior of infants was not affected by n-3 PUFA supplementation in breastfed infants.Formula fed infants supplemented with n-3 PUFA likewise showed no significant effect; however, the addition of AA to the n-3 PUFA supplemented, formula fed infants, had a negative summary effect on behavior with the control group smiling and laughing more at 1 year of age.The cognition of infants was largely unaffected by n-3 PUFA supplementation in both breastfed and formula fed infants.Although 6 tests were unchanged in both the breastfed and formula fed infants during all time points, there were a few tests that favored n-3 PUFA supplementation for both breastfed and formula fed infants.In breastfed infants, the experimental groups scored higher on the 2 step-cloth intentional solutions at 9 months, received higher scores on the sustained attention portion of the Leiter test at 5 years of age and scored higher on the K-ABC mental processing at 4 years of age.Similarly, for formula fed infants, the experimental group obtained higher score on the 2 step-intention score at 9 months, higher scores of sustained attention at 9 months, as well as on the BSID-cognitive standardized score at 18 months and scored lower on the MFFT at 6 years of age which indicates a more efficient processing.However, these results appeared randomly throughout time as they were not seen at all ages even when measured in the same cohort.For example, the formula fed infants did not show any significant differences in the 2 step-problem solving test at 1 year, in the sustained attention scores at 4 or 6 months, or the BSID scores at 6 months, 1 year, or 2 years.Overall, the meta-analyses show no significant differences in either the breastfed or formula fed/directly supplemented infants.The growth of infants was not affected by n-3 PUFA supplementation in either breastfed or formula fed infants.However, the summery effect shows that the addition of AA to n-3 PUFA supplements in formula or directly to the infant has a negative effect on head circumference, z-scores and a positive effect on height z-scores.In all cases, results were combined if measurements were taken over time.In both breastfed and formula fed infants, the control group weighed more at 21–24 months of age with a significant difference in weight at 2.5 years for breastfed infants, but the weight difference disappeared by the age of 6 and 7 years old.With regards to height, in breastfed infants the control group was significantly taller than the experimental group at 1 year of age; however, this difference disappeared as the infants aged whereas in formula fed infants there was no effect.Interestingly, n-3 PUFA supplementation had opposite effects on head circumference when it was delivered through breastmilk verses formula.In breastfed infants, the experimental group had larger heads in infants at 9 months and 2 years of age and in formula infants at 3 months, the control group had larger heads.There were additional differential effects of n-3 PUFA whether it was delivered in breastmilk verses formula.For example, in breastfed infants, n-3 PUFA supplementation resulted in higher BMI at birth 
and 2.5 years, higher fat mass at 1 year, higher waist circumference, an increase in skinfold thicknesses, higher body fat as a percentage at 2.5 years and higher triceps skinfold thickness at 6 years. In contrast, n-3 PUFA supplementation in formula resulted in infants showing decreased subscapular skinfold thickness at 6 weeks and 3 months as well as decreased triceps skinfold thickness at 3 months. However, the triceps skinfold thickness is increased at 9 months and 1 year, the subscapular skinfold thickness is increased at 1 year, and the mid upper arm circumference is increased at 1 year in the n-3 PUFA supplemented group. Although Figs. 16C and 18D show significance, the height and head circumference in the formula fed group were not affected; therefore, the overall results from our study analysis showed no effect on growth parameters caused by n-3 PUFA supplements with or without the addition of AA. The cardiovascular health of infants, including the DBP and MAP, was not affected by n-3 PUFA supplementation in either breastfed or formula fed infants. However, it was found that the breastfed infants in the DHA group had significantly higher SBP and MAP than the control group at 7 years of age (P < 0.001). This suggests that maternal DHA supplements may have a negative effect on their offspring's cardiovascular health in the long term. Unlike the majority of meta-analysis outcomes, which showed no effect of n-3 PUFA supplementation, the immune status of infants showed significant changes in both breastfed and formula fed infants, but not in the formula fed linear outcomes. In the infants born to mothers supplementing their diets with n-3 PUFA, infants in the control group had significantly more CD8+ T cells, CD4+IFN-γ T cells and CD8+IFN-γ T cells, whereas CD45RO+CD8+ T cells, CD45RA+CD8+ T cells and CD45RA+CD4+ T cells were more abundant in the experimental group. The infants fed formula showed similar results, whereby the experimental group had more CD8+ T suppressor cells and CD45RA+CD8+ T cells at 2 weeks, as well as higher CD8+CD28+ T cells at 2 weeks, higher CD4+CD28+ T cells, CD3+CD44 T cells, and CD3+ T cells at 6 weeks, and lower CD20+ B cells and TNFα with PHA at 6 weeks. We defined any anti-inflammatory responses to be a "negative" result, as shown in Table S17A and Table S18A. When looking at the linear outcomes, we found that the breastfed infant control group exhibited higher counts of IFN-γ positive cord blood mononuclear cells in response to ovalbumin. Furthermore, the SPT reactions to egg, and egg sensitization, food allergy, IgE to eczema, any positive SPT result, eczema with sensitization, and sensitization with/without allergic disease at one year were higher in the control group, consistent with the control group also having more IgE-associated disease at 2 years. These results were not seen in the formula fed group, as no differences were seen in the SPT to house dust mites, cat, egg, or peanut at 1 year. Furthermore, in the breastfed infants, the control group had a more severe SCORAD index score at 1 year, which ranks the severity of atopic eczema, and had more incidences of colds than the DHA group at 1 month. Taken together, these studies reveal that n-3 PUFA supplements promote anti-inflammatory responses. The overall outcome of this meta-analysis indicates that supplementing infants with long chain n-3 PUFA largely has no effect on visual acuity, growth or language development. Aspects of motor development were significantly reduced in the n-3 PUFA breastfed 
individuals.Likewise, negative behavior effects were seen in formula fed infants being supplemented with n-3 PUFA and AA.Curiously, n-3 PUFA supplementation has targeted effects on cardiovascular and immunity depending on whether the infant was breastfed or formula fed suggesting that background nutrition may be central to the interpretation of the effectiveness of n-3 PUFA.N-3 PUFA supplementation has often been associated with growth parameters.While there was no difference in the overall growth parameters when pooled by study, it was found that n-3 PUFA supplements significantly alter growth parameters when pooled by age.Larger additional growth parameters were deemed detrimental over the age of five due to the prevalence of childhood obesity.The data shows that children in the n-3 PUFA supplemented, breastfed group have larger growth parameters over the age of five.During adiposity, children supplemented with n-3 PUFA through breastmilk also have larger growth parameters.There was no effect on the additional growth parameters under the age of two.Despite the values being larger in the n-3 PUFA supplemented children, they were still within the healthy range.While n-3 PUFA supplementation has little effect on cardiovascular function in infants overall, n-3 PUFA supplementation does have the opposite effect on systolic blood pressure in formula compared to breastfed infants.While both groups fell within a normal range for their ages, n-3 PUFA supplementation did elevate blood pressure in breastfed infants, suggesting supplementation actually has an undesired effect.This is because elevated blood pressure detected in early childhood is predictive of high blood pressure in late adolescence which results in heart failure later in life .N-3 PUFA has long been associated with immune modulation.N-3 PUFA supplementation decreases IFN-γ production and increases naïve cells suggesting n-3 PUFA is associated with the reduction of pro-inflammatory responses and the induction of anti-inflammatory responses.Considering that developing infants need to undergo priming of their immune responses during the first few years of life, suppressing the normal development of immunity may have consequences considering a robust immune response is required to fight off infectious disease.In support of this, rodents fed n-3 PUFA have been shown to be more susceptible to several pathogens suffering greater morbidity and mortality .On the other hand, breastfed infants exposed to n-3 PUFA, and not formula fed infants, do have less IgE-associated disease at 2 years of age.Decreasing allergies is of clinical importance; however, understanding the potential consequences of altering the balance of immune cells in a developing infant needs to be further investigated.Previous meta-analyses, following the Cochrane guidelines, analyzed the effects of n-3 PUFA on health and development of infants .While these meta-analyses also concluded that n-3 PUFA supplementation has little effect on infant development, the Cochrane protocol does result in selection bias due to their selection inclusion approach which can alter the outcome of the meta-analysis .Here, we used the Comprehensive Meta-analysis V3 program enabling the testing of multiple time points, outcomes and intervention groups whilst avoiding multiplicity .However, this study still has some unavoidable limitations.Differences in background diets may have confounding effects on the influence of the intervention as these studies spanned a wide array of countries with 
different cultures and socioeconomic conditions. Another problem could be that there was no consistency in the source, dose or duration of n-3 PUFA supplementation. In addition, varying test methods were used to compute similar outcomes. For example, each study used different versions of the BSID. Furthermore, the testing methods may not be fully reliable. For example, it has been suggested the Bayley Scales do not provide an adequate measure of infant cognitive ability, but are instead appropriate for measuring the incidence of developmental delay, as has been demonstrated. Likewise, the Gesell Developmental DQ is no longer considered an acceptable standard of psychometrics. While we were able to address the issue of nested multiplicity, our study still contains multiplicity arising from the separation of data into different categories. This could increase the type 1 error associated with these studies. Additionally, some analyses only contain one study, and thus we used the analysis to present the overall summary effect from that paper. Therefore, no definitive conclusions can be drawn for these categories, and more independent studies on DHA, as well as DHA + AA, are needed for a more precise meta-analysis. Some studies had to be excluded because the data was presented in a way that could not be analyzed and we were unable to obtain the raw data from the authors. In addition, the quality of the included studies ranged in terms of bias. With regards to language development, a possible bias in the MCDI results is that 92.9% of the participants in the n-3 PUFA group were reportedly able to guess their infant's group allocation, and because the MCDI is a parent-reported test, this could have improved the parents' perception of their child's abilities. In the studies, raw data was often transformed or adjusted for confounding variables before it was analyzed. While these adjustments likely give a more accurate depiction of the outcome being measured, not all papers performed the same adjustments. As such, we used the raw data and acknowledge that some of our reported outcomes may differ slightly from the original article. In conclusion, n-3 PUFA supplementation in infants, delivered either maternally or in formula, does not improve visual acuity, growth or language development. However, some aspects of motor development, cardiovascular health, behavior and immunity are differentially affected by n-3 PUFA supplementation, with the more desired effect often occurring in breastfed infants compared to formula fed infants. Overall, the evidence does not support the continued supplementation of infant formula, although more studies are required to understand the effects of maternal n-3 PUFA supplementation on immunity. CQ: Literature search, assessed the eligibility and quality of studies, data extraction, contacted the study authors for additional information, wrote the manuscript. BE: Data analysis, data extraction, assessed the quality of the studies, reviewed the manuscript. JL: Directed the statistical aspects of the meta-analysis, data analysis, assessment of the quality of the studies, supervised and provided funding for students, reviewed the manuscript. DLG: Conceived and designed the study, directed the biological aspects of the meta-analysis, supervised and provided funding for students, reviewed and edited the manuscript. The authors have no conflicts of interest. CQ was supported by a CIHR Frederick Banting and Charles Best Canada Graduate Scholarship. JL is supported through grants from Natural Sciences and 
Engineering Research Council."This work was supported by grants funded through NSERC and Crohn's and Colitis Canada to D.L.G.
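As a brief illustration of the pooled summary effects referred to in the discussion above, the sketch below computes an inverse-variance weighted summary estimate from a few invented study results. It is a generic fixed-effect calculation, not the authors' actual analysis, and the effect sizes and standard errors are placeholders.

```python
# Minimal sketch (hypothetical numbers): inverse-variance weighted summary effect.
import numpy as np

effects = np.array([0.12, -0.05, 0.20])   # invented mean differences per study
ses = np.array([0.10, 0.08, 0.15])        # invented standard errors

weights = 1.0 / ses**2                    # fixed-effect inverse-variance weights
summary = np.sum(weights * effects) / np.sum(weights)
summary_se = np.sqrt(1.0 / np.sum(weights))
ci_low, ci_high = summary - 1.96 * summary_se, summary + 1.96 * summary_se
print(f"summary effect = {summary:.3f} (95% CI {ci_low:.3f} to {ci_high:.3f})")
```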
Background & Aims Long chain omega-3 polyunsaturated fatty acids (n-3 PUFA), such as docosahexaenoic acid (DHA) are widely considered beneficial for infant health and development. The aim of this meta-analysis was to summarize the evidence related to the clinical outcomes of long chain n-3 PUFA supplementation maternally and in fortified formula/taken directly. Additionally, we investigate if the addition of arachidonic acid (AA) alters the effects caused by n-3 PUFA supplements. Methods We searched the Cochrane Central Register of Controlled Trials (CENTRAL), EMBASE (1974 to June 2015) CINAHL (1990 to June 2015), PubMed (1966–2015), Web of Science (1864–2015), MEDLINE (1974 to June 2015), and hand searches for randomized and semi-randomized controlled trials on term infants evaluating the effects of long chain n-3 PUFA supplementation taken maternally and in milk based formula or directly. Results We identified 39 formula and 39 breastmilk studies on the clinical outcomes of long chain n-3 PUFA supplementation, with or without AA, on infant immunity and development. Of these studies, 32 formula and 37 breastmilk studies were deemed appropriate resulting in a total of 2443 formula fed infants and 4553 breastfed infants exposed to n-3 PUFA supplementation. This meta-analysis shows that n-3 PUFA supplementation in infants delivered either maternally or in formula/directly, does not improve visual acuity, language development, or cognition. However, some aspects of growth, motor development, behavior and cardiovascular health are differentially altered in the summary effects of certain studies with the more desired effect occurring often in breastfed infants compared to formula fed infants. Moreover, this meta-analysis shows that n-3 PUFA supplements affects infant immune development and reduces pro-inflammatory responses in the supplemented breastfed and fortified formula fed/directly supplemented infants. Conclusion Overall, the evidence does not support the continued supplementation of infant formula with long chain n-3 PUFA considering the negative impact on the developing immune responses.
225
Prediction of students’ awareness level towards ICT and mobile technology in Indian and Hungarian University for the real-time: preliminary results
Data mining, often called knowledge discovery in databases, is known for its powerful role in uncovering hidden information from large volumes of data. Its advantages have led to applications in numerous fields, including e-commerce and bioinformatics, and more recently in educational research, where it is commonly known as Educational Data Mining (EDM). EDM is a developing discipline concerned with innovative methods for exploring the distinctive and increasingly large data that come from educational settings, and with using those methods to better inform stakeholders. The fundamental principle of EDM is to analyze educational data from different angles, categorize it, and finally summarize it. Statistical analysis with F-tests and T-tests has also been used in the educational data mining field. Nowadays, however, EDM is a very popular area of research that uses machine learning and data mining techniques to explore ever more data from educational settings. Machine learning is increasingly applied in the educational field for data mining purposes. In addition, machine learning is used to extract patterns and relationships between data elements in large, noisy, and messy datasets. In supervised learning, a model is trained on input data paired with known outputs, under the assumption that there is a relationship between the input and the output, and many machine learning classifiers are available to classify such data patterns in various fields. The Support Vector Machine (SVM) is a supervised learning model introduced for binary classification in both linear and nonlinear versions. SVM performs classification by constructing an N-dimensional hyperplane that optimally separates the data into two categories. With the use of a boosting technique, an artificial neural network (ANN) generates a sequence of models to obtain more accurate predictions, which is also called an ensemble model. Binary logistic regression is confined to only two classes, whereas discriminant analysis is better suited to multi-class problems. Linear discriminant analysis makes predictions by estimating the probability that a new set of inputs belongs to each class; it assumes homogeneous variance-covariance matrices, whereas quadratic discriminant analysis is used for heterogeneous variance-covariance matrices. K-Nearest Neighbors (KNN) is a non-parametric, lazy learning algorithm that is also well suited to multi-class problems. The objective is to learn a function f: X → Y in which the predictors X can confidently predict the corresponding target Y, here the awareness level. The demographic features of teachers and students have been predicted in Asian and European institutions using machine learning, and many researchers have worked on educational datasets using machine learning classifiers as well. Supervised machine learning classifiers play a significant role in predicting patterns for any real-time system. The presented predictive models may help in the development of a real-time ICT-based prediction system to predict future awareness among stakeholders towards the use of the latest ICT and MT resources. They may also be useful for predicting, in real time, the benefits of ICT tools, techniques, and equipment to students. The use of machine learning for real-time age and locality prediction of university students, and for real-time prediction of the nationality of European school students, has also been recommended.
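As a concrete illustration of the classifier families named above, the sketch below trains an SVM, a multilayer perceptron (ANN), KNN, and linear discriminant analysis with scikit-learn on a synthetic five-class dataset standing in for the five awareness levels. The data and parameter settings are placeholders, not the study's survey data or the settings used in Weka or IBM Modeler.

```python
# Sketch: the four classifier families discussed above on a synthetic stand-in
# for the survey data (five classes standing in for the five awareness levels).
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=331, n_features=15, n_informative=10,
                           n_classes=5, n_clusters_per_class=1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

classifiers = {
    "SVM": SVC(kernel="rbf"),
    "ANN (MLP)": MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000,
                               random_state=0),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "DISC (LDA)": LinearDiscriminantAnalysis(),
}
for name, clf in classifiers.items():
    clf.fit(X_train, y_train)
    print(name, round(accuracy_score(y_test, clf.predict(X_test)), 3))
```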
"Further, automation of the real-time gender prediction of European school's principal with the help of Web-server was also suggested .The presented awareness predictive models can be deployed online as the real-time module on the University websites to predict the attitude , behavior, willingness and ICT awareness in the university students with monitoring technological access the following: 1."Age wise monitoring of the student's attitude towards usability, availability, issues, and opportunities of the trending ICT and MT Resources in the universities.2.The locality of students can also be monitored towards ICT and MT awareness in education at real time.3.The faculty of study or department of study of students can also be monitored at real time as well.The responses of students may be recorded on real-time website of the university and the predictive models may be useful to predict the future attitude, awareness levels and demographic features of the students towards the technological access.A well-defined structured questionnaire is designed using Google Form to collect primary data samples with stratified random sampling.Therefore, the hybrid scaled prone questionnaire is developed with 46 attributes.A hybrid means 5 points Likert, Binary scale, nominal, etc.A research instrument has five major sections.First section belongs to 9 demographic attributes, second section belongs to the Development-Availability with 16 attributes, third section relates to the Attitude with 6 attributes, fourth section belongs to the Usability parameter with 6 attributes and last section belongs to the Educational Benefits,with 9 attributes.The participated students were studying either in bachelor, master and doctorate courses.Out of 331 students, 169 students belong to the Eötvös Loránd University of Hungary and 162 students belong to the Chandigarh University of India.Hence, initially, the primary dataset consists of 331 instances and 46 attributes which are related to the 4 major ICT parameters belong to the A, DA, Edu.Benf., and U. Out of 6, we have 4 subsets of the master dataset, 2 subsets belong to Indian University and 2 subsets belong to the Hungarian University.Later on, for the prediction for both countries, we aggregated subsets and framed 2 aggregate datasets, one for Indo-Hungarian usability and second for Indo-Hungarian educational benefits.In this paper, we focused on only two parameters such as Edu.Benf.and U. 
Hence, we divided the main dataset into the 6 subsets shown in Table 1. In online mode, only 6 missing values had to be handled, using the ReplaceMissingValue filter of the Weka 3.9.1 tool. Based on self-reduction, we eliminated the 9 features related to demographic characteristics (age, gender, locality, nationality, study level, faculty, university, affiliation status, and home country). We also removed the 16 attributes belonging to the DA parameter and the 6 attributes relating to the Attitude parameter. Hence, a total of 15 attributes were selected, belonging to the Edu.Benf. and Usability parameters only. The InfoGainAttributeEval filter was then used with the Ranker search algorithm in Weka 3.9.1 to calculate the rank of each considered attribute. The InfoGainAttributeEval filter evaluates the worth of an attribute by measuring its information gain with respect to the class: InfoGain(Class, Attribute) = H(Class) - H(Class | Attribute), where H represents the entropy. The ranking of 9 attributes was obtained by supplying the full training set to the combination of InfoGainAttributeEval and the Ranker search algorithm, and the calculated ranks of the influential attributes are shown in Table 2. Because our main focus is the prediction of awareness, we created a new class named awareness level by calculating the mean of each student's responses with respect to the Edu.Benf. and U parameters of ICT and MT in higher education in both countries, and we framed five awareness levels, named Very High, High, Moderate, Low, and Very Low, for the target datasets belonging to U and Edu.Benf. The authors confirm that all stakeholders provided their consent before the experiments were performed. The IBM partition node is a very useful utility that splits the data into training, testing, and validation sets for model building and performance testing. The six datasets were trained with two techniques, the holdout and validation methods, both separately and collectively. To predict the awareness level towards ICT and mobile technology, the individual and aggregate datasets were trained with four supervised machine learning classifiers under two splitting schemes: the test-train method and the test-train-validate method. The test-train method was applied with three training ratios, 50-50, 60-40, and 70-30, while in the test-train-validate method four ratios were tested: 40-40-20, 50-30-20, 60-20-20, and 50-20-30. The predictive models were trained and tested using the 4 supervised machine learning algorithms with validation in IBM Modeler; the accuracy discard policy of the auto classifier node was set to discard any classifier below 95% accuracy, and out of a total of 8 machine learning classifiers the auto classifier suggested the 4 best models, SVM, ANN, KNN, and DISC, for the individual and combined datasets. Therefore, to predict the awareness level towards usability we used a multilayer perceptron with a boosting technique, while to predict the awareness level towards educational benefits we used ANN and SVM on the individual datasets and ANN on the aggregated datasets.
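The sketch below illustrates the two preparation steps described above: deriving a five-level awareness class from the mean of a student's item responses, and making a 50-20-30 train/test/validation partition. The bin edges and column names are assumptions for illustration; the paper does not state the exact cut-points used.

```python
# Sketch (assumed cut-points and column names): awareness levels from mean
# Likert responses, followed by a 50-20-30 train/test/validation partition.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.integers(1, 6, size=(331, 6)),
                  columns=[f"U{i}" for i in range(1, 7)])   # 5-point Likert items

bins = [0, 1.5, 2.5, 3.5, 4.5, 5.0]                         # assumed thresholds
labels = ["Very Low", "Low", "Moderate", "High", "Very High"]
df["awareness"] = pd.cut(df.mean(axis=1), bins=bins, labels=labels)

train, rest = train_test_split(df, train_size=0.5, random_state=0)
test, validate = train_test_split(rest, train_size=0.4, random_state=0)
print(len(train), len(test), len(validate))                 # roughly 165, 66, 100
```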
In a multi-class problem, the IBM analysis node provides vital performance metrics to evaluate the results of the experiments. We applied the following measures. Coincidence matrices: combined matrices in which actual values are given by rows and predicted values by columns. Performance evaluation index (PEI): a measure of the average information content of the model for predicting records belonging to a given category; accurate predictions for rare categories earn a higher performance evaluation index than accurate predictions for common categories. Accuracy: the percentage of correctly predicted awareness-level counts out of all predictions. Error: the percentage of incorrectly predicted awareness-level counts out of all predictions. Right: the total number of correct predictions. Wrong: the total number of incorrect predictions. In this section we trained, tested, and validated the usability datasets separately and jointly. Compared with the other classifiers, we found only the ANN classifier suitable for these datasets. Further, to enhance the predictive models, a boosting technique was applied with the ANN, which significantly improved the accuracy on each dataset; with boosting, model accuracy increased by 4%. Afterward, the results were analyzed using combined coincidence matrices. Fig. 1 displays the classifiers' accuracy in predicting the usability of ICT and mobile technology individually and collectively in both countries. We found that, of the 5 classifiers, only ANN fit the prediction task, and at the training ratio 50-20-30 the highest accuracy of 98.2% was achieved for Indian usability. For predicting usability towards ICT and mobile technology in the Hungarian university, the ANN classifier provided the highest accuracy of 96.5% at the training ratio 60-20-20; the ANN's accuracy decreased at the training ratio 60-40. For predicting overall usability in the Indian and Hungarian universities, ANN reached an accuracy of 97.3% at the training ratio 50-20-30. It is concluded that accuracy increases when the datasets are tested with a validation approach using ANN. Fig. 2 shows that the counts of correct predictions are 57 for the awareness level High, 78 for Moderate, and 13 for Very High, with a minor misclassification of 3 instances at the Moderate level. Hence, the predicted awareness level towards the usability of ICT and mobile technology is High, Moderate, or Low for the Indian students. Fig. 2 also shows that, at the validation ratio 60-20-20, the awareness levels predicted most often are High and Moderate, with prediction counts of 8 for Very High and 13 for Low. It is therefore found that the awareness level of the Hungarian university's students will be high or moderate, and the ANN classifier provides significant evidence that the future awareness in the attitude of Hungarian students towards ICT and mobile technology in education will be high or moderate, as for the Indian students.
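The simpler measures listed above (right and wrong counts, accuracy, error, and a coincidence matrix) can be reproduced as in the sketch below on hypothetical labels; the PEI is specific to the IBM SPSS Modeler analysis node and is not reimplemented here.

```python
# Sketch (hypothetical labels): right/wrong counts, accuracy, error, and a
# coincidence matrix with actual levels in rows and predicted levels in columns.
import numpy as np
import pandas as pd
from sklearn.metrics import accuracy_score, confusion_matrix

levels = ["Very Low", "Low", "Moderate", "High", "Very High"]
rng = np.random.default_rng(1)
y_true = rng.choice(levels, size=162)
y_pred = y_true.copy()
y_pred[:5] = "Moderate"                       # inject a few misclassifications

right = int((y_true == y_pred).sum())
wrong = len(y_true) - right
accuracy = accuracy_score(y_true, y_pred)     # right / total
error = 1.0 - accuracy                        # wrong / total

coincidence = pd.DataFrame(confusion_matrix(y_true, y_pred, labels=levels),
                           index=levels, columns=levels)
print(right, wrong, round(accuracy, 3), round(error, 3))
print(coincidence)
```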
In Fig. 3, the combined testing approach indicates that the usability predictions for both countries will be High or Moderate, as the ANN classifier reached its highest accuracy of 97.3% at training ratio 50-20-30 with maximum awareness-level counts of 117 for High and 153 for Moderate. On the combined datasets, the ANN also produced an accurate count for the awareness level Very High, and no significant misclassification was found in the prediction of the usability awareness level for either country. Consequently, the ANN showed that dataset aggregation, together with the validation testing approach, increases the accuracy and prediction counts relative to the simple training-ratio approach. Table 3 shows the coincidence matrices for the results provided by ANN with boosting for each country's usability prediction. For Indian usability, the correct counts for Very High, High, Moderate, and Low were 13, 51, 78, and 15, respectively, at training ratio 50-20-30 with validation, and the total number of correct predictions was 159 out of 162. For Hungarian usability, the correct counts for Very High, High, Moderate, and Low were 8, 67, 75, and 13, respectively, at training ratio 60-20-20 with validation. Table 4 shows the overall correctly predicted count for usability as 163 out of 169. For both countries together, the maximum predicted counts for Very High, High, Moderate, and Low were 23, 117, 153, and 29, respectively, at training ratio 50-20-30 with validation. This is evidence that combining the datasets with the validation approach significantly improves the awareness-level predictions for India, Hungary, and both together. In this section, we trained, tested, and validated the educational benefits datasets separately and jointly. We found that the KNN, ANN with boosting, and SVM classifiers were more suitable for these datasets, and the outcomes were analyzed using joint coincidence matrices. The data in Fig. 4 show no significant difference between ANN and SVM accuracy in the prediction of educational benefits for Hungarian students; we considered SVM with 50% training data, 20% test data, and 30% validation data. In the case of Indian educational benefits prediction, DISC outperformed the KNN classifier in terms of accuracy, gaining 95.7% accuracy at training ratio 60-20-20, which is significant for the model. In the case of prediction for students of both countries, ANN with boosting provided an accuracy of 98.5% without validation sets, and after validating the datasets the accuracy dropped by 0.9%. It is concluded that accuracy decreases with the validation approach when testing the joint datasets with ANN. Table 5 presents the results of SVM and DISC on the three educational benefits datasets. The DISC classifier predicted maximum counts of 67, 49, and 32 instances for High, Moderate, and Very High, respectively, for Indian educational benefits. For Hungarian educational benefits, the correct counts for Very High, High, and Moderate were 42, 85, and 36, respectively. Table 6 shows the Indo-Hungarian prediction, where the overall correct count is 325 out of 331, which indicates that the model is suitable for deployment. For the Indo-Hungarian datasets, the maximum predicted counts for Very High, High, and Moderate were 78, 155, and 86, respectively. Therefore, the multilayer perceptron outperformed SVM and DISC in the prediction of educational benefits to the students.
Fig. 5 shows the significant misclassification at the levels High and Very High produced by DISC when predicting educational benefits for the Indian students; the correct prediction counts are 67 for High, 49 for Moderate, and 32 for Very High. Fig. 5 also shows that SVM achieved 100% classification for the awareness levels High and Moderate only, with a minor misclassification at the level Very High. Hence, it is concluded that the future awareness level regarding the educational benefits parameters will be Very High, Moderate, or High, and that an awareness level of Low is not expected in Hungary. The DISC classifier likewise indicated that future awareness of educational benefits will be High, Very High, or Moderate among Indian students. From Fig. 6, the awareness level towards the educational benefits of ICT and mobile technology in both countries will be High, Moderate, or Very High; in this combined testing approach, ANN with boosting predicted correct counts of 155 for High, 86 for Moderate, and 78 for Very High. This section explores the results of experiments conducted using the statistical T-test at the 0.05 level of significance in the Weka Experiment environment. Evaluating the performance of classification algorithms in terms of prediction accuracy versus CPU training time with the help of statistical analysis is meaningful and has been suggested previously. To present a significant real-time model, this experiment compared the user CPU time required to predict the students' awareness level. For this, we tested and validated the 6 datasets separately using the hold-out method and K-fold cross-validation with 10 iterations, together with a T-test at the 0.05 significance level, considering two parameters, CPU training time (CTT) and accuracy. The hold-out method used a training ratio of 66:34, and K-fold cross-validation used k = 10 to enhance the prediction accuracy. In Fig. 7, the primary y-axis denotes the prediction accuracy for the awareness level and the secondary y-axis shows CPU time in seconds; the x-axis compares the classifiers on the 6 datasets. For the Indian Edu.Benf. dataset, SVM outperformed ANN in both prediction accuracy and CTT. For the Hungarian Edu.Benf. dataset, ANN outperformed SVM in prediction accuracy, although the ANN's CTT of 0.14 seconds was higher than that of SVM. ANN also outperformed SVM in prediction accuracy on the Hungarian U and Indian U datasets. For the aggregate datasets, ANN outperformed SVM in prediction accuracy, with 88% on the U dataset and 93.3% on the Edu.Benf. dataset. It is also noted that ANN incurred a higher CTT than SVM in every case. Fig. 8 shows the results produced using the K-fold cross-validation testing method with the T-test at the 0.05 significance level. It was found that with k = 10 the prediction accuracies of SVM and ANN improved compared with the hold-out method shown previously in Fig. 7.
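The comparison described above, hold-out (66:34) versus 10-fold cross-validation with both accuracy and CPU training time tracked, can be sketched as follows with scikit-learn on a synthetic stand-in dataset; the data and classifier settings are placeholders rather than the study's Weka configuration.

```python
# Sketch: hold-out (66:34) vs 10-fold CV accuracy and CPU training time for
# SVM and an MLP on synthetic data standing in for one of the six datasets.
import time
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=331, n_features=15, n_informative=10,
                           n_classes=4, n_clusters_per_class=1, random_state=0)

for name, clf in [("SVM", SVC()),
                  ("ANN", MLPClassifier(max_iter=2000, random_state=0))]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.66,
                                              random_state=0)
    t0 = time.process_time()
    clf.fit(X_tr, y_tr)                                 # hold-out training
    cpu_time = time.process_time() - t0
    holdout_acc = clf.score(X_te, y_te)
    cv_acc = cross_val_score(clf, X, y, cv=10).mean()   # 10-fold cross-validation
    print(f"{name}: hold-out={holdout_acc:.3f}, 10-fold={cv_acc:.3f}, "
          f"CPU training time={cpu_time:.3f}s")
```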
With the K-fold method, SVM outperformed ANN on the Indian Edu.Benf. dataset, whereas ANN outperformed SVM in prediction accuracy on the Hungarian Edu.Benf. dataset. It was also found that for Hungarian U the SVM outperformed the ANN in prediction accuracy, while for the Indian U dataset ANN attained the highest accuracy compared with SVM. For overall usability, ANN outperformed SVM with 92% accuracy against 88.2% for SVM, and ANN also outperformed SVM on the overall Edu.Benf. dataset, increasing the accuracy by 1.5%. In this experiment it was also found that SVM's CTT was lower than ANN's CTT on every dataset. Further, the STAC web platform was used to compare the performance of the ANN and SVM classifiers on each accuracy dataset under the hold-out and K-fold methods. The normality of the accuracy datasets was tested with the Shapiro-Wilk test at a significance level of 0.05. Table 9 shows the results of the Shapiro-Wilk test at the 0.05 significance level for the normality of the datasets. For this, the authors framed the first null hypothesis, "nH0: The samples follow a normal distribution". No significant p-value was found with the Shapiro-Wilk test at the 0.05 level of significance for either the hold-out or the K-fold method; therefore, the accuracy datasets are normally distributed. Subsequently, to test the homoscedasticity of the accuracy datasets, a second hypothesis was framed as "hH0: All the input populations come from populations with equal variances". Table 10 displays the results of the Levene test at the 0.05 significance level for the homoscedasticity of the accuracy datasets. The authors found that all the input populations come from populations with equal variances. Hence, a parametric t-test is appropriate for comparing the performance of the machine learning algorithms on the accuracy datasets. For this, the authors assumed the null hypothesis "aH0: No significant difference between the prediction accuracy of SVM and ANN". In Table 11, the p-value for the null hypothesis aH0 was not significant at the 0.05 significance level using the paired t-test. Hence, the null hypothesis aH0 is accepted, which indicates that the accuracy datasets of SVM and ANN have identical mean values; it is concluded that there is no meaningful difference between the prediction accuracies of ANN and SVM. From Table 12, the p-value for the null hypothesis gH0 was likewise not significant at the 0.05 significance level using the ANOVA test, so the null hypothesis gH0 is accepted and the means of the SVM and ANN prediction accuracies are the same. Hence, the ANOVA test also showed no significant difference between the accuracies given by the ANN and SVM classifiers.
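The testing sequence above (normality, homoscedasticity, then a paired t-test and ANOVA on the per-fold accuracies of the two classifiers) corresponds to the following scipy sketch; the accuracy values are illustrative, not the ones reported in Tables 9-12.

```python
# Sketch (illustrative accuracies): Shapiro-Wilk, Levene, paired t-test, ANOVA.
from scipy import stats

ann_acc = [0.95, 0.96, 0.97, 0.94, 0.96, 0.95, 0.97, 0.96, 0.95, 0.96]
svm_acc = [0.94, 0.96, 0.96, 0.95, 0.95, 0.94, 0.96, 0.95, 0.96, 0.95]

print(stats.shapiro(ann_acc))             # nH0: the samples are normally distributed
print(stats.shapiro(svm_acc))
print(stats.levene(ann_acc, svm_acc))     # hH0: equal variances
print(stats.ttest_rel(ann_acc, svm_acc))  # aH0: no difference in mean accuracy
print(stats.f_oneway(ann_acc, svm_acc))   # gH0: tested here with one-way ANOVA
```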
In this section, we evaluated the performance of the presented predictive models using the various metrics shown in the combined Table 7, which displays the joint evaluation metrics of the ICT awareness-level predictive models with individual and aggregate features of the survey. The evaluation metrics show that the models achieved more than 95% accuracy. For predicting Indian usability, the ANN classifier with boosting achieved the maximum accuracy of 98.2% with 1.8% error, with 159 correct and 3 incorrect predictions. The accuracies of the Hungarian usability and overall usability predictions were 96.5% and 97.3%, respectively. For predicting educational benefits to Indian students, the DISC classifier gained 95.7% accuracy with a correct count of 155. Further, SVM obtained the highest accuracy of 98.2% for the prediction of educational benefits to Hungarian students, and ANN scored 98.5% accuracy for the same prediction on the overall dataset. It is concluded that SVM and ANN outperformed KNN and DISC in the prediction of awareness levels towards ICT and mobile technology in India and Hungary. Table 8 displays the PEI values for each class achieved by the applied classifiers. On the one hand, for the rare categories such as Very Low and Low, the index values obtained by SVM and ANN were highest in each dataset; on the other hand, for the common categories such as High, Very High, and Moderate, the index values were lower than for the rare categories. These values are nonetheless significant, such as 1.5 and 1.3 for the classes Moderate and Very High in the Hungarian educational benefits dataset, and for both countries together we also found 1.4 and 1.3 for Very High and Moderate, respectively. Hence, for both countries, the future educational benefits awareness levels are expected to be Very High or Moderate. Further, the usability PEI values are also significant, such as 1.1 for High and 2.5 for Very High. For the Very Low class, no PEI values were found for Hungarian usability, Indo-Hungarian usability, or Indian educational benefits, because no such values occur in those datasets. The idea of testing various subsets and aggregate datasets with several types of classification algorithms at different training ratios provided better accuracy in the prediction of students' ICT & MT awareness levels in both countries. In the prediction of Indian U, boosting of the ANN significantly improved the accuracy on each dataset. Hence, we present three predictive models with maximum accuracies: Indian U with 98.2%, Hungarian U with 96.5%, and overall U with 97.3%. It is also evident that accuracy increases when the usability datasets are tested with the validation approach using ANN. In the prediction of Edu.Benf., we found no significant difference between ANN and SVM accuracy for Hungarian students, while DISC beat the KNN classifier in terms of accuracy. It was further concluded that machine learning with validation and a boosting technique improved prediction accuracy. It is also revealed that the educational benefits awareness level for both countries will be Very High or Moderate, that no Very Low predictions were found for Indian usability or Hungarian educational benefits, and that the awareness level is predicted as High and Moderate for the usability parameter in both countries. The statistical T-test with the hold-out and K-fold methods did not find a significant difference between SVM's and ANN's prediction accuracy, but it did find a significant difference in the CTT incurred in the prediction on each dataset. The K-fold method also significantly enhanced the accuracy of ANN and SVM compared with the hold-out method, and on the STAC web platform the T-test and ANOVA tests likewise showed no significant difference between the accuracies of the ANN and SVM classifiers on each dataset. Further, we recommend that the presented predictive models be implemented for real-time awareness-level prediction for university students, and future work is recommended on the creation of a real-time awareness prediction system using feature extraction with deep learning. Chaman Verma: Conceived and designed the experiments; Performed the experiments; Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data; Wrote the paper.
Veronika Stoffova: Analyzed and interpreted the data. Zoltan Illes: Conceived and designed the experiments. This work was supported by the European Social Fund under the project "Talent Management in Autonomous Vehicle Control Technologies". The authors declare no conflict of interest. No additional information is available for this paper.
An experimental study was conducted to predict students' awareness of Information and Communication Technology (ICT) and Mobile Technology (MT) among Indian and Hungarian university students. A primary dataset was gathered from two popular universities located in India and Hungary in the academic year 2017–2018. This paper focuses on the prediction of two major parameters from the dataset, usability and educational benefits, using four machine learning classifiers: the multilayer perceptron (ANN), support vector machine (SVM), K-nearest neighbor (KNN), and discriminant analysis (DISC). The multi-class problem was solved with train, test, and validation datasets using these classifiers. On the one hand, feature aggregation with the train-test-validation technique improved the ANN's prediction accuracy for educational benefits in both countries; on the other hand, the ANN's accuracy decreased significantly in the prediction of usability. Further, SVM and ANN outperformed KNN and DISC in the prediction of awareness levels towards ICT and MT in India and Hungary. This paper also reveals that the future awareness level for educational benefits will be Very High or Moderate in both countries, and that the awareness level for the usability parameter is predicted as High and Moderate in both countries. Further, ANN and SVM accuracy and prediction time were compared with a T-test at the 0.05 significance level, which distinguished the CPU training time taken by ANN and SVM under the K-fold and hold-out methods, and K-fold significantly enhanced the prediction accuracy of SVM and ANN. The authors also used the STAC web platform to compare the accuracy datasets using T-tests and ANOVA tests at the 0.05 significance level and found no significant difference in prediction accuracy between the ANN and SVM classifiers on any dataset. The authors recommend that the presented predictive models be deployed as a real-time module of the institutions' websites for real-time prediction of ICT & MT awareness levels.
226
Is trade liberalisation a vector for the spread of sugar-sweetened beverages? A cross-national longitudinal analysis of 44 low- and middle-income countries
Since 2001, there has been substantial increase in sales of sugar-sweetened beverages drinks in low- and middle-income countries.Sales grew at 3.9% per annum during this period, rising from 43.4 L per capita in the year 2001 to 65.3 L per capita in 2014, an overall increase of 50.1%.In some regions, such as Latin America, annual per capita sales now exceed 100 L per capita per year.The link between consumption of SSBs and disease is now established, with increased risks of obesity, tooth decay, and diet-related non-communicable diseases.Based on two systematic reviews showing a link between a higher intake of free sugars and higher rates of overweight and dental caries, in 2015 the World Health Organisation recommended reducing intake of free sugars to less than 10% of total energy across the life course, and suggested that intake might be reduced further, to below 5% of total energy intake.It highlighted, in particular, the contribution of SSBs to intake of free sugars, and thus of excessive energy intake.However, there is intense debate about how to achieve this, with the beverage industry favouring individual approaches, such as education and provision of information, while the public health community supports structural approaches, such as those directed at price, availability and marketing.This debate should be informed by evidence.Why is the growth of sugar-sweetened beverages so much greater in some places than in others?,Both supply and demand are likely to play a role.For example, the geographical spread of manufacturing or bottling/canning plants which, when coupled with improvements in transport infrastructure and urbanisation, have greatly increased the availability and affordability of consumer products, including SSBs, in many low and middle-income countries.Economic development, with rising disposable incomes, coupled with marketing campaigns that encourage aspiration to “western” lifestyles, increase demand for products bearing aspirations to global brand names.Recently, scholars have voiced concerns that trade and investment agreements, and the resulting market integration, facilitate the spread of sugar-sweetened beverages.A systematic review by Friel and colleagues found robust evidence that liberalisation of trade and foreign investment had been linked to changes in the food environments and diets, and specifically increased availability, accessibility, affordability, desirability, and consumption of food and sugary drinks linked to obesity and diet-related NCDs.Yet, much of the existing scholarship on trade liberalisation and sugar-sweetened beverages tends to be qualitative and draw on case-study methodologies.For example, two case studies researching trade liberalisation policies in Central America, including the Central American-USA Free Trade Agreement, showed that lower tariffs and less restrictive non-tariff barriers had increased imports and overall availability of foodstuffs implicated in the nutrition transition.Similar studies have examined the North American Free Trade Agreement and trade agreements involving Pacific islands and Ghana, among others.These case studies yield important insights into the potential mechanisms involved, but may not be generalizable to different national contexts.There are relatively few quantitative studies of health impacts of trade integration.A systematic literature review of quantitative studies by Burns et al. 
showed an overall beneficial association between international trade or FDI and population health, but this review only addressed non-nutritional health outcomes.One cross-country longitudinal study examined the link between trade and investment liberalisation and sales of SSBs in LMICs.It found that higher levels of foreign direct investment inflows were associated with higher sales of unhealthy commodities, which included tobacco, alcohol, and processed foods and drinks.The study also found that LMICs entering into free trade agreements with the United States had 63.4% higher sales of soft drinks compared to those with similar levels of GDP and urbanisation that did not."Another recent study used a natural experimental design comparing Vietnam and the Philippines, showing that Vietnam's removal of restrictions on FDI following its accession to the World Trade Organisation was associated with an increase in sales of SSBs not seen in the Philippines, which had joined the WTO some time previously.Most of this growth in sales of SSBs benefited multinational beverage companies, which gained access to the market after lifting of investment barriers, to the disadvantage of local companies.This latter point is important; opening of markets to multinational tobacco companies with their highly effective marketing techniques and global brands, for example through privatisation of former monopolies, is typically associated with increased cigarette sales.These earlier studies made important contributions but had a number of limitations.One was the inability to differentiate imports from domestic consumption, and thus assess the extent to which greater consumption was driven by imports.Another was the inability to study factors that might mitigate the effects of opening of markets, such as the maintenance of tariff barriers designed to protect domestic manufacture.Tariffs are extremely controversial, but where the products traded are associated with adverse health effects, they could play a role by using price signals to counter the power of marketing by global producers.Unsurprisingly, those favouring free trade, including in substances hazardous to health, have sought to reduce or eliminate tariffs.However, given that there is at least a theoretical argument that they may act to protect health, in this case by reducing the growth in consumption of SSBs in LMICs, it is of interest to ask whether this is borne out by the evidence.Here, to our knowledge for the first time, we combine data on SSB imports, tariffs, and SSBs sales to test these hypotheses across 44 LMICs.We use these data to ask whether higher tariffs attenuated imports of SSBs and whether increased SSBs imports per capita are associated with greater sales of SSBs.We collected data on SSBs sales in retail and food service outlets over the period 2001 to 2014 from 44 LMICs using data from Euromonitor International, 2015 edition.Euromonitor provides market data based on private industry records for a total of 80 countries, of which 44 are LMICs during the studied period.It bases its estimates on multiple sources including information from official statistics, trade associations, the trade press, trade interviews, and its own estimates.Euromonitor is a harmonised source of data across countries.We note, however, that though it is very widely used, it is a proprietary product and, to our knowledge, has not been subject to independent evaluation of data quality."Sales of SSBs are measured in litres per capita and include carbonates, 
concentrates, juice, ready-to-drink coffees and teas, sports and energy drinks, and Asian specialty drinks, which include Bandung, bird's nest, tamarind juice, ginger, lemongrass, jelly drinks, and drinks containing a limited amount of yogurt, among others.One limitation of these data as a measure of consumption is that they do not account for wastage.However, they overcome bias in self-reported consumption data, which tend to underestimate quantity consumed, and also have the benefit of comparability across countries, which is especially important in LMICs where epidemiological surveillance systems are often weak.Sector-specific data on SSB imports for the period 2001–2014 were acquired from TradeMap, derived from the United Nations Commission on Trade and Development statistics.The data were compiled at the 4-digit level of the Harmonised Commodity Description and Definitions System, which is an internationally standardised system to classify traded products.Imports of SSBs include products in two tariff lines: line 2009, which includes fruit juices and vegetable juices, unfermented and not containing added spirit, whether or not containing added sugar or other sweetening matter, and line 2202, which includes waters, including mineral waters and aerated waters, containing added sugar or other sweetening matter or flavoured, and other non-alcoholic beverages.All import data were in US$, adjusted for exchange rates and inflation.We used the inflation data provided by Euromonitor, which are based on the Consumer Price Index, to convert import data into real terms using 2001 as the base year.Import data were converted in imports per capita by dividing total imports by total population.Data on total population was obtained from Euromonitor, which relies on national statistics and UN data.We evaluated trade liberalisation using data on tariffs of SSBs.Data on applied tariffs to the Most Favoured Nations, which are non-discriminatory tariffs charged on imports of WTO member countries, were compiled from the World Trade Organisation tariffs database.The data were compiled at the 4-digit level of the HS for the tariff lines 2202 and 2009.We computed the average of both tariff lines for the analyses.A higher tariff value indicates that the duties or taxes applied to imports are higher.Data were unavailable for seven out of the 44 countries, thus these were dropped from analyses including tariffs on SSBs imports.Investment liberalisation was measured using data on foreign direct investment inflows as percentage of GDP.These data were obtained from Euromonitor, which sources its FDI data from the United Nations Conference on Trade And Development.To adjust for potential confounding in all models, we controlled for economic development – defined using GDP per capita – and level of urbanisation – defined using urban population as a percentage of total population – for the reasons noted in the introduction.Data on GDP per capita, adjusted for purchasing power parity for comparability between countries, were taken from Euromonitor.Data on urban population as a percentage of total population were from the WDI Word Bank database.Table A1 in the Web Appendix presents average country-level descriptive statistics for the period 2001–2014, including GDP per capita in PPP, urban population as percentage of total population, Gini index, FDI inflows as percentage of GDP, SSBs sales in litres, SSBs imports in USD, SSBs tariffs, and diabetes prevalence rate.In addition, Figure A1 presents the yearly average 
imports of SSBs in USD for the period 2001–2014 and shows that there has been a substantial increase in imports of sugary drinks overtime in these LMICs.Figure A2 presents the average SSBs tariffs over the same period of time, where a lower tariff value indicates that taxes or duties applied to imports are lower.This figure shows that between 2001 and 2014 SSBs tariff levels have on average been reduced.Thus, the data presented in this latter figure is consistent with international efforts over this period of time to liberalise markets in developing countries through reducing or eliminating tariff and non-tariff barriers.Table 1 shows the impact of tariffs of SSBs on imports of SSBs.We observed that a one percent increase in tariffs was associated with a decrease of SSB imports by 5%.After adjusting for GDP per capita and percentage of urban population, this coefficient remained inversely and significantly associated with SSB imports.Table 2 then looks at how these changes in imports relate to sales of SSBs, derived from our cross-national statistical model.We observed that for every ten percent increase in SSB imports per capita, sales of SSBs increased by 0.96 L per person.After adjusting for FDI, GDP, and urbanisation, the coefficient for SSB imports was attenuated to 0.36 L, but remained significantly related to sales of SSBs.We observed that for every 1% increase in FDI as a percent of GDP, sales of SSBs increased by 0.34 L.Consistent with previous findings, we found an association of greater GDP with higher sales of SSBs.Each US$100 increase in GDP per capita was associated with sales of an additional 0.22 L of SSBs per capita.We also observed that, in this case, urbanisation had no effect on SSB sales.Although superficially surprising, this is consistent with earlier observations that markets for SSBs in urban and rural areas of LMICs are now becoming saturated.Table 3 looks at the association between tariffs of SSBs and sales of SSBs.Although we found that tariffs were inversely associated with imports, we did not observe a significant association between them, and the size of the coefficient was substantially reduced after adjusting for potentially confounding factors.The average annual rise in SSB imports per capita was 17.9%, which, based on the results of the econometric model 2 of Table 2, is estimated to be equivalent to an annual increase of 0.65 L per person in our sample of LMICs.Cumulatively, over the 14 years period between 2001 and 2014, this translates into an increase in sales of 9.1 L per person.At the same time, sales of SSBs rose, on average, by 21.9 L per person.Thus, imports contributed about 40% of the observed rise in SSB sales.We conducted a series of robustness checks, testing our sample for outliers and model specification.Although fixed effects estimators are preferred to correct for country-specific conditions that could influence the spread of SSBs, we applied a test of overidentifying restrictions for panel data based on the Sargan-Hansen statistic, which statistically compares a fixed to a random effects model.These results confirm the need for more conservative fixed effects estimates.Next, because four countries, Chile, Latvia, Lithuania, and Uruguay, were no longer classified as LMICs in 2014, we re-ran our models excluding observations for these four countries in 2014.We did not find substantial differences in the results.In addition, we included a linear time trend in our models and found that our analyses were unchanged.Lastly, we performed 
additional analyses using an alternative measure of trade integration, the Index of Globalization created by Dreher et al. and published by ETH Zurich, which differentiates indicators of economic, social, and political globalization. Using these indices in our model rather than SSB imports, we found that every one percent increase in economic globalisation was positively and significantly associated with sales of SSBs, whereas neither social globalisation nor political globalisation had a significant association with SSB sales. These analyses are presented in the Web Appendix. Our results yield two main findings. First, we observed that reduction in tariffs was associated with greater imports of SSBs to LMICs. Second, we observed a strong association between SSB imports and overall sales of SSBs. Our estimates indicate that about 40% of the observed rise in SSB sales over the past 14 years in LMICs could be accounted for by additional imports. Obviously, these findings, on their own, do not indicate causation. Imports and sales may be linked bi-directionally to each other through supply and demand, with each influencing the other in ways that cannot be discerned precisely with these data. However, while it is theoretically possible that greater imports might strengthen domestic political pressure for tariff reduction, in this case the association is very much more likely to flow from tariff reduction, typically as part of wider discussions on trade, to imports and sales of SSBs. Given that tariffs affect SSB imports, which, in turn, affect SSB sales, we believe that the non-significant direct link between tariffs and SSB sales might be due to a lack of statistical power to capture the effect of tariffs. The direct effect of tariffs on sales may be too small to be captured in our model, as it necessarily operates through import levels, which are also influenced by factors other than tariffs, including, for example, demand, local productivity, quality, and marketing. The simultaneous effect of international trade and foreign investment on SSB sales points to multiple pathways within market liberalisation that lead to increased sales of SSBs. Previous research has emphasised the importance of FDI flowing from food and beverage multinational companies based in high-income countries to markets of LMICs in promoting local production and consumption of SSBs. The data in Figure A1 show that at the same time there was also a substantial increase in imports of SSBs into LMICs. Imports of SSBs might be especially relevant in affecting local markets through regional trade, in small insular countries, or in countries with growing SSB markets that still lack a well-developed infrastructure for the production of SSBs. For instance, in Southeast Asia, Malaysia and Vietnam had a high level of SSB imports from Thailand, which was one of the main exporters of SSBs to these countries and might be acting as a regional hub. In Latin America, Bolivia mostly imported SSBs from the neighbouring countries Peru and Argentina, while the latter had high imports from the United States, Austria, Switzerland, and Brazil. Some insular countries, such as Fiji, Samoa, Nauru, and the Cook Islands, impose higher tariffs on SSBs and other sugary foods as an economic tool to regulate food environments through food affordability and purchase incentives, which suggests that imports of SSBs might be a pathway significantly affecting the availability of sugary drinks in these
markets.Nonetheless, the simultaneous impact of investment and trade liberalisation in food environments of LMICs deserves further investigation.There are several directions for future research.Trade policies and foreign direct investment could impact on the availability of SSB in a country by increasing the supply of foodstuffs required for manufacturing of SSBs, increasing domestic manufacturing facilities.Thus, future work should explore how trade liberalisation impacts on ingredients used for the manufacturing of SSBs, such as high fructose corn syrup, and in turn, on the supply of SSBs produced within countries.These types of analyses could also be extended to explore their impact on population health outcomes, such as diabetes, obesity, and oral health.As with all cross-national statistical studies, our analysis has several limitations.First, we were unable to obtain sector-specific FDI data.This would have added greater analytic specificity to our models and enabled us to build upon previous work showing a positive relationship between FDI and sales of SSB.We also included total FDI in our models, but this could have biased our estimates toward the null if there is variation in the extent to which total FDI is associated with FDI by SSB producers.Second, again due to lack of data availability, most of the countries in the sample are middle-income countries, with only four southern African countries included in the sample.Nonetheless, we would expect our findings to be generalizable to these contexts, as a recent case-study of southern African countries highlighted how trade and investment liberalisation was linked to increasing availability of SSBs.Third, another important data limitation in our analyses is that Euromonitor data excludes bottled water whereas the import data does contain bottled water, which might bias our results toward the null.Fourth, while we used models that controlled for country fixed-effects, there may be unobserved time-varying factors that influenced the sales of SSBs in these countries over this period.The introduction of a soda tax in Mexico is an example of what can happen.Fifth, this study could not examine the effect of non-tariff barriers.Some, such as restrictions on sales outlets or marketing, are unlikely in LMICs but one possibility is the presence of a strong domestic producer that dominates the market.However, as non-tariff barriers also act in ways that resemble the effects of tariffs, because the purpose of both is to impose restrictions on imports, we would expect them to have a similar effect on trade as tariffs.Therefore, as we are not examining the effect of non-tariff barriers, our results could be an overestimation of the effect of tariffs.Future research will benefit from exploring the impact of both tariffs and non-tariff barriers on the food environments.Sixth, changes in tariffs, and by extension availability and prices, could differentially affect socio-economic groups.With the aggregate data available to us we cannot answer this question.Our results could be both overestimating and underestimating the observed effects in some subpopulations.However, a new systematic review finds that the impact of price rises across socio-economic groups is essentially constant, and while tax rises are regressive, this is only to a very small extent.There is now good evidence supporting a link between taxes and sales.A recent study on the imposition of price increases on SSBs via taxes in Berkeley shows that the price elasticity was much larger 
than expected.This suggests that it may be possible to reap the benefits of trade liberalisation while countering the problems through targeted taxes, although clearly it will be necessary to ensure that these are non-discriminatory to comply with trade deals.Our results have important policy implications for a global environment characterised, at present, by an ever-greater consumption of SSBs, especially in LMICs, as well as an increasing number of regional and global agreements seeking further reductions in trade barriers.Given growing evidence of public health concerns associated with trade and investment liberalisation, concerns borne out by the results in this study, advocacy groups should demand that public health should be prioritised in drafting of trade agreements.These results indicate that tariffs can have a role in counteracting the entrance of harmful products in emerging markets of LMICs and therefore could be an effective policy tool to regulate food environments by discouraging the availability and affordability of unhealthy products.Some countries such as Fiji, Samoa, Nauru, French Polynesia, and Cook Islands, impose taxes on imports of SSBs, highlighting how such policies are feasible.In light of the rapid growth of consumption of SSBs and clear evidence linking consumption to worse health, there is an urgent need for more countries to align trade and investment priorities with public health.
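To make the modelling approach described in the methods and results above more concrete, the sketch below estimates a two-way fixed-effects (country and year) specification in the least-squares-dummy-variable form with statsmodels and clustered standard errors, and then restates the back-of-the-envelope contribution calculation using the coefficients quoted in the text. The data frame, variable names, and simulated values are hypothetical stand-ins, not the Euromonitor/TradeMap data used in the study.

```python
# Sketch (hypothetical panel): two-way fixed effects for SSB sales on imports.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
countries = [f"c{i}" for i in range(10)]
years = list(range(2001, 2015))
df = pd.DataFrame([(c, y) for c in countries for y in years],
                  columns=["country", "year"])
df["log_imports_pc"] = rng.normal(size=len(df))
df["fdi_pct_gdp"] = rng.normal(3, 1, size=len(df))
df["gdp_pc"] = rng.normal(10, 2, size=len(df))
df["urban_pct"] = rng.uniform(30, 90, size=len(df))
# Toy outcome: litres per capita responding to log imports plus noise.
df["sales_l_pc"] = 40 + 3.6 * df["log_imports_pc"] + rng.normal(size=len(df))

fe_model = smf.ols(
    "sales_l_pc ~ log_imports_pc + fdi_pct_gdp + gdp_pc + urban_pct"
    " + C(country) + C(year)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["country"]})
print(fe_model.params["log_imports_pc"])

# Back-of-the-envelope, using the coefficients quoted in the text: a 17.9%
# average annual rise in imports at 0.36 L per 10% rise, over 14 years,
# gives about 9 L per person -- close to the 9.1 L figure reported above.
print(0.36 * (17.9 / 10) * 14)
```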
Does trade and investment liberalisation increase the growth in sales of sugar-sweetened beverages (SSBs)? Here, for the first time to our knowledge, we test this hypothesis using a unique data source on SSB-specific trade flows. We test whether lower tariffs effectively increase imports of SSBs, and whether a higher level of imports increase sales of SSBs. Cross-national fixed effects models were used to evaluate the association between SSBs sales and trade liberalisation. SSBs per capita sales data were taken from EuroMonitor, covering 44 low- and middle-income countries from 2001 to 2014, SSBs import data were from TradeMap, Foreign Direct Investment data were from EuroMonitor, and data on applied tariffs on SSB from the World Trade Organisation tariffs database, all 2015 editions. The results show that higher tariffs on SSBs significantly decreased per capita SSB imports. Each one percent increase in tariffs was associated with a 2.9% (95% CI: 0.9%–5%) decrease in imports of SSBs. In turn, increased imports of SSBs were significantly associated with greater sales of SSBs per capita, with each 10 percent increase in imports (in US$) associated with a rise in sales of 0.36 L per person (95% CI: 0.08–0.68). Between 2001 and 2014, this amounted to 9.1 L greater sales per capita, about 40% of the overall rise seen in this period in LMICs. We observed that tariffs were inversely but not significantly associated with sales of SSBs. In conclusion, lower tariffs substantially increased imports of SSBs in LMICs, which translated into greater sales. These findings suggest that trade policies which lower tariff barriers to SSB imports can be expected to lead to increased imports and then increased sales of SSBs in LMICs, with adverse consequences for obesity and the diseases that result from it.
227
Probing the mechanism of cardiovascular drugs using a covalent levosimendan analog
To investigate the mechanism of action of levosimendan, we designed i9 based on the structures of levosimendan and its analog dfbp-o.The three molecules contain a biphenyl group followed by a hetero-substituted moiety.The biphenyl group of dfbp-o was chosen because it was shown to insert in the hydrophobic cleft of cNTnC, conserve the Ca2 + sensitization effect , and be advantageous for fluorine NMR .The hetero-substituted moiety of i9 was designed based on the proposed reactivity of the nitrile group of levosimendan.A reactive iodoacetamide group was incorporated such that the number of bonds separating the biphenyl moiety and the sulfur atom of C84 was the same as with levosimendan.The covalent analog i9 has a planar center on the amide N, which closely resembles levosimendan.To synthesize the covalent levosimendan analog i9, we followed the route outlined in Supporting Figure 1.Supporting Figure 1b shows the 1H NMR spectra of 1, 2, and i9 in DMSO-d6.With the addition of the acetyl chloride moiety the signal corresponding to N8 is shifted downfield as the result of deshielding by the newly attached carbonyl C9, also a new aliphatic signal at 4.35 ppm is observed corresponding to the new methylene protons at C10.The halogen exchange from Cl to I shifted the H10 singlet upfield due to the less electronegative character of I compared to Cl.Both reactions had only small effects on the aromatic protons of the products, which were assigned with aid from previous assignment of dfbp-o .Three proteins were independently reacted with i9: cTnC for physiological characterization, cNTnC for assessment of reaction specificity, and 13C,15N-cChimera for structure determination by NMR.cChimera is a hybrid protein which contains cNTnC and switch-cTnI that represents the cNTnC·switch-cTnI complex prevalent during the systolic state of the heart .The labeling reactions were performed in urea or NMR buffer, and verified by 19F NMR and mass spectrometry.Protein labeling is illustrated for cChimera in Fig. 2a, where the 19F NMR spectrum of i9 shows two sharp signals at − 36.2 and − 39.3 ppm corresponding to the fluorine atoms F4′ and F2′.After the reaction with cChimera the two fluorine signals shift and broaden as a result of a change in environment and molecular size, respectively.To assess the completion of the reaction, 0.2 mM-bromo-1,1,1-trifluoroacetone was added to cChimera-i9 to react with any remaining free sulfhydryl group present.The presence of the unreacted trifluoroacetone only, as a sharp singlet at − 8 ppm, indicates that the reaction with i9 was complete.Spectrum 4 shows the chemical shift of trifluoroacetone bound to cChimera in a different sample for reference.Fig. 
2b displays a summary of the 19F spectra of all the reacted proteins.Full labeling of cNTnC with excess i9 in urea results in covalent binding at C35 and C84, but only the signals corresponding to C84-i9 sharpen in the presence of cTnI.The 19F spectrum of cNTnC-i9 in the absence of switch-cTnI shows two sharper signals corresponding to F2′ and F4′ of C35-i9, and broader signals for C84-i9, which indicates multiple conformations for C84-i9.Addition of a 3:1 excess of switch-cTnI, which binds to cNTnC, causes the fluorine signals of C84-i9 to sharpen.This indicates that binding of switch-cTnI to cNTnC stabilizes C84-i9 in one conformation.In a similar way, the 19F spectrum of cTnC-i9 shows the same change in linewidth for F2′ and F4′ of C84-i9 upon addition of switch-cTnI.For cChimera-i9, in which switch-cTnI is bound to cNTnC, the C84-i9 signals are very similar to those of cTnC-i9 and cNTnC-i9 in the presence of switch-cTnI.Comparable spectra indicate similar electronic environments for i9 in all cases.Thus, the i9 molecule is expected to adopt the same conformation in all cNTnC·switch-cTnI systems.This validates the use of cChimera for structural characterization of their interaction.The reaction of i9 with cNTnC in aqueous NMR buffer using a small drug-to-protein excess resulted in preferential labeling on C84, as judged by the appearance of intense C84-i9 peaks and minuscule C35-i9 peaks.For a nonspecific reaction, equal labeling of C35 and C84 would be expected.Selective labeling of C84 with i9 is most likely the result of a two-step process, as is the case for covalent inhibitors.Initially, non-covalent binding to the target protein positions the reactive groups close in space.Then the complex undergoes bond formation.We investigated the effect i9 had on contraction in demembranated ventricular trabeculae containing cTnC-i9.Following the exchange of native cTnC for cTnC-i9 in ventricular trabeculae, an increase in the Ca2 +-sensitivity of force development was observed.The data were fitted with the Hill equation, and the pCa50 increased from 6.10 ± 0.01 to 6.22 ± 0.01.The maximum Ca2 +-activated isometric force was 25.3 ± 1.7 mN mm− 2 before, and 24.6 ± 2.7 mN mm− 2 after, exchange of cTnC.Thus, cTnC-i9 did not affect maximum Ca2 +-activated force, which is consistent with the observations made for the Ca2 +-sensitizers dfbp-o or levosimendan.We also observed a decrease in the Hill coefficient from 4.62 ± 0.53 to 2.90 ± 0.34 following exchange.A similar decrease in cooperativity has been observed for other Ca2 + sensitizers as well as several Ca2 +-sensitizing mutations.The amount of cTnC-i9 exchanged within the muscle was estimated to be 20% by LC-MS.When a higher fraction of native cTnC in trabeculae was replaced by cTnC-i9, active force developed in relaxing conditions.This suggests that i9 stabilizes a conformation of cTnC similar to that stabilized by Ca2 +.Because i9 is covalently attached to C84, the observed increase in Ca2 + sensitivity can be unequivocally attributed to i9 binding to cNTnC.This is in contrast to traditional experiments in which muscle is soaked in solutions containing a drug under study, leaving the in situ target uncertain.Since cTnC shares many structural features with other contractile EF-hand regulatory proteins, such as the myosin regulatory and essential light chains, it is always a concern whether changes in contractility are due exclusively to binding to cTnC.To investigate if i9 stabilizes the open conformation of cNTnC in the absence of switch-cTnI, we
compared the 15N and 1H amide chemical shifts of cNTnC-i9 with those of cNTnC in the absence and presence of switch-cTnI, which are characteristic of the closed and open states , respectively.In most cases, resonances of cNTnC-i9 lie between those of cNTnC and cNTnC·switch-cTnI, suggesting cNTnC-i9 adopts a partly open conformation.Interestingly V72 is shifted further than the resonance indicative of the open conformation.Residues such as V72, D73, and E32 display more than one signal for their amide NH resonances indicative of multiple conformations.These results support our earlier supposition that i9 is in more than one conformation in the absence of switch-cTnI.To quantify the predicted conformation of cNTnC-i9, we used ORBplus .ORBplus uses amide chemical shifts to predict the AB and CD interhelical angles, which are good indicators of the overall conformation of cNTnC.Residues in or near Ca2 + binding site I are good indicators of the AB interhelical angle, and residues in or near site II are good indicators of the CD interhelical angle .Due to the presence of multiple peaks and exchange broadening for some cNTnC-i9 residues, it was not possible to obtain complete assignment of all residues in sites I and II.Using the amide chemical shifts of L29, G30, A31, E32, G34, and S35, the AB interhelical angle is predicted to be 133°, which is 9° more open than cNTnC.Using resides E66, D67, G68, V72, D73 and F74, the CD interhelical angle is 105°, which is 4° more open than cNTnC.These results indicate than i9 partially opens cNTnC and suggest that the AB-interhelical angle is more sensitive to i9 binding than is the CD interhelical angle.A similar magnitude of change was observed for the Ca2 +-sensitizing mutation, L48Q , which suggests that altering the AB interhelical angle, even just slightly, can significantly increase Ca2 +-sensitivity.Along with the stabilization of the open state of cNTnC, enhanced switch-cTnI binding has been proposed to increase Ca2 + sensitivity .To evaluate the effect of i9 on switch-cTnI binding, we titrated Ca2 +-saturated cTnC-i9 with switch-cTnI and monitored it by 19F NMR spectroscopy.The change of area under the F4′ signal of i9 as a function of increasing switch-cTnI concentration was fit to a binding curve with 1:1 stoichiometry and a dissociation constant of 74 ± 26 μM.This corresponds to an affinity approximately three times lower than that for the binding of switch-cTnI to cNTnC in the absence of i9) .Likewise, the Ca2 +-sensitizer, bepridil, was also shown to reduce the affinity of switch-cTnI .Thus, our results indicate the Ca2 + sensitizing effect of i9, like bepridil, does not involve enhancing switch-cTnI binding.Interestingly, bepridil also impairs the cooperativity of contraction , and thus the reduced affinity of switch-cTnI may help explain the reduced cooperativity observed in the Ca2 +-sensitivity experiments .Bepridil has been shown to enhance Ca2 + affinity of cTnC and of various troponin complexes through stabilizing the open conformation of cNTnC and slowing the rate of Ca2 + dissociation .Despite enhancing Ca2 + affinity, bepridil actually increases the speed of transition from the open, cTnI-bound, conformation of cNTnC to the closed, cTnI dissociated, conformation of cNTnC .This observation is probably due to the reduced affinity of switch-cTnI for the cNTnC-bepridil complex , and seems to suggest that paradoxically, despite enhancing Ca2 + affinity, bepridil may also promote diastolic relaxation .Therefore, given the similar reduction 
in switch-cTnI affinity for cNTnC-i9, one may expect a similar enhancement of the relaxation rate of contraction as proposed for bepridil.It is important to note, as mentioned above, that substitution of > 20% of native cTnC with cTnC-i9 led to active force generation even in the absence of Ca2 +.Therefore, although covalently bound i9 may promote the rate of cTnI dissociation, its prevention of complete relaxation limits its use as a treatment of heart failure.On the other hand, levosimendan, which has also been shown to increase Ca2 + affinity , did not show the enhanced transition rate from an open conformation to a closed conformation that was observed for bepridil ."This suggests that levosimendan does not compete with switch-cTnI and that its mechanism for Ca2 +-sensitization may be different than bepridil's.However, in that study, C35 and C84 were mutated to serines in order to accommodate fluorophore labeling at other non-native cysteine residues .Therefore the lack of C84, which is critical for levosimendan binding , coupled with the relatively minor impact of levosimendan on Ca2 +-sensitivity , makes interpretation of this finding and how it applies to the mechanism of i9 unclear.Finally, it is also possible that the 3-fold decrease in switch-cTnI affinity in the micromolar range may not be significant in the context of the high apparent concentration of switch-cTnI in the thin filament.cNTnC and switch-cTnI are spatially confined in the thin filament such that the apparent concentration of switch-cTnI is high.We previously designed cChimera to mimic the in situ conditions of the thin filament.In this hybrid protein, cNTnC and switch-cTnI are tethered and the apparent concentration of switch-cTnI was determined to be ~ 1 mM .Based on paramagnetic relaxation enhancement-NMR data, the Brown group also suggested that switch-cTnI remains in the vicinity of cTnC in the absence of Ca2 + .To characterize the interaction of a covalent Ca2 + sensitizer with the regulatory cNTnC·switch-cTnI complex, we determined the structure of cChimera-i9.In cChimera, the high apparent concentration of switch-cTnI keeps the complex in a switch-cTnI-saturated state .This design allows for structural assessment of cNTnC·switch-cTnI as it is found during systole in the heart.The structure of cChimera-i9 is similar to the structure of the cNTnC·switch-cTnI complex observed in other structures.When cNTnC and switch-cTnI in cChimera-i9 are compared to those in the x-ray structure of the core domain of cTn , the rmsd of alpha carbons is 2.2 Å for all residues and 2.0 Å for helical residues.Compared to those in the NMR structure of dfbp-o bound to cNTnC·switch-cTnI, the rmsd of alpha carbons is 2.2 Å for all residues and 2.1 Å for helical residues.The structure has been deposited in the Protein Data Bank and the Biological Magnetic Resonance Data Bank under the ID 2N7L and 25810, respectively.Structural statistics for the final ensemble are summarized in Supporting Table 1.The typical structural features of cNTnC in the Ca2 + and switch-TnI bound state are present in the structure of cChimera-i9.It contains five α-helices, N and A through D, and a small β-sheet involving the loops of each EF-hand.Helices N and D in cChimera-i9 are extended by 2 and 5 residues, respectively.The linker region between cNTnC and switch-cTnI in cChimera remains flexible.We confirmed the flexibility of the linker using the random coil index analysis performed within TALOS +, which estimates values of the model-free order parameter S2 
based on the chemical shift of CA, CB, N, HA, and NH, backbone atoms .The RCI indicates that residues 95 and 96 of the linker along with 144–147 of cTnI have S2 < 0.5 and are classified as dynamic.Switch-cTnI in cChimera-i9 forms an α-helix that is slightly shifted away from the core of the protein.The length and composition of the switch helix of TnI in cChimera-i9 is the same as that observed in the crystal structure of the troponin core domain , starting at residue A150 and continuing until residue L158 with the adjacent regions being unstructured.The switch helices of both structures are roughly parallel and localize between the A-B-D helices of cNTnC.One possible explanation for the shift in switch-cTnI position relative to cNTnC is a steric clash between i9 and A150, at the start of the switch-helix, and/or M153, which faces the hydrophobic cleft of cNTnC.Despite this steric clash between switch-cTnI and i9, cNTnC is in an open conformation.The AB interhelical angle is 99° and the CD interhelical angle is 91°, which are similar to the interhelical angles measured for cNTnC bound to switch-cTnI.Interestingly, this is in contrast to W7 and bepridil, both of which also compete with switch-cTnI binding.This difference may be due to the fact that both i9 and switch-cTnI are covalently bound to cNTnC in the cChimera structure.Fourteen NOE distance restraints define the position of i9 in the core of cChimera between helices B, C, and D of cNTnC, and the helical region of switch-cTnI.The difluorophenyl ring of i9 contacts residues I61 and V64 on helix C, and I36 and V72, which form part of the β-sheet of cNTnC; this is consistent with its position deep in the hydrophobic cleft.The middle phenyl ring of i9 contacts several residues on the middle region of the cleft such as L41, V44, and M45 on helix C and M80 on helix D.This ring also makes NOEs to M85 on helix D and V146 on switch-cTnI, which are located towards the protein surface.The binding site of i9 is comparable to that of other drugs that interact with cTnC such as bepridil, dfbp-o, and W7 .Compared to dfbp-o, i9 binds deeper in the pocket.This indicates that the length of the hetero-substituted moiety of i9 is adequate to allow for deep binding.Because this moiety of i9 was designed based on that of levosimendan, we propose that levosimendan binds in a similar fashion once it reacts with C84.The spacer between the biphenyl moiety of i9 and the reactive thiol of cNTnC has one double and four single bonds, the same as levosimendan would have once reacted with C84.However, the planarity of levosimendan in the spacer is extended compared to that of i9, which may slightly alter its conformation when bound.Although no high-resolution structure of levosimendan bound to cTnC has been published, some studies have provided structural information about its binding site .In a complex between levosimendan and Ca2 +-saturated cNTnC, NOEs between levosimendan and M85, M81 and F77 from cNTnC were tentatively assigned .More recently, the 13C chemical shifts of methionine methyl groups from Ca2 +-saturated cTnC were monitored by 1H, 13C-HSQC NMR spectroscopy before and after levosimendan binding .The residues that experienced the largest chemical shift perturbations following levosimendan binding were M85, M81 and M47, suggesting that they are in close proximity to levosimendan .In the cChimera-i9 structure in our study, i9 makes NOE contacts with M85, M80, and M45, which suggest that i9 and levosimendan have a similar binding site.It is worthwhile to 
note that the structural studies on levosimendan were done in the absence of cTnI; thus, the slight differences between studies may be the result of the presence of cTnI.For example, residues M81 and M47 lie at the interface formed between cNTnC and cTnI; therefore it is plausible that in the presence of cTnI, levosimendan would adopt a similar conformation as that seen for i9 in the cChimera complex.We propose that i9 has a similar effect as Ca2 + to enhance contraction.cNTnC is in equilibrium between open and closed conformations.Ca2 + binding to cNTnC shifts the equilibrium to the open state to allow the binding of switch-cTnI .Our results indicate that the Ca2 +-sensitizer i9 is sufficient to turn on contraction, regardless of its effect on switch-cTnI binding.i9 may stabilize the open conformation either through shifting the equilibrium towards the open state or through preventing complete closure of cNTnC, even following Ca2 + release.This can be extended to the mechanism of action of other Ca2 + sensitizing agents that bind to cTnC.Details of the interaction of the sensitizers bepridil, dfbp-o, levosimendan, i9, and the desensitizer W7 with cNTnC are summarized in Table 2.All of these molecules favor the open state of cNTnC, regardless of their effect on Ca2 + sensitization.This suggests that there is another downstream mechanism responsible for their differential effect on contractility.One possible explanation is that they alter the affinity of switch-cTnI for cNTnC.Although the effects of dfbp-o and W7 on switch-cTnI conform to this hypothesis, bepridil and i9 do not; both compete with switch-cTnI yet still enhance contraction.However, the decrease in switch-cTnI affinity is relatively minor when compared to W7 and is likely not physiologically relevant.In conclusion, we have shown that if we ensure that the drug under study is bound to the designated target protein in the muscle, by covalently linking it to that protein and then exchanging the complex into the muscle, then it has the effect predicted on the basis of the in vitro mechanism.We did discover that the in situ mechanism overcomes one of the kinetic limitations of the in vitro mechanism from the co-localization of the proteins involved in the final conformational cascade that triggers contraction.We anticipate that this knowledge can lead the design of novel Ca2 + sensitizers for cardiac muscle.cTnC, 13C-15N-cChimera, and 15N-cChimera were expressed in E. 
coli as described elsewhere .cChimera contains a histidine tag, a thrombin cleavage site, cNTnC, a TEV cleavage site, and switch-cTnI.We previously showed that cChimera resembles the cNTnC·switch-cTnI complex in the ~ 74% bound state .The cChimera proteins were purified by Ni-NTA affinity followed by gel filtration chromatography as previously reported .15N-cNTnC was obtained by TEV cleavage of 15N-cChimera .The DNA from cTnC was used as a template for the preparation of cTnC using a site-directed mutagenesis kit and cTnC was purified as previously described .The purity of the proteins was verified by reverse-phase HPLC and electrospray ionization Mass Spectrometry.The synthetic cTnI peptide was obtained from GL Biochem Ltd."Chloroacetyl chloride was from Fluka Analytical, methanamine from Amatek Chemical, and ethyldiisopropylamine was from Sigma-Aldrich.In a glass vial 50 μmol of 1 and 170 μmol of HB were dissolved in 1.2 mL of acetonitrile.In a separate glass vial, 500 μmol of chloroacetyl chloride were dissolved in 40 μL of acetonitrile and slowly added into the 1/HB solution under the extraction hood.The reaction produced gas and turned to pale yellow.Then 6 mL of water were added to produce compound 2 as a white floating solid.Compound 2 was washed with water, recovered by centrifugation, and dried under vacuum.The dry product 2 was then dissolved in 1 mL of acetone, and an excess of NaI previously dried for 2 h at 110 °C was added.The halogen exchange reaction proceeded overnight at 37 °C with color change to orange and production of bubbles and precipitation.5 mL each of ethyl acetate and H2O were added to the reaction in a separation funnel.The yellow organic phase was washed twice with water and once with 5% Na2S2O3 which turned the solution clear.The clear organic phase was collected and dried with anhydrous Na2SO4 until no clumps were observed.The final solution was evaporated, the i9 product redissolved in deuterated dimethyl formamide, aliquoted, and stored at − 20 °C wrapped in aluminum foil.The purity and identity of the products was verified by MS and NMR.Full cTnC, cNTnC, and 13C,15N-cChimera were labeled under denaturing conditions.In addition, cNTnC was labeled in aqueous buffer to assess the specificity of the reaction.The denaturing buffer contained 6 M urea, 150 mM KCl, 50 mM TRIS, and 1 mM EGTA.The aqueous buffer consisted of 100 mM KCl and 10 mM imidazole at pH 8.The corresponding protein was dissolved in denaturing or aqueous buffer, 2 mM of fresh TCEP was added, and the solution incubated for 30 min to reduce cysteine residues.A stock solution of i9 in DMF-d7 was added in aliquots to the protein solution under stirring and the pH was readjusted to 8.The protein solution remained clear before the i9: protein ratio reached 1:1, after which the solution became turbid.The final ratio was > 2:1 for the reactions in urea and 1.2:1 for aqueous buffer.The reaction proceeded in the dark with constant stirring at 27 °C for 16 h.The reaction was stopped with four times excess DTT and spun down.The supernatant of the reaction in urea was applied to a size exclusion chromatography column to purify the labeled protein cTnC-i9, cNTnC-i9, or cChimera-i9.The protein fraction was lyophilized and stored at 4 °C.Male Wistar rats were stunned and killed by cervical dislocation Act, 1986).The hearts were quickly removed and rinsed free of blood in Krebs solution containing: 118 mM NaCl, 24.8 mM NaHCO3, 1.18 mM Na2HPO4, 1.18 mM MgSO4, 4.75 mM KCl, 2.54 mM CaCl2, 10 mM glucose, bubbled 
with 95% O2–5% CO2 for 30–60 min; pH 7.4 at 20 °C.Unbranched trabeculae were dissected from the right ventricle in Krebs solution containing 25 mM 2,3-butanedione monoxime.The trabeculae were permeabilized in relaxing solution containing 1% Triton X-100 for 30 min, stored in relaxing solution containing 50% glycerol at − 20 °C, and used for experiments within 2 days of dissection.Demembranated ventricular trabeculae were mounted via aluminum T-clips between a force transducer and a fixed hook in a 60 μl trough containing relaxing solution.The sarcomere length was set to 2.1 μm by diffraction pattern using a Helium-Neon laser.Experimental solutions contained 25 mM imidazole, 5 mM MgATP, 1 mM free Mg2 +, 10 mM EGTA, 0–10 mM total calcium, 1 mM dithiothreitol and 0.1% protease inhibitor cocktail.Ionic strength was adjusted to 200 mM with potassium propionate; pH was 7.1 at 20 °C.The concentration of free Ca2 + was calculated using the program WinMAXC V2.5.The calculated free Ca2 + concentration was in the range 1 nM to 41 μM.In pre-activating solution, the concentration of EGTA was 0.2 mM and no calcium was added.For all experiments, the temperature was 20–22 °C.Force–pCa data were fitted with the Hill equation, relative force = 1/(1 + 10^(nH(pCa − pCa50))), where pCa50 is the pCa corresponding to half-maximal change and nH is the Hill coefficient.All values are given as mean ± standard error of the mean except where noted, with n representing the number of trabeculae.Following initial characterization of Ca2 +-dependent cardiac muscle contraction containing native cTnC, cTnC was partially replaced by incubating the mounted trabeculae in relaxing solution containing 30 μmol/L cTnC-i9 for 15 min at 20–22 °C.The muscle was subsequently washed 2–3 times in relaxing solution (without cTnC-i9) and the Ca2 +-dependent cardiac muscle contraction was measured.If the sarcomere length had changed during the exchange, it was re-set to 2.1 μm.The fraction of native cTnC replaced by cTnC-i9 was estimated to be approximately 20% using LC-MS.Briefly, following the cTnC-i9 exchange, the trabeculae were incubated for 1 h in 50 mM BDM, 25 mM Tris and 5 mM CDTA to extract all cTnC (native and cTnC-i9).Due to the low concentration of cTnC, the extraction solutions from three muscle fibers were combined and concentrated in a 3 K Amicon Ultra Tube.The solution was loaded on a Hewlett Packard 1100 Series LC/MSD using the electrospray ionization method and detected in positive mode.The spectrum was deconvoluted using the Agilent ChemStation software with an abundance cutoff set to 40%.The NMR samples consisted of 0.3–0.8 mM cTnC-i9, cNTnC-i9, cChimera-i9, or cNTnC-i9 in 500 or 600 μL of 100 mM KCl, 10 mM imidazole or imidazole-d4, 2 mM CaCl2, and 0.25 mM 2,2-dimethyl-2-silapentane-5-sulfonate-d6 sodium salt or trifluoroacetic acid as internal reference, at pH 6.9.The NMR experiments were acquired on 500, 600, or 800 MHz Varian spectrometers at 30 °C.All one-dimensional experiments were processed with VnmrJ v 3.2, and all the multidimensional spectra were processed with NMRPipe and analyzed with NMRViewJ.The assignment of free i9 in DMSO was done based on examination of the 1H NMR spectra acquired throughout the synthesis, and on previous assignment of the levosimendan analog dfbp-o.The assignment of i9 in cChimera-i9 was achieved using the 13C, 15N filtered noesy, 13C, 15N filtered tocsy, and 1H, 19F HMQC spectra.Assignment of cChimera in cChimera-i9 was done by using typical 2D and 3D NMR experiments (1H, 15N- and 1H, 13C-HSQC, HNCACB, CBCANH, HNHA, HCCONH, and CCONH) detailed in Supporting Table 2.A solution of 530 μM cTnC-i9 in NMR buffer was titrated with increasing
amounts of switch-cTnI using a stock solution of 10.6 mM in DMSO-d6.The concentration of the protein solution was determined by amino acid analysis.The concentration of the stock solution was determined by NMR spectral integration of the methyl signals relative to that of a DSS-d6 standard.The concentration of switch-cTnI at each titration point was 0, 86, 169, 251,334, 415, 575, 810, 1,115, and 1,540 μM.The diluting effect of each switch-cTnI addition was taken in consideration when calculating the concentration of protein and peptide at each titration point.1H and 19F NMR spectra were acquired after each addition of switch-cTnI.The change of area under the F4′ signal of i9 as a function of the switch-cTnI/cTnC ratio was fit using a one-to-one stoichiometry with xcrvfit.The structure of i9 bound to cChimera was determined using Xplor-NIH v. 2.35 with experimental backbone dihedral and distance restraints.Parameter and topology files for i9 covalently attached to cysteine were generated using the PRODRG server .The dihedral angles φ and ψ were predicted with the Talos + server based on the chemical shift of HN, N, CA, CB, and HA backbone atoms of cChimera-i9.Intramolecular distance restraints within the protein component of cChimera-i9 were obtained from noesyNhsqc and noesyChsqc NMR spectra, NOEs were calibrated using the bin method of NMRViewJ and classified as strong, medium, and weak.Intramolecular NOEs within i9 were obtained from the 13C, 15N filtered noesy spectrum in which signals from the 13C, 15N labeled protein moiety are filtered out to obtain NOEs from the unlabeled drug moiety only.Pseudo-intermolecular NOEs between cChimera and i9 were obtained from the three-dimensional noesyChsqc_CNfilt NMR spectrum; all of these were classified as weak.We used statistical torsion angle potential to improve the quality of backbone and side chain conformations; this is based on over a million residues from high quality crystal structures from the PDB.We also used the gyration volume potential term to restrain the volume associated with the gyration tensor also based on values observed in the PDB.We used the anneal protocol of Xplor-NIH to generate 140 structures from which the lowest energy structure was used in the subsequent refine protocol.The final ensemble consists of the 20 lowest energy structures generated in the refinement step with no NOE violations > 0.4 Å or dihedral violation > 5°.This ensemble was validated with PROCHECK using the Protein Structure Validation Suite server.
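The force–pCa analysis above fits relative Ca2 +-activated force to the Hill equation to extract pCa50 and nH. The sketch below illustrates that type of fit with scipy.optimize.curve_fit; it is not the authors' analysis script, and the data points are synthetic stand-ins rather than the measured trabecula data.

```python
# Sketch of fitting force-pCa data with the Hill equation used above:
# relative force = 1 / (1 + 10**(nH * (pCa - pCa50)))
# Data values below are illustrative, not the measured trabecula data.
import numpy as np
from scipy.optimize import curve_fit

def hill(pca, pca50, n_h):
    """Relative Ca2+-activated force as a function of pCa."""
    return 1.0 / (1.0 + 10.0 ** (n_h * (pca - pca50)))

pca = np.array([7.0, 6.6, 6.4, 6.2, 6.0, 5.8, 5.6, 5.0, 4.5])
rel_force = np.array([0.01, 0.05, 0.17, 0.45, 0.78, 0.93, 0.98, 1.0, 1.0])

popt, pcov = curve_fit(hill, pca, rel_force, p0=[6.1, 3.0])
pca50, n_h = popt
pca50_err, n_h_err = np.sqrt(np.diag(pcov))

print(f"pCa50 = {pca50:.2f} +/- {pca50_err:.2f}")
print(f"nH    = {n_h:.2f} +/- {n_h_err:.2f}")
```

With the illustrative points above, the fit returns values in the same range as those reported for the exchanged trabeculae (pCa50 near 6.2 and a Hill coefficient of roughly 3).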
One approach to improve contraction in the failing heart is the administration of calcium (Ca2+) sensitizers. Although it is known that levosimendan and other sensitizers bind to troponin C (cTnC), their in vivo mechanism is not fully understood. Based on levosimendan, we designed a covalent Ca2+ sensitizer (i9) that targets C84 of cTnC and exchanged this complex into cardiac muscle. The NMR structure of the covalent complex showed that i9 binds deep in the hydrophobic pocket of cTnC. Despite slightly reducing troponin I affinity, i9 enhanced the Ca2+ sensitivity of cardiac muscle. We conclude that i9 enhances Ca2+ sensitivity by stabilizing the open conformation of cTnC. These findings provide new insights into the in vivo mechanism of Ca2+ sensitization and demonstrate that directly targeting cTnC has significant potential in cardiovascular therapy.
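The 19F NMR titration of cTnC-i9 with switch-cTnI described in the Methods above was fit to a one-to-one binding model with xcrvfit, giving a dissociation constant of 74 ± 26 μM. The sketch below reimplements a generic 1:1 binding isotherm (exact quadratic solution for the bound fraction) and fits Kd with scipy; it is not the xcrvfit procedure itself, only the protein concentration and titration points are taken from the text, and the response values are synthetic stand-ins for the F4′ peak-area change.

```python
# Generic 1:1 binding fit for a titration like the 19F NMR experiment above
# (the original study used xcrvfit; this reimplements the same 1:1 model).
# Response values are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

P_TOT = 530.0  # uM, cTnC-i9 concentration reported above

def fraction_bound(l_tot, kd, p_tot=P_TOT):
    """Exact 1:1 bound fraction from total ligand/protein concentrations."""
    s = p_tot + l_tot + kd
    return (s - np.sqrt(s * s - 4.0 * p_tot * l_tot)) / (2.0 * p_tot)

def signal(l_tot, kd, s_max):
    """Observed change in peak area, assumed proportional to bound fraction."""
    return s_max * fraction_bound(l_tot, kd)

# switch-cTnI concentrations from the titration above (uM) and a synthetic
# response vector standing in for the F4' peak-area change.
l_tot = np.array([0, 86, 169, 251, 334, 415, 575, 810, 1115, 1540], float)
resp = np.array([0.0, 0.14, 0.27, 0.39, 0.49, 0.59, 0.72, 0.83, 0.90, 0.93])

popt, pcov = curve_fit(signal, l_tot, resp, p0=[100.0, 1.0])
kd, s_max = popt
print(f"Kd = {kd:.0f} uM (reported value: 74 +/- 26 uM)")
```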
228
Multiscale synchrotron scattering studies of the temperature-dependent changes in the structure and deformation response of a thermoplastic polyurethane elastomer
Thermoplastic polyurethanes are a versatile class of polymeric block copolymers that show an exceptional range of thermomechanical properties.Owing to their inherent adaptability, they are extensively used in applications such as textiles, sport shoes, biomedical implants, and temperature sensors.The typical morphological structure of TPUs consists of contiguous chains containing compliant soft segments and much stiffer hard segments.The nanoscale arrangement of these chains leads to the formation of hard and soft regions with different packing densities, as illustrated in Fig. 1a, showing a high-resolution transmission electron microscopy image of a polyurethane structure.The SS appears white, and the HS appears dark under the electron beam owing to the reaction with ruthenium tetraoxide on staining .Three precursor components that made up this TPU are shown in Fig. 1b: 4,4′-dibenzyl diisocyanate, polyethylene adipate, and ethylene glycol.The self-organized structure of hard and soft nanoregions, together with the ‘fuzzy’ interfaces between them, can be thought of as a polymer nanocomposite .The overall chemical, mechanical, physical, and thermal properties of TPUs originate in their nanoscale phase separation by the relative volume fraction of the HS and SS and intrinsic properties of each of the phases.The demand for high performance of polyurethanes led to increasing attention in investigating their structural features and functions, in particular, thermomechanical responses across multiple length scales.Existing challenges concern understanding the correlation between the conformation of polymeric chains, chain dynamics, nanoscale morphology, and macroscopic architecture, on the one hand, and the thermomechanical behavior and in-service performance sought in specific applications, e.g., stretchability and strain-induced crystallization, temperature-induced shape memory effects, and so on, on the other hand.Various experimental techniques allow characterizing the thermal transition and structural changes of TPUs, such as differential scanning calorimetry and thermogravimetric analysis .Fourier-transform infrared spectroscopy has been used to help understand hydrogen bonding interactions in segmented polyurethanes .Dynamic mechanical analysis has been used to probe the micro-scale-/nanoscale phase separation.This is through changes in the thermal signature of the components and measuring the ability of the material to store or dissipate energy as a function of temperature.Extensive reports of research into the deformation behavior of TPUs are available .However, these studies so far only focused on the macroscopic deformation and thermal response as a basis for hypothesizing about the structure and strain accommodation within TPUs at finer scales.Limited effort has been devoted to exploring the detailed response of each phase at the microscale and at the nanoscale, as well as to understanding the relationship between the multiscale architecture and multiphase mechanical behavior.In fact, even when coupled stress-strain-temperature studies of semicrystalline polyurethane were conducted, they appear to have been limited only to the macroscopic scale .The combination of different synchrotron X-ray techniques allows combined in situ analysis and elucidation of the fine connections between the evolution of material architecture and deformation at different dimensional scales.However, from the available studies on polyurethanes in the literature, it appears that X-ray techniques have only 
been used so far to characterize separately either the mechanical properties or structural changes purely as a function of temperature .A popular multimodal technique, the combined small-angle X-ray scattering and wide-angle X-ray scattering has been used to evaluate the structural evolution of polyurethane and other elastomers.However, it is often only used for ex situ sample state evaluation before and after simple monotonic deformation or heating/cooling history, rather than in situ observation throughout the thermomechanical loading history.Surprisingly, studies reporting in situ stress-strain-temperature evolution are very rare.The thermomechanical behavior of shape-memory polymers has been explored , but only to quantify the strain recovery rate or strain fixity rate by a specific thermomechanical loading history.It is evident, therefore, that there is a need to conduct investigations particularly with a view to understanding the underlying multiphysics phenomena, e.g., Mullins effect, SME, and other phenomena characteristic of this class of elastomers.In our previous studies, we have investigated the mechanical behavior of TPUs subjected to in situ uniaxial monotonic and in situ incremental cyclic loading conditions at room temperature, using advanced synchrotron-based X-ray scattering techniques.The evolution of the multilevel structure of a polyurethane during tensile loading provided improved insight into length scale–dependent straining characterization and physical mechanisms responsible for observing the Mullins effect.The relationship between the structure and thermomechanical properties of TPUs under repeated cyclic loading is investigated through systematic experimental synchrotron X-ray techniques.In particular, the present study examines the relationship between DSC and stress relaxation measurements at the macroscopic scale and elucidates how the structure and mechanical behavior are coupled at those temperature ranges at the nanoscale and atomic scale.This would provide the fundamental basis for the development of new high-performance polymer systems.The TPU formulation for the present study was produced at IMC.The material was produced by mixing the following three precursor components: a diisocyanate HS, namely, DBDI; a macrodiol SS, namely, PEA MD; and a small molecule diol chain extender, namely, EG.The molar proportions used in the synthesis were 4:1:3, making ∼40% mass fraction of the HS and an isocyanic index of I = 100, giving rise to a truly thermoplastic condition of the resulting polymer.First, DI and MD components were allowed to react by thorough mixing for 24 h at 100 °C in vacuum, to obtain a prepolymer consisting of MD terminated at each end with DI.The prepolymer was then thoroughly mixed for 24 h with the CE at 90 °C.The details of this particular synthesis route are given by Prisacariu et al. 
.The material obtained in this way was placed into a closed mold and cured at 110 °C for 24 h to produce a sheet of ∼1.0-mm thickness.After around 30 days of storage at room temperature, a strip sample denoted PU185H with the cross-sectional dimensions of 1.20 × 1.91 mm2 and the total gauge length of 3 mm was cut from the sheet.DSC measurement was performed at the Laboratory for In situ Microscopy and Analysis, Oxford, using TA Instruments DSC Q2000.The TPU sample was heated from −90 °C (zero heat flow) at a heating rate of 20 °C/min to 250 °C, held isothermal for 3 min, followed by cooling to −60 °C at 10 °C/min.The stress relaxation test was carried out at Oxford using a compact tensile loading rig provided by Deben, with a heating rate of 2 °C/min from 40 °C to 90 °C provided by the Peltier heating-cooling stage.In situ multiscale observation of elastic and inelastic deformation of polyurethane at elevated temperatures was performed on the B16 beamline at Diamond Light Source.This was performed via the Deben heating-cooling stage with the Deben tensile loading rig and multimodal X-ray techniques.The X-ray flux was maximized by the appropriate choice of a multilayer monochromator.A beam energy of 16.5 keV was used and collimated to a 0.2 mm × 0.2 mm spot size.To observe the multiscale structural changes and thermomechanical response, the Deben Peltier heating-cooling stage was used at the following temperatures: −10 °C, 0 °C, 30 °C, 60 °C, 90 °C, and 120 °C.For each temperature, in situ loading-unloading was applied to the specimen using a Deben tensile loading rig with a 200-N calibrated load cell, with grips specially designed for this purpose, as shown in Fig. 3.The loading was applied in the following sequence: 0.05 N, 0.5 N, 1 N, 1.5 N, 2 N, 2.5 N, 2 N, 1.5 N, 1 N, 0.5 N, and 0.05 N.The load and crosshead displacement were recorded by the Deben Microtest package and were further transformed to obtain sample stress, extension, and strain.The Peltier heating-cooling stage was constrained such that the initial sample size could not be small; thus, the total displacement/strain is limited by the travel limit of the Deben rig.Therefore the structural-mechanical-thermal response at small strains is the primary focus in this study.Radiography images were acquired using a scientific CMOS camera, also known as the ‘X-ray Eye,’ making sure the beam spot was on the samples and the beam traveled from the side of the sample.At each loading increment, the ImageStar 9000 2D WAXS detector was placed in the beam path, and diffraction patterns were acquired at the detector-to-sample distance of 178.8 mm.The precise distance was determined by obtaining patterns from NIST SRM 640d silicon powder and from NIST SRM 660a lanthanum hexaboride powder.Translating the WAXS detector out of the beam path allowed exposing the Pilatus 300 K SAXS detector for transmission-mode SAXS pattern acquisition.The sample-to-detector distance was found to be 4545 mm by using the pattern from dry chicken collagen for calibration.The effect of viscoelasticity is not considered in this study as both SAXS and WAXS data acquisition took a few seconds and the overall data acquisition per strain point was less than a minute, taking into account the detectors' translation time.First, the data exported from Microtest software from Deben were used to calculate the macroscale strain of each sample, with individual dimensions taken into account.Then, the SAXS and WAXS patterns were postprocessed separately to obtain strains at different
length scales.The results of the DSC measurement shown in Fig. 2a reveal the following.Multiple DSC measurements were performed to verify reproducibility.Because the results were reproducible within measurement uncertainties, we present a single curve.The glass transition in this polymer is associated with a steep temperature dependence of the heat flow seen around −63 °C.An endothermal peak was observed around 0 °C during heating and cooling.In the literature, this peak is reported to be associated with the melting and recrystallization process .However, as will be clear from the results of material characterization reported in the following section, TPU is semicrystalline at all temperatures higher than the glass transition temperature.This apparent contradiction led us to conclude that the ‘melting’ of the material reported in the literature should be appropriately qualified as the crystalline-to-amorphous transition that occurs only in the soft regions of the material.In contrast, the hard regions remain crystalline in the entire temperature range between Tg and complete melting of the polymer.In addition, a broad shallow endothermal peak is found in the DSC curve between ∼60 °C and 90 °C.To make the peak more visible, Fig. 2b provides a zoomed-in view and helps identify the peak against a sloping background so that the peak center position around 70 °C becomes evident.The appearance of this peak must be associated with other conformational changes that occur at these temperatures.To reveal further connection between structural changes and mechanical behavior, macroscopic stress relaxation testing was carried out between 60 °C and 90 °C.In Fig. 2c, the experimental curve for stress relaxation as a function of temperature, is accompanied by cubic polynomial fit and its derivative.The clear conclusion is drawn that the fastest stress relaxation occurs at ∼68 °C, indicating the tight link between the thermally driven structural changes and the mechanical response.To elucidate the structure-thermomechanical relationship at finer scales of those particular temperature ranges, in situ thermomechanical experimental investigation was conducted combined with imaging, SAXS and WAXS.The schematic diagram of the multipurpose beamline setup developed by the present researchers and used in a range of experimental in situ synchrotron studies is illustrated in Fig. 3a.The Deben Peltier heating-cooling stage combined with the Deben loading rig was used to study the thermomechanical response subjected to a sequence of uniaxial incremental tensile loading and unloading tests at different temperatures.The chosen load values were 0.05 N, 0.5 N, 1 N, 1.5 N, 2 N, 2.5 N, 2 N, 1.5 N, 1 N, 0.5 N, and 0.05 N, while the chosen temperatures were −10 °C, 0 °C, 30 °C, 60 °C, 90 °C, and 120 °C.An illustration of 2D SAXS and WAXS patterns at maximum load at different temperatures is shown in Fig. 3b.The evolution of the SAXS patterns with temperature is presented in Fig. 3b. Upon cooling, SAXS intensity is seen to become extremely low at 0 °C and remain weak at −10 °C.As demonstrated in a previous study , small-angle scattering in TPU is sensitive to the nature and volume fraction of the ‘fuzzy interfaces’ between the hard and soft regions, and SAXS intensity is proportional to the density contrast between the SRs and HRs in the material.Reduced SAXS intensity, as illustrated in Fig. 
3b, is consistent with a decrease in the density differences.This can be explained by the reduction in the density gradients within the soft regions of the material or by a decrease in the amount of crystalline domains within the material.One-dimensional SAXS profiles of the most prominent peak are shown in Fig. 3d, and the strain values were found from peak center positions.This discrete Bragg scattering peak appears at a position of ∼0.2–0.5 nm−1 of q range, which is consistent with works reported in the past .The evolution of the WAXS patterns with temperature is presented in Fig. 3b. Isolated bright spots are observed in the patterns at −10 °C, which become more numerous at 0 °C, indicating the presence of large crystallites.It is a long established and generally reported observation that the intensity of WAXS peaks from crystalline phases grows sharply with temperature as melting is approached.This is likely to be associated with the increase in the thermal motion amplitude of the scattering lattice planes .The continuous Debye-Scherrer rings originate from nanocrystalline HRs that appear to remain substantively unchanged throughout the temperature range, with the exception of small changes in the lattice spacing associated with thermal and deformation phenomena.One-dimensional WAXS profiles of the most prominent peak is shown in Fig. 3c, and the accurate strain values were found from the peak center positions.Further discussion of the details of the observations made during in situ synchrotron scattering experiments and their interpretation are given in the following section.The evolution of macroscopic strain over the cyclic loading history at incremental temperature conditions measured by the Deben rig is presented in Fig. 4.Fig. 4a plots stress vs. strain at different temperatures, revealing the energy dissipation between each loading-unloading hysteresis loop and also the residual strain of TPU after unloading at each temperature.No big difference in the gap has been observed as the temperature increases, indicating an almost constant dissipated energy.The residual strain after each load cycle varies with temperature, but the trend is found not to be monotonic.In general, tensile residual strain is observed at the macroscopic scale after each cycle, whereby a slight compressive residual strain occurs only at 60 °C after unloading.The original data are replotted in Fig. 4b to reveal temperature vs. strain and its variation at different loading and unloading states.A valley consistently appears at 60 °C at each stress state and becomes more and more pronounced as the load increases.The evolution of strain of the fuzzy interface between the SR and HR at the nanoscale level is calculated from SAXS patterns and is shown in Fig. 5.Fig. 5a displays the evolution of stress vs. strain at all the examined temperatures.Significant change of the strain can be observed during the load cycles at −10 °C and 0 °C, compared with those at other temperatures.In particular, the modulus at 0 °C is even observed to be negative on loading.The diffraction spots appeared in Fig. 5b at −10 °C and 0 °C affect data interpretation, while the significant electron density changes at nanometer domains at 0 °C with the entire intensity of SAXS patterns reduced.Some positive residual strain occurs at 30 °C, whereas some negative residual strain with equal magnitude but opposite sign occurs at 90 °C after unloading.Fig. 
5b then demonstrates the evolution of nanoscale strain with respect to the temperature at different loading and unloading stages.Strongly nonlinear evolution behavior can be seen particularly at −10 °C and 0 °C.However, no further trend can be clearly identified at temperatures higher than 0 °C.Fig. 6 presents the variation of the atomic-scale strain from the crystalline regions or HRs within the material with temperature, as calculated from the WAXS patterns.The stress-strain curve during each load cycle is shown in Fig. 6a with two evident features.First, the strain is significantly lower than the strains at the nanoscale and macroscopic scales at all the tested temperatures as the strongly cross-linked HRs could be up to 100 times harder than the soft regions.Second, a positive residual strain accumulates at each unloading stage as the temperature increases until 30 °C, where the maximum positive residual strain reaches approximately 0.03%.However, a reverse trend is observed around 60 °C, at which the residual strain recovers and starts to accumulate in the opposite direction.The maximum negative residual strain is reached after unloading at 60 °C, with the value approximately −0.04%.Such a negative residual strain, however, is found to recover and accumulate in the positive direction again as the temperature increases further higher than 90 °C.Fig. 6b illustrates different atomic strain levels with increasing temperature, and an apparent valley can be observed at 60 °C at each stress state.This trend is consistent with that observed macroscopically, but the feature of the valley revealed by WAXS is more pronounced in the strain vs temperature plot.SAXS and WAXS observations allow explaining the different physical changes that occur in the temperature ranges of −10 °C to 0 °C and 60 °C–90 °C, in which interesting features were captured by DSC measurement and macroscopic stress relaxation tests.On the basis of these considerations, a schematic TPU model with the HR and SR is proposed in Fig. 7, which helps explain the full complement of data pertaining to the temperature-dependent evolution of the structure and deformation behavior.At around −10 °C, some SRs become more ordered that contain HRs within them and preserve the orientation within the SR matrix over micron lengths.It is interesting to note that a degree of coherence appears between these SR crystallites and nanoscale HRs.Evidence comes from the lattice parameter correlation between individual reflections and continuous Debye-Scherrer rings in the two leftmost WAXS patterns in Fig. 3b. Upon heating through 0 °C, these soft microcrystals melt, as shown in Fig. 7b, causing the appearance of the endothermal peak.The SR matrix becomes amorphous, leading to the increase in the SAXS intensity, which is also evident from Fig. 
3b.Another broad endothermal peak appeared in the DSC graph.The fast stress recovery that happened in the stress relaxation test at temperatures around 60 °C is ascribed to the dynamic SME in such a family of polyurethane.SME can appear within the SRs around this temperature , which results in the mechanical interaction between the HR s and SRs.This also results in the negative nature of residual strain macroscopically as upon load removal, the polymer samples assume the length that is shorter than the original value.The origins of SME in polymers vary, and diverse mechanisms have been hypothesized .In the case of polyurethane, most researchers agree that HRs act to store the elastic strain energy during deformation that is released when the surrounding amorphous matrix undergoes softening at temperature higher than the recovery temperature TR, accommodating the required macroscopic strain.In our experiments, this hypothesis is confirmed via the evidence of significant strain recovery in the HRs, revealed by WAXS measurement.While structurally, the HRs remain unchanged, the broad endothermal peak at ∼60 °C is clearly associated with enhanced mobility in the SR, as well as in the crucially important ‘fuzzy interface’ transition region .Fig. 7c explains the mechanism of the reverse accumulation trend and occurrence of significant compressive residual strain around 60 °C at the atomic scale by WAXS and the macroscopic scale.The SRs with enhanced mobility around 60 °C will expand and redistribute around the HRs subjected to load.Some SRs may even aggregate at the boundary of HRs and lead to temporary hardening or stiffening effect for the HRs, resulting in the observed resistance to load or valley of the strain values appearing at this temperature from both atomic and macroscopic scales.The redistribution of the regions also modifies the mismatch between the HR and SR that generates the compressive residual strain in the HRs and affects the overall macroscopic residual strain state.In addition, the conformational mobility of DBDI causes a wide range of characteristic properties, which are associated with the possibility of pronounced phase separation of inclusion-matrix morphology and with a high tendency to crystallization and self-association by hydrogen bonding.This is not available with the conventional DIs in traditional melt-cast polyurethanes.DBDI contains two methylene groups between the aromatic rings.The possibility of rotation around the central –C−C– bond allows more compact packing and crystallization between the DBDI HS blocks, thus producing substantial changes in properties.In contrast, MDI is intrinsically kinked in shape, reducing conformational mobility and thereby hindering close packing and achievement of hydrogen bonding.Tensile strength and residual elongation values were also found to be significantly higher for the DBDI-based PUs than those derived from MDI.This is because of a higher flow stress in the presence of DBDI, as can be associated with increased hydrogen bonding in DBDI-based polymers .In summary, in this work, we explored the evolution of the hierarchical architecture of a polyurethane at the macroscale, nanoscale, and crystal lattice scales during cyclic thermomechanical processing by combined SAXS/WAXS techniques.The work gives in situ observational basis for improved insight into the structure-property relations for structural polymers such as TPUs.From this series of experiments, we identified the influence of temperature on the mobility of molecular 
chains that affects the macroscopic deformation response through changes in the conformation around HRs surrounded by the SR matrix.Findings include melting and recrystallization in the nanometer domains at low temperature, e.g., between −10 °C and 0 °C, which has been detected by DSC and confirmed by finer scale methods of SAXS and WAXS.The phenomenon, however, disappears at higher temperatures.The transition observed at temperatures higher than room temperature both in DSC and macroscopic stress relaxation tests was elucidated by X-ray scattering analysis.The increased mobility of soft regions at this temperature generates significant compressive residual strain of the HRs after unloading, which can be ascribed to the conformational mobility of DBDI.The findings open the way toward improved design and extended functionality for TPUs for future applications.T.S. and A.M.K. formulated and planned the investigation.T.S., H.Z., E.S., and I.P.D. carried out in situ scattering experiments.T.S. carried out data analysis and wrote the manuscript with A.M.K. All authors contributed to editing the submission.The raw data required to reproduce these findings as well as processed data required to reproduce these findings are available from the authors on request.
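Strains at the nanoscale (SAXS) and at the crystal lattice scale (WAXS) were obtained above from the center positions of the most prominent scattering peaks. The sketch below shows the standard conversion assumed here: fit a peak model to the 1D profile, take the peak center q, and compute strain from the shift relative to the reference state (since d = 2π/q, strain = q0/q − 1). The profiles are synthetic and the Gaussian-plus-background model is an assumption for illustration, not necessarily the authors' exact fitting procedure.

```python
# Sketch of extracting scale-specific strain from 1D SAXS/WAXS profiles,
# as done above from peak centre positions: d = 2*pi/q, so
# strain = d/d0 - 1 = q0/q - 1. Profile data below are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(q, amp, q_c, width, bg):
    """Single Gaussian peak on a flat background."""
    return amp * np.exp(-0.5 * ((q - q_c) / width) ** 2) + bg

def peak_centre(q, intensity):
    """Fit the profile and return the peak centre position q_c."""
    p0 = [intensity.max() - intensity.min(), q[np.argmax(intensity)],
          0.05 * (q.max() - q.min()), intensity.min()]
    popt, _ = curve_fit(gaussian, q, intensity, p0=p0)
    return popt[1]

# Synthetic SAXS-like profiles (q in nm^-1) before and under load.
q = np.linspace(0.2, 0.5, 200)
i_ref = gaussian(q, 100.0, 0.350, 0.020, 5.0)
i_load = gaussian(q, 95.0, 0.343, 0.021, 5.0)

q0 = peak_centre(q, i_ref)
q1 = peak_centre(q, i_load)
strain = q0 / q1 - 1.0          # positive = tensile (d-spacing increased)
print(f"nanoscale strain = {strain:.4%}")
```

The same peak-centre approach applies to the WAXS rings, where the much smaller shifts reflect the lattice strain of the hard crystalline regions discussed above.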
The distinct molecular architecture and thermomechanical properties of polyurethane block copolymers make them suitable for applications ranging from textile fibers to temperature sensors. In the present study, differential scanning calorimetry (DSC) analysis and macroscopic stress relaxation measurements are used to identify the key internal processes occurring in the temperature ranges between −10 °C and 0 °C and between 60 °C and 70 °C. The underlying physical phenomena are elucidated by the small-angle X-ray scattering (SAXS) and wide-angle X-ray scattering (WAXS) study of synchrotron beams, allowing the exploration of the structure-property relationships as a function of temperature. In situ multiscale deformation analysis under uniaxial cyclic thermomechanical loading reveals a significant anomaly in the strain evolution at the nanoscale (assessed via SAXS) in the range between −10 °C and 0 °C owing to the ‘melting’ of the soft matrix. Furthermore, WAXS measurement of crystal strain within the hard regions reveals significant compressive residual strains arising from unloading at ∼60 °C, which are associated with the dynamic shape memory effect in polyurethane at these temperatures.
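The macroscopic stress-relaxation analysis described in the main text fits the stress-temperature curve with a cubic polynomial and uses its derivative to locate the temperature of fastest relaxation (∼68 °C). The sketch below reproduces that style of analysis with numpy; the temperature and stress values are synthetic stand-ins for the measured curve, not the experimental data.

```python
# Sketch of the cubic-polynomial analysis of the stress-relaxation curve
# described above: fit stress vs. temperature, take the derivative, and
# locate the temperature of fastest relaxation (most negative slope).
# Temperature/stress values are synthetic stand-ins for the measured curve.
import numpy as np

temp = np.linspace(40.0, 90.0, 26)                       # degrees C
stress = 2.0 - 0.8 / (1.0 + np.exp(-(temp - 68.0) / 4))  # MPa, illustrative

coeffs = np.polyfit(temp, stress, deg=3)   # cubic fit, as in the paper
dcoeffs = np.polyder(coeffs)               # derivative d(stress)/dT

fine_t = np.linspace(temp.min(), temp.max(), 1000)
slope = np.polyval(dcoeffs, fine_t)
t_fastest = fine_t[np.argmin(slope)]       # most negative slope

print(f"fastest stress relaxation at ~{t_fastest:.0f} C")
```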
229
Loss-of-function nuclear factor κB subunit 1 (NFKB1) variants are the most common monogenic cause of common variable immunodeficiency in Europeans
The NIHR BioResource–Rare Diseases study was established in the United Kingdom to further the clinical management of patients with rare diseases by providing a national resource of whole-genome sequence data.All participants provided written informed consent, and the study was approved by the East of England–Cambridge South national institutional review board.At the time of our analysis, the NIHRBR-RD study included whole-genome sequence data from 8066 subjects, of whom 1299 were part of the PID cohort.These were predominantly singleton cases, but additional affected and/or unaffected family members of some of the patients were also sequenced; in total, there were 846 unrelated index cases.Patients with PIDs were recruited by specialists in clinical immunology from 26 hospitals in the United Kingdom and a smaller number came from The Netherlands, France, and Italy.The recruitment criteria included the following: clinical diagnosis of CVID according to the European Society for Immunodeficiencies ESID criteria, extreme autoimmunity, or recurrent infections suggestive of severely defective innate or cell-mediated immunity.Exclusion of known causes of PID was encouraged, and some of the patients were screened for 1 or more PID genes before enrollment in the PID cohort.The ethnic makeup of the study cohort represented that of the general United Kingdom population: 82% were European, 6% were Asian, 2% were African, and 10% were of mixed ethnicity based on the patients’ whole-genome data.Given that PID is a heterogeneous disease, with overlap in phenotypes and genetic causes across different diagnostic categories, we decided to perform an unbiased genetic analysis of all 846 unrelated index cases.Whole-genome sequence data were additionally available for 63 affected and 345 unaffected relatives.Within a broad range of phenotypes, CVID is the most common disease category, comprising 46% of the NIHRBR-RD PID cohort.Whole-genome sequencing of paired-end reads was performed by Illumina on their HiSeq X Ten system.Reads of 100, 125, or 150 bp in length were aligned to the GRCh37 genome build by using the Isaac aligner, variants across the samples were jointly called with the AGG tool, and large deletions were identified by using Canvas and Manta algorithms, as described previously.30,Average read depth was 35, with 95% of the genome covered by at least 20 reads.Single nucleotide variants and small insertions/deletions were filtered based on the following criteria: passing standard Illumina quality filters in greater than 80% of the genomes sequenced by the NIHRBR-RD study, having a variant effect predictor31 effect of either moderate or high, having a minor allele frequency less than 0.001 in the Exome Aggregation Consortium data set, and having a minor allele frequency of less than 0.01 in the NIHRBR-RD cohort.Large deletions called by both Canvas and Manta algorithms, passing standard Illumina quality filters, overlapping at least 1 exon absent from control data sets,32 and having a frequency of less than 0.01 in the NIHRBR-RD genomes were included in the analysis.All variants reported as disease causing in this study were confirmed by using Sanger sequencing with standard protocols."Large deletions were inspected in the Integrative Genomics Viewer plot, and breakpoints were confirmed by sequencing the PCR products spanning each deletion.To evaluate genes for their association with PID, we applied the BeviMed inference procedure12 to the NIHRBR-RD whole-genome data set.BeviMed evaluates the evidence 
for association between case/control status of unrelated subjects and allele counts at rare variant sites in a given locus.The method infers the posterior probabilities of association under dominant and recessive inheritance and, conditional on such an association, the posterior probability of pathogenicity of each considered variant in the locus.BeviMed was applied to rare variants and large rare deletions in each gene, treating the 846 unrelated PID index cases as cases and the 5097 unrelated subjects from the rest of the NIHRBR-RD cohort as control subjects.All genes were assigned the same prior probability of association with the disease of .01, regardless of their previously published associations with an immune deficiency phenotype.Variants with a VEP effect labeled as high were assigned higher prior probabilities of pathogenicity than variants with a moderate effect, as described previously.12,PBMCs were isolated by using standard density gradient centrifugation techniques with Lymphoprep."Absolute numbers of lymphocytes, T cells, B cells, and natural killer cells were determined with Multitest 6-color reagents, according to the manufacturer's instructions. "For PBMC immunophenotyping, we refer to the Methods section in this article's Online Repository at www.jacionline.org. "PBMCs were resuspended in PBS at a concentration of 5 to 10 × 106 cells/mL and labeled with 0.5 μmol/L carboxyfluorescein succinimidyl ester, as described previously33 and in the Methods section in this article's Online Repository, to analyze the ex vivo activation of T and B cells.Proliferation of B and T cells was assessed by measuring CFSE dilution in combination with the same mAbs used for immunophenotyping.Analysis of cells was performed with a FACSCanto II flow cytometer and FlowJo software.Patient samples were analyzed simultaneously with PBMCs from healthy control subjects.Secretion of immunoglobulins by mature B cells was assessed by testing supernatants for secreted IgM, IgG, and IgA with an in-house ELISA using polyclonal rabbit anti-human IgM, IgG, and IgA reagents and a serum protein calibrator, all from Dako, as described previously.33,Blood was separated into neutrophils and PBMCs.Neutrophils were used for protein lysates, separated by means of SDS-PAGE, and transferred onto a nitrocellulose membrane.Individual proteins were detected with antibodies against NF-κB p50, against IκBα, and against human glyceraldehyde-3-phosphate dehydrogenase.Secondary antibodies were either goat anti-mouse-IgG IRDye 800CW, goat anti-rabbit IgG IRDye 680CW or goat anti-mouse IgG IRDye 680LT.Relative fluorescence quantification of bound secondary antibodies was performed on an Odyssey Infrared Imaging system and normalized to glyceraldehyde-3-phosphate dehydrogenase.A previously resolved crystal structure of the p50 homodimer bound to DNA was used to gain structural information on the NF-κB1 RHD.34,Ankyrin repeats of NF-κB1 were modeled by using comparative homology modeling with the ankyrin repeats crystal structure of NF-κB2 as a template.35,36,There is no structural information on the region between the sixth and seventh ankyrin repeats,36 and therefore these were omitted in the model.Differences between groups with 1 variable were calculated with a nonpaired Student t test or 1-way ANOVA with the Bonferroni post hoc test, differences between groups with 2 or more variables were calculated with 2-way ANOVA with the Bonferroni post hoc test by using GraphPad Prism 6 software.A P value of less than .05 was 
In an unbiased approach to analysis, we obtained BeviMed posterior probabilities of association with PID for every individual gene in all 846 unrelated patients with PID in the NIHRBR-RD study. Genes with posterior probabilities of greater than .05 are shown in Fig 1, showing that NFKB1 has the strongest prediction of association with disease status. All 13 high-effect variants in NFKB1 were observed in cases only, resulting in the very high posterior probabilities of pathogenicity for this class of variants. On the other hand, moderate-effect variants were observed both in cases and control subjects. The majority had near-zero probability of pathogenicity, but 3 substitutions were observed in the patients with PID only and had posterior probabilities of greater than .15, suggesting their potential involvement in the disease. Genomic variants with a high Combined Annotation Dependent Depletion (CADD) score were found within both the PID and control cohorts, suggesting that this commonly used metric of variant deleteriousness cannot reliably distinguish disease-causing from benign variants in NFKB1. All 16 predicted likely pathogenic variants were private to the PID cohort, and further investigation revealed that all 16 subjects were within the diagnostic criteria of CVID. Assessment of all 390 CVID cases in our cohort for pathogenic variants showed that the next most commonly implicated genes are NFKB2 and Bruton tyrosine kinase (BTK), with 3 explained cases each. Importantly, based on the gnomAD data set of 135,000 predominantly healthy subjects, none of the NFKB1 variants reported here are observed in a single gnomAD subject, even though 90% of our CVID cohort and all of the NFKB1-positive cases had European ancestry. Therefore our results suggest that LOF variants in NFKB1 are the most commonly identified monogenic cause of CVID in the European population, with 16 of 390 patients with CVID, explaining up to 4.1% of our cohort. None of the variants identified here had been reported in the previously described NFKB1 cases.6-11 The NFKB1 gene encodes the p105 protein, which is processed to produce the active DNA-binding p50 subunit.13 The 16 potentially pathogenic variants we identified were all located in the N-terminal p50 part of the protein. The effects of the 3 rare substitutions on NF-κB1 structure were less clear than those of the truncating and gene deletion variants, and therefore we assessed their position in the crystal structure of the p50 protein. Their location in the inner core of the RHD suggested a potential effect on protein stability, whereas other rare substitutions in the NIHRBR-RD cohort were found in locations less likely to affect this. Twelve patients with truncating variants, 1 patient with a gene deletion, and 3 patients with putative protein-destabilizing missense variants were investigated for evidence of reduced protein level. Assessment of the NF-κB1 protein level in PBMCs or neutrophils in 9 index cases and 7 NFKB1 variant–carrying relatives demonstrated a reduction in all subjects. Relative fluorescence quantification of the bands confirmed this and demonstrated a protein level of 38% ± 4.3% compared with healthy control subjects. There was no difference between clinically affected and clinically unaffected subjects. Our observations indicate that the pathogenic NFKB1 variants result in LOF of the NF-κB1 p50 subunit because reduction in protein levels was seen in all carriers regardless of their clinical phenotype and was absent in family members who were
noncarriers."Seven subjects had evidence of familial disease, prompting us to investigate genotype-phenotype cosegregation and disease penetrance in cases for which pedigree information and additional family members were available. "The age at which hypogammaglobulinemia becomes clinically overt is highly variable, as shown by pedigree C in which grandchildren carrying the c.160-1G>A splice-site variant had IgG subclass deficiency, in one case combined with an IgA deficiency.Although not yet overtly immunodeficient, the clinical courses of their fathers and grandmother predict this potential outcome and warrant long-term clinical follow-up of these children.We also observed variants in subjects who were clinically asymptomatic.Pedigree A highlights variable disease penetrance: the healthy mother carries the same Arg284* variant as 2 of her clinically affected children.Identification of this nonsense variant prompted clinical assessment of the extended kindred and demonstrated that her sister had recurrent sinopulmonary disease and nasal polyps with serum hypogammaglobulinemia consistent with a CVID diagnosis.Overall, based on the clinical symptoms observed at the time of this study across 6 pedigrees, the penetrance of NFKB1 variants with respect to the clinical manifestation of CVID is incomplete, with varied expressivity not only of age at disease onset but also of specific disease manifestations, even within the same pedigree.The clinical disease observed among the NFKB1 variant carriers is characteristic of progressive antibody deficiency associated with recurrent sinopulmonary infections by encapsulated microbes, such as Streptococcus pneumoniae and Haemophilus influenzae.The clinical spectrum of NFKB1 LOF includes massive lymphadenopathy, unexplained splenomegaly, and autoimmune disease, either organ-specific and/or hematologic in nature.The percentage of autoimmune complications is based on the presence of autoimmune cytopenias, alopecia areata/totalis, vitiligo, and Hashimoto thyroiditis among the clinically affected cases.Granulomatous-lymphocytic interstitial lung disease and splenomegaly were considered lymphoproliferation.Enteropathy, liver disease, colitis, and a mild decrease in platelet count were neither included in those calculations nor scored separately.Histologic assessment of liver disease found in 3 patients showed no evidence of autoimmune or granulomatous liver disease, although fibrosis and cirrhosis were observed in these male patients.Finally, the number of oncological manifestations, predominantly hematologic, was noticeable.There were 2 cases with solid tumors and 4 cases with hematologic malignancies, which add up to 6 of 21 cases.Index cases and family members carrying NFKB1 variants were approached for repeat venipuncture for further functional assessment."In clinically affected subjects, B-cell numbers and phenotypes were indistinguishable from those described for patients with CVID.37",However, in clinically unaffected subjects the absolute B-cell count was often normal or increased.In all subjects with NFKB1 LOF variants, the numbers of switched memory B cells were reduced, whereas a broad range of nonswitched memory B cells was observed.This demonstrates that although the clinical phenotype of NFKB1 LOF variants is partially penetrant, all carriers have a deficiency in class-switched memory B-cell generation.The presence of increased numbers of the CD21low population described in patients with CVID discriminates between clinically affected and unaffected 
subjects with NFKB1 LOF variants."B cells from subjects with NFKB1 LOF variants demonstrated impaired proliferative responses to anti-IgM/anti-CD40/IL-21 and CpG/IL-2; this corresponded with the inability to generate plasmablasts, which was most pronounced in the more extreme phenotypes.Similarly, ex vivo IgG production was reduced in subjects with LOF variants, whereas IgM levels in the supernatants were normal, which is compatible with hypogammaglobulinemia."The T-cell phenotype was largely normal in the subset distribution.Similar to the knockout mouse model,38 we found an aberrant number of invariant natural killer T cells in clinically affected subjects."T-cell proliferation was intact on anti-CD3/anti-CD28 or IL-15 activation.Because invariant natural killer T cells have been implicated in diverse immune reactions,39 this deficiency might contribute to the residual disease burden in immunoglobulin replacement–treated patients, some of whom had acute or chronic relapsing infection with herpes virus and, in one case, JC virus.In our study we show that LOF variants in NFKB1 are present in 4% of our cohort of patients with CVID, being the most commonly identified genetic cause of CVID.Furthermore, we highlight specific features of these patients that distinguish them within the diagnostic category of CVID, which otherwise applies to an indiscrete phenotype acquired over time that is termed common and variable.The majority of the genetic variants we report here truncate or delete 1 copy of the gene; together with pedigree cosegregation analyses, protein expression, and B-cell functional data, we conclude that NFKB1 LOF variants cause autosomal dominant haploinsufficiency.This has now been recognized as the genetic mode of inheritance for at least 17 known PIDs, including those associated with previously reported variants in NFKB1.6,40-42,In monogenic causes of PID, incomplete penetrance has been more frequently described in haploinsufficient relative to dominant negative PID disease, having been reported in more than half of the monogenic autosomal dominant haploinsufficient immunologic conditions described.40,This might be because dominant negative gain-of-function mutations cause disease by expression of an abnormal protein at any level, whereas, as seen in this study, haploinsufficiency is predicted to lead to 50% residual function of the gene product.By definition, incomplete penetrance of a genetic illness will be associated with substantial variation in the clinical spectrum of disease, and the spectrum seen in this study is consistent with prior reports; in 3 pedigrees with 20 subjects6 harboring heterozygous mutant NFKB1 alleles, the age of onset varied from 2 to 64 years, with a high variety of disease severities, including 2 mutation carriers who were completely healthy at the ages of 2 and 43 years.It is important to temper skepticism of partial penetrance of immune genetic lesions with our knowledge that individual immune genes might have evolved in response to selection pressure for host protection against specific pathogens.43,Consequently, within the relatively pathogen-free environment of developed countries, the relevant pathogen for triggering disease might be scarce, and reports documenting partial penetrance of the clinical phenotype will increase.This makes the traditional approaches to genetics for determining causality difficult.The BeviMed algorithm used in this study prioritized both the gene NFKB1 and individual variants within NFKB1 for contribution to 
causality; the power of methods like this will increase with greater data availability.Identification of a number of rare NFKB1 variants with high Combined Annotation Dependent Depletion scores in both the PID and control data sets highlights the potential for false attribution of disease causality when the genetics of an individual case are considered outside the context of relevant control data.Currently healthy family members carrying the same NFKB1 LOF variant demonstrated similar reductions in p50 expression and low numbers of switched memory B cells as their relatives with CVID.The longitudinal research investigation of these subjects could help identify the additional modifiers, including epigenetic or environmental factors, that influence the clinical penetrance of these genetic lesions.The similarity of results seen in patients with large heterozygous gene deletions and in those with more discrete substitutions is consistent with haploinsufficiency as the shared disease mechanism.In patients with mild antibody deficiency, it is often difficult to decide when to initiate replacement immunoglobulin therapy; this might be the case for subjects and their family members identified with LOF NFKB1 variants.Two measures seem to correlate well with clinical disease.First, the class-switch defect and lower IgG and IgA production ex vivo was examined.Immunoglobulin class-switching is known to be regulated by NF-κB.Mutations in NF-κB essential modulator cause a class-switch–defective hyper-IgM syndrome in human subjects,20 as well as in p50 knockout mice.13,44,45,Haploinsufficiency of NF-κB might result in defective class-switch recombination because of poor expression of activation-induced cytidine deaminase, a gene regulated by NF-κB, which, when absent, is also associated with immunodeficiency.46,Second, the ability to measure the CD21low B-cell population is widespread in diagnostic immunology laboratories, and our study identifies this marker to correlate with NF-κB disease activity.Although the function of these cells remain to be fully elucidated,47 this laboratory test might be useful for longitudinal assessment of clinically unaffected subjects identified with LOF NFKB1 variants.Apart from having recurrent and severe infections for which these patients had been given a diagnosis of PID in the first place, autoimmunity and unexplained splenomegaly are very common manifestations in our patient cohort similar to the other heterozygous NFKB1 cases described.6-11,Although autoimmunity has been subject to variable percentages per cohort study,3,48,49 it seems that these complications occur more frequently in NFKB1-haploinsufficient patients compared with unselected CVID cohorts.In contrast to IKAROS defects but similar to cytotoxic T lymphocyte–associated protein 4 haploinsufficiency, we observed that NFKB1 haploinsufficiency can also result in chronic and severe viral disease, as noted for cytomegalovirus and JC virus infections in 3 of our patients.In the study of Maffucci et al,11 one of the NFKB1-affected patients also experienced Pneumocystis jirovecii infection and progressive multifocal leukoencephalopathy, which is suggestive for JC virus infection.Whether the B-cell defect in NFKB1 haploinsufficiency is responsible for these nonbacterial infections is unclear.50,51,PML is most often discovered in the context of an immune reconstitution inflammatory syndrome, as seen in patients with HIV receiving antiretroviral therapy and in patients with multiple sclerosis after natalizumab 
discontinuation.52,Although the exact contribution of B-cell depletion in PML pathogenesis is unknown, the increased PML risk in rituximab-treated patients53 suggests a protective role for B cells.Three subjects in this cohort had liver failure, and an additional 3 had transaminitis.Although autoimmunity is suspected, a nonhematopoietic origin of liver disease cannot be excluded in the absence of autoantibodies and nodular regenerative disease.Mouse models have suggested a nonimmune role for NF-κB signaling in patients with liver failure.13,54-56,In the cohort of patients with NFKB1 variants, we identified a number of malignancies.Malignancies in patients with PIDs have been cited as the second-leading cause of death after infection,57,58 and murine models have demonstrated that haploinsufficiency of NF-κB1 is a risk factor for hematologic malignancy.59,In a large CVID registry study of 2212 patients, 9% had malignancies, with one third being lymphomas, some presenting before their CVID diagnosis.49,Despite the fact that our cohort is relatively small, we found oncologic manifestations in 29% of our cases, suggesting that malignancies in patients with NFKB1 haploinsufficiency can occur more often than in unselected patients with CVID.In a study in 176 patients with CVID, among the 626 relatives of patients with CVID, no increase in cancer risk was observed,60 suggesting that when this does occur, as in this study, it might be due to a shared genetic lesion.Therefore in a pedigree with an LOF variant in NFKB1, any relatives with cancer should be suspected of sharing the same pathogenic variant.In conclusion, previous publications61,62 have suggested that CVID is largely a polygenic disease.Our results provide further evidence that LOF variants in NFKB1 are the most common monogenic cause of disease to date, even in seemingly sporadic cases.In these patients there is a clear association with complications, such as malignancy, autoimmunity, and severe nonimmune liver disease; this is important because the excess mortality seen in patients with CVID occurs in this group.48,The screening for defined pathogenic NFKB1 variants accompanied by B-cell phenotype assessment has prognostic value and is effective in stratifying these patients.Pathogenic variants in NFKB1 are currently the most common known monogenic cause of CVID.There is a clear association with complications, such as autoimmunity and malignancy, features associated with worse prognosis.These patients can be stratified by NFKB1 protein level and B-cell phenotype.
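The gene prioritization referred to throughout the results above rests on BeviMed's Bayesian comparison of association models. Purely as an illustration of that style of reasoning (a simplified toy, not the BeviMed algorithm, which additionally models per-variant pathogenicity and dominant versus recessive inheritance), the following sketch compares a shared carrier rate against separate case/control carrier rates using Beta-Binomial evidence, with the carrier counts and the 0.01 prior probability of association quoted above.

```python
# Toy Bayesian comparison of "no association" vs "association" for rare-variant carrier counts.
# This is NOT the BeviMed procedure; it only illustrates the flavor of the model comparison.
from math import lgamma, exp

def lbeta(a, b):
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def log_bayes_factor(k_case, n_case, k_ctrl, n_ctrl, a=1.0, b=1.0):
    """Evidence for separate carrier rates in cases and controls vs one shared rate,
    with Beta(a, b) priors on the rates (binomial coefficients cancel in the ratio)."""
    shared = lbeta(k_case + k_ctrl + a, (n_case - k_case) + (n_ctrl - k_ctrl) + b) - lbeta(a, b)
    separate = (lbeta(k_case + a, n_case - k_case + b) - lbeta(a, b)
                + lbeta(k_ctrl + a, n_ctrl - k_ctrl + b) - lbeta(a, b))
    return separate - shared

def posterior_association(k_case, n_case, k_ctrl, n_ctrl, prior=0.01):
    odds = prior / (1.0 - prior) * exp(log_bayes_factor(k_case, n_case, k_ctrl, n_ctrl))
    return odds / (1.0 + odds)

if __name__ == "__main__":
    # 16 carriers among 846 unrelated PID index cases and 0 among 5097 unrelated controls,
    # with the 0.01 prior probability of association used in the study.
    print(posterior_association(16, 846, 0, 5097))  # essentially 1: strong evidence for enrichment
```

With 16 carriers confined to the cases and none among the controls, the posterior probability of association is effectively 1, mirroring the strong NFKB1 signal reported above.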
Background: The genetic cause of primary immunodeficiency disease (PID) carries prognostic information. Objective: We conducted a whole-genome sequencing study assessing a large proportion of the NIHR BioResource–Rare Diseases cohort. Methods: In the predominantly European study population of principally sporadic unrelated PID cases (n = 846), a novel Bayesian method identified nuclear factor κB subunit 1 (NFKB1) as one of the genes most strongly associated with PID, and the association was explained by 16 novel heterozygous truncating, missense, and gene deletion variants. This accounted for 4% of common variable immunodeficiency (CVID) cases (n = 390) in the cohort. Amino acid substitutions predicted to be pathogenic were assessed by means of analysis of structural protein data. Immunophenotyping, immunoblotting, and ex vivo stimulation of lymphocytes determined the functional effects of these variants. Detailed clinical and pedigree information was collected for genotype-phenotype cosegregation analyses. Results: Both sporadic and familial cases demonstrated evidence of the noninfective complications of CVID, including massive lymphadenopathy (24%), unexplained splenomegaly (48%), and autoimmune disease (48%), features prior studies correlated with worse clinical prognosis. Although partial penetrance of clinical symptoms was noted in certain pedigrees, all carriers have a deficiency in B-lymphocyte differentiation. Detailed assessment of B-lymphocyte numbers, phenotype, and function identifies the presence of an increased CD21low B-cell population. Combined with identification of the disease-causing variant, this distinguishes between healthy subjects, asymptomatic carriers, and clinically affected cases. Conclusion: We show that heterozygous loss-of-function variants in NFKB1 are the most common known monogenic cause of CVID, which results in a temporally progressive defect in the formation of immunoglobulin-producing B cells.
Simulation of char-pellet combustion and sodium release inside porous char using lattice Boltzmann method
In computational fluid dynamics, the flow evolution is usually described by Navier–Stokes equations, from the macro scale down to the micro scale.Over the past three decades, a simulation method based on the mesoscopic fluid dynamics, the lattice Boltzmann method, has attracted great interest and attention in the CFD community.Unlike traditional methods, the LBM does not deal with the discretized NS equations, but models flow dynamics by using the discrete Boltzmann equation and through the evolution of the distribution functions .There are several advantages of the LBM as compared to traditional CFD methods : the scheme is simple.The LBM simulates flow with linear equations with relaxation processes; it is easy to deal with complex geometry by changing only the form of distribution functions in the simulation; it is convenient for parallel coding and computing.The advantages of the LBM made this approach widely applied to a variety of flow research areas such as: multiphase and multispecies flow ; microchannel flow and heat transfer ; nanofluid and porous medium flow .The oxidization of porous coal/biomass/char at high temperature is a fundamental process of solid fuel combustion.However, the porous char features a very complex geometry, embedding flow microchannels and micropores , and it is challenging for traditional Navier-Stokes based methods to simulate the reaction of char in such a complex geometry.On the other hand, due to its capability of dealing with complex geometry, early applications of the Lattice Boltzmann method have focused on flow dynamics in porous medium, e.g., flow seeping and wetting of the porous structure .Buckle et al. simulated the permeability of flow in a sandstone based on the X-ray image of the porous sandstone structure.Their LBM simulation results agreed well with experimental ones.Based on what was observed in the simulation, they improved the model of flow penetration in a porous structure.In recent years, the LBM has been upgraded for simulating more complex flow and transport phenomena in porous medium.Kang et al. employed the LBM in the simulation of crystal and colony growth.Chen et al. investigated dissolution and precipitation of solid in a solution and analyzed the effect of precipitation on the surface reaction.Gao et al. employed the LBM to simulate acid treatment on mineral in coal, and compared different reactions of minerals and the effect of acid treatment on the coal structure.Research on the application of the LBM approach to combustion has also been developed over the past two decades.Yamamoto et al. simulated propane combustion in Ni-Cr porous medium and observed asymmetric burning inside the porous medium."The higher temperature than the metal's melting point may then destroy the solid structure.Recently Lin and Luo employed the Boltzmann model to investigate hydrodynamic and thermodynamic nonequilibrium effects around a detonation wave and the effects of different relaxation times on the chemical reactants, products and the relative height of the detonation peak were obtained.Boivin et al. presented a variable-density LBM solver, which successfully simulated a classical freely propagating flame as well as a counterflow diffusion flame, with strains up to extinction.To numerically solve premixed, nonpremixed, or partially premixed nonequilibrium multi-component reactive flows, a discrete Boltzmann model has been developed by Lin et al. 
, which presents not only more accurate hydrodynamic quantities, but also detailed nonequilibrium effects that are essential yet long-neglected by traditional fluid dynamics.Furthermore, a multiple-relaxation-time discrete Boltzmann model was developed for compressible thermal reactive flow .Via the Chapman–Enskog analysis, the developed multiple-relaxation-time DBM is demonstrated to recover reactive Navier–Stokes equations in the hydrodynamic limit.Improvement of overall mass conservation in LBM simulation of multicomponent combustion problems has been achieved in .Even though there have been considerable developments in the LBM community on solving multiphysics flow phenomena, including porous medium flow, premixed, non-premixed combustion and detonation cases, according to the best of our knowledge, LBM simulation of gas-solid two-phase porous-char combustion at the pore scale has not been attempted yet and is one objective of the present work.During the combustion of sodium-rich coal or potassium-rich biomass, vapor of alkali metal minerals are released under heating.The released alkali can condense on boiler walls, causing fouling and corrosion .For instance, the sodium vapor is easy to condense on heat exchanger surfaces and causes serious corrosion.Many researches have focused on the characteristics of alkali release using online and offline measurements .An offline method samples the fuel and emission in different combustion stages and analyzes the sample using a chemical method, such as inductively coupled plasma-atomic emission spectrometry and X-ray diffraction.An online method employs optical instruments to measure time-resolved concentrations of gaseous species released from the burning solid-fuel, such as laser-induced breakdown spectroscopy , and planar laser induced fluorescence .In our previous work, alkali release from a burning coal pellet was investigated using optical methods .The alkali release, pellet mass and pellet temperature were simultaneously measured.The results show that alkali release can be divided into three stages, including the devolatilization, char burnout and ash stages, and most sodium is released during the char burnout stage.Based on those experimental results, a preliminary kinetics of sodium release during coal-pellet burning was developed.The authors have also performed high-fidelity simulation of laboratory-scale pulverized-coal flames to investigate sodium emissions in these turbulent gas-solid two-phase flames .In , a subset of a detailed sodium chemistry was tabulated and coupled with a large-eddy simulation solver to predict sodium emissions in a spatially developing pulverized-coal flame.In , direct numerical simulation of a temporally developing pulverized-coal flame was performed.The predictions of sodium emissions were compared between tabulation based on one-dimensional premixed flamelets and DNS results.In these turbulent two-phase reacting flow simulations, pulverized-coal particles were approximated by point particles with mass, but no volume.Since there were no reliable data sources available on species compositions of the sodium compounds vapor released from a pulverized-coal particle under heating, it was also assumed in and that atomic sodium was released together with coal volatile during coal-particle pyrolysis.This constitutes a major assumption, which remains to be relaxed to improve predictions of sodium emissions in pulverized-coal combustion in future numerical studies.Along these lines, this study constitutes a first 
step to examine how alkali metal compounds are released from inside a burning porous coal/char particle, which is the second objective of this work. In the present study, models for the solid-phase char combustion and sodium release are developed and implemented into the open-source LBM flow solver DL_MESO. We first validate the LBM approach against our in-house experimental data of the combustion of a single 4-mm pellet. In these experiments, the temperature varies by only 10% between 1600 K and 1760 K. As a first step, therefore, the simulations are performed assuming a constant gas density, which means the impact of temperature variation on the flow field is neglected, i.e., one-way coupling. Then, the validated numerical tool is used to simulate the combustion of a porous char particle at the microscale, to explore char oxidation, sodium release, and their intricate connections. In LBM simulations, both the grid size and the time step are unity. So, the physical values need to be normalized and rescaled in the lattice space for simulation. The rule for the conversion is to keep a dimensionless number identical in both the lattice space and the physical system. The most common dimensionless numbers are the Reynolds number for flow, the Prandtl number for heat transfer and the Schmidt number for species transport. In the present study, the normalization of the source terms follows the procedure discussed in the references. The numerical implementation in DL_MESO has been validated on a constant-density, counterflow premixed propane flame. More details about the conversion method and validation results can be found in the supplementary material. Well-established models for solid-char combustion and alkali release are combined with the LBM to simulate the combustion of a Zhundong coal pellet, previously studied experimentally. In practice, Qrad is normalized by the char mass in the cell to advance in time the temperature distribution function. Five types of grid points exist in the computational domain in addition to the boundaries, as illustrated in Fig. 2. They are organized as follows: grid points inside the solid phase, e.g., A in Fig. 2, on which only temperature conduction is calculated, and both species mass fractions and velocities are set to zero; grid points on the solid-phase boundary, e.g., B, E, and G in Fig. 2, on which the bounce-back boundary condition was applied for flow velocity and species compositions, and the corresponding temperature source term is added according to the reaction; grid points in the inner layer of the gas-solid interaction region, e.g., M, C, K, H, and F in Fig. 2, on which the flow velocity and the scalars are solved, and the source terms are generated from both the solid-phase char combustion and the gas-phase CO reaction; grid points in the outer layer of the gas-solid interaction region, e.g., D, N, W, R, P and O in Fig. 2 (according to the buffer-zone interaction-layer approaches of immersed boundary theory, a two-layer interaction region has been used in the present study; for both the inner- and outer-layer interaction regions, the flow velocity and scalars are solved, and the source terms are generated from both the solid-phase char combustion and the gas-phase CO reaction); and grid points in the gas phase, e.g., V in Fig. 2, on which general LBM simulations were performed, and the source term was generated from CO combustion only. The above five different types of grid points are distinguished in the simulation by constructing a phase function.
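As a purely illustrative sketch of how such a phase function might be assigned from a binary solid/gas mask (the integer values follow the convention stated in the next sentence; the 8-neighbour search and array layout are assumptions, and this is not the DL_MESO implementation), consider:

```python
# Illustrative construction of a phase function on a 2-D lattice from a solid mask.
# 4: solid interior, 3: solid boundary, 2: inner gas-solid interaction layer,
# 1: outer interaction layer, 0: bulk gas, following the convention described below.
import numpy as np

def neighbour_or(mask):
    """True where any of the 8 lattice neighbours of a cell is True (non-periodic edges)."""
    padded = np.pad(mask, 1, constant_values=False)
    out = np.zeros_like(mask)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue
            out |= padded[1 + di: padded.shape[0] - 1 + di,
                          1 + dj: padded.shape[1] - 1 + dj]
    return out

def build_phase_function(solid):
    """solid: boolean array, True on char nodes. Returns the integer phase function."""
    gas = ~solid
    solid_boundary = solid & neighbour_or(gas)       # solid nodes with at least one gas neighbour
    inner_layer = gas & neighbour_or(solid)          # gas nodes touching the solid surface
    outer_layer = gas & ~inner_layer & neighbour_or(inner_layer)
    fphase = np.zeros(solid.shape, dtype=int)        # 0: bulk gas
    fphase[outer_layer] = 1
    fphase[inner_layer] = 2
    fphase[solid] = 4                                # solid interior
    fphase[solid_boundary] = 3                       # overwrite the solid surface nodes
    return fphase

if __name__ == "__main__":
    demo = np.zeros((9, 9), dtype=bool)
    demo[3:6, 3:6] = True                            # a small square "char" block
    print(build_phase_function(demo))
```

Running the example classifies the interior of the small square block as 4, its surface as 3, and the two gas layers wrapping it as 2 and 1, with 0 elsewhere.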
In Fig. 2, fphase(A) = 4, fphase(B, E, G) = 3, fphase(M, C, K, H, F) = 2, fphase(D, N, W, R, P, O) = 1 and fphase(V) = 0. For lattice points with the phase function '0', '1' or '2', the velocity, temperature, species compositions and density are solved; for grid points with the phase functions of '3' or '4', only temperature conduction is solved. In addition, the difference in the heat diffusivity coefficient between the solid and gas phases was simulated by using different relaxation times for solid and gas grid points. In the previously reported experiment, a 4-mm Zhundong coal pellet with an initial mass of 50 mg is suspended 1 cm above a laminar, fuel-lean methane flame. The diameter of the burner is 3 cm. The composition of the gas flowing through the pellet is estimated from CHEMKIN calculations, which is 3.9% O2, 7.6% CO2, 15.4% H2O and 72.8% N2 when the flow velocity is 0.15 m/s. The flame temperature is 1892 K, and the gas temperature is about 1600 K at the pellet location because of heat loss. The coal pellet undergoes devolatilization, char burnout and ash cooking. The data for the char burnout stage were specifically extracted for the present work. The char burning started 50 s after ignition. The initial mass and diameter of the char pellet are 35 mg and 4 mm, respectively. The swelling effect is neglected. During the whole course of the char burning, the temperature varies from 1600 K to 1760 K, and the flow Reynolds number based on the char-pellet diameter and flow velocity is 3.84. The computational domain is 4.5 cm in height and 3 cm in width. Following the conclusion of the grid independence study, the details of which can be found in the supplemental material, a 427 × 285 grid is used. The bottom boundary is the inlet of the domain with fixed values of velocities and scalars. The top boundary is the outlet with zero normal gradients. The left and right boundaries are periodic. The inlet velocity is 0.15 m/s, and the inlet temperature is 1600 K. At the solid surface, the bounce-back boundary condition is set for the flow and species mass fractions. To match physical flow properties, the relaxation time in the BGK model is set to 0.989. Both Pr and Sc are assumed to be unity. The instantaneous simulation results of the shrinking char pellet, velocity and scalars at Cchar = 60% are shown in Fig.
4-.The image of the initial char pellet is shown in for comparison.As the combustion progresses, the mass of the char pellet is continuously consumed and the pellet size shrinks .Since oxygen is fed from the bottom of the pellet, the bottom surface of the char pellet is consumed faster than the top, making the char particle non-spherical.As expected, a wake develops downstream of the char pellet.CO combustion increases gas temperature around the pellet.The heat capacity of the char being much higher than the gas one, the solid temperature stays below the gas temperature.The temperature distribution inside the solid particle stays almost uniform, which is consistent with the low Biot number estimated from the experimental study .The oxygen reaching the solid surface reacts with char, generating CO and CO2, so the O2 mass fraction decreases around the pellet while the CO and CO2 mass fractions increase.The generated CO further reacts with O2 in the gas phase, thereby its mass fraction decreasing sharply.Sodium vapor is released from the burning char pellet without further accounting for gas phase alkali reactions.Accordingly, the sodium concentration decreases as the sodium vapor travels away from the char pellet.Comparisons against measurements of the char-particle mass and temperature evolutions are shown in Fig. 5, confirming the validity of the proposed approach.Because of the ash inhibition effect, the slope of the carbon conversion decreases with time.This phenomenon is well captured by the simulation.During the intermediate stage, however, the simulation slightly underpredicts the carbon burnout.This is attributed to ash inhibition over limiting the char burning because of the fixed value of θ used in Eq.Indeed, the porosity of the ash should start at a lower value and then gradually increase towards θ = 0.2, to match the carbon conversion measurement in the final stage.Instead of looking for an ad-hoc θ response, it is discussed in a subsequent section how the internal char pellet topology can be included in the simulation.Overall, the temperature result is consistent with the experimental one.At the initial stage the simulation underpredicts the char-particle temperature, but at the final stage overpredicts it.The difference at the initial stage is mainly attributed to the measurement itself.The volatile combustion, prior to the char combustion stage, heats the char pellet by flame radiation.However, this radiative effect of volatile burning was not considered in the simulation.The difference at the final stage is attributed to the evolution of the heat capacity and surface emission of the pellet, which are initially determined by carbon and later by ash.Even though a linear fitting was adopted in the simulation for the two parameters, it is still not accurate enough to reproduce the change of physical characteristics of the pellet, featuring a shrinking char core imbedded in a thickening ash layer.The elemental sodium flux at 1 cm above the char pellet is collected as in the experiment.The elemental sodium is defined summing over all sodium species.The radial distributions of the elemental sodium concentration at 1 cm above the pellet are also shown in Fig. 
6, for three different times when the carbon conversion ratio is 30%, 60% and 90%, respectively.Considering the complexity of the processes at play, the predicted sodium fluxes are in quite good agreement with the LIBS measurements.Because of the lower temperature predictions, the sodium mass fractions at early times are lower than the experimental data.Similarly, the overprediction of temperature at the final stage leads to an overprediction of sodium release, to which should be added the uncertainty in the experimental determination of the sodium flux from the LIBS measurement.Moreover, the measured velocity distribution at the initial stage was used to calculate the integral of the sodium fluxes in the char burnout stages.In fact, the velocity distribution evolves because of the shrinking of the pellet.So, the sodium flux is expected to be larger than the value measured by LIBS.The prediction of the radial distribution of the elemental sodium concentration shows a similar trend to that of the sodium flux.It is lower than the experimental profile at the initial stage, and becomes higher at the final stage of the char combustion, still with a very encouraging overall agreement.The simulation underpredicts the radial span of sodium diffusion.All the simulation results at the radial distance of 0.009 m from the centerline are lower than the measurements.First, the released sodium includes the sodium element from a variety of sodium species, which may undergo differential diffusion.But in the simulation, the Schmidt number was set to unity.Second, the presence of ash is likely to introduce fluctuations in the sodium release and thereby variability in the gas phase distributions, a phenomenon which is neglected in the simulation.The developed LBM simulation framework is now used to explore porous-char combustion and sodium release from porous char.The porous char structure is taken from Minkina et al. on pyrolysis of a subbituminous coal particle under 1073 K.The computational configuration is shown in Fig. 
7.The physical size of the porous-char image is 500 × 500 µm.The selected microstructure features pore blockage, flow channel bifurcation and closed pores inside the char, with an overall porosity of 21.5%.The objective is to model the combustion of a char particle transported by a flow and entering a high temperature zone.Imposing a 2/3 pressure drop between the left and right boundaries of the two-dimensional lattice, a flow develops through the char with a Reynolds number of the order of 1.2, a value which is representative of flow regimes usually observed in such porous media.Initially, the porous char is filled with cold air.The hot air entering the simulated domain at a temperature of 1600 K contains 23.3% oxygen and 76.7% nitrogen in mass fractions.After checking the grid independence, a 300 × 300 lattice grid is used.A zero-gradient boundary condition is used for all the scalars at the outlet.The top and bottom boundaries of the domain are set as periodic.As above, the bounce back boundary for flow velocity and species mass fractions is applied at the solid surface, and heat conduction is solved inside the solid phase.The characteristic parameters of Zhundong char are applied as they are very close to those of the subbituminous coal sample used to build the porous structure for the simulation .The density, reaction parameters and sodium proportion are therefore those of Section 4.3, which have been used to validate the LBM methodology against experimental measurements.In the present simulation, reactions due to CO in the gas phase and char on the solid phase surface are both considered.Because the amount of ash generated in this porous structure is expected to be small, the ash inhibition effect on oxygen diffusion is also neglected.Figure 8 shows the results observed at the time when 20% of the carbon conversion is reached.The flow proceeds from left to right and, because of the higher temperature carried by the flow, the left side of the porous char structure is firstly consumed.Two main flow channels are established.As these channels become narrowed or bended, the gas flow accelerates or changes its direction, with significant acceleration in the narrower zones.It can be observed that CO generated at the char surface will burn at the front side of the domain facing the incoming flow, causing local high temperature that further heats the char structure.The evolution of the porous structure and velocity distribution during the porous char combustion processes are shown in Fig. 
9 when the carbon conversion ratio is 0%, 20%, 40%, 60%, 80% and 100%.Closed pore spaces are opened and narrow flow paths are expanded during char combustion, followed by an increase of the porosity.This increase in porosity stimulates the overall burning rate of char, and the destruction of internal pores releases the gas phase components that were previously stored in the closed pores, such as the alkali metal vapor.Figure 10 shows that a significant proportion of the oxygen entering the domain is consumed upfront of the char by combustion both in the gas phase and on the solid surface.The remaining oxygen diffuses through flow channels to react at the solid-char surface to produce carbon monoxide and carbon dioxide.These species are then transported into the porous medium.Due to the complicated flow paths in the porous structure and the smaller gas flow rate, the generated CO and CO2 tend to accumulate at the reaction front, with also a certain ability to diffuse upstream.Some parts of CO will also reacted with the coming O2 and forming CO2.At early times, CO and CO2 presented deep inside the porous structure are due to flow transport and molecular diffusion, since there is a lack of oxygen there.Interestingly, the distribution characteristics of sodium vapor released into the gas phase differ from those of the combustion products.The regions with high combustion-product concentrations mainly feature the highest momentum levels, while the regions with a high concentration of sodium are mainly those with a weak flow velocity.Although sodium vapor is released from the solid phase by heating, there is not that much of sodium at the flame locations.Indeed, because of the large differences in transport properties between the gas and solid phases, once released, the sodium vapor rapidly diffuses inside the microchannels, while the solid char needs to be heated for its consumption to propagate.On the other hand, in retarded flow channels and closed pores, where flow transport is limited, the continuous accumulation of sodium vapor released makes its concentration higher.It should be pointed out that LBM simulation has also been performed for the burning of a Zhundong coal char particle with a lower overall porosity 11.1%.The porous structure was taken from in-house SEM images.Except that the time scale for the Zhundong coal char particle to complete burning is longer due to a lower porosity, we observed an identical trend of the distribution characteristics of combustion products and sodium vapor in the porous structure.The time evolution of the average over the computational domain of the carbon conversion and solid-phase temperature is shown in Fig. 
11.As the combustion progresses, the slope of the carbon conversion ratio slightly decreases after a first increase, and then increases again.This sequence is explained by the fact that the char close to the left boundary, thus directly encountering the incoming oxygen, burns very quickly; as the combustion progresses, the flame front of the char reaction is gradually moving away from the left boundary, and the accumulation of the combustion products and CO combustion in the gas phase weakens the diffusion of oxygen to the reaction front, thereby decreasing the char burning rate.As the combustion of char further continues, narrow flow channels expand, oxygen diffusion proceeds and the increased reaction specific surface area favors char combustion again.At the initial stage of the combustion, the solid phase temperature starts to rise due to the heating by both char combustion on the solid surface and CO combustion in the gas phase.As the reaction proceeds, the reaction front shifts towards the right, the unreacted proportion of the solid phase decreases, making the average temperature increase.At the final stage of the char combustion, the solid phase reaction rate is enhanced by the accelerated gas diffusion and the reduced reaction area, so a further increase of the average temperature is observed.Char combustion and alkali metal release in a porous-char particle were investigated using a lattice Boltzmann method.The standard LBM flow formalism is modified to account for a solid fuel phase, including gaseous and surface chemical reactions.Single-step global kinetics are used and the decomposition of ash was not considered, but the inhibition of ash on oxygen diffusion is included.The LBM simulation results were first validated against previous char combustion and sodium release measurements.The pellet mass, particle temperature and sodium release are in quite good agreement with the experimental data, demonstrating the prediction capabilities of the proposed approach.The combustion of an actual sub-bituminous porous char particle is then addressed, simulating the flow dynamics inside the particle.The production and diffusion of CO and CO2 inside the porous char structure follows the internal flow direction; due to CO combustion in the gas phase and char burning on the solid surface, most oxygen is consumed at the reaction front bordering the particle, thus limiting the diffusion of O2 inside the porous medium; because of the different heat and mass transfer and chemical time scales at play, the distribution of the volatile products, such as sodium vapor, differs from that of the combustion products CO/CO2.
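To make the flow-solver skeleton underlying these simulations concrete, the following is a minimal sketch of a single-relaxation-time (BGK) collide-and-stream update on a D2Q9 lattice with full-way bounce-back at solid char nodes. It omits heat transfer, species transport and the reaction source terms discussed above, uses assumed parameter values, and is an illustration rather than the DL_MESO implementation.

```python
import numpy as np

# D2Q9 lattice: discrete velocities, weights, and opposite directions for bounce-back
e = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
opp = np.array([0, 3, 4, 1, 2, 7, 8, 5, 6])

def equilibrium(rho, ux, uy):
    """Second-order Maxwellian equilibrium distributions on the D2Q9 lattice."""
    usq = 1.5 * (ux**2 + uy**2)
    feq = np.empty((9,) + rho.shape)
    for i in range(9):
        eu = e[i, 0]*ux + e[i, 1]*uy
        feq[i] = w[i] * rho * (1.0 + 3.0*eu + 4.5*eu**2 - usq)
    return feq

def bgk_step(f, solid, tau):
    """One collide-and-stream update; 'solid' marks char nodes treated by bounce-back."""
    rho = f.sum(axis=0)
    ux = (f * e[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * e[:, 1, None, None]).sum(axis=0) / rho
    fluid = ~solid
    feq = equilibrium(rho, ux, uy)
    f[:, fluid] += -(f[:, fluid] - feq[:, fluid]) / tau   # BGK collision on gas nodes only
    f[:, solid] = f[opp][:, solid]                        # reverse populations on solid nodes
    for i in range(9):                                    # streaming along the lattice links
        f[i] = np.roll(f[i], shift=(e[i, 0], e[i, 1]), axis=(0, 1))
    return f

if __name__ == "__main__":
    nx, ny, tau = 64, 64, 0.6
    solid = np.zeros((nx, ny), dtype=bool)
    solid[28:36, 28:36] = True                            # a square obstacle standing in for char
    f = equilibrium(np.ones((nx, ny)), np.full((nx, ny), 0.05), np.zeros((nx, ny)))
    for _ in range(200):
        f = bgk_step(f, solid, tau)
    print("total mass =", f.sum())                        # mass is conserved by the update
```

In the paper's approach, temperature and species mass fractions are advanced with additional distribution functions of the same form, and the char-combustion, CO-oxidation and sodium-release source terms are added on the node types identified by the phase function.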
Char-pellet combustion is studied with the lattice Boltzmann method (LBM) including sodium release and the ash inhibition effect on oxygen diffusion in the porous char. The sodium release and the shrinking of the char pellet are simulated by accounting for the reactions occurring both in the solid and gas phases. The combustion of a single char pellet is considered first, and the results are compared against measurements. The simulation of the pellet mass, pellet temperature and sodium release agreed well with in-house optical measurements. The validated lattice Boltzmann approach is then extended to investigate the combustion of porous char and sodium release inside the porous medium. The pore-structure evolution and the flow path variation are simulated as combustion proceeds. The simulations reproduce the expected different behaviors between the combustion products (CO and CO2) and the released volatile, here the sodium vapor. The combustion products are mostly generated at the flame front and then transported by the flow and molecular diffusion inside the complex porous char structure. However, the volatile sodium vapor forms in the entire porous char and tends to accumulate in regions where the flow motion stays weak, as in internal flow microchannels, or blocked, as in closed pores. These results confirm the potential of the LBM formalism to tackle char-pellet combustion accounting for the topology of the porous medium.
Response to ‘Burden of proof: A comprehensive review of the feasibility of 100% renewable-electricity systems’
There is a broad scientific consensus that anthropogenic greenhouse gas emissions should be rapidly reduced in the coming decades in order to avoid catastrophic global warming .To reach this goal, many scientific studies have examined the potential to replace fossil fuel energy sources with renewable energy.Since wind and solar power dominate the expandable potentials of renewable energy , a primary focus for studies with high shares of renewables is the need to balance the variability of these energy sources in time and space against the demand for energy services.The studies that examine scenarios with very high shares of renewable energy have attracted a critical response from some quarters, particularly given that high targets for renewable energy are now part of government policy in many countries .Critics have challenged studies for purportedly not taking sufficient account of: the variability of wind and solar , the scaleability of some storage technologies , all aspects of system costs , resource constraints , social acceptance constraints , energy consumption beyond the electricity sector , limits to the rate of change of the energy intensity of the economy and limits on capacity deployment rates .Many of these criticisms have been rebutted either directly or are addressed elsewhere in the literature, as we shall see in the following sections.In the recent article ‘Burden of proof: A comprehensive review of the feasibility of 100% renewable-electricity systems’ the authors of the article analysed 24 published studies of scenarios for highly renewable electricity systems, some regional and some global in scope.Drawing on the criticisms outlined above, the authors chose feasibility criteria to assess the studies, according to which they concluded that many of the studies do not rate well.In this response article we argue that the authors’ chosen feasibility criteria may in some cases be important, but that they are all easily addressed both at a technical level and economically at low cost.We therefore conclude that their feasibility criteria are not useful and do not affect the conclusions of the reviewed studies.Furthermore, we introduce additional, more relevant feasibility criteria, which renewable energy scenarios fulfil, but according to which nuclear power, which the authors have evaluated positively elsewhere , fails to demonstrate adequate feasibility.In Section 2 we address the definition and relevance of feasibility versus viability; in Section 3 we review the authors’ feasibility criteria and introduce our own additional criteria; in Section 4 we address other issues raised by ; finally in Section 5 conclusions are drawn.Early in their methods section, the authors define feasibility to mean that something is technically possible in the world of physics ‘with current or near-current technology’.They distinguish feasibility from socio-economic viability, which they define to mean whether it is possible within environmental and social constraints and at a reasonable cost.While there is no widely-accepted definition of feasibility , other studies typically include economic feasibility in their definition , while others also consider social and political constraints .For the purposes of this response article, we will keep to the authors’ definitions of feasibility and viability."One reason that few studies focus on such a narrow technical definition of feasibility is that, as we will show in the sections below, there are solutions using today's technology for all the 
feasibility issues raised by the authors.The more interesting question, which is where most studies rightly focus, is how to reach a high share of renewables in the most cost-effective manner, while respecting environmental, social and political constraints.In other words, viability is where the real debate should take place.For this reason, in this paper we will assess both the feasibility and the viability of renewables-based energy systems.Furthermore, despite their declared focus on feasibility, the authors frequently mistake viability for feasibility.Examples related to their feasibility criteria are examined in more detail below, but even in the discussion of specific model results there is confusion.The authors frequently quote from cost-optimisation studies that ‘require’ certain investments.For example they state that ‘required 100 GWe of nuclear generation and 461 GWe of gas’ and ‘require long-distance interconnector capacities that are 5.7 times larger than current capacities’.Optimisation models find the most cost-effective solutions within technical constraints.An optimisation result is not necessarily the only feasible one; there may be many other solutions that simply cost more.More analysis is needed to find out whether an investment decision is ‘required’ for feasibility or simply the most cost-effective solution of many.For example, the 100 GWe of nuclear in is fixed even before the optimisation, based on existing nuclear facilities, and is therefore not the result of a feasibility study.However, the authors do acknowledge that their transmission feasibility criteria ‘could arguably be regarded as more a matter of viability than feasibility’.Finally, when assessing economic viability, it is important to keep a sense of perspective on costs.If Europe is taken as an example, Europe pays around 300–400 billion € for its electricity annually.1,EU GDP in 2016 was 14.8 trillion € .Expected electricity network expansion costs in Europe of 80 billion € until 2030 may sound high, but once these costs are annualised, it amounts to only 2% of total spending on electricity, or 0.003 €/kWh.The authors define feasibility criteria and rate 24 different studies of 100% renewable scenarios against these criteria.According to the chosen criteria, many of the studies do not rate highly.In the sections below we address each feasibility criterion mentioned by the authors, and some additional ones which we believe are more pertinent.In addition, we discuss the socio-economic viability of the feasible solutions.We observe that the authors’ choice of criteria, the weighting given to them and some of the scoring against the criteria are somewhat arbitrary.As argued below, there are other criteria that the authors did not use in their rating that have a stronger impact on feasibility; based on the literature review below, the authors’ criteria would receive a much lower weighting than these other, more important criteria; and the scoring of some of the criteria, particularly for primary energy, transmission and ancillary services, seems coarse and subjective.Regarding the scoring, for demand projections the studies are compared with a spectrum from the mainstream literature, but no uncertainty bound is given, just a binary score; for transmission there is no nuance between studies that use blanket costs for transmission, or only consider cross-border capacity, or distribution as well as transmission networks; and no weighting is given to the importance of the different ancillary services.Finally, 
note that while some of the studies chosen by the authors consider the electricity sector only, other studies include energy demand from other sectors such as transport, heating and industry, thereby hindering comparability between the studies.The authors criticise some of the studies for not using plausible projections for future electricity and total energy demand.In particular, they claim that reducing global primary energy consumption demand is not consistent with projected population growth and development goals in countries where energy demand is currently low.Nobody would disagree with the authors that any future energy scenario should be compatible with the energy needs of every citizen of the planet.A reduction in electricity demand, particularly if heating, transport and industrial demand is electrified, is also unlikely to be credible.For example, both the Greenpeace Energy evolution and WWF scenarios, criticised in the paper, see a significant increase in global electricity consumption; another recent study of 100% renewable electricity for the globe foresees a doubling of electricity demand between 2015 and 2050, in line with IEA estimates for electricity ."However, the authors chose to focus on primary energy, for which the situation is more complicated, and it is certainly plausible to decouple primary energy consumption growth from meeting the planet's energy needs.Many countries have already decoupled primary energy supply from economic growth; Denmark has 30 years of proven history in reducing the energy intensity of its economy .There are at least three points here: i) primary energy consumption automatically goes down when switching from fossil fuels to wind, solar and hydroelectricity, because they have no conversion losses according to the usual definition of primary energy; ii) living standards can be maintained while increasing energy efficiency; iii) renewables-based systems avoid the significant energy usage of mining, transporting and refining fossil fuels and uranium.Fig. 
1 illustrates how primary energy consumption can decrease by switching to renewable energy sources, with no change in the energy services delivered.Using the ‘physical energy accounting method’ used by the IEA, OECD, Eurostat and others, or the ‘direct equivalent method’ used by the IPCC, the primary energy consumption of fossil fuel power plants corresponds to the heating value, while for wind, solar and hydro the electricity output is counted.This automatically leads to a reduction in the primary energy consumption of the electricity sector when switching to wind, solar and hydro, because they have no conversion losses."In the heating sector, fossil-fuelled boilers dominate today's heating provision; here, primary energy again corresponds to the heating value of the fuels.For heat pumps, the heat taken from the environment is sometimes counted as primary energy , sometimes not ; in the latter case the reduction in primary energy consumption is 60–75% , depending on the location and technology, if wind, solar and hydro power are used.Cogeneration of heat and power will also reduce primary energy consumption.In addition, district heating can be used to recycle low-temperature heat that would otherwise be lost, such as surplus heat from industrial processes .For biomass, solar thermal heating and resistive electric heating from renewables there is no significant reduction in primary energy compared to fossil-fuelled boilers.In transport, the energy losses in an internal combustion engine mean that switching to more efficient electric vehicles running on electricity from wind, solar and hydro will reduce primary energy consumption by 70% or more for the same service.If statistics from the European Union in 2015 are taken as an example, taking the steps outlined in Fig. 1 would reduce total primary energy consumption by 49%2 without any change in the delivered energy services.,A reduction of total primary energy of 49% would allow a near doubling of energy service provision before primary energy consumption started to increase.This is even before efficiency measures and the consumption from fuel processing are taken into account.The primary energy accounting of different energy sources presented in this example is already enough to explain the discrepancies between the scenarios plotted in Fig. 
The primary energy accounting of different energy sources presented in this example is already enough to explain the discrepancies between the scenarios plotted in Fig. 1 of the authors' review, where the median of non-NGO global primary energy consumption increases by around 50% between 2015 and 2050, while the NGOs Greenpeace and WWF see slight reductions. As an example of a non-NGO projection with high primary energy demand, many IPCC scenarios with reduced greenhouse gas emissions rely on bioenergy, nuclear and carbon capture from combustion, whereas the NGOs Greenpeace and WWF have high shares of wind and solar. The IPCC scenarios see less investment in wind and solar because of conservative cost assumptions, with some assumptions for solar PV that are 2–4 times below current projections; with improved assumptions, some authors calculate that PV could dominate global electricity by 2050 with a share of 30–50%. Another study of 100% renewable energy across all energy sectors in Europe sees a 10% drop in primary energy supply compared to a business-as-usual scenario for 2050, with bigger reductions if synthetic fuels for industry are excluded. The authors chose to concentrate on primary energy consumption, but for renewables, as argued above, it can be a misleading metric. The definitions of both primary and final energy are suited for a world based on fossil fuels. What really matters is meeting people's energy needs while also reducing greenhouse gas emissions. Next we address energy efficiency that goes beyond just switching fuel source. There is plenty of scope to maintain living standards while reducing energy consumption: improved building insulation and design to reduce heating and cooling demand, more efficient electronic devices, efficient processes in industry, better urban design to lower transport demand, more public transport and reductions in the highest-emission behaviour. These efficiency measures are feasible, but it is not clear that they will all be socio-economically viable. For example, in a study for a 100% renewable German energy system, scenarios were considered where space heating demand is reduced by between 30% and 60% using different retrofitting measures. Another study for cost-optimal 100% renewables in Germany shows similar reductions in primary energy in the heating sector from efficiency measures and the uptake of cogeneration and heat pumps. The third point concerns the upstream costs of conventional fuels. It was recently estimated that 12.6% of all end-use energy worldwide is used to mine, transport and refine fossil fuels and uranium; renewable scenarios avoid this fuel-related consumption. One final, critical point: even if future demand is higher than expected, this does not mean that 100% renewable scenarios are infeasible. As discussed in Section 3.6, the global potential for renewable generation is several factors higher than any demand forecasts. There is plenty of room for error if forecasts prove to underestimate demand growth: an investigation into the United States Energy Information Administration's Annual Energy Outlook showed systematic underestimation of total energy demand by an average of 2% per year after controlling for other sources of projection errors; over 35 years this would lead to an underestimate of around factor 2; reasonable global potentials for renewable energy could generate on average around 620 TW, which is a factor 30 higher than business-as-usual forecasts for average global end-use energy demand of 21 TW in 2050. The authors stress that it is important to model in a high time resolution so that all the variability of demand and renewables is accounted for. They give one point to models with hourly resolution and three
points to models that simulate down to 5 min intervals. It is of course important that models have enough time resolution to capture variations in energy demand and variations in wind and solar generation, so that balancing needs, networks and other flexibility options can be dimensioned correctly. However, the time resolution depends on the area under consideration, since short-term weather fluctuations are not correlated over large distances and therefore balance out. This criterion should rather read ‘the time resolution should be appropriate to the size of the area being studied, the weather conditions found there and the research question’. Models for whole countries typically use hourly simulations, and we will argue that this is sufficient for long-term energy system planning. After all, why do the authors stop at 5 min intervals? For a single wind turbine, a gust of wind could change the feed-in within seconds. Similarly, a cloud could cover a small solar panel in under a second. Individuals can change their electricity consumption at the flick of a switch. The reason modelling in this temporal detail is not needed is the statistical smoothing when aggregating over a large area containing many generators and consumers. Many of the studies are looking at the national or sub-national level. By modelling hourly, the majority of the variation of the demand and variable renewables like wind and solar over these areas is captured; if there is enough flexibility to deal with the largest hourly variations, there is enough to deal with any intra-hour imbalance. Fig. 2 shows correlations in variations in wind generation at different time and spatial scales. Changes within 5 min are uncorrelated above 25 km and therefore smooth out in the aggregation. Further analysis of sub-hourly wind variations over large areas can be found in the literature. For solar photovoltaics the picture is similar at shorter time scales: changes at the 5-min level due to cloud movements are not correlated over large areas. However, at 30 min to 1 h there are correlated changes due to the position of the sun in the sky or the passage of large-scale weather fronts. The decrease of PV output in the evening can be captured at one-hour resolution and there are plenty of feasible technologies available for matching that ramping profile: flexible open-cycle gas turbines can ramp up within 5–10 min, hydroelectric plants can ramp within minutes or less, while battery storage and demand management can act within milliseconds. For ramping down, solar and wind units can curtail their output within seconds. The engineering literature on sub-hourly modelling confirms these considerations. Several studies consider the island of Ireland, which is particularly challenging since it is an isolated synchronous area, is only 275 km wide and has a high penetration of wind. One power system study for Ireland with a high share of wind power varied temporal resolution between 60 min and 5 min intervals, and found that the 5 min simulation results gave system costs just 1% higher than hourly simulation results; however, unit commitment constraints and higher ramping and cycling rates could be problematic for older thermal units. Similarly, other studies see no feasibility problems at sub-hourly time resolutions, but a higher value for flexible generation and storage, which can act to avoid cycling stress on older thermal plants.
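The statistical smoothing argument can be illustrated with a toy simulation; the sketch below uses entirely synthetic numbers (a smooth shared daily profile plus site-specific, uncorrelated 5-minute noise) and is an assumption-laden illustration of the aggregation effect, not an analysis of real feed-in data.

import numpy as np

rng = np.random.default_rng(0)

steps = 24 * 12                                            # 5-minute intervals in one day
t = np.arange(steps)
slow_profile = 0.5 + 0.3 * np.sin(2 * np.pi * t / steps)   # shared, spatially correlated component

def intra_hour_variability(n_sites):
    # Each site shares the slow profile but has its own uncorrelated 5-minute noise.
    noise = 0.15 * rng.standard_normal((n_sites, steps))
    sites = np.clip(slow_profile + noise, 0.0, 1.0)
    aggregate = sites.mean(axis=0)                         # feed-in of the whole area
    hourly_mean = aggregate.reshape(24, 12).mean(axis=1).repeat(12)
    # Residual sub-hourly deviation of the aggregate, relative to its mean output.
    return np.std(aggregate - hourly_mean) / aggregate.mean()

for n in (1, 10, 100, 1000):
    print(f"{n:5d} sites: relative intra-hour variability {intra_hour_variability(n):.3f}")

Because the 5-minute fluctuations are uncorrelated between sites, the residual sub-hourly variability of the aggregate falls roughly as one over the square root of the number of sites, which is why an hourly profile carries almost all of the relevant information at country scale.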
In another study, the difference between hourly and 15-min simulations in small district heating networks with high levels of wind power penetration was considered, and it was found that ‘the differences in power generation are small’ and that there is no need for higher resolution modelling. To summarise, since at large spatial scales the variations in aggregated load, wind and solar time series are statistically smoothed out, none of the large-scale model results change significantly when going from hourly resolution down to 5-min simulations. Hourly modelling will capture the biggest variations and is therefore adequate to dimension flexibility requirements. Sub-hourly modelling may be necessary for smaller areas with older, inflexible thermal power plants, but since flexible peaking plant and storage are economically favoured in highly renewable systems, sub-hourly modelling is less important in the long-term. Simulations with intervals longer than one hour should be treated carefully, depending on the research question. The authors reserve a point for studies that include rare climatic events, such as long periods of low sun and wind, or years when drought impacts the production of hydroelectricity. Periods of low sun and wind in the winter longer than a few days can be met, where available, by hydroelectricity, dispatchable biomass, demand response, imports, medium-term storage, synthetic gas from power-to-gas facilities or, in the worst case, by fossil fuels. From a feasibility point of view, even in the worst possible case that enough dispatchable capacity were maintained to cover the peak load, this does not invalidate these scenarios. The authors write “ensuring stable supply and reliability against all plausible outcomes…will raise costs and complexity”. Yet again, a feasibility criterion has become a viability criterion. So what would it cost to maintain an open-cycle gas turbine (OCGT) fleet to cover, for example, Germany's peak demand of 80 GW? For the OCGT we take the following cost assumptions from the literature: overnight investment cost of 400 €/kW, fixed operation and maintenance cost of 15 €/kW/a, lifetime of 30 years and discount rate of 10%. The latter two figures give an annuity of 10.6% of the overnight investment cost, so the annual cost per kW is 57.4 €/kW/a. For a peak load of 80 GW, assuming 90% availability of the OCGT, the total annual cost is therefore 5.1 billion €/a.
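The annuity arithmetic can be checked in a few lines; the sketch below simply reproduces the cost figures quoted above and is not meant to be a system simulation.

investment = 400.0       # overnight investment cost, EUR per kW
fixed_om = 15.0          # fixed operation and maintenance, EUR per kW per year
lifetime = 30            # years
discount_rate = 0.10

annuity_factor = discount_rate / (1 - (1 + discount_rate) ** -lifetime)  # ~0.106, i.e. 10.6%/a
annual_cost_per_kw = investment * annuity_factor + fixed_om              # ~57.4 EUR/kW/a

peak_load_kw = 80e6                  # 80 GW peak demand
capacity_kw = peak_load_kw / 0.9     # installed capacity needed at 90% availability
total_annual_cost = capacity_kw * annual_cost_per_kw                     # ~5.1e9 EUR/a

consumption_kwh = 500e9              # "more than 500 TWh/a" of consumption
print(f"annuity factor:        {annuity_factor:.3f}")
print(f"cost per kW and year:  {annual_cost_per_kw:.1f} EUR")
print(f"total annual cost:     {total_annual_cost / 1e9:.1f} billion EUR")
print(f"cost per kWh consumed: {total_annual_cost / consumption_kwh:.4f} EUR")

With consumption somewhat above 500 TWh/a, the last figure comes out at roughly 0.01 €/kWh, as stated in the text that follows.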
Germany consumes more than 500 TWh/a, so this guaranteed capacity costs less than 0.01 €/kWh.This is just 7.3% of total spending on electricity in Germany.We are not suggesting that Germany builds an OCGT fleet to cover its peak demand.This is a worst-case rhetorical thought experiment, assuming that no biomass, hydroelectricity, demand response, imports or medium-term storage can be activated, yet it is still low cost.Solutions that use storage that is already in the system are likely to be even lower cost.However, some OCGT capacity could also be attractive for other reasons: it is a flexible source of upward reserve power and it can be used for other ancillary services such as inertia provision, fault current, voltage regulation and black-starting the system.A clutch can even be put on the shaft to decouple the generator from the turbine and allow the generator to operate in synchronous compensator mode, which means it can also provide many ancillary services without burning gas.Running the OCGT for a two-week-long low-sun-and-wind period would add fuel costs and possibly also net CO2 emissions.Any emissions must be accounted for in simulations, but given that extreme climatic events are by definition rare, their impact will be small.A recent study of seven different weather years, including extreme weather events, in Europe for a scenario with a 95% CO2 reduction compared to 1990 in electricity, heating and transport came to similar conclusions.The extreme events do not affect all countries simultaneously so, for example, Germany can cover extreme events by importing power from other countries.If for political reasons each country is required to cover its peak load on a national basis, the extra costs for capacity are at most 3% of the total system costs.For systems that rely on hydroelectricity, the authors are right to point out that studies should be careful to include drier years in their simulations."Beyond the examples they cite, Brazil's hydroelectric production has been restricted over the last couple of years due to drought, and there are periodic drier years in Ethiopia, Kenya and Scandinavia, where in the latter inflow can drop to 30% below the average .However, in most countries, the scenarios rely on wind and solar energy, and here the dispatchable power capacity of the hydro is arguably just as important in balancing wind and solar as the total yearly energy contribution, particularly if pumping can be used to stock up the hydro reservoirs in times of wind and solar abundance .Note that nuclear also suffers from planned and unplanned outages, which are exacerbated during droughts and heatwaves, when the water supplies for river-cooled plants are either absent or too warm to provide sufficient cooling .This problem is likely to intensify given rising demand for water resources and climate change .The authors criticise many of the studies for not providing simulations of the transmission and distribution grids.Again, this is important, but not as important as the authors assume.Feasibility is not the issue, but there are socio-economic considerations.Many studies that do not model the grid, do include blanket costs for grid expansion.On a cost basis, the grid is not decisive either: additional grid costs tend to be a small fraction of total electricity system costs, and optimal grid layouts tend to follow the cheapest generation, so ignoring the grid is a reasonable first order approximation.Where it can be a problem is if public acceptance problems prevent the expansion of 
overhead transmission lines, in which case the power lines have to be put underground or electricity has to be generated more locally.Public acceptance problems affect cost, i.e. economic viability, not feasibility.How much the distribution grid needs to be expanded also depends on how much the scenario relies on decentralised, rooftop PV generation.If all wind and utility-scale PV is connected to the transmission grid, then there is no need to consider distribution grids at all.Regardless of supply-side changes, distribution grids may have to be upgraded in the future as electricity demand from heating and electric vehicles grows.Now to some examples of transmission and distribution grid costing.A study by Imperial College, NERA and DNV GL for the European electricity system to 2030 examined the consequences for both the transmission and distribution grid of renewable energy penetration up to 68%.For total annual system costs of 232 billion €/a in their Scenario 1, 4 billion €/a is assigned to the costs of additional transmission grid investments and 18 billion €/a to the distribution grid.If there is a greater reliance on decentralised generation-DG), additional distribution grid costs could rise to 24 billion €/a.This shows a typical rule of thumb: additional grid costs are around 10–15% of total system costs.But this case considered only 68% renewables.The distribution grid study of 100% renewables in the German federal state of Rhineland-Palatinate also clearly demonstrates that the costs of generation dwarf the grid costs.Additional grid investments vary between 10% and 15% of the total costs of new generation, depending on how smart the system is.Again, distribution upgrade costs dominate transmission costs.In its worst case the Germany Energy Agency sees a total investment need of 42.5 billion € in German distribution grids by 2030 for a renewables share of 82% .Annualised to 4.25 billion €/a, this is just 6.2% of total spending on electricity in Germany.Another study for Germany with 100% renewable electricity showed that grid expansion at transmission and distribution level would cost around 4–6 billion €/a .Many studies look at the transmission grid only.The 2016 Ten Year Network Development Plan of the European Transmission System Operators foresees 70–80 billion € investment needs in Europe for 60% renewables by 2030, which annualises to 2% of total electricity spending of 400 billion €/a.The authors criticise the Greenpeace Energy evolution scenario for excluding grid and reliability simulations, but in fact Greenpeace commissioned transmission expansion studies for Europe using hourly simulations, one for 77% renewables by 2030 and one for 97% renewables by 2050 .Beyond Europe, other studies with similar results look at the United States , South and Central America , and Asia ."The authors quote studies that look at optimal cross-border transmission capacity in Europe at very high shares of renewables, which show an expansion of 4–6 times today's capacities .It is worth pointing out that these studies look at the international interconnectors, not the full transmission grid, which includes the transmission lines within each country."The interconnectors are historically weak compared to national grids4 and restricted by poor market design and operation ; if a similar methodology to is applied to a more detailed grid model with nodal pricing, the expansion is only between 25% and 50% more than today's capacity .Furthermore, cost-optimal does not necessarily mean socially 
viable; there are solutions with lower grid expansion and hence higher public acceptance, but higher storage costs to balance renewables locally .Finally, we come to ancillary services.Ancillary services are additional services that network operators need to stabilise and secure the electricity system.They are mostly provided by conventional dispatchable generators today.Ancillary services include reserve power for balancing supply and demand in the short term, rotating inertia to stabilise the frequency in the very short term, synchronising torque to keep all generators rotating at the same frequency, voltage support through reactive power provision, short circuit current to trip protection devices during a fault, and the ability to restart the system in the event of a total system blackout.The authors raise concerns that many studies do not consider the provision of these ancillary services, particularly for voltage and frequency control.Again, these concerns are overblown: ancillary services are important, but they can be provided with established technologies, and the cost to provide them is second order compared to the costs of energy generation.We consider fault current, voltage support and inertia first.These services are mostly provided today by synchronous generators, whereas most new wind, solar PV and storage units are coupled to the grid with inverters, which have no inherent inertia and low fault current, but can control voltage with both active and reactive power.From a feasibility point of view, synchronous compensators could be placed throughout the network and the problem is solved, although this is not as cost effective as other solutions.Synchronous compensators, also called synchronous condensers, are essentially synchronous generators without a prime mover to provide active power.This means they can provide all the ancillary services of conventional generators except those requiring active power, i.e. 
they can provide fault current, inertia and voltage support just like a synchronous generator.Active power is then provided by renewable generators and storage devices.In fact, existing generators can be retrofitted to be SC, as happened to the nuclear power plant in Biblis, Germany , or to switch between generation mode and SC mode; extra mass can be added with a clutch if more inertia is needed.SC are a tried-and-tested technology and have been installed recently in Germany , Denmark, Norway, Brazil, New Zealand and California .They are also used in Tasmania , where ‘Hydro Tasmania, TasNetworks and AEMO have implemented many successful initiatives that help to manage and maintain the security of a power system that has a high penetration of asynchronous energy sources…Some solutions implemented in Tasmania have been relatively low cost and without the need for significant capital investment’ .In Denmark, newly-installed synchronous compensators along with exchange capacity with its neighbours allow the power system to operate without any large central power stations at all .In 2017 the system operated for 985 h without central power stations, the longest continuous period of which was a week .SC were also one of the options successfully shown to improve stability during severe faults in a study of high renewable penetration in the United States Western Interconnection .The study concluded ‘the Western Interconnection can be made to work well in the first minute after a big disturbance with both high wind and solar and substantial coal displacement, using good, established planning and engineering practice and commercially available technologies’.In a study for the British transmission system operator National Grid it was shown that 9 GVAr of SC would stabilise the British grid during the worst fault even with 95% instantaneous penetration of non-synchronous generation.,So how cost-effective would synchronous compensators be?,There is a range of cost estimates in the literature , the highest being an investment cost of 100 €/kVAr with fixed operating and maintenance costs of 3.5 €/kVAr/a .For Great Britain, the 9 GVAr of SC would cost 129 million € per year, assuming a lifetime of 30 years and a discount rate of 10%.That annualises to just 0.0003 €/kWh.,Synchronous condensers are an established, mature technology, which provide a feasible upper bound on the costs of providing non-active-power-related ancillary services.The inverters of wind, solar and batteries already provide reactive power for voltage control and can provide the other ancillary services, including virtual or synthetic inertia, by programming the functionality into the inverter software .Inverters are much more flexible than mechanics-bound synchronous generators and can change their output with high accuracy within milliseconds .The reason that wind and solar plants have only recently been providing these services is that before there was no need, and no system operators required it.Now that more ancillary services are being written into grid codes , manufacturers are providing such capabilities in their equipment.Frequency control concepts for inverters that follow a stiff external grid frequency and adjust their active power output to compensate for any frequency deviations are already offered by manufacturers .Next generation ‘grid-forming’ inverters will also be able to work in weak grids without a stiff frequency, albeit at the cost of increasing the inverter current rating.A survey of different frequency-response 
technologies in the Irish context can be found in the literature. Recent work for National Grid shows that with 25% of inverters operating as Virtual Synchronous Machines, the system can survive the most severe faults even when approaching 100% non-synchronous penetration. The literature in the control theory community on the design and stability of grid-forming inverters in power systems is substantial and growing, and includes both extensive simulations and tests in the field. Protection systems often rely on synchronous generators to supply fault current to trip over-current relays. Inverters are not well-suited to providing fault current, but this can be circumvented by replacing over-current protection with differential protection and distance protection, both of which are established technologies. Next, we consider balancing reserves. Balancing power can be provided by traditional providers, battery systems, fast-acting demand-side management or by wind and solar generators. There is a wide literature assessing requirements for balancing power with high shares of renewables. In a study for Germany in 2030 with 65 GW PV and 81 GW wind, no need is seen for additional primary reserve, with at most a doubling of the need for other types of reserves. It is a similar story in the 100% renewable scenario for Germany of Kombikraftwerk 2. There is no feasibility problem here either. Another ancillary service the authors mention is black-start capability. This is the ability to restart the electricity system in the case of a total blackout. Most thermal power stations consume electricity when starting up, so special provisions are needed when black-starting the system, by making sure there are generators which can start without an electricity supply. Typically system operators use hydroelectric plants, diesel generators or battery systems, which can then start a gas turbine, which can then start other power plants. Maintaining conventional capacity for black-start is inexpensive compared to system costs, as shown in Section 3.3; in a study for Germany in 2030 with 52% renewables, no additional measures for black-starting were deemed necessary, contrary to the interpretation in the authors' review; finally, decentralised renewable generators and storage could also participate in black-starting the system in future. The use of battery storage systems to black-start gas turbines has recently been demonstrated in Germany and in a commercial project in California. Nuclear, on the other hand, is a problem for black-starting, since most designs need a power source at all times, regardless of blackout conditions, to circulate coolant in the reactor and prevent meltdown conditions. This will only exacerbate the need for backup generation in a total blackout. Nuclear is sometimes not used to provide primary reserves either, particularly in older designs, because fast changes in output present operational and safety concerns. Here we suggest a feasibility criterion not included on the authors' list: the technology should have a fuel source that can both supply all the world's energy needs and also last more than a couple of decades. Traditional nuclear plants that use thermal-neutron fission of uranium do not satisfy this feasibility criterion. In 2015 there were 7.6 million tonnes of identified uranium resources commercially recoverable at less than 260 US$/kgU. From one tonne of natural uranium, a light-water reactor can generate around 40 GWh of electricity. In 2015, world electricity consumption was around 24,000 TWh/a.
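Before stating the implication, the arithmetic is short enough to spell out; the sketch below uses only the figures just quoted.

identified_uranium_t = 7.6e6        # tonnes of natural uranium recoverable at < 260 US$/kgU (2015)
electricity_per_tonne_gwh = 40.0    # GWh of electricity per tonne in a light-water reactor
world_demand_twh_per_year = 24_000  # world electricity consumption in 2015, TWh/a

total_generation_twh = identified_uranium_t * electricity_per_tonne_gwh / 1e3
print(f"{total_generation_twh:,.0f} TWh of electricity in total")          # ~304,000 TWh
print(f"{total_generation_twh / world_demand_twh_per_year:.1f} years")     # ~12.7 years at constant demand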
Assuming no rise in electricity demand and ignoring non-electric energy consumption such as transport and heating, uranium resources of 7.6 million tonnes will last 13 years. Reprocessing, at higher cost, might extend this by a few more years. Including non-electric energy consumption would more than halve this time. For renewables, exploitable energy potentials exceed yearly energy demand by several orders of magnitude and, by definition, are not depleted over time. Even taking account of limitations of geography and material resources, the potentials for the expansion of wind, solar and storage exceed demand projections by several factors. As for ‘following all paths’ and pursuing a mix of renewables and nuclear, they do not mix well: because of their high capital costs, nuclear power plants are most economically viable when operated at full power the whole time, whereas the variability of renewables requires a flexible balancing power fleet. Network expansion can help the penetration of both renewables and inflexible plant, but this would create further pressure for grid expansion, which is already pushing against social limits in some regions. This feasibility criterion is not met by standard nuclear reactors, but could be met in theory by breeder reactors and fusion power. This brings us to our next feasibility criterion. Here is another feasibility criterion that is not included on the authors' list: scenarios should not rely on unproven technologies. We are not suggesting that we should discontinue research into new technologies, rather that when planning for the future, we should be cautious and assume that not every new technology will reach technical and commercial maturity. The technologies required for renewable scenarios are not just tried-and-tested, but also proven at a large scale. Wind, solar, hydro and biomass all have capacity in the hundreds of GWs worldwide. The necessary expansion of the grid and ancillary services can deploy existing technology. Heat pumps are used widely. Battery storage, contrary to the authors' paper, is a proven technology already implemented in billions of devices worldwide. Compressed air energy storage, thermal storage, gas storage, hydrogen electrolysis, methanation and fuel cells are all decades-old technologies that are well understood. On the nuclear side, for the coming decades when uranium for thermal-neutron reactors would run out, we have breeder reactors, which can breed more fissile material from natural uranium or thorium, or fusion power. Breeder reactors are technically immature, more costly than light-water reactors, unreliable, potentially unsafe and they pose serious proliferation risks. Most fast-neutron breeder reactors rely on sodium as a coolant, and since sodium burns in air and water, it makes refuelling and repair difficult. This has led to serious accidents in fast breeder reactors, such as the major sodium fire at the Monju plant in 1995. Some experts consider fast breeders to have already failed as a technology option. The burden of proof is on the nuclear industry to demonstrate that breeder reactors are a safe and commercially competitive technology. Fusion power is even further from demonstrating technical feasibility. No fusion plant exists today that can generate more energy than it requires to initiate and sustain fusion. Containment materials that can withstand the neutron bombardment without generating long-lived nuclear waste are still under development. Even advocates of fusion do not expect the first commercial plant to go online before 2050
.Even if it proves to be feasible and cost-effective, ramping up to a high worldwide penetration will take decades more.That is too late to tackle global warming .In this section we address other issues raised by the authors of during their discussion of their feasibility criteria.The authors write “widespread storage of energy using a range of technologies”.Regarding battery storage, it is clear that there is the potential to exploit established lithium ion technology at scale and at low cost .The technology is already widely established in electronic devices and increasingly in battery electric vehicles, which will in future provide a regular and cheap source of second-life stationary batteries.A utility-scale 100 MW plant was installed in the South Australian grid in 2017 and there was already 700 MW of utility-scale batteries in the United States at the end of 2017 .Further assessments of the potential for lithium ion batteries can be found in .Costs are falling so fast that hybrid PV-battery systems are already or soon will be competitive with conventional systems in areas with good solar resources .Many other electricity storage devices have been not just demonstrated but already commercialised , including large-scale compressed air energy storage.Technologies that convert electricity to gas, by electrolysing hydrogen with the possibility of later methanation, are already being demonstrated at megawatt scale .Hydrogen could either be fed into the gas network to a certain fraction, used in fuel cell vehicles, converted to other synthetic fuels, or converted back into electricity for the grid.Fuel cells are already manufactured at gigawatt scale, with 480 MW installed in 2016 .By using the process heat from methanation to cover the heat consumption of electrolysis, total efficiency for power-to-methane of 76% has recently been demonstrated in a freight-container-sized pilot project, with 80% efficiency in sight .Moreover, in a holistic, cross-sectoral energy systems approach that goes beyond electricity to integrate all thermal, transport and industrial demand, it is possible to identify renewable energy systems in which all storage is based on low-cost well-proven technologies, such as thermal, gas and liquid storage, all of which are cheaper than electricity storage .These sectors also provide significant deferrable demand, which further helps to integrate variable renewable energy .Storage capacity for natural gas in the European Union is 1075 TWh as of mid 2017 .The authors criticise a few studies for their over-reliance on biomass, such as one for Denmark and one for Ireland .There are legitimate concerns about the availability of fuel crops, environmental damage, biodiversity loss and competition with food crops .More recent studies, including some by the same researchers, conduct detailed potential assessments for biomass and/or restrict biomass usage to agricultural residues and waste .Other studies are even more conservative and exclude biomass altogether , while still reaching feasible and cost-effective energy systems.Capturing carbon dioxide from industrial processes, power plants or directly from the air could also contribute to mitigating net greenhouse gas emissions.The captured carbon dioxide can then be used in industry or sequestered.While some of the individual components have been demonstrated at commercial scale, hurdles include cost, technical feasibility of long-term sequestration without leakage, viability for some concepts, the lowest cost version of which is 
rated at Technology Readiness Level 3–5 ), other air pollutants from combustion and imperfect capture when capturing from power plants, lower energy efficiency, regulatory issues, public acceptance of sequestration facilities and systems integration.Studies at high time resolution that have combined renewables and power plants with carbon capture and sequestration suggest that CCS is not cost effective because of high capital costs and low utilisation .However, DAC may be promising for the production of synthetic fuels and is attractive because of its locational flexibility and minimal water consumption .Negative emissions technologies, which include DAC, bioenergy with CCS, enhanced weathering, ocean fertilisation, afforestation and reforestation, may also be necessary to meet the goals of the Paris climate accord .Relying on NET presents risks given their technical immaturity, so further research and development of these technologies is required .In the sections above we have shown that energy systems with very high shares of renewable energy are both feasible and economically viable with respect to primary energy demand projections, matching short-term variability, extreme events, transmission and distribution grids, ancillary services, resource availability and technological maturity.We now turn to more general points of social and economic viability.With regard to social viability, there are high levels of public support for renewable energy.In a survey of European Union citizens for the European Commission in 2017, 89% thought it was important for their national government to set targets to increase renewable energy use by 2030 .A 2017 survey of the citizens of 13 countries from across the globe found that 82% believe it is important to create a world fully powered by renewable energy .A 2016 compilation of surveys from leading industrialised countries showed support for renewables in most cases to be well over 80% .Concerns have been raised primarily regarding the public acceptance of onshore wind turbines and overhead transmission lines.Repeated studies have shown that public acceptance of onshore wind can be increased if local communities are engaged early in the planning process, if their concerns are addressed and if they are given a stake in the project outcome .Where onshore wind is not socially viable, there are system solutions with higher shares of offshore wind and solar energy, but they may cost fractionally more .The picture is similar with overhead transmission lines: more participatory governance early in the planning stages and local involvement if the project is built can increase public acceptance .Again, if overhead transmission is not viable, there are system solutions with more storage and underground cables, but they are more expensive .The use of open data and open model software can help to improve transparency .Next we turn to the economic viability of bulk energy generation from renewable sources.On the basis of levelised cost, onshore wind, offshore wind, solar PV, hydroelectricity and biomass are already either in the range of current fossil fuel generation or lower cost .Levelised cost is only a coarse measure , since it does not take account of variability, which is why integration studies typically consider total system costs in models with high spatial and temporal resolution.Despite often using conservative cost assumptions, integration studies repeatedly show that renewables-based systems are possible with costs that are comparable or lower than 
conventional fossil-fuel-based systems, even before aspects such as climate impact and health outcomes are considered. For example, focusing on results of our own research, a global switch to 100% renewable electricity by 2050 would see a drop in average system cost from 70 €/MWh in 2015 to 52 €/MWh in 2050. This study modelled the electricity system at hourly resolution for an entire year for 145 regions of the world. Considering all energy sectors in Europe, costs in a 100% renewable energy scenario would be only 10% higher than a business-as-usual scenario for 2050. The low cost of renewables is borne out in recent auctions, where, for example, extremely low prices have been seen for systems that include storage in the United States due to come online in 2023. Following the authors, we have focussed above on the technical feasibility of nuclear. For discussions of the socio-economic viability of nuclear power, i.e. the cost, safety, decommissioning, waste disposal, public acceptance, terrorism and nuclear-weapons-proliferation issues resulting from current designs, see for example the existing literature. At the time the authors submitted their article there were many other studies of 100% or near-100% renewable systems that the authors did not review. Most studies were simulated with an hourly resolution and many modelled the transmission grid, with examples covering the globe, North-East Asia, the Association of South-East Asian Nations, Europe and its neighbours, Europe, South-East Europe, the Americas, China, the United States, Finland, Denmark, Germany, Ireland, Portugal and Berlin-Brandenburg in Germany. Since then other 100% studies have considered the globe, Asia, Southeast Asia and the Pacific Rim, Europe, South-East Europe, South and Central America, North America, India and its neighbours, Australia, Brazil, Iran, Pakistan, Saudi Arabia, Turkey, Ukraine, the Canary Islands and the Åland Islands. The authors state that the only developed nation with 100% renewable electricity is Iceland. This statement ignores countries which come close to 100% and smaller island systems which are already at 100%, which the authors chose to exclude from their study. Countries which are close to 100% renewable electricity include Paraguay, Norway, Uruguay, Costa Rica, Brazil and Canada. Regions within countries which are at or above 100% include Mecklenburg-Vorpommern in Germany, Schleswig-Holstein in Germany, South Island in New Zealand, Orkney in Scotland and Samsø along with many other parts of Denmark. This list mostly contains examples where there is sufficient synchronous generation to stabilise the grid, either from hydroelectricity, geothermal or biomass, or an alternating current connection to a neighbour. There are also purely inverter-based systems on islands in the South Pacific which have solar plus battery systems. We could also include here any residential solar plus battery off-grid systems. Other relevant examples are the German offshore collector grids in the North Sea, which only have inverter-based generators and consumption. Inverter-interfaced wind turbines are connected with an alternating current grid to an AC-DC converter station, which feeds the power onto land through a High Voltage Direct Current cable. There is no synchronous machine in these offshore grids to stabilise them, but they work just fine. Off-planet, there is also the International Space Station and other space probes which rely on solar energy. The authors implicitly blame wind generation for the South Australian
blackout in September 2016, where some wind turbines disconnected after multiple faults when tornadoes simultaneously damaged two transmission lines.According to the final report by the Australian Energy Market Operator on the incident “Wind turbines successfully rode through grid disturbances.It was the action of a control setting responding to multiple disturbances that led to the Black System.Changes made to turbine control settings shortly after the event removed the risk of recurrence given the same number of disturbances.,AEMO still highlights the need for additional frequency control services, which can be provided at low cost, as outlined in Section 3.5.In ‘Burden of proof: A comprehensive review of the feasibility of 100% renewable-electricity systems’ the authors called into question the feasibility of highly renewable scenarios.To assess a selection of relevant studies, they chose feasibility criteria that are important, but not critical for either the feasibility or viability of the studies.We have shown here that all the issues can be addressed at low economic cost.Worst-case, conservative technology choices are not only technically feasible, but also have costs which are a magnitude smaller than the total system costs.More cost-effective solutions that use variable renewable generators intelligently are also available.The viability of these solutions justifies the focus of many studies on reducing the main costs of bulk energy generation.As a result, we conclude that the 100% renewable energy scenarios proposed in the literature are not just feasible, but also viable.As we demonstrated in Section 4.4, 100% renewable systems that meet the energy needs of all citizens at all times are cost-competitive with fossil-fuel-based systems, even before externalities such as global warming, water usage and environmental pollution are taken into account.The authors claim that a 100% renewable world will require a ‘re-invention’ of the power system; we have shown here that this claim is exaggerated: only a directed evolution of the current system is required to guarantee affordability, reliability and sustainability.
A recent article ‘Burden of proof: A comprehensive review of the feasibility of 100% renewable-electricity systems’ claims that many studies of 100% renewable electricity systems do not demonstrate sufficient technical feasibility, according to the criteria of the article's authors (henceforth ‘the authors’). Here we analyse the authors’ methodology and find it problematic. The feasibility criteria chosen by the authors are important, but are also easily addressed at low economic cost, while not affecting the main conclusions of the reviewed studies and certainly not affecting their technical feasibility. A more thorough review reveals that all of the issues have already been addressed in the engineering and modelling literature. Nuclear power, which the authors have evaluated positively elsewhere, faces other, genuine feasibility problems, such as the finiteness of uranium resources and a reliance on unproven technologies in the medium- to long-term. Energy systems based on renewables, on the other hand, are not only feasible, but already economically viable and decreasing in cost every year.
Risk factors for eating disorder symptoms at 12 years of age: A 6-year longitudinal cohort study
The incidence of eating disorders rises from childhood to early adolescence, defined as 10–13 years of age.Consequently, most previous research into the developmental psychopathology of eating disorders has begun at around 12 years old.By this age, eating disorder symptoms are already present in non-clinical populations at levels similar to those found in late adolescence, which suggests that the antecedent conditions for such disorders arise before adolescence.Eating disorder symptoms do not correspond, in severity or specificity, to full-syndrome eating disorders.Instead they encompass a broad array of dimensional maladaptive cognitions and behaviours relating to eating and weight.These cognitions and behaviours are found across the range of full-syndrome eating disorder diagnoses as well as in sub-syndromal variants."An understanding of causal risk factors for eating disorder symptoms is important because such symptoms increase children's risk of subsequent weight gain, depression, weight cycling and full-syndrome eating disorders in adolescence.A recent comprehensive evidence synthesis highlighted a clear research need for prospective examinations of risk factors for disordered eating, including non-clinical samples of both boys and girls and commencing at a younger age than previous studies i.e. 6–10 years of age.In the current study, we prospectively examined potential risk factors, starting at age 7, to determine which variables contributed to the development of eating disorder symptoms at 12 years of age.These variables included body dissatisfaction, depression, dietary restraint, body mass index, and previous eating disorder symptoms.Given the absence of an established theoretical framework within which to situate the longitudinal development of eating disorder symptoms over preadolescence - mainly due to an absence of prospective data - these predictor variables were selected from intrapersonal risk factors for disordered eating in older adolescents and adults, broadly in line with the dual pathway model.It is widely accepted that greater evaluative body dissatisfaction, also referred to as lower body esteem, contributes to the emergence and maintenance of eating disorder symptoms.This consensus is based primarily upon findings with older, predominantly female samples.In contrast, findings with younger, mixed-sex groups are equivocal: some studies suggest that body dissatisfaction is directly correlated with, but does not predict, eating disorder symptoms over pre-adolescence, instead emerging as a predictor at around 13 years old.Conversely, other studies have found an effect in preadolescence, but only for boys or only for girls."The effects and developmental dynamics of body dissatisfaction's relation to eating disorder symptoms may vary depending on age and gender.In these terms, body dissatisfaction and eating disorder symptoms may co-emerge in pre-adolescence but the former may drive the latter during adolescence itself.Body dissatisfaction is extremely common in preadolescence, affecting around 40% of children aged 6 to 11, whereas elevated eating disorder symptoms are less prevalent,.There is a degree of conceptual and statistical overlap between the two.However, the aforementioned prevalence data suggest that it is unlikely that the former should be considered an exclusive subcategory of the latter as opposed to a potential risk factor in its own right."Further investigation is needed regarding body dissatisfaction's role as a risk factor for eating disorder symptoms in 
younger mixed-sex groups.In addition to a direct relationship, body dissatisfaction may lead to eating disorder symptoms via the mediating influences of elevated depressive symptoms and greater dietary restraint, as proposed in the dual pathway model of eating pathology.Limited evidence suggests that depressive symptoms predict eating disorder symptoms in early adolescence among girls, although a reciprocal relationship has also been reported.Dietary restraint has been found to predict disordered eating in adolescents and similarly, dietary restraint at 7 years of age was found to predict subsequent eating disorder symptoms at 9 years of age in a previous study with this cohort.It is possible that dietary restraint constitutes a precursor or subcomponent of eating disorder symptoms although, like body dissatisfaction, dietary restraint is highly prevalent whereas eating disorder symptoms are less so.A cross-sectional study recently demonstrated that depressive symptoms and dietary restraint fully mediated the relationship between body dissatisfaction and eating disorder symptoms in girls aged 7–11 years.Higher body mass index is another proposed risk factor for higher eating disorder symptoms, although previous studies have not found evidence of this relationship in childhood with rare exceptions.Research with adults and adolescents suggests that body dissatisfaction may fully mediate the inconsistently observed relationships between eating disorder symptoms and BMI.Continuity of eating disorder symptoms over time has been observed amongst children and young adolescents in most previous longitudinal studies, suggesting that eating disorder symptoms become at least partially established at an early age.This highlights the importance of controlling for initial levels of the outcome variable when predicting an outcome across time, something that has not always been done in previous studies of eating disorder symptoms with preadolescents.Both eating disorder symptoms and depressive symptoms are overrepresented in girls from 12 to 13 years of age onwards, and it has been proposed that these internalising symptoms constitute female-specific reciprocal risk factors whose mutual influence escalates with time.However, other studies suggest that depressive symptoms also play a direct causal role in boys’ eating disorder symptoms from around the age of 13.The influence of body dissatisfaction on eating disorder symptoms, too, may vary with sex: Ferreiro et al. 
found that body dissatisfaction and eating disorder symptoms were directly correlated in boys and girls at age 11 but that body dissatisfaction emerged as a causal risk factor for girls only, from age 13 onwards.However, none of these studies looked at preadolescent risk factors, and dietary restraint does not appear to have been examined as a predictor in this context.Questions remain regarding sex differences in the emergence of eating disorder symptoms, and further longitudinal studies of these phenomena in preadolescent girls and boys are clearly merited.The present study set out to examine risk factors by identifying earlier predictors and within-time correlates of eating disorder symptoms in a population-based birth cohort of boys and girls at 12 years of age.The first aim was to identify within-time associations of eating disorder symptoms with measures of body dissatisfaction, depressive symptoms, and BMI all at 12 years of age.Associations were examined separately for boys and girls.The second aim was to identify prospective predictors of eating disorder symptoms at 12 years of age, again separately for boys and girls, taking into account prior eating disorder symptoms at 9 years of age.Putative across-time predictors included prior body dissatisfaction measured at 7 and 9 years of age, and dietary restraint measured at 7 years.BMI was also measured at 7 and 9 years.Our expectations for the within-time associations at 12 years of age were that there would be a direct association between eating disorder symptoms and body dissatisfaction for both boys and girls, based on existing findings to this effect.In contrast, we expected that depressive symptoms would be directly associated with eating disorder symptoms for girls but not boys, based on the balance of evidence for their reciprocal relationship in girls but not boys of this age.Previous research also led us to expect that BMI would be strongly directly associated with body dissatisfaction but not with disordered eating in participants of both sexes.Our expectations for the prospective predictors of eating disorder symptoms at 12 years of age were that prior eating disorder symptoms at 9 years of age would be a strong direct predictor for both boys and girls, given evidence of continuity of such symptoms across preadolescence and beyond.We further expected higher eating disorder symptoms at 12 years of age to be predicted by greater dietary restraint at age 7 for boys and girls, as has been found previously at 9 years of age.The balance of the existing evidence, which suggest that body dissatisfaction emerges as a causal factor at around the age of 13 years, led us to expect that body dissatisfaction at 7 or 9 years of age would not predict eating disorder symptoms at age 12 for boys or girls, acting only as a correlate rather than as a risk factor, contrary to the pattern seen in adolescent populations.The data reported are from the Gateshead Millennium Study cohort in which mothers of infants born between June 1999 and May 2000 were approached to permit their infant to join a longitudinal study of feeding and growth.All infants born to mothers resident in Gateshead, an urban district in northeast England, in 34 pre-specified weeks were eligible and 1029 infants joined the study.Mothers were primarily from the white ethnic majority group, which represented the ethnic composition of the region at the time.The principal aim was to examine prospectively the joint influence of infant feeding behaviour and maternal characteristics 
on weight gain in a population birth cohort.Full details are published.The cohort has been followed up at intervals since recruitment; at each follow-up assessment all children whose families had not previously asked to leave the study were eligible to participate.For the present study, assessments of the children were taken at three follow ups: 6–8 years referred to as 7 years in this paper; 8–10 years referred to as 9 years; and 11–13 years referred to as 12 years.The mean interval between the 7 and 9 year assessments was 1.9 years and the mean time interval between the 9 and 12 year assessments was 3.2 years.Mothers gave written consent for their own participation and for the child to participate in the study.The children/adolescents gave written assent.Favourable ethical opinions were granted by Gateshead and South Tyneside Local Research Ethics Committee and by Newcastle University Ethics Committee.The data were collected by researchers trained in anthropometry and the other study procedures.At each follow up the children were visited in schools, or at home, to collect anthropometric and questionnaire data.If necessary the researchers helped the children with comprehension of the questionnaires, using the standardised study assessment protocol.i) The Dutch Eating Behaviour Questionnaire child version was adapted for 7–12 year olds from the Dutch Eating Behaviour Questionnaire.The cohort completed the seven-item Restraint subscale at 7 years old.It assesses the tendency to eat reduced amounts in order to lose or maintain weight.Participants respond ‘no’, ‘sometimes’ or ‘yes’ to each item, scoring 1, 2 or 3, with higher scores indicating greater dietary restraint.The subscale has established reliability and adequate construct validity in boys and girls aged 7–12 years.In the current cohort, α = 0.7 at the 7 year follow up assessment."ii) The Children's Eating Attitudes Test, a modified version of the adult Eating Attitudes Test, was completed by the cohort at 9 and 12 years of age.The ChEAT is a 26-item measure of dimensional eating disorder symptoms including concerns about being overweight, binging and purging, and food pre-occupation.Items are scored between 1 and 6; the three most symptomatic responses are scored 1, 2 and 3 respectively, whilst the other three responses are scored zero.The scale has satisfactory test-retest reliability and internal consistency of α = 0.9 in boys and girls aged 7–12 years.Higher scores indicate greater symptomatology.Participants completed the ChEAT at 9 years and 12 years old.In the current cohort, α = 0.8 at 9 years and α = 0.8 at 12 years of age."i) The Children's Body Image Scale consists of photographic figures of pre-pubescent children, seven each of boys and girls ranging from very thin to obese.The CBIS scale asks: ‘Looking at the pictures below, which body shape looks most like your own?’,; and ‘Looking at the same pictures, which body shape would you most like to have?’,.The CBIS categories were assigned scores of 1–7 to give an ordered numerical scale of increasing size."Body dissatisfaction was calculated by subtracting the perceived figure from the preferred figure to produce a directional discrepancy score, where a negative score indicated a preference for a smaller body than one's own and a positive score indicated a preference for a larger figure.The scale demonstrates acceptable construct validity and test-retest reliability in boys and girls aged 7–11 years.The cohort completed the CBIS at 7 years and 9 years of age.ii) The Impact 
of Weight on Quality of Life-Kids assesses self-perceptions of weight-specific quality of life across the entire weight spectrum for children and adolescents aged 11–19 years."The scale is comprised of 4 sub-scales, including a 9-item body esteem subscale that assesses evaluative satisfaction with one's physical body.Respondents report the frequency with which they experience a given negative body-related cognition on a scale from 1 to 5.The subscale provides a reliable and valid measure of body esteem in overweight and non-overweight boys and girls aged 11–19 years and excellent test-retest reliability of r = 0.9."The cohort completed the IWQOL-Kids at 12 years of age, and an internal reliability of 0.9 was obtained.The CBIS is unsuitable for 12-year-olds because the figures depict pre-pubertal children, so the age-appropriate IWQOL-Kids body esteem sub-scale was selected for the 12 year follow-up assessment.Although these scales differ in format, both capture global attitudinal body dissatisfaction.They are referred to as body dissatisfaction and body esteem in the analyses to distinguish between the different measures used.Lower/more negative scores on both the CBIS and the IWQOL-Kids scale indicate greater body dissatisfaction.The Child Depression Inventory – Short Form is a simplified version of the Beck Depression Inventory.It was used to measure depressive symptoms at 12 years."It consists of 10 items, each comprising three statements about the respondent's feelings in the preceding two weeks from which one is selected per item. "A higher total score, calculated by summing each item's score, indicates greater symptomatology.The CDI-S has a satisfactory internal consistency of 0.8 in its validation sample of boys and girls aged 7–17 years.The internal consistency of the scale in the current sample was α = 0.8 at 12 years of age.Weight and height were measured by the study researchers at 7 years, 9 years and 12 years of age, using equipment purchased from Chasmors, London.Weight was measured to 0.1 kg using Tanita scales TBF-300MA, and height was measured to 0.1 cm with the head in the Frankfurt plane using a Leicester portable height measure.On each assessment occasion, measurements were taken at least twice, until two consistent values were obtained.The mean value of the two measurements for weight and for height was calculated."The child's body mass index was calculated from the averaged height and weight measurements.BMI z-scores for age were calculated using data from the UK90 reference dataset.Socio-economic status measures were collected from the mother at recruitment shortly after birth."The family's postcode was transformed into the Townsend deprivation score.We used the Townsend deprivation score as an index of SES when evaluating the effect of attrition at 12 years upon the representativeness of the cohort.Data analysis was conducted on all cases for whom an eating disorder symptom score at 12 years was obtained.In total, 525 participants were assessed at the 12 year follow up, and eating disorder symptom scores were available for 516 of these.To examine the representativeness of the cohort in the light of attrition, the current cohort composition was compared to the original cohort composition according to Townsend deprivation index quintile."Next, descriptive statistics were calculated for all measured variables, the proportion of missing data for each variable was recorded and initial comparisons of boys' and girls’ characteristics were made.Anthropometric and scale data 
were not normally distributed, so the median and the semi-interquartile range was used to summarise them, and non-parametric methods were used to compare values for boys and girls."The first aim of the study was to examine within-time associations between eating disorder symptoms and putative correlates; correlations were calculated separately for boys and girls.A significance threshold of p ≤ 0.005 was applied to the data in Tables 2 and 3 to correct for multiple comparisons.Significant correlates of eating disorder symptoms were used in the subsequent multivariate regressions for boys and girls.The second aim of the study was to develop multivariate predictive models for eating disorder symptoms with variables measured both within and across-time, using OLS linear regression.Disordered eating symptoms were regressed, separately for boys and girls, on variables significant in the preceding correlation analyses.SPSS version 21 was used for statistical analysis.The original sample comprised a total of 1011 mothers and it was comparable with the northeast region of England in terms of socio-economic deprivation apart from slight under-representation of the most affluent quintile.Overall, non-participation has been higher in the least affluent families than in the most affluent.This means that by 12 years the distribution across all the deprivation quintiles was fairly even and the sample is representative of the north of England.Eating disorder symptom data were collected from 516 adolescents at the 12 year follow-up; 4.8% were 11 years old, 89.8% were 12 years old and 5.4% were 13 years old.Descriptive data are shown by sex at each follow up assessment.The proportion of complete data for each variable ranged from 79 to 83% at age 7 years to 99–100% at 12 years.At 12 years girls had significantly higher depressive symptom scores than boys and they had significantly higher body dissatisfaction than boys at 7, 9 and 12 years of age.There was no significant difference between boys and girls on the eating disorder symptom scores at either 9 or 12 years; however the scores overall at 9 years were significantly higher than at 12 years."Overall, for both girls and boys, children's BMI z-scores at 9 years were higher than those at 7 years, and BMI z-scores at 12 years were higher than those at 9 years i.e. 
their relative adiposity increased at each time-point. Table 3 shows a zero-order non-parametric correlation matrix for study variables at 12, 9 and 7 years of age. For boys, eating disorder symptoms at age 12 were inversely associated with body esteem at age 12 and directly associated with eating disorder symptoms at age 9 and dietary restraint at age 7. For girls, eating disorder symptoms at age 12 were directly associated with depressive symptoms and BMI at age 12, inversely associated with body esteem at age 12 and directly associated with eating disorder symptoms at age 9. Variables with a significant correlational relationship with eating disorder symptoms at 12 years were entered into separate multiple regressions for boys and girls. Variables entered into the boys' model were dietary restraint, previous eating disorder symptoms and body esteem. Variables entered into the girls' model were previous eating disorder symptoms, BMI, depressive symptoms and body esteem. To examine the combined effects of the aforementioned variables upon eating disorder symptoms at 12 years, OLS linear regression analyses were run separately for boys and girls. The resultant regression coefficients are shown in Table 4. For boys, higher dietary restraint, higher previous eating disorder symptoms and lower body esteem all accounted for significant variance in eating disorder symptoms at 12 years. For girls, higher previous eating disorder symptoms, higher depression and lower body esteem all accounted for variance in eating disorder symptoms at 12 years of age, but BMI did not. The initial contribution of BMI to variance in eating disorder symptoms was fully cancelled out by the addition of body esteem to the model, rendering the contribution of BMI non-significant. The final regression model accounted for a greater proportion of variance in eating disorder symptoms at 12 years of age in girls than in boys. To examine the extent to which the regression models evidenced multicollinearity, tolerance and variance inflation factor (VIF) scores were inspected for all variables in the final models. In the boys' model, tolerance values ranged from 0.93 to 0.95 and VIF values ranged from 1.05 to 1.08, suggesting that multicollinearity was not a concern. In the girls' model, tolerance values ranged from 0.49 to 0.86 and VIF values ranged from 1.17 to 2.03. The relationship between BMI and body esteem, as shown in Table 3, likely accounts for these less optimal collinearity statistics, even though the latter statistics are within the acceptable range.
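To make the diagnostics reported above concrete, the following is a minimal sketch rather than material from the paper itself: it writes the boys' model described above as an OLS regression and gives the standard definitions of tolerance and VIF. The variable names are chosen for illustration only.

```latex
% Schematic form of the boys' model (variable names are illustrative)
\mathrm{EDS}_{12} = \beta_0 + \beta_1\,\mathrm{EDS}_{9} + \beta_2\,\mathrm{Restraint}_{7}
                  + \beta_3\,\mathrm{BodyEsteem}_{12} + \varepsilon

% Collinearity diagnostics for predictor j, where R_j^2 is the R-squared obtained
% by regressing predictor j on the remaining predictors in the model
\mathrm{Tolerance}_j = 1 - R_j^2, \qquad
\mathrm{VIF}_j = \frac{1}{1 - R_j^2} = \frac{1}{\mathrm{Tolerance}_j}
```

Because VIF cannot fall below 1 and tolerance cannot exceed 1, values close to those bounds (as in the boys' model) indicate that the predictors share little variance with one another.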
"Because BMI did not account for significant variance in girls' eating disorder symptoms, its removal from the model reduced the range of tolerance values without reducing the R2 value at all.The revised range for VIF was from 0.63 to 0.86 and, for tolerance, from 1.15 to 1.66.This study shows that higher eating disorder symptoms at 9 years significantly predicted higher eating disorder symptoms at 12 years for both boys and girls, whilst greater dietary restraint at 7 years was a significant predictor for boys.Lower 12 year body esteem and higher 12 year depressive symptoms were also associated with higher 12 year eating disorder symptoms.A number of variables, which had been included based on adult risk factors for disordered eating, did not function as predictors of eating disorder symptoms, notably previous body dissatisfaction and concurrent BMI.Low body esteem appeared to have developed alongside eating disorder symptoms rather than acting as a predictor.Girls’ body esteem at 12 years fully accounted for the initially observed univariate relationship between 12 year BMI and eating disorder symptoms.Initial eating disorder symptom score at 9 years of age was the strongest predictor of subsequent eating disorder symptoms, a finding consistent with the overwhelming majority of previous studies of children and young adolescents."Such attitudes are moderately stable over time, appearing to belie assertions that children's eating disorder symptoms are temporally and conceptually unstable.This finding reinforces the importance of targeting children with higher eating disorder symptoms in preadolescence for possible intervention before they enter adolescence itself.In keeping with initial hypotheses and previous research, prior body dissatisfaction did not prospectively predict higher eating disorder symptoms in this cohort, whereas concurrent body esteem at 12 years old did.This provides additional weight to the premise that body dissatisfaction is not a reliable causal risk factor for eating disorder symptoms in childhood.The significant associations between lower body esteem and higher eating disorder symptoms at 12 years of age for both boys and girls suggest that body dissatisfaction may co-develop with eating disorder symptoms in preadolescence rather than the former preceding the latter.Similar prospective findings have been obtained previously although only one study – involving an older sample - gathered data over a comparable 6-year period.A clear sex difference was found in this cohort in depressive symptoms and their association with eating disorder symptoms at age 12; concurrent depressive symptom scores were a highly significant correlate of higher eating disorder symptoms at 12 years of age in girls but not boys.This fits with previous findings that sex differences in links between depressive and disordered eating symptoms emerge around the age of 13 years.It has been proposed that at this age, girls but not boys are more likely to use disordered eating behaviours to relieve depressive symptoms and, reciprocally, disordered eating gives rise to negative self-evaluations and depressed affect.In the current study, boys’ depressive symptom scores were low and showed little variance; this may also explain the absence of a significant effect in the predictive model.Conversely, dietary restraint at 7 years directly predicted subsequent eating disorder symptoms at 12 years for boys but not girls, despite predicting eating disorder symptoms for children of both sexes at the earlier 9 
year follow up.This suggests that early dietary restraint may play a more pivotal role in developing eating disorder symptoms in boys than girls."Notably, however, the final regression model for girls accounted for almost twice as much variance in eating disorder symptoms as the model for boys, suggesting that additional, untested predictive variables may have been missing from the boys' model.Such variables may include the pursuit of muscularity and athletic internalisation, which reflect the male ideal body more closely than the ‘thin ideal’.However, the overall proportion of variance accounted for by the predictive models for boys and girls compares favourably to that of previous studies with slightly older children, using similar variables.These findings suggest that preventative measures against both depression and disordered eating need to take into account the sex differences on internalising symptoms found in this cohort and previously.The dual pathway model, based on research with older populations, suggests that dietary restraint and depression provide two routes via which body dissatisfaction is enacted through behavioural and cognitive symptoms of disordered eating."In the current sample, this was not found to be the case: dietary restraint acted as a predictor of eating disorder symptoms in boys but not girls, whilst depressive symptoms had an effect on girls' eating disorder symptoms but not boys'.Similar relationships have been found before in adolescent populations, in relation to specific subsets of eating disorder symptoms, but patterns of sex differences vary."It is plausible that these variables act as specific risk factors at different points throughout childhood and adolescence, and that their influence may, to some extent, depend upon the child's sex. "Regarding gender differences in levels of eating disorder symptoms, older adolescents' eating disorder symptoms are typically higher in girls than boys. 
"In this cohort at 9 years, boys' eating disorder symptoms exceeded girls' whereas at 12 years we have found no significant difference between their scores.This finding is consistent with previous research in which gender differences emerged around 13 years of age."Over the time between the 9 and 12 year assessments, both girls' and boys’ eating disorder symptom scores decreased, a trend seen in several similar studies.Others have noted age-related increases in girls but not boys or found age-related declines in boys but not girls.Evidence in this field increasingly suggests that preadolescent psychological variables provide a critical context within which the seismic changes of puberty occur.Yet there remains a lack of longitudinal research into the childhood antecedents of eating disorder symptoms in early adolescence, despite the physical and psychological developmental risks these phenomena pose.Our research presents a picture of these antecedents at 12 years of age, alongside within-time correlates, using data gathered at 7 years, 9 and 12 years.It benefits from a unique, representative UK population-based birth cohort of boys and girls, making the findings generalisable to similar populations.It focuses on children aged 7–12 years of age, a key developmental window for understanding eating psychopathology.Indeed, a recent review highlighted the importance of longitudinal research, involving boys, into causal risk factors for sub-syndromal disordered eating over the preadolescent period: the current study meets all of these criteria.No previous study has drawn on data from three time-points spanning preadolescence, spaced over six years, addressing this particular constellation of empirically-justifiable risk factors.However, the current study has several limitations.First, sample attrition may have introduced biases over time, although lower socio-economic families were over-represented in the initial cohort, and attrition has been higher in such families, making the remaining cohort more representative over time.The current sample therefore remains socially diverse and representative of the north of England.Second, use of self-report instruments, specifically depressive symptoms and eating disorder symptoms, may have led to under or over-reporting of symptoms; this is a particular hazard with young children.Third, key putative sociocultural predictors of eating disorder symptoms – such as thin-ideal internalisation and perceived pressure to be thin – were not measured; it is possible that such unmeasured latent variables account for some of the findings.Future research should seek to theoretically model the relationships among intrapersonal predictors and correlates of disordered eating, along sociocultural and biological factors over the developmental course of pre- and early-adolescence.This could be operationalised through a framework such as the biopsychosocial model, and would enable a more nuanced understanding of developmental ‘cause’ in this context.A key question to address will be whether body dissatisfaction functions as a causal risk factor in childhood for subsequent eating disorder symptoms.Evidence from the current study and others suggests not.However, to conclusively answer this question, additional studies with similar preadolescent age groups are needed which adopt Stice’s recommendations to adjust for initial levels of the outcome variable when attempting to ascertain temporal precedence, as done in studies of this cohort."Consideration should also be given to 
different theoretical models for the development of boys' eating disorder symptoms, since most existing models draw on data from adolescent female samples and focus upon pressure to be thin, rather than pressures towards hyper-muscularity as well.In summary, the findings of this study add to the small but growing body of prospective research into the emergence and consolidation of eating disorder symptoms in children and young adolescents, taking into account sex differences.Such research has the potential to inform an understanding of factors that place children at the greatest risk of disordered eating, paving the way towards effective interventions."Our findings strongly suggest the importance of early interventions to address children's eating disorder symptoms, since a higher level of symptoms at 9 years of age was the strongest risk factor for a higher level of symptoms at 12 years old.Efforts might be profitably aligned with interventions to prevent excess weight gain and/or depression, given the behavioural and possible aetiological overlap between these phenomena.In particular, our results indicate that a focus on childhood dietary restraint in boys, on body dissatisfaction concurrent with eating disorder symptoms for both boys and girls, and on concurrent depression in early adolescence may help identify children with a phenotype suggestive of an elevated risk for future eating disorder symptoms.The authors have no conflicts of interest to declare.
Eating disorders pose risks to health and wellbeing in young adolescents, but prospective studies of risk factors are scarce and this has impeded prevention efforts. This longitudinal study aimed to examine risk factors for eating disorder symptoms in a population-based birth cohort of young adolescents at 12 years. Participants from the Gateshead Millennium Study birth cohort (n = 516; 262 girls and 254 boys) completed self-report questionnaire measures of eating disorder symptoms and putative risk factors at age 7 years, 9 years and 12 years, including dietary restraint, depressive symptoms and body dissatisfaction. Body mass index (BMI) was also measured at each age. Within-time correlates of eating disorder symptoms at 12 years of age were greater body dissatisfaction for both sexes and, for girls only, higher depressive symptoms. For both sexes, higher eating disorder symptoms at 9 years old significantly predicted higher eating disorder symptoms at 12 years old. Dietary restraint at 7 years old predicted boys' eating disorder symptoms at age 12, but not girls'. Factors that did not predict eating disorder symptoms at 12 years of age were BMI (any age), girls’ dietary restraint at 7 years and body dissatisfaction at 7 and 9 years of age for both sexes. In this population-based study, different patterns of predictors and correlates of eating disorder symptoms were found for girls and boys. Body dissatisfaction, a purported risk factor for eating disorder symptoms in young adolescents, developed concurrently with eating disorder symptoms rather than preceding them. However, restraint at age 7 and eating disorder symptoms at age 9 years did predict 12-year eating disorder symptoms. Overall, our findings suggest that efforts to prevent disordered eating might beneficially focus on preadolescent populations.
233
Multi-origin mucinous neoplasm: Should we prophylactically remove the appendix in the setting of mucinous ovarian tumors?
Tumors of the ovary and appendix have been well documented in the setting of pseudomyxoma peritonei with constant debate over tumor origin.Generally, these tumors are found to have a single primary origin, most commonly the appendix, with metastatic spread to the ovaries.In this report, we discuss a patient who presented to the emergency room with right lower quadrant abdominal pain and a palpable mass for 2 months.Her past medical and surgical history included primary mucinous ovarian carcinoma with TAH-BSO and no chemotherapy/radiation.A computed tomography scan of the abdomen and pelvis demonstrated a 7.6 x 4 cm mass with peripheral calcifications suspicious for a mucocele versus malignancy.The patient underwent surgical exploration with excision of the mass that was well tolerated.Histopathology confirmed a primary low-grade appendiceal mucinous neoplasm.There are few documented cases of synchronous primary ovarian and appendiceal mucinous neoplasms; however, there have been very few recorded cases of two non-synchronous primary neoplasms and their appropriate diagnosis and treatment prompting this case report and literature review.Our work has been reported in line with the SCARE criteria .Our patient is a 61-year-old Jamaican female who recently immigrated to the United States, with a past medical history significant for mucinous ovarian carcinoma status post TAH-BSO in Jamaica, 2018.The patient did not undergo any post-operative chemotherapy due to the low malignant potential of her initial pathology.Approximately one year later, she presented to our emergency department with right sided abdominal discomfort for the past two months.The rest of patient’s history was unremarkable.There were no other accompanying symptoms, namely change in bowel habits or weight loss.On physical exam, the patient had a palpable mass on the right side of her abdomen that was tender on palpation.A CT scan of the abdomen and pelvis demonstrated a 7.6 x 4 cm mass with peripheral calcification suspicious for a mucocele versus mucoid epidermoid carcinoma .Subsequent pelvic and abdominal magnetic resonance imaging also demonstrated findings suspicious for a mass abutting the cecum concerning for an appendiceal mucocele.Additionally, the cancer antigen 125-5 was found to be elevated at 49.6 U/mL.With a presumed diagnosis of appendiceal mucocele versus malignancy, the patient agreed to surgical intervention.The patient was taken to the operating room for an exploratory laparotomy.Upon inspection of the abdominal cavity, the patient was found to have a mass in the right lower quadrant which appeared to be originating from the appendix.Numerous omental implants were identified warranting an omentectomy, as well as several deep pelvic gelatinous deposits which were removed with sharp dissection.The right lower quadrant mass appeared intact, freely mobile, and limited to the distal tip of the appendix.There was limited surrounding inflammation, with mild fibrous adherence to the right paracolic gutter.After careful dissection, the appendix with the associated distal mass were removed and sent for pathology.The omentum was found to have a focus of low-grade mucinous neoplasm consistent with an appendiceal origin.The appendix was found to be a low grade appendiceal mucinous neoplasm with peritoneal involvement .The proximal end of the appendix was free of neoplasm.The pelvic implants were also noted to have foci of low-grade mucinous neoplasm consistent with an appendiceal origin.Immunohistochemical staining of 
ovarian primary were completely absent.The patient had an uneventful postoperative course and recovered without complications.She was discharged on postoperative day three with continued follow up in our clinic.Although a distended, mucus-filled appendix is often called a mucocele, this term is ambiguous and best used to describe a radiological finding rather than a pathologic entity.In 2012, the Peritoneal Surface Oncology Group International developed a consensus classification that has helped to resolve much of the confusion surrounding diagnostic terminology .In the realm of non-neoplastic versus neoplastic mucinous lesions, our presenting case was classified as a low-grade appendiceal mucinous neoplasm.These lesions are rare, with only one to two thousand cases diagnosed annually in the United States .Whether benign or malignant, there is a slight predominance towards females.Laboratory findings are generally nonspecific, but patients may present with elevated tumor markers including CEA, CA 19-9, and/or CA 125-5.The LAMN is defined as a true neoplasm with abundant mucin production as well as dysplastic epithelium.Despite their bland appearance, LAMNs may penetrate through the appendiceal wall, cause appendiceal rupture, and progress to pseudomyxoma peritonei .Staging can further help delineate treatment protocol and outcome management.In our case, the patient’s appendiceal neoplasm was stage T4a, invading through the visceral peritoneum involving the serosa as well as M1b designating isolated intraperitoneal metastasis.Our case’s cyto-histopathology confirmed the mass and its associated omental and pelvic samples were of appendiceal origin.There has long been debate of which tumor, ovarian or appendiceal, is the site of origin in the setting of pseudomyxoma peritonei.PMP is a rare disease in which the mucin cells are distributed within the peritoneal cavity.Although most literature states PMP’s origin is appendiceal in nature, many other tumors including ovarian, stomach, and pancreas have been documented.These mucinous cells continue to proliferate which eventually causes the signs and symptoms seen during patient presentation.The tumors continue to grow and can perforate, causing seeding with the peritoneal cavity and leading to possible metastasis.This patient was found to have a primary mucinous ovarian carcinoma in early 2018 diagnosed in her home country of Jamaica.In March of 2019, once she immigrated to the United States, she was found to have a primary mucinous appendiceal neoplasm with cyto-histopathology confirming no primary ovarian lesion.In the available literature, synchronous tumors of the ovary and appendix are an uncommon yet well-recognized occurrence in the setting of PMP .The origin of such synchronous tumors is widely debated with most evidence favoring a primary appendiceal tumor with the ovarian tumor representing a metastatic process.In our case, the patient was diagnosed with separate primary ovarian and appendiceal mucinous neoplasms almost a year apart.Further investigation is needed to determine the risks of being diagnosed with more than one primary mucinous tumor over time.Although there is no data supporting a genetic component to this disease, patients with multiple primary mucinous tumors may have a possible genetic preponderance that is not currently established.In 2018, our patient was diagnosed with the mucinous ovarian carcinoma and underwent a with no further treatment thereafter.A recent 2017 study conducted in Denmark, published in the 
International Journal of Gynecological Cancer, aimed to assess the importance of an appendectomy in the presence of a mucinous ovarian adenocarcinoma, as it can be difficult to distinguish between primary ovarian and primary appendiceal cancers clinically, histologically, and immunohistochemically. Essentially, the appendix is needed for complete and thorough staging of a presenting tumor. As such, incomplete staging can affect overall prognosis and patient outcome. The study concluded that failure to perform an appendectomy correlated with a worse prognosis. A normal-looking appendix does not exclude metastatic disease, and because an appendectomy is easily performed and does not increase morbidity, it should be performed during surgery for suspected mucinous ovarian cancer. Many studies have been conducted to ascertain whether an appendectomy is beneficial in the setting of metastatic disease, with some studies against appendectomies if the appendix is visually normal; however, our case is complicated because both neoplasms were independent, primary tumors presenting a year apart. The question remains: should the patient have had an appendectomy at the time of her TAH-BSO? Further studies would need to be conducted to determine the benefit of an appendectomy for all patients diagnosed with a primary mucinous ovarian neoplasm and its correlation and disease potential with non-metastatic, primary appendiceal mucinous neoplasms. There is an extensive immunohistochemical panel that assesses expression of various markers; however, even then, differentiation is difficult and pathologist dependent. Our patient's immunohistochemical analysis tested for CK7 and CK20, which help differentiate an appendiceal versus ovarian mucinous neoplasm. With recent advancements in gene sequencing, the March 2018 issue of Pathology Research and Practice published an article with evidence of a new supportive marker for the differentiation of a primary mucinous tumor of the ovary and an ovarian metastasis of a low-grade appendiceal mucinous neoplasm: the Special AT-rich sequence-binding protein 2 (SATB2) marker. SATB2 expression is found primarily within the gastrointestinal tract. In patients with physical, radiographic and/or pathologic evidence of an ovarian mass with concomitant LAMN, in the context of pseudomyxoma peritonei or small foci of peritoneal spread, the addition of the SATB2 marker to immunohistochemical staining revealed strong expression of the marker in those with LAMN, while SATB2 was negative in cases that demonstrated cyto-histopathologic primary mucinous ovarian carcinomas. Our patient was not assessed for SATB2 expression of either her ovarian or appendiceal neoplasms. Further analysis could help determine whether the two neoplasms were truly primary or whether there is metastatic potential that would not be seen with the typical immunohistochemistry analysis performed on such specimens. The presented case demonstrates a rare variant in mucinous neoplasms: a primary ovarian and a primary appendiceal neoplasm diagnosed and treated approximately one year apart in two different countries. Although there is no literature to support a genetic anomaly that increases the risk of mucinous tumors throughout the body, with gene sequencing and newly developing biotechnology there may be a biologic component not currently recognized. In addition, diagnosing mucinous neoplasms based on pathology is challenging even amongst seasoned pathologists. More detailed immunological studies, with more than one pathologist reviewing the case, may be
warranted as new markers, including SATB2, continue to be discovered. Furthermore, adequate staging is a necessity because of this disease's diagnostic challenges, and routine appendectomy in the setting of primary ovarian mucinous neoplasms may be warranted to achieve the proper diagnosis and treatment for patients. These new markers and staging protocols can continue to help us accurately diagnose primary versus metastatic disease processes and achieve the best outcomes for our patients in the future. In our case, with a stage T4a LAMN, regularly scheduled imaging and tumor marker follow up can permit earlier detection of recurrent disease, which occurs between 4.8 and 20.4% of the time. Current recommendations vary from imaging every six months to every two years. There are no conflicts of interest that influence the submitted work. There were no study sponsors involved in the submitted work. Patient consent was received in regards to publication of this case report. No outside funding was received to produce this publication. There were no conflicts of interest encountered during production of this publication. This is not applicable to the submitted work. Written informed consent was obtained from the patient for publication of this case report and accompanying images. Study Concept/Design – Misbah Yehya, Matthew Denson, Zbigniew Moszczynski. Data Collection – Misbah Yehya, Matthew Denson, Zbigniew Moszczynski. Writing the Paper – Misbah Yehya. This is not applicable to our submitted work. The guarantors are Misbah Yehya, Matthew Denson, and Zbigniew Moszczynski. Not commissioned, externally peer-reviewed
Introduction: Tumors of the ovary and appendix have been well documented in the setting of pseudomyxoma peritonei (PMP) with constant debate over tumor origin. Generally, these tumors are found to have a single primary origin, most commonly the appendix, with metastatic spread to the ovaries. Case presentation: Here we present a 61-year-old female who underwent total abdominal hysterectomy and bilateral salpingo-oophorectomy (TAH-BSO) for a primary mucinous ovarian carcinoma. She presented to our institution one year later with abdominal pain and a palpable right lower quadrant mass, which on histopathologic exam was found to be a primary low-grade appendiceal mucinous neoplasm (LAMN), alluding to the potential of two separate primary disease processes. Discussion/conclusion: With two primary, non-synchronous lesions, a thorough literature review suggests that during the patient's initial TAH-BSO, she could have additionally undergone an appendectomy. In doing so, this would provide accurate, complete staging and determine if the two neoplasms were truly primary in origin or metastatic. In addition, new genetic markers are being discovered, such as the Special AT-rich sequence-binding protein 2 (SATB2) marker, which has been found to be positive in those with a LAMN and negative in those with a primary mucinous ovarian carcinoma. By acquiring appropriate and complete staging we can better diagnose and treat these neoplasms.
234
Domain-specific acceleration and auto-parallelization of legacy scientific code in FORTRAN 77 using source-to-source compilation
A large amount of scientific code is still effectively written in FORTRAN 77.Fig. 1 shows the relative citations for Google Scholar and ScienceDirect for each of the main revisions of Fortran.We collected results for the past 10 years and also since the release of FORTRAN 77.As an absolute reference, there were 15,700 citations in Google Scholar mentioning FORTRAN 77 between 2006 and 2016.It is clear that FORTRAN 77 is still widely used and that the latest standards have not yet found widespread adoption.Based on the above evidence – and also on our own experience of collaboration with scientists – the current state of affairs is that for many scientists, FORTRAN 77 is still the language of choice for writing models.There is also a vast amount of legacy code in FORTRAN 77.Because the FORTRAN 77 language was designed with assumptions and requirements very different from today’s, code written in it has inherent issues with readability, scalability, maintainability and parallelization.A comprehensive discussion of the issues can be found in .As a result, many efforts have been aimed at refactoring legacy code, either interactive or automatic, and to address one or several of these issues.Our work is part of that effort, but we are specifically interested in automatically refactoring Fortran for OpenCL-based accelerators.In this paper we present a source compilation approach to transform sequential FORTRAN 77 legacy code into high-performance OpenCL-accelerated programs with auto-parallelized kernels without need for directives or extra information from the user.By heterogeneous computing we mean computing on a system comprising a host processor and an accelerator, e.g. a GPGPU, FPGA or a manycore device such as the Intel Xeon Phi.Many scientific codes have already been investigated for and ported manually to GPUs, and excellent performance benefits have been reported.There are many approaches to programming accelerators, but we restrict our discussion to open standards and do not discuss commercial solutions tied to a particular vendor or platform; and we will only discuss solutions that work in Fortran.The OpenCL framework presents an abstraction of the accelerator hardware based on the concept of host and device.A programmer writes one or more kernels that are run directly by the accelerator and a host program that is run on the system’s main CPU.The host program handles memory transfers to the device and initializing computations and the kernels do the bulk of the processing, in parallel on the device.The main advantage of OpenCL over proprietary solutions such as e.g. 
CUDA is that it is supported by a wide range of devices, including multicore CPUs, FPGAs and GPUs. From the programmer's perspective, OpenCL is very flexible but quite low level and requires a lot of boilerplate code to be written. This is a considerable barrier for adoption by scientists. Furthermore, there is no official Fortran support for OpenCL: the host API is C/C++, and the kernel language is based on a subset of C99. To remedy this we have developed a Fortran API for OpenCL.1 OpenACC2 takes a directive-based approach to heterogeneous programming that affords a higher level of abstraction for parallel programming than OpenCL or CUDA. In a basic example, a programmer adds pragmas to the original code to indicate which parts of the code are to be accelerated. The new source code, including directives, is then processed by the OpenACC compiler and programs that can run on accelerators are produced. There are a number of extra directives that allow for optimization and tuning to achieve the best possible performance. With OpenMP version 4, the popular OpenMP standard3 for shared-memory parallel programming now also supports accelerators. The focus of both standards is slightly different, the main difference being that OpenMP allows conventional OpenMP directives to be combined with accelerator directives, whereas OpenACC directives are specifically designed for offloading computation to accelerators. Both these annotation-based approaches are local: they deal with parallelization of relatively small blocks and are not aware of the whole code base, and this makes them both harder to use and less efficient. To use either on legacy FORTRAN 77 code, it is not enough to insert the pragmas: the programmer has to ensure that the code to be offloaded is free of global variables, which means complete removal of all common block variables or providing a list of shared variables as an annotation. The programmer must also think carefully about the data movement between the host and the device, otherwise performance is poor. Our approach allows an even higher level of abstraction than that offered by OpenACC or OpenMP: the programmer does not need to consider how to achieve program parallelization, but only to mark which subroutines will be parallelized and offloaded to the accelerator. Our compiler provides a fully automatic conversion of a complete FORTRAN 77 codebase to Fortran 95 with OpenCL kernels. Consequently, scientists can keep writing their code in FORTRAN 77, and the original code base is always left intact. A conventional compiler consumes source code and produces binaries. A source-to-source compiler produces transformed source code from the original source. This transformation can be e.g.
refactoring, parallelization or translation to a different language.The advantage is that the resulting code can be modified by the programmer if desired and compiled with a compiler of choice.There are a number of source-to-source compilers and refactoring tools for Fortran available.However, very few of them actually support FORTRAN 77.The most well known are the ROSE framework4 from LLNL , which relies on the Open Fortran Parser.5,This parser claims to support the Fortran 2008 standard.Furthermore, there is the language-fortran6 parser which claims to support FORTRAN 77 to Fortran 2003.A refactoring framework which claims to support FORTRAN 77 is CamFort , according to its documentation it supports Fortran 66, 77, and 90 with various legacy extensions.We tested OFP 0.8.3, language-fortran 0.5.1 and CamFort 0.804 using the NIST FORTRAN 78 test suite.All three parsers failed to parse any of the provided sources.Consequently we could not use either of these as a starting point.Like CamFort, the Eclipse-based interactive refactoring tool Photran , which supports FORTRAN 77 - 2008, is not a whole-source compiler, but works on a per-file basis.Both CamFort and Photran provide very useful refactorings, but these are limited to the scope of a code unit.For effective refactoring of common blocks, and determination of data movement direction, as well as for effective acceleration, whole-source code analysis and refactoring is essential.A long-running project which does support inter-procedural analysis is PIPS7, started in the1990’s.The PIPS tool does support FORTRAN 77 but does not support the refactorings we propose.Support for autoparallelization via OpenCL was promised but has not yet materialized.For completeness we mention the commercial solutions plusFort8 and VAST/77to909 which both can refactor common blocks into modules but not into procedure arguments.FORTRAN 77 code is often computationally efficient, and programmer efficient in terms of allowing the programmer to quickly write code and not be too strict about it.As a result it becomes very difficult to maintain and port.Our goal is that the refactored code should meet the following requirements:FORTRAN 77 was designed with very different requirements from today’s languages, notably in terms of avoiding bugs.It is said that C gives you enough rope to hang yourself.If that is so then FORTRAN 77 provides the scaffold as well.Specific features that are unacceptable in a modern language are:Implicit typing, i.e. 
an undeclared variable gets a type based on its starting letter. This may be very convenient for the programmer but makes the program very hard to debug and maintain. Our compiler makes all types explicit. No indication of the intended access of subroutine arguments: in FORTRAN 77 it is not possible to tell if an argument will be used read-only, write-only or read-write. This is again problematic for debugging and maintenance of code. Our compiler infers the intent for all subroutine and function arguments. In FORTRAN 77, procedures defined in a different source file are not identified as such. For extensibility as well as for maintainability, a module system is essential. Our compiler converts all non-program code units into modules which are used with an explicit export declaration. There are several more refactorings that our compiler applies, such as rewriting label-based loops as do-loops etc., but they are less important for this paper. As discussed in Section 2, the common feature of the vast majority of current accelerators is that they have a separate memory space, usually physically separate from the host memory. Furthermore, the common offload model is to create a “kernel” subroutine which is run on the accelerator device. Consequently, it is crucial to separate the memory spaces of the kernel and the host program. FORTRAN 77 programs make liberal use of global variables through “common” blocks. Our compiler converts these common block variables into subroutine arguments across the complete call tree of the program. Although refactoring of common blocks has been reported for some of the other projects, to our knowledge our compiler is the first to perform this refactoring across multiple nested procedure calls, potentially in different source code units. Our ultimate goal is to convert legacy FORTRAN 77 code into parallel code so that the computation can be accelerated using OpenCL. We use a three-step process: first, the above refactorings10 result in a modern, maintainable, extensible and accelerator-ready Fortran 95 codebase. This is an excellent starting point for many of the other existing tools; for example, the generated code can now easily be parallelized using OpenMP or OpenACC annotations, or further refactored if required using e.g. Photran or PIPS.
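To make the refactorings described above concrete, the following is a minimal hand-written sketch rather than code taken from the paper or its test suite; the routine, its name and the stencil it computes are invented for illustration. The original FORTRAN 77 style version (shown in the comments) relies on implicit typing and a COMMON block, while the refactored Fortran 95 version has explicit declarations, inferred intents, the former COMMON variables as arguments, and a surrounding module.

```fortran
! Original FORTRAN 77 style routine (fixed form), shown here as comments:
!
!       SUBROUTINE SMOOTH(ETAIN, ETAOUT)
!       COMMON /PARAMS/ NX, NY, EPS
!       DIMENSION ETAIN(0:NX+1,0:NY+1), ETAOUT(NX,NY)
!       DO 10 J = 1, NY
!       DO 10 I = 1, NX
!         ETAOUT(I,J) = (1.0-EPS)*ETAIN(I,J) + 0.25*EPS*
!      &    (ETAIN(I-1,J)+ETAIN(I+1,J)+ETAIN(I,J-1)+ETAIN(I,J+1))
!    10 CONTINUE
!       END
!
! Refactored Fortran 95 equivalent: explicit typing, the former COMMON
! variables passed as arguments with inferred INTENT, wrapped in a module.
module smooth_module
  implicit none
contains
  subroutine smooth(nx, ny, eps, etain, etaout)
    integer, intent(in)  :: nx, ny
    real,    intent(in)  :: eps
    real,    intent(in)  :: etain(0:nx+1, 0:ny+1)
    real,    intent(out) :: etaout(nx, ny)
    integer :: i, j
    ! Each (i, j) iteration writes a distinct element of etaout and reads only
    ! etain, so the loop nest has no cross-iteration dependencies and is a
    ! candidate for offloading as an OpenCL kernel (a "map" in the terminology
    ! introduced below).
    do j = 1, ny
      do i = 1, nx
        etaout(i, j) = (1.0 - eps)*etain(i, j) + 0.25*eps* &
          (etain(i-1, j) + etain(i+1, j) + etain(i, j-1) + etain(i, j+1))
      end do
    end do
  end subroutine smooth
end module smooth_module
```

Because the routine's entire memory footprint is now visible at its interface, the host-to-device and device-to-host transfers for an offloaded kernel can be derived from the argument list and the inferred intents.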
However, we want to provide the user with an end-to-end solution that does not require any annotations. The second step in our process is to identify data-level parallelism present in the code in the form of maps and folds. The terms map and fold are taken from functional programming and refer to ways of performing a given operation on all elements of a list. Broadly speaking these constructs are equivalent to loop nests without and with dependencies, and as Fortran is loop-based, our analysis is indeed an analysis of loops and dependencies. However, our internal representation uses the functional programming model where map and fold are functions operating on other functions, the latter being extracted from the bodies of the loops. Thus we raise the abstraction level of our representation and make it independent of both the original code and the final code to be generated. We apply a number of rewrite rules for map- and fold-based functional programs to optimize the code. The third step is to generate OpenCL host and device code from the parallelized code. Because of the high abstraction level of our internal representation, we could easily generate OpenMP or OpenACC annotations, CUDA or Maxeler's MaxJ language used to program FPGAs. Our compiler11 also minimizes the data transfer between the host and the accelerator by eliminating redundant transfers. This includes determining which transfers need to be made only once in the run of the program. To assess the correctness and capability of our refactoring compiler, we used the NIST FORTRAN 78 test suite,12 which aims to validate adherence to the ANSI X3.9-1978 standard. We used a version with some minor changes:13 all files are properly formed; a non-standard-conforming FORMAT statement has been fixed in test file FM110.f; Hollerith strings in FORMAT statements have been converted to quoted strings. This test suite comprises about three thousand tests organized into 192 files. We skipped a number of tests because they test features that our compiler does not support. In particular, we skipped tests that use spaces in variable names and keywords and tests for corner cases of common blocks and block data. After skipping these types of tests, 2867 tests remain, in total 187 files for which refactored code is generated. The test bench driver provided in the archive skips another 8 tests because they relate to features deleted in Fortran 95. In total the test suite contains 72,473 lines of code. Two test files contain tests that fail in gfortran 4.9. Our compiler successfully generates refactored code for all tests, and the refactored code compiles correctly and passes all tests. Furthermore, we tested the compiler on a simple 2-D shallow water model and on four real-world simulation models: the Large Eddy Simulator for Urban Flows,14 a high-resolution turbulent flow model; the shallow water component of Gmodel15, an ocean model; Flexpart-WRF,16 a version of the Flexpart particle dispersion simulator that takes input data from WRF; and the Linear Baroclinic Model,17 an atmospheric climate model. Each of these models has a different coding style, specifically in terms of the use of common blocks, include files etc., that affects the refactoring process. All of these codes are refactored fully automatically without changes to the original code and build and run correctly. The performance of the original and refactored code is the same in all cases. In this section we show the performance of the automatically generated OpenCL code compared to the best
achievable performance of the unmodified original code. We show that the automatically generated OpenCL code can perform as well as hand-ported OpenCL code. To evaluate the automatic parallelization and OpenCL code generation we used the following experimental setup: the host platform is an Intel Xeon CPU [email protected] GHz, a 6-core CPU with hyperthreading, AVX, 32GB RAM, and 15MB cache; the GPU is an NVIDIA GeForce GTX TITAN, 980 MHz, 15 compute units, 16GB RAM. We used OpenCL 1.1 via the CUDA 6.5.14 SDK. The original UFLES code on CPU was compiled with gfortran 4.8.2 with the following flags for auto-vectorization and auto-parallelization: -Ofast -floop-parallelize-all -ftree-parallelize-loops=12 -fopenmp -pthread. Auto-parallelization provides only 4% speed-up because the most time-consuming loops are not parallelized. Our compiler auto-parallelizes all loop nests in the code base and produces a complete OpenCL-enabled code base that runs on GPU and CPU. As a first test case for the validation of our automatic parallelization approach we used the 2-D Shallow Water model from the textbook by Kaempf. This very simple model consists of a time loop which calls two subroutines, a predictor and a first-order Shapiro filter, before updating the velocity. Our compiler automatically transforms this code into three map-style kernels. The results shown in Fig. 2 are for domain sizes of 500 × 500, 1000 × 1000, and 2000 × 2000 for 10,000 time steps. This is a high-resolution simulation with a spatial resolution of 1 m and a time step of 0.01 s. The automatically generated code running on GPU is up to 9x faster than the original code. This is the same performance as obtained by manual porting of the code to OpenCL. As a more comprehensive test case we used the Large Eddy Simulator for Urban Flows developed by Prof. Takemi at the Disaster Prevention Research Institute of Kyoto University and Dr. Nakayama of the Japan Atomic Energy Agency. This simulator generates turbulent flows by using mesoscale meteorological simulations. It explicitly represents the urban surface geometry using GIS data and is used to conduct building-resolving large-eddy simulations of boundary-layer flows over urban areas under realistic meteorological conditions. The simulator essentially solves the Poisson equation for the pressure using Successive Over-Relaxation (SOR) and integrates the force fields using the Adams–Bashforth algorithm. The UFLES main loop executes 7 subroutines sequentially for each simulation time step: update the velocity for the current time step, calculate boundary conditions, calculate the body force, calculation of building effects, calculation of viscosity terms, solving of the Poisson equation using SOR. Our compiler automatically transforms this code into 29 map-style kernels and 4 reduction kernels (a schematic example of a reduction loop of this kind is sketched after the concluding paragraph below). All results shown in Figs. 2–4 are for a domain size of 300 × 300 × 90, with the number of SOR iterations set to 50. This is a realistic use case of the UFLES covering an area of 1.2 km × 1.2 km. A simulation time step represents 0.025 s of actual time. Fig. 3 shows the breakdown of relative run time contributions per subroutine. We can see that the pres subroutine, which contains the SOR iterative loop, dominates the run time. On the GPU, this routine accounts for almost 90% of the run time. Fig.
4 shows the total wall clock time and the wall clock times for each subroutine on CPU and GPU. Note that the scale is logarithmic. The main observations are that the GPU code is faster for all subroutines, but especially so for the velFG routine. Finally, Fig. 5 shows the total speed-up and the speed-up per subroutine. The speed-up of more than 100x for velFG is remarkable. This is because this routine performs a large amount of computation per point in the domain and each point is independent. Thus the GPU can optimally exploit the available parallelism. However, the total speed-up is entirely dominated by the iterative SOR solver, which is 20x faster on the GPU. Our auto-parallelized version achieves the same performance as the manually ported OpenCL version of the UFLES. The above results demonstrate that it is possible to automatically generate high-performance GPU code from FORTRAN 77 legacy code. All the compiler expects the programmer to do is annotate a region of the code for offloading. All subroutines in this region will be offloaded to the accelerator. In practice there are some limitations. We have only presented two examples because the autoparallelizing compiler currently lacks a recursive inliner, so that it only supports kernel subroutines that do not call other subroutines. We use the term “domain specific” not in the sense of a particular branch of science but rather of a class of models: in essence, we require the loop bounds to be static, i.e. known at compile time, in order to parallelize the loops. For the same reason, recursion is not supported; however, recursion is not supported by the ANSI X3.9-1978 standard either. Furthermore, the current version of the compiler expects static array allocation, although this is not a fundamental limitation and we are working on supporting dynamic allocation. The current OpenCL backend generates code that is optimized either for CPU or for GPU, and we are actively working on generating optimized code for FPGAs. We have developed a proof-of-concept compiler for OpenCL acceleration and auto-parallelization of domain-specific legacy FORTRAN 77 scientific code using whole-program analysis and source-to-source compilation. We have validated the code transformation performance of the compiler on the NIST FORTRAN 78 test suite and a number of real-world codes; the automatic parallelization component has been tested on a 2-D Shallow Water model and on the Large Eddy Simulator for Urban Flows and produces a complete OpenCL-enabled code base that is 20x faster on GPU than the original code on CPU. Future work will focus on improving the compiler to extract more parallelism from the original code and improve the performance, and on the development of a complete FPGA back-end.
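As an illustration of the reduction kernels mentioned above, the following is a hand-written sketch rather than code from the UFLES source; the array and variable names are invented, and the domain size is simply the 300 × 300 × 90 case quoted in the text. The loop nest combines all elements into a single scalar through the associative max operation, so the analysis described in this paper would classify it as a fold and generate a reduction kernel rather than a map.

```fortran
! A hand-written sketch, not taken from the UFLES code: a convergence check of
! the kind found in an SOR solver.
program fold_sketch
  implicit none
  integer, parameter :: im = 300, jm = 300, km = 90   ! domain size quoted in the text
  real, allocatable  :: res(:,:,:)
  real    :: resmax
  integer :: i, j, k

  allocate(res(im, jm, km))
  call random_number(res)      ! stand-in for the actual residual field

  resmax = 0.0
  do k = 1, km
    do j = 1, jm
      do i = 1, im
        ! Every iteration updates the same scalar, a cross-iteration dependency,
        ! so this loop nest is a fold (reduction), not a map.
        resmax = max(resmax, abs(res(i, j, k)))
      end do
    end do
  end do
  print *, 'maximum residual = ', resmax
end program fold_sketch
```

In OpenCL, a fold of this kind is typically implemented as a reduction kernel in which work-items compute partial results that are then combined.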
Massively parallel accelerators such as GPGPUs, manycores and FPGAs represent a powerful and affordable tool for scientists who look to speed up simulations of complex systems. However, porting code to such devices requires a detailed understanding of heterogeneous programming tools and effective strategies for parallelization. In this paper we present a source-to-source compilation approach with whole-program analysis to automatically transform single-threaded FORTRAN 77 legacy code into OpenCL-accelerated programs with parallelized kernels. The main contributions of our work are: (1) whole-source refactoring to allow any subroutine in the code to be offloaded to an accelerator. (2) Minimization of the data transfer between the host and the accelerator by eliminating redundant transfers. (3) Pragmatic auto-parallelization of the code to be offloaded to the accelerator by identification of parallelizable maps and reductions. We have validated the code transformation performance of the compiler on the NIST FORTRAN 78 test suite and several real-world codes: the Large Eddy Simulator for Urban Flows, a high-resolution turbulent flow model; the shallow water component of the ocean model Gmodel; the Linear Baroclinic Model, an atmospheric climate model; and Flexpart-WRF, a particle dispersion simulator. The automatic parallelization component has been tested on a 2-D Shallow Water model (2DSW) and on the Large Eddy Simulator for Urban Flows (UFLES) and produces a complete OpenCL-enabled code base. The fully OpenCL-accelerated versions of the 2DSW and the UFLES are resp. 9x and 20x faster on GPU than the original code on CPU; in both cases this is the same performance as manually ported code.
235
Creating an enabling environment for investment in climate services: The case of Uruguay's National Agricultural Information System
While society has always struggled to manage climate-related risk, increased vulnerability and the specter of climate change have stimulated recent investment in climate services.Often provided in the form of tools, websites, and/or bulletins, climate services involve the timely production, translation, transfer and use of climate information for societal decision-making; they are increasingly seen as critical to improving the capacity of individuals, businesses, and governments to adapt to climate change and variability.Investment in climate service development varies widely across the globe; some countries have well-developed climate services while others have very few or even none.A number of factors are thought to contribute to this – including the economic development of the country, its relative climate exposure, and the predictability of the climate system in that area.While it is clear that these factors are important, it is equally clear that these are not the only determinants of investment, and that a host of other considerations help to shape climate service investment decisions as well.One factor that appears to have stymied investment in climate services is the relative dearth of information regarding the economic impact of climate services; without estimates of the value of climate information in particular contexts, governments and the private sector have found it difficult to invest beyond the pilot level.To remedy this, a growing cadre of researchers has dedicated considerable effort to understanding the value of climate services in socio-economic terms, albeit with somewhat mixed results.While this field continues to grow, less attention has focused on the institutional and policy factors that shape investments in climate services.This stands in contrast to a relatively robust literature on the role that such factors have played in influencing climate change adaptation more broadly.In many cases, this work has involved explicating the notion of “adaptive capacity,” in such a way as to characterize the barriers and enabling factors that affect adaptation action.While this work has been useful in helping to identify the contexts in which investments in adaptation are likely to take place, it does little to illuminate the factors that lead countries to invest in climate services per se.Distinguishing the factors that enable investments of this nature is an important step in advancing our understanding of adaptation readiness; it is even more critical in advancing the field of climate services, where such knowledge can inform the planning and investment strategies of local, national, and international actors.This paper addresses this gap by assessing the drivers of investment in climate services within a nation.Semi-structured interviews were used to identify several factors that contributed to the decision to invest in and develop a national-level climate service for the agricultural sector in Uruguay.The climate service itself, Uruguay’s National Agricultural Information System, as well as the context in which it was developed, are described in Section 2.Section 3 provides an overview of our study methods, before results and analysis are presented in Section 4.A discussion of the potential implications for the study of other contexts in which climate services may be developed is included in Section 5.Conclusions are found in Section 6.The SNIA was officially launched in June 2016.Representing a significant investment on the part of the Uruguayan government in climate 
change adaptation, this national-level climate service is relatively unique with regards to the breadth of the endeavor and the extent to which it characterizes the adaptation challenge primarily as one of near-term climate risk management, rather than focusing on climate scenarios to 2050 and beyond.As such, it makes an interesting case from which to explore the role that social and institutional factors have played in enabling investment in climate services.Uruguay is one of the more affluent countries in South America; it rates high for most development indicators and is known for its secularism, liberal social laws, and well-developed social security, health, and educational systems.Agriculture contributes roughly 6% to its GDP, but accounts for 13% of the workforce and more than 70% of exports.Taking into account associated activities, Uruguay’s Ministry of Livestock, Agriculture and Fisheries estimates that the total contribution of Uruguay’s agricultural sector reaches nearly 25% of GDP.In this context, the Uruguayan government has viewed agricultural production as an important piece in Uruguay’s development – increasing efforts to support sustainable intensification and focusing on high-value, well differentiated products that can be marketed at a premium in Europe and the US.Many Uruguayan farmers have embraced this strategy, actively looking for ways to increase the efficiency of their production.Climate risk management has captured particular attention as the country has experienced a series of damaging climate shocks in recent years.The government has estimated, for instance, that economic losses associated with the 2008–2009 drought neared $1 billion USD.The 2015–2016 El Niño event also contributed to the worst floods experienced in Uruguay in more than 50 years, with more than 12,000 people made temporarily homeless and economic losses in a range of productive sectors.Uruguay’s humid subtropical climate is marked by strong inter-annual variability.Mean annual temperatures ranges from 16° to 19 °C and mean annual precipitation from 1100 to 1600 mm.While total precipitation is expected to increase over the course of the coming century, long-term climate projections suggest that the country will face an increase inter-annual variability and in the frequency and intensity of extreme weather phenomena, including rainstorms and drought.In this context, roughly 15% of Uruguayan farmers report climate fluctuations as a significant challenge.Given the importance of agriculture to Uruguay’s national economy, an information system to support decision making was first proposed by the MGAP in 2011; the concept was further developed by actors in and outside of the country and ultimately funded, in 2013, under the auspices of a World Bank project entitled Development and Adaptation to Climate Change.The SNIA brings a range of data produced by the MGAP together with information developed by other national-level actors; this includes information on soils, vegetation, and land use and on water, weather, and climate.Agricultural census data, including that regarding production and sales, are also included.The varied inputs to the SNIA make it easy for the tool to be seen differently by different actors.For instance, the SNIA can well be characterized as a data delivery tool, providing citizens and government actors with one-stop access to a host of different data sets; given the SNIA’s focus on facilitating interoperability and visualization, it is also rightly described as an analysis tool, 
allowing MGAP to combine dissimilar data collected from different agencies and across different spatial scales to answer pressing policy questions.This paper analyzes the SNIA as a national-level climate service, with the goal of translating and disseminating contextualized information about climate variability and change.The SNIA is found online at http://snia.gub.uy/.The SNIA effort is led by the MGAP, in conjunction with the International Research Institute for Climate and Society at Columbia University, which has supported the SNIA by providing MGAP with its own version of IRI’s Data Library – an online data management and analysis tool – and by collaborating with Uruguayan actors to develop several information products, including crop forecasts and an online decision support tool for crop production.The SNIA was developed as a collaboration between more than 30 Uruguayan organizations.Significant contributions have come from the National Institute for Agricultural Research, particularly their Agro-Climate & Information Systems, which has provided Uruguay’s agricultural community with tools to characterize, contextualize, and track climate variability since the late 1990s.The Uruguayan Institute for Meteorology supports the SNIA by providing and analyzing data from the country’s meteorological stations; the SNIA is also built around a number of climate-related products developed by the Engineering School at the University of the Republic.Following Denzin and Lincoln, qualitative methods were used to explore factors that enabled investment in the SNIA.This involved collecting empirical evidence through semi-structured interviews and the analysis of key policy documents.An initial list of stakeholders – including people and organizations who had a role in conceiving and/or developing the SNIA, or who were seen as potential suppliers and/or users of SNIA information – was developed in conjunction with the SNIA office, though a snowball approach was used to add additional stakeholders when appropriate.Stakeholders were contacted via email and interviews were conducted in person, in Spanish, with the exception of three stakeholders who preferred to speak English and two interviews that were conducted by Skype to accommodate schedule conflicts.A total of 33 interviews were conducted in March of 2013, roughly 6 months after work on the SNIA began.A framework analytical approach was used to analyze data gathered through these interviews; as such, transcripts were coded using NVivo into categories that allowed for the creation of a new structure for the data, a framework that was developed dialectically while reading through the transcripts.In December 2015, six months after the SNIA launch, an additional 10 interviews were conducted to develop a more precise understanding of the issues pertaining to each theme; three people interviewed in this round had also been interviewed in 2013.Interviews were in-depth, with the goal of revealing stakeholders’ perception of the process, and lasted roughly an hour.All interviews were recorded and the first 33 were transcribed.An interview protocol is included in Appendix 1.In all, a total of 43 interviews were conducted with 40 people representing 12 organizations, 10 directorates of MGAP, and three schools within the University of the Republic.A list of interviewee affiliations is included in Appendix 2.Relevant policy documents were identified in conversation with the SNIA office, the interviewees, and via an online search, including through Uruguayan 
government records.A list is included in the Appendix 3.Interviews revealed six factors that enabled investment in the SNIA, shaping the way it was conceived, designed, and implemented.These factors are presented and analyzed below.Most people reported that the focus on sustainable intensification and the production of high-value crops helped develop both the vision and the technical capacity needed to invest in the SNIA.Though this was generally accepted, two activities stand out as particularly meaningful in shaping the context in which the decision to invest in the SNIA took place.The first of these followed a 2009 policy to reduce soil erosion by requiring producers to submit certified land-use plans to MGAP’s office of Renewable Natural Resources; this policy was ratcheted up over time, and in 2016 RENARE accepted nearly 15,000 plans covering more than 1.5 million hectares of cropland.This activity generated a great deal of information and know-how, both of which are seen to have contributed to the decision to invest in the SNIA.“We know the land use of each paddock, what the producers are planning to do in terms of land use, so … there’s a great wealth of information in the Ministry – and not just in the Ministry but across the agricultural institutes – so with the SNIA we are in a position to begin to share and overlay that information and generate mechanisms of interoperability to allow the authorities to make decisions, either to implement policies or if they want to establish insurance.,MGAP employee; interview #13,A second activity involved the development of Uruguay’s National Livestock Information System; first proposed after a 2001 foot-and-mouth outbreak and ultimately launched in 2011, the SNIG ensures that all cattle are fully traceable, maintaining a database of more than 11.5 million animals and cataloging more than 350,000 transactions annually.“With the National System of Livestock Information – the SNIG, the system that supports traceability – we began to create a database … Uruguay had a lot of information, so I think the reason that Uruguay took this step is because it was already in the process for many years.And we just said “Let’s create an interoperable information system, with all the databases that exist.,I think it was a great bet on the part of the current government, but actually the logic was there and it was working.,MGAP employee; interview #24,In that sense, the work of SNIG and RENARE – neither of which engaged climate-related issues – shaped the environment in which the MGAP operates.This includes advancing the organization’s vision and capacity as well as that of Uruguay’s farmers, who now submit livestock and land-use information electronically.These efforts also allowed MGAP to build the knowledge and partnerships – and thus the innovative capacity – of Uruguay’s agriculture sector.All of this is seen to have helped pave the way for cross-agency discussions about climate-risk management, which ultimately led to a plan to invest in the SNIA.While MGAP’s focus on sustainable agricultural intensification set the context in which the SNIA was developed, interviews revealed three activities focused on climate change adaptation that laid the foundation for a larger investment in climate risk management.The first of these activities was the National System of Response to Climate Change.Immediately following the 2008–2009 drought, Uruguay’s then-president Tabaré Vazquez put the issue of climate change on the national political agenda, inviting the heads of 
various government departments to work together to mount a collective effort to confront the issue.This resulted in the creation of the SNRCC, formed by official decree that year and soon followed by the National Plan for Response to Climate Change.A multi-agency, multi-disciplinary group coordinated by the Ministry of Housing & Environment, the SNRCC met on a monthly basis to discuss climate-related issues and was responsible for national communications, reports, and meetings.“ was a great step, to sit around a table with different ministries, to establish consultation mechanisms to diagnose problems and make strategic change, with the support of the University of the Republic, with institutes of science and technology.This participatory process has been strengthened over time … and in that context there is a much richer and more integrated vision of information and public policies and in the .,MGAP employee; interview #35,Shortly after the creation of the SNRCCC, MGAP set out to understand current and future climate-related impacts to the agricultural sector, and to prioritize options for adaptation.In a second activity, the task of identifying, evaluating, and proposing policies related to adaptation fell to the newly created “Agricultural Climate Change Unit” of the Office for Agricultural Planning and Policy.The unit ultimately defined a transversal approach to adaptation, which included expanding services offered by existing agricultural organizations.While the work of this office is ongoing, the task of priority-setting raised interest in climate risk within the Ministry.At roughly the same time, an interdisciplinary group including government, university, and non-government actors developed a proposal to the Food & Agriculture Organization, requesting funds to conduct a study on climate vulnerability in the agricultural sector.Launched in 2011, the project was coordinated out of OPYPA with the goal of characterizing agricultural vulnerability.The finished work, a seven-part series called Clima de Cambios, offered a range of suggestions for climate risk management in the agricultural sector.This effort strengthened capacities within each agency in terms of understanding climate variability and change and advanced the collaboration of several groups that had not previously interacted with MGAP.“ began the whole process of exploring who should be involved in this kind of work … and more importantly, what do we want?,What kind of information?,What products?,What content do we need?,This was an opportunity to start doing this exercise, the effort of working to integrate policy with academia and understanding how the process worked.,UdelaR researcher; interview #6,It’s important to note that neither this kind of groundwork, nor the institutional support for sustainable agricultural activities mentioned above, made the SNIA a foregone conclusion.Indeed, members of the SNIA team report struggling to advance their work when they leaned too hard on the connections and momentum developed through existing activities to form Working Groups to help “co-produce” some information products.Indeed, though many interviewees found these groups useful in fostering discussion and in keeping people abreast of SNIA-related developments, they were not generally successful at generating products – primarily because they were voluntary, requiring people to take time out of already-busy schedules to contribute, and because they were not well enough supported by the SNIA team to ensure that work plans were 
completed.Though the SNIA team eventually became aware that institutional fixes would need to be found to support these groups, the connections and momentum that were developed through the three institutional activities mentioned above were key in creating an environment conducive to investment in the SNIA itself.Begun in 2008, a process to modernize the Uruguayan meteorological institute also shaped the decision to invest in and build the SNIA.Founded in 1920, the Meteorological Institute of Uruguay was originally part of the Faculty of Humanities and Sciences at the UdelaR; it was eventually moved to the Ministry of National Defense when it was incorporated as a government office.As the National Meteorological Department, the organization continued as part of the defense ministry through two external reviews published in 2009 and 2013, respectively.Both of these reviews found a series of challenges that prevented the DNM from providing the country with adequate weather and climate information in useful forms.Both reports offered a number of recommendations regarding how to improve performance – and though neither was implemented in its entirety, each led to important actions that contributed to the modernization of the meteorological service.After the 2009 report, for instance, the DNM undertook a large-scale effort to modernize the national meteorological database, structuring and organizing its own weather and climate data along with that collected by the national energy company and the national agricultural research institute.Interviewees describe the rollout of this database as fundamental to the decision to invest in the SNIA, since it allowed meteorological data to be shared and analyzed in a way that was previously impossible.Though initial efforts at modernization focused on data, later efforts were more geared toward organizational reform – and in 2013, an Inter-Ministerial Commission issued a series of guidelines for transforming the DNM into a separate institute outside of the Ministry of Defense.The process of restructuring the DNM into what is now the Uruguayan Institute of Meteorology began that same year, resulting in a number of changes designed to make the organization more flexible, more relevant, and more outward facing, focused on developing demand-driven information products).The first of these changes was to create a new institutional home for the organization.When it was located in the Ministry of Defense, the DNM was entirely beholden to defense-oriented colleagues for budgetary requests and institutional programming; it was frequently not at the top of the list of funding priorities.“There’s been modernization and strengthening of meteorological services that until recently was known as National Direction of Meteorology – but by a law that was passed last year became the Uruguayan Meteorological Institute, INUMET.The quality of the services, the staff, the equipment, the number of meteorological stations – these had all fallen quite a bit, but now I think we are in a process of strengthening meteorological services because we’re more aware of how important they are.,MGAP employee; interview #35,Outside the Defense Ministry, interviewees describe the new INUMET as more independent, with more flexibility to develop its own work plan and to request an increase in funding to support that work plan.INUMET does submit budgets to Parliament through the Ministry of Housing and Environment, but the goals of this ministry are more aligned with a “modern” meteorological 
institute, able to develop products and services to supply the SNIA.“In this new format, can partner with companies, public services, can establish and manage projects, which in the old arrangement was impossible.I think gives more flexibility.,INUMET employee; interview #3,Decentralizing the agency has allowed INUMET to set its own course regarding the kinds of skills and services it would like to develop.In addition, this restructuring has allowed INUMET to shift from an extremely horizontal organizational structure into one that includes more high-level experts that can perform higher quality climate analyses.This is intended to include the hiring of graduates of the UdelaR’s bachelor program in meteorology, created in 2007, and represents an important shift in interest toward the development and use of climate-related information in the country.The result is an organization better skilled to produce climate data and information useful to the SNIA.While some aspects of this modernization process happened at the same time as the decision to invest in the SNIA as a national-level climate service for the agricultural sector, it was clearly a critical step; without the national database or the restructuring effort, the meteorological service would not have been able to contribute the data, products and/or the understanding needed to support the development of this information tool.Within this institutional context, interviewees describe a policy measure critical to the decision to invest in the SNIA: Uruguay’s policy on open data.Indeed, unlike many countries in Latin America, Uruguay is legally obligated to make all data freely available, as enshrined in Law 18.381, the Right of Access to Public Information.Open data policies are intended to ensure the long-term transparency of government information and are seen to increase the participation, interaction, and empowerment of data users and providers – stimulating innovation and economic growth and enlisting the citizenry in analyzing large quantities of data.While this openness is lauded in certain circles, open data remains a particularly controversial topic within the international climate community; many countries reserve data collected by national meteorological agencies for sale, with far fewer making data widely available to the public sector for free.It is clear Uruguay’s open data policy has had both a push and a pull effect on the decision to invest in the SNIA.For instance, the fact that MGAP was already required to make data public increased the attractiveness of a public data platform; it also helped to foster interest in finding ways to sync disparate agricultural datasets to provide for a holistic analysis of current and emerging conditions.“What you’re seeing from the SNIA – presenting the data with the goal of meeting needs across sectors, making data available so that it can benefit everyone – these days the Ministry is trying to move forward on this and the SNIA is spearheading that.,MGAP employee; interview #39,On the other hand, the SNIA is obviously greatly facilitated by Uruguay’s data policy.Indeed, the current version of the tool would not be possible without open data – and other possible versions, potentially based on derived information products that did not allow for users to directly download data, would have been much more complicated to develop and to maintain.“Before this, things were more conservative – they had the idea that the data from the Ministry should not be shared.Well, we started to work through 
the SNIA because there were already needs for the data, and in that sense has helped to create this different dimension at the Ministry.,MGAP employee; interview #40,But while open data requires a certain relinquishing of control on the part of the public sector, which must trade its role as gatekeeper for a new role as information provider, public agencies are not always ready for this shift either logistically or conceptually.In the case of Uruguay, some aspects of the open data law are still being implemented, including the formal designation of which information should be made public and which should not, based on citizen’s privacy concerns.At the same time, the SNIA has forced the government to confront a number of data-related challenges, including around the interoperability of data sets and the provision of metadata.There are also issues related to collaboration, as interviews reveal that some of the groups responsible for contributing data and products to the SNIA have expressed a need their own contributions to be clearly recognized as well as an interest in making it clear to users who they could contact with specific questions regarding the data.As such, the SNIA portal currently lists 37 collaborating organizations and clearly indicates the organizational provenance of specific datasets.Interviews suggest that SNIA’s policy of focusing on near-term climate variability, as opposed to providing information on longer timescales, has also played a part in motivating the investment.Indeed, while the project that funded the SNIA focused on climate change adaptation, it was the first World Bank climate change project not to involve long-term climate projections.In focusing on the near-term, the SNIA is able to respond to the immediate needs of the government and its constituents – a focus on the agricultural sector in a place where inter-annual variability accounts for more than 80% of the observed climatic variance in Uruguay in the last 100 years, while decadal variability accounts for just ∼10% and the contribution of the climate change signal is extremely limited.“We are more concerned with variability than with long-term trends, especially because in Uruguay the long-term trends – particularly in relation to water – are to increase water availability.… So the soils have more water, the problem is that the distribution of water is very irregular within a year or between years, and if that variability increases, the averages are not necessarily a good indicator that everything is fine.So we worry more than anything about what will happen with the extreme events … and right now, the first step is to begin to close the gap between adaptation to the present variability.Are we well adapted?,No, well then we go to first step to adapt to the current variability.,MGAP employee; interview #25,By focusing on the near term, the SNIA also responds to a need to show tangible benefits during short political cycles – a factor that has been shown to complicate investments in adaptation in other places.In this sense, investing in climate service tools that make near-term rather than/as well as long-term information available are sometimes more attractive to politicians and to those they serve, though in other cases the need to respond to international processes or address the “newest thing” may make orienting climate services toward long-term trends more viable.As is frequently the case with major policy and institutional developments, interviews make it clear that key individuals – and the 
relationships of trust that developed between them – played a role in conceiving and shaping the SNIA.This jibes well with previous work on climate services that has documented the important role of “champions” in advocating for the development of such tools and capacities; in this case, two characters were seen to have played a key role in motivating investment in the SNIA.The first is the minister of MGAP, who first proposed the idea of developing a national information system that could help to manage climate-related risk both in the near- and long-term.A landowner and producer himself, he had previously served as the president of a national association of rice producers, where he gained knowledge in the use and dissemination of seasonal forecasts for decision making.Upon taking up his position in the government in 2010, the minister sought to translate this to a wider scale.“We have a minister who is very technical, who understands the subject well – that gave him a lot of momentum in saying ‘This is an issue that is very important for Uruguayans.’,MGAP employee; interview #27,Another important figure was a Uruguayan agricultural scientist based at the International Research Institute for Climate and Society, who helped facilitate discussion regarding how such a tool might be developed and the sorts of climate and weather information that might be helpful in improving decision making within Uruguay’s agricultural sector.In Uruguay, a country of just 3 million people, this scientist had collaborated with the minister before he took up his government position, which made it easy to re-initiate the connection after 2010.At least one SNIA collaborator described the connection and the trust between this scientist and the minister was described as “fundamental” to the development of the SNIA.Analysis reveals six factors that helped create an enabling environment for investment in Uruguay’s National Agricultural Information System, a national-level climate service for the agriculture sector.While these factors developed in a context that is uniquely Uruguayan – one marked by relatively high levels of political stability, economic growth, and social capital – they offer important lessons for future efforts to identify and create contexts in which investments in climate service can occur and flourish.Even accounting for Uruguay’s unique history, it seems likely that many of the factors identified here are broadly generalizable to other countries.While only further case studies, and the comparative analysis between them, can confirm this, the potential relevance of four main themes, and the research needed to explore them, is discussed below.Analysis revealed that support for sustainable agricultural intensification helped create the context in which investment in the SNIA took place.These factors also helped define the scope and capacity of specific actors, networks, institutions and approaches within Uruguay.To the extent to which these items, taken together, can be seen as contributing to the innovation of the SNIA, they can be thought of as an “innovation system.,The concept of an “innovation system” was first developed in the 1980s as a response to the neo-classical economic approach to studying innovation, in which the main impediment to innovation was seen to be high wages.In contrast to an economics-focused analysis, the innovation system literature conceptualizes innovation as the result of a number of interdependent processes which interact to create contexts conducive to innovation.To 
date, the main contribution of this type of analysis has been to help create frameworks to diagnose failures or weaknesses that can be addressed with specific policies.Such a framework has not yet been used to understand the development, or lack thereof, of climate services in particular contexts – though analysis of “agricultural innovation systems” has been useful in identifying ways for governments to take action to foment innovation in the agriculture sector.Further developing the concept in the climate service sphere by looking specifically at the infrastructural, institutional, interaction, and capacity failures that limit climate services investments is likely to help develop our understanding of how to build contexts conducive to the development of climate services.Given the important role that the SNRCC, the priority setting activity at MGAP, and the Clima de Cambios book played in informing the decision to invest in the SNIA, these activities can be seen to fall under the rubric of “groundwork” for climate change adaptation, as defined by Lesnikowski et al.In that analysis, roughly 2000 adaptation initiatives mentioned in the Fifth National Communication of Annex 1 Parties to the UNFCCC are grouped into three categories: recognition, groundwork, and action.This three-prong scheme is loosely echoed by Biagini et al., whose analysis of 158 adaptation activities identified 10 categories of adaptation action, including: capacity building; management & planning; practice & behavior; policy; information; physical infrastructure; warning or observing systems; green infrastructure; financing; and technology.Biagini et al. find that the first three of these categories are much more common than the others, hypothesizing that these low-cost actions are necessary antecedents that must precede and help direct high-value investments that may come later.Biagini et al. 
also suggest that the especially high number of references to capacity building – more than twice as frequent as references to management and planning activities, more than 20 times as frequent as references to investments in technology – may reflect an early stage of societal adaptation, and/or the prevalence of barriers that must first be grappled with before adaptation can be actualized. While the notion that activity to address adaptation to climate change and variability progresses in a relatively ordered manner – beginning with basic recognition, proceeding to groundwork, and moving on to more high-level investments in technology or infrastructure – makes sense intuitively, no detailed case studies have explored whether and how such an evolution might play out with respect to individual adaptation investments. Analysis of the SNIA seems to confirm this progression, however, suggesting that further study of what constitutes effective groundwork, the timeframes on which these kinds of activities take place, and the extent to which they may be cyclical and/or additive are important areas of research needed to inform our understanding of the context in which climate services develop. In this sense, institutional analyses of climate services in other contexts may help shed light on the sorts of near- and medium-term actions that can help to mainstream the development of climate services over time. The “modernization” of Uruguay's meteorological institute and the country's open data policy were found to have played critical roles in creating the context in which the SNIA was conceived and developed. Finding ways to analyze and diagnose these systems will clearly be important in identifying contexts conducive to climate service investment. As mentioned earlier, two external reviews were conducted to help inform this modernization process of INUMET; it is likely that many other meteorological services have undergone similar processes, though the results are generally not made public. Several authors have, however, looked broadly at how to structure meteorological services to best deliver weather, water, and climate services. The World Bank in particular has developed several principles to guide the modernization of national meteorological services so as to create robust professional agencies capable of delivering the right information to the right people at the right time; they have also looked at organization and funding models. Comparative work – and that focused on specific services – has been helpful in laying out the principal issues involved in understanding how the structure of meteorological institutes contributes to the development and delivery of climate services. However, further study in this regard, including the analysis of a range of services in context, is needed to understand how the structure and institutional home of a meteorological institute contributes to the relative success of climate services. It is also important to consider the role that the MGAP played in conceiving the SNIA and in motivating investment for it. Comparing investment in climate services developed by sectoral agencies versus those developed by meteorological services is also an important area of research, and one that should inform further discussion within the Global Framework for Climate Services. Related to the modernization of the meteorological institute is the topic of data policy. Several of the aforementioned studies have considered the role that data policy plays in informing services, though more work is
clearly needed – including comparative analyses of the value to an economy of selling versus making data freely available.While making data available to the public is increasingly seen as an unalloyed good, there are a number of reasons that doing so can be legally and logistically challenging; identifying ways to characterize and measure the existence of infrastructure in place to manage these challenges is thus a critical precondition to climate service development.The relative benefit of experiences in data sharing should also be explored.Consistent with other literature regarding the uptake of scientific information, this analysis shows the role that key individuals played in helping to create and actualize a vision for the SNIA.Indeed, the role of climate service “champions” seems relatively well recognized, though research on the skills and knowledge that support such champions lags.Further work to identify commonalities across climate service champions could inform efforts to train and develop more people with the skills to motivate climate service investment.Importantly, while the champions identified in this analysis had their own motivations for participating in the SNIA, this work also reveals that the actors involved in SNIA Working Groups were often not properly incentivized to contribute new products to the SNIA.Though the performance of the Working Groups did not affect the decision to invest in the SNIA per se, it did affect the outcome, with no public products developed as a result of the Working Groups.In that sense, investments in climate services are more likely to take place when incentives to participation are clearly identified.While the greater good is a noble motivator, personal motivations – including specific salaried time for key employees or support staff to collaborate with other offices and to follow up on their suggestions – proved essential for developing appropriate products.This jibes well with previous literature on “co-production” of climate services, which indicates that this sort of bridging activity is time and resource intensive and frequently under-resourced.This paper investigates the context in which Uruguay’s Ministry of Livestock, Agriculture and Fisheries invested and developed the National Agricultural Information System, a national-level climate service for the agricultural sector.Six drivers were found to have shaped the context in which this investment was made.This includes a number of actions that developed an “innovation system” around sustainable intensification in agriculture; previous “groundwork” on climate change adaptation; and the modernization of the national meteorological service.Policy measures, such as Uruguay’s requirement that all public data be made available, and the SNIA’s policy of focusing on near-term climate variability rather than long-term climate change, enabled the investment.Key individuals, and the relationships of trust between them, were also found to be critically important.As with all countries, Uruguay is unique.As such, the broader Uruguayan context – including its relative affluence, political stability, high educational standards, and high levels of social capital – also played a role in shaping the decision to invest in climate services.Nevertheless, it is likely that many if not all of the factors identified as part of this study are broadly generalizable to other countries.The role of innovation, groundwork, data providers, and champions merit further attention, particularly as the first two of 
these items have not yet been explored in the climate service literature.Indeed, analysis of national and/or regional innovation systems may help climate service funders to identify where best to invest without focusing narrowly on the climate service “value chain.,Likewise, the notion that “groundwork” activities may precede successful investment in climate service has not been recognized; identifying what sort of activities are more impactful in creating conditions conducive to investment, and how to measure the effectiveness of those activities, should be a key priority as the field continues to grow.Further developing these themes, and the relative importance of them, through additional empirical and theoretical work will help to illuminate the contexts in which the development of climate services is likely to be successful, and the sorts of measures that can enable them.It will also help inform our understanding of adaptive readiness, distinguishing between factors that enable adaptation efforts broadly and those that influence investments in climate services specifically and informing a host of planning activities at local, national, and regional scales.
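As a minimal illustration of how the enabling factors identified here might be operationalized when screening other national contexts, the sketch below encodes the six factors as a simple presence/absence checklist. This is not part of the original study; the factor keys, the scoring scheme, and the example values are assumptions for illustration only.

```python
# Toy screening sketch (not part of the original study): the six enabling
# factors identified in this case are encoded as a presence/absence checklist
# that could be used when scanning other national contexts. The factor keys,
# the scoring scheme and the example values are illustrative assumptions.
ENABLING_FACTORS = [
    "agricultural_innovation_system",  # e.g. sustainable-intensification programmes
    "adaptation_groundwork",           # prior climate change adaptation activities
    "modernized_met_service",
    "open_data_policy",
    "near_term_climate_focus",
    "champions_and_trust",
]

def enabling_factor_coverage(context: dict) -> float:
    """Return the fraction of the six enabling factors present in a national context."""
    return sum(bool(context.get(f)) for f in ENABLING_FACTORS) / len(ENABLING_FACTORS)

# Hypothetical example loosely resembling the Uruguayan case described above.
uruguay_like = {factor: True for factor in ENABLING_FACTORS}
print(f"Enabling-factor coverage: {enabling_factor_coverage(uruguay_like):.0%}")
```

Such a checklist is only a heuristic; as argued above, comparative case studies would be needed to weight the factors and to test whether they generalize beyond the Uruguayan context.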
Increasingly challenged by climate variability and change, many of the world's governments have turned to climate services as a means to improve decision making and mitigate climate-related risk. While there have been some efforts to evaluate the economic impact of climate services, little is known about the contexts in which investments in climate services have taken place. An understanding of the factors that enable climate service investment is important for the development of climate services at local, national and international levels. This paper addresses this gap by investigating the context in which Uruguay's Ministry of Livestock, Agriculture and Fisheries invested in and developed its National System of Agriculture Information (SNIA), a national-level climate service for the agriculture sector. Using qualitative research methods, the paper uses key documents and 43 interviews to identify six factors that have shaped the decision to invest in the SNIA: (1) Uruguay's focus on sustainable agricultural intensification; (2) previous work on climate change adaptation; (3) the modernization of the meteorological service; (4) the country's open data policy; (5) the government's decision to focus the SNIA on near-term (e.g., seasonal) rather than long-term climate risk; and (6) the participation of key individuals. While the context in which these enablers emerged is unique to Uruguay, it is likely that some factors are generalizable to other countries. Social science research needed to confirm the wider applicability of innovation systems, groundwork, data access and champion is discussed.
236
Experimental assessment of solid particle number Portable Emissions Measurement Systems (PEMS) for heavy-duty vehicles applications
Ultrafine particles have been associated with adverse health effects and act through mechanisms not shared with larger particles. Road traffic contributes significantly to particulate matter (PM) concentrations, and its contribution can reach 90% on busy roads. Heavy-duty vehicles can contribute half of the road-traffic PM emissions. PM emissions are regulated with filter-based methods. Additionally, in Europe, limits for solid particle number (SPN) emissions exist for light-duty vehicles and heavy-duty engines. The type approval of a heavy-duty engine is conducted on an engine dynamometer following prescribed test cycles: a dynamic cycle (World Harmonized Transient Cycle, WHTC) starting with coolant and oil temperature at ambient conditions, followed by the same cycle with the engine warmed up, and then a stationary cycle (World Harmonized Stationary Cycle, WHSC). The SPN limits for heavy-duty engines were introduced in 2013 for compression ignition engines and in 2014 for positive ignition engines. The limit for the WHTC is 6.0 × 10¹¹ p/kWh, and 8.0 × 10¹¹ p/kWh for the WHSC. In Europe, the in-service conformity (ISC) testing of a heavy-duty engine is conducted on the road over normal driving patterns, conditions and payloads using Portable Emissions Measurement Systems (PEMS). The results should be lower than the Euro VI limit multiplied by a so-called conformity factor that takes into account the PEMS measurement uncertainty. PEMS testing is applicable for both gaseous and PM mass emissions in the USA, but only for gaseous emissions in Europe. After a long evaluation of the PM mass method with PEMS in Europe, at the end of 2015 it was decided to evaluate the SPN method. The SPN PEMS method had already been evaluated for light-duty vehicles and is included in the Real Driving Emissions light-duty regulation. Nevertheless, for heavy-duty vehicles a wider range of conditions had to be examined. The European Commission's Joint Research Centre (JRC) evaluated four SPN PEMS instruments in the laboratory and on the road from February until June 2016. In September 2016 five commercial instruments were further evaluated. The second evaluation phase was conducted in order to check the improvements of the systems for the exhaust of heavy-duty vehicles. The conclusion was that using SPN PEMS is a feasible method for ISC testing. After this evaluation, the European Automobile Manufacturers Association (ACEA) started the validation phase, testing an even wider range of engines and vehicles. Four SPN PEMS were evaluated at different laboratories of European Original Equipment Manufacturers (OEMs); one of them was circulated to most participants to evaluate the measurement uncertainty of the methodology with SPN PEMS. The objective of this report is to present the findings of the ACEA validation study. The tests were conducted in the facilities of OEMs from February 2017 until December 2017. Each laboratory tested at least one SPN PEMS with reference particle number systems connected to proportional partial flow dilution systems, as prescribed in the regulation, and based on the recommendations of the Particle Measurement Programme (PMP). The ambient temperature for all laboratory tests was around 25 °C. One SPN PEMS was circulated to all OEMs, except one that did not have time to participate in the inter-laboratory exercise. This OEM tested its own PEMS. The laboratory tests were conducted on engine dynamometers running mainly the type approval cycles with the engine starting cold and/or hot. Other steady-state tests and regeneration events were conducted to challenge the systems with higher particle number concentrations and exhaust gas temperatures. Some OEMs used the PEMS for on-road tests. The on-road
routes were approximately 150 km long. For N3 vehicles the routes consisted of urban, rural and motorway phases in this order. For the N2 vehicles the shares were urban, rural and motorway. For the on-road tests the PEMS were compared to each other only, because there were no reference PMP systems on board the vehicles. The on-road tests with only one PEMS were used only to evaluate the robustness of the systems, because there was no other value to compare against. Additionally, they were used to calculate emission factors of Euro VI heavy-duty vehicles, presented elsewhere. The diesel engines were all equipped with a Diesel Oxidation Catalyst (DOC), a Diesel Particulate Filter (DPF), Selective Catalytic Reduction (SCR) for NOx and an ammonia slip catalyst. The Compressed Natural Gas (CNG) engines were stoichiometric with a three-way catalyst. All vehicles complied with the Euro VI standards. The experimental set-up for the evaluation of the PEMS in the laboratories is presented in Fig. 1. In most OEMs the reference instrument was a PMP system from AVL downstream of a proportional partial flow dilution system (PFDS). The PFDS was sampling proportionally to the exhaust flow, as required in the regulation. The PMP systems consisted of a hot dilution at 150 °C, an evaporation tube at 350 °C and a secondary dilution at ambient temperature, followed by a Condensation Particle Counter (CPC) with 50% counting efficiency at 23 nm. All SPN PEMS fulfilled the technical requirements for light-duty vehicles. SPN PEMS #1 was connected to the tailpipe with a short heated line at 150 °C with a sampling rate of 0.7 l/min. Then a 2:1 hot dilution at 150 °C took place. An evaporation tube, a catalytic stripper and a secondary 3:1 dilution at 60 °C followed. The diluted sample was transferred to the particle detector with a 1.3 m heated line at 60 °C. The principle of the detector was based on the use of a pulsed electric field to periodically remove a fraction of particles charged in a corona charger, followed by a non-contact measurement of the rate of change of the aerosol space charge in a Faraday cage. SPN PEMS #2 was the OBS-ONE from HORIBA. It was a modified version of the NanoParticle Emission Tester. The first diluter was located directly at the sample probe at the tailpipe. With a 2.5 m heated line at 60 °C the diluted aerosol was brought to the main cabinet, where a heated catalytic stripper at 350 °C removed the volatile and semi-volatile particles. A second dilution brought the concentration to the measuring range of the isopropyl alcohol-based CPC with d50% at 23 nm. SPN PEMS #3 had a 3 m heated line at 115 °C to transfer the exhaust into the main unit. The raw gas was then diluted by a factor of 10–30 in a rotating disc diluter. The diluted sample then passed through an evaporation tube set at 300 °C. The concentration of SPN was estimated by measuring the current through a mini Diffusion Size Classifier (DiSC mini). The DiSC mini charged the particles with a unipolar diffusion charger. Subsequently, the charged particles passed through a diffusion stage where the smallest particles were deposited by diffusion and detected as an electric current. The remaining particles ended up in the filter stage, where a second current was measured. The ratio of the two currents allowed estimating the average particle diameter. With the two currents, the average particle diameter, assuming that particles are spherical and lognormally distributed with a geometric standard deviation of 1.9, and the calibration parameters, the SPN concentration was estimated. This estimation included
particles < 23 nm. To estimate the SPN concentration > 23 nm, a “PMP” efficiency was applied to the SPN concentration, based on the mean size estimated by the PEMS and the PMP efficiency curve in Fig. A2 of Appendix A. This correction was applied every second to the real-time data, but its influence was evaluated only for tests where the emission levels were > 1 × 10¹¹ p/kWh, where the current was high enough for a robust estimation of the size. SPN PEMS #4, a module from Sensors, USA, consisted of a sampling probe with a heated line at 100 °C, a 30:1 hot dilution unit, a catalytic stripper, a 150:1 second-stage diluter and a CPC from Sensors. SPN PEMS #2 and #4 had Condensation Particle Counters, while SPN PEMS #1 and #3 had diffusion chargers (DC) to determine the SPN concentration. Both techniques have been used for many years to measure automotive exhaust aerosol. Due to space limitations, for the on-road tests and some laboratory tests SPN PEMS #2 was connected in parallel to SPN PEMS #3 with a 4 m heated line at 120 °C. This heated line introduced approximately 20–25% particle losses for particles > 23 nm, as found with dedicated tests at OEM 1 and OEM 2. These losses were determined by comparing three repeated hot WHTCs with and without the heated line. Theoretical estimation of the thermophoretic losses, using the exhaust gas temperature and the wall temperature of the heated line, resulted in 16–18% losses for the specific WHTCs. Losses of 20% were taken into account in the results presented below. Depending on the vehicle, 4 or 5 in. diameter exhaust flowmeters were used. For some vehicles the exhaust flowmeters were compared to the measured flow rate and the differences were within 4%. For the calculation of the emissions in p/kWh, the SPN signal of the PEMS was time-aligned with the exhaust flow rate and the two signals were multiplied with each other second by second. The sum of their second-by-second products was divided by the engine work for the specific cycle, taking into account the density of the fuel. Fig. 2 gives real-time examples of PEMS versus reference PMP systems connected to partial flow systems for a cold-start WHTC for a DPF and a CNG engine. The PEMS follow the PMP system with small differences even during accelerations. The emission levels of these two examples are at the critical level where the PEMS must measure with good accuracy. For these two examples the differences of all PEMS to the PMP systems were within 35%. Fig. 3 compares PEMS with reference PMP systems connected to proportional partial flow dilution systems for various test cycles. For CPC-based systems the agreement is good, within −35% and +50% for the whole examined range. For DC-based systems the good agreement holds true for levels > 3 × 10¹¹ p/kWh. The scatter is much higher for lower levels. For PEMS #3, a significant reduction of the scatter is achieved by applying the PMP efficiency correction for emissions of 1–3 × 10¹¹ p/kWh, which brings these points within −35% and +50%. The suggested agreement limits of Fig. 3 are based on the light-duty regulated agreement limits of ±50% or 1 × 10¹¹ p/km. However, it seems that, currently, this limit of agreement applies from emission levels of 3 × 10¹¹ p/kWh for heavy-duty engines. Fig. 4 presents the correlation of the DC-based PEMS #1 and #3 to the CPC-based PEMS #2. There were no cases where all four instruments were compared to each other on the road. In addition to the laboratory engine tests, on-road tests with heavy-duty vehicles are also plotted. Emission levels up to 2 × 10¹³ p/kWh were measured due to regeneration events and dedicated tests to challenge the PEMS. The agreement is acceptable for emission levels > 2 × 10¹¹ p/kWh. The differences in the laboratory and on the road are similar, indicating that the PEMS behave on the road as in the laboratory. The main objective of this study was to assess SPN PEMS for in-service conformity of heavy-duty vehicles. The first step was to compare the PEMS with the reference laboratory systems which are used for regulatory measurements. Fig. 5 summarizes the results of Fig. 3 for each instrument separately. For DC-based PEMS a further distinction according to the measured emission levels is made. In addition, some OEMs compared PMP systems to each other and the results are also plotted in Fig. 5. The agreement of the PMP systems to each other is excellent, with a scatter of 20%. CPC-based PEMS have mean differences compared to the reference PMP systems of better than 20%, with a standard deviation of 22%. These differences are comparable to the differences between the PMP reference systems. The DC-based systems have mean differences of 80%, which decrease to less than 40% if a PMP correction is applied or if only emission levels > 3 × 10¹¹ p/kWh are considered. The scatter is 50–100% for all data or data > 1 × 10¹¹ p/kWh, but decreases to 25% for emissions > 3 × 10¹¹ p/kWh. The agreement level of 3 × 10¹¹ p/kWh was chosen from the experimental data, because lower values had much higher differences from the PMP reference systems. High differences would be reasonable close to the maximum permitted background level of 5000 p/cm³, which translates to approximately 2–3 × 10¹⁰ p/kWh for the WHTC and typical exhaust flow rates. However, for the range 3 × 10¹⁰ to 3 × 10¹¹ p/kWh other reasons should exist for this, in general, positive "bias". One assumption is the lower counting efficiency of the CPCs with 50% counting efficiency at 23 nm for some materials, such as heavy alkanes. Another explanation could be the higher fraction of particles smaller than 23 nm in this emission range. The improvement of the PEMS #3 – PMP correlation when applying a PMP efficiency curve supports this assumption. If this is the reason, the technical specifications for SPN PEMS should also include a point below 23 nm with a low efficiency limit. At the moment DC-based systems have higher efficiency below 23 nm. Hardware or software modifications are necessary. For the tests of Fig.
3 the software modification improved the differences from +117% to +43%. Combining the results of all CPC-based systems, and assuming that the agreement limits should be the absolute value plus two standard deviations, which would cover 95% of the cases, the margin for CPC systems is estimated at 40%. For DC-based systems the margin is > 100% or 66%. Following a more detailed statistical analysis according to ISO 5725-2 for PEMS #2 and #3, where enough data were available, a reproducibility of 65% is calculated, with negligible inter-laboratory variability and 65% repeatability. The higher differences of the DC-based systems can be attributed to their principle of operation: the SPN concentration is estimated from measurements of the particles' charge; the charge levels depend on the size of the particles, and this increases the uncertainty of the measurements. Size estimation improves the accuracy of the instrument. Other parameters that can influence the results of all PEMS are diffusion and thermophoretic particle losses. Heavy-duty engines have high exhaust gas temperatures, especially CNG engines, and "cooling" to 100 °C, which is the typical inlet temperature of PEMS, can easily result in 20% thermophoretic losses. The results of this study are in good agreement with previous studies: the JRC heavy-duty evaluation study found an agreement of 35–50% of PEMS #2 and #3 with the reference PMP system. In that study many heavy-duty vehicles were compared to the same reference PMP system in one laboratory. Other heavy-duty engine and vehicle studies have shown acceptable agreement of portable systems with reference systems. The JRC light-duty vehicles evaluation study found differences around 50% for the best performing CPC- and DC-based systems: the evaluation included a theoretical assessment, tests in one laboratory with many vehicles, and tests in many laboratories with the same vehicle and instruments. Other studies found differences on the same order. Portable systems are used in workplaces for assessing personal exposure to airborne nanomaterials. The reported differences for the DiSC mini are on the order of ±30% for lung-deposited surface area measurement, but for number concentration it depends on the application. Both low and high mean differences have been reported for polydisperse aerosols. For example, one study found < 20% difference for polydisperse sodium chloride and metal particles, others up to 40% for ambient air particles, or 45% for agglomerates. Individual points can have much higher deviations. Air monitoring studies also found similar differences. Regarding the handheld CPCs and the P-trak from TSI that are used in PEMS #2, there are a few studies in the literature. Some studies found good comparability of the 3007s or P-trak with ambient aerosols, and another one found good correlation of the P-trak with a Scanning Mobility Particle Sizer for airborne particles when the appropriate size range was considered. All the above studies confirm the findings of this study: the measurement uncertainty of the CPC-based portable systems is around 40%, and that of the DC-based systems around 65%. What is important in this study is that the uncertainty reported in the literature holds true even for the transient and aggressive exhaust aerosol from internal combustion engines. Regarding the robustness of the PEMS, there were some issues during on-road tests with PEMS #3, and in the laboratory with PEMS #2, mainly during CNG engine testing. Error warnings from the instruments informed the users. No issues occurred with the diesel engines, even
during regeneration events. The findings of this study suggest that PEMS can be used for regulatory purposes. However, their measurement uncertainty has to be taken into account in the in-service conformity limits. Based on this study, the measurement uncertainty is around 65%. The high scatter at lower emission levels is a topic that needs to be addressed, for example by decreasing the sensitivity of DC-based systems to smaller particles with hardware or software measures.
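To make the brake-specific emission calculation described above concrete, the following is a minimal sketch (not the authors' code) of the p/kWh computation: the 1 Hz SPN concentration is time-aligned with the exhaust flow, the second-by-second products are summed, and the total is divided by the cycle work, with an optional factor standing in for the ~20% heated-line losses mentioned above. All function and variable names, the unit assumptions (p/cm³ and m³/s at common reference conditions, so the fuel/exhaust density conversion is omitted), and the example numbers are illustrative.

```python
# Minimal sketch of the brake-specific SPN emission calculation described above.
# Units, the alignment scheme and the example numbers are assumptions.
import numpy as np

def spn_per_kwh(conc_p_per_cm3, flow_m3_per_s, work_kwh,
                lag_s=0, loss_correction=1.0):
    """Return brake-specific solid particle number emissions in p/kWh.

    conc_p_per_cm3  -- 1 Hz SPN concentration reported by the PEMS [p/cm^3]
    flow_m3_per_s   -- 1 Hz exhaust volumetric flow [m^3/s]
    work_kwh        -- engine work over the test cycle [kWh]
    lag_s           -- whole seconds by which the PEMS trace lags the flow trace
    loss_correction -- e.g. 1/0.8 to compensate ~20% heated-line particle losses
    """
    conc = np.asarray(conc_p_per_cm3, dtype=float)
    flow = np.asarray(flow_m3_per_s, dtype=float)
    if lag_s > 0:                      # simple time alignment: drop the lag
        conc = conc[lag_s:]
    n = min(len(conc), len(flow))
    conc, flow = conc[:n], flow[:n]
    # p/cm^3 * m^3/s * 1e6 cm^3/m^3 * 1 s -> particles emitted in each second
    particles_per_second = conc * flow * 1e6
    return particles_per_second.sum() * loss_correction / work_kwh

# Illustrative use with made-up WHTC-like magnitudes (1800 s cycle, 20 kWh work)
rng = np.random.default_rng(0)
conc = rng.uniform(1e3, 1e5, 1800)     # p/cm^3
flow = rng.uniform(0.02, 0.15, 1800)   # m^3/s
print(f"{spn_per_kwh(conc, flow, work_kwh=20.0, loss_correction=1/0.8):.2e} p/kWh")
```

With these illustrative magnitudes the result comes out on the order of 10¹¹–10¹² p/kWh, i.e. around the Euro VI WHTC limit, which is the concentration range where the agreement between PEMS and PMP reference systems matters most.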
Heavy-duty engines are type approved on engine dynamometers. However, the in-service conformity or in-use compliance is conducted on the road over driving patterns, conditions and payloads defined in the regulation using Portable Emissions Measurement Systems (PEMS). In Europe PEMS testing is currently applicable to gaseous emissions only, but the introduction of solid particle number (SPN) PEMS is under discussion for heavy-duty vehicles. Although SPN PEMS testing is required for light-duty vehicles, the robustness and the accuracy of the systems for the different conditions of heavy duty vehicles (e.g. higher exhaust gas temperatures, high content of bio-fuels, CNG engines) needs further investigation. This paper describes the experimental assessment of four SPN PEMS models by comparing them to reference regulated laboratory systems. One of the SPN PEMS was circulated to most laboratories. The tests were conducted by heavy-duty vehicle manufacturers in Europe. The results showed that the PEMS measure within 40–65% of the laboratory standards with only minor robustness issues. Thus, they can be included in the in-service conformity regulation taking into account their measurement uncertainty.
237
Bioinspired multifunctional polymer–nanoparticle–surfactant complex nanocomposite surfaces for antibacterial oil–water separation
Oil-spill clean-up is an important environmental challenge due to the significant long-term effects such accidents have on oceans and aquatic species .Absorbent materials are reported to remove oil from oil–water mixtures—however, these materials need additional steps to remove the absorbed oil and to regenerate the material for re-use; and water absorption during oil recovery reduces their efficiency .Separation membranes which have opposing wetting properties towards water versus oil can be utilised for continuous oil–water mixture separation .Due to the relative surface energies of typical oils versus water, conventional membranes repel water while allowing oil to pass through .However, these oleophilic–hydrophobic materials are easily fouled by oils causing blockage and a drop in efficiency.Furthermore, the greater density of water compared to oils can lead to the formation of a surface water layer which blocks the passage of oil .Simply by reversing the wettability, these drawbacks can be overcome.Oils are repelled and so do not easily foul the surface, while the hydrophilic nature of such materials helps to remove any contaminants in contact with the surface .The main disadvantage of such oleophobic–hydrophilic surfaces has been the complexity of their preparation and methods of application.One approach for oleophobic–hydrophilic surfaces has been the use of superhydrophilic surfaces—when underwater, the water layer formation on the surface helps to repel oils providing an underwater oleophobic surface .The major disadvantage of these underwater oleophobic–hydrophilic systems is that the filter must constantly be kept in a wetted state .They are also easily contaminated by oils due to their in-air oleophilic properties.Therefore, surfaces which display both in-air oleophobicity and hydrophilicity are more desirable for oil–water separation applications.In addition, these are also suitable for other uses such as anti-fogging and self-cleaning .One way to fabricate oleophobic–hydrophilic surfaces is to utilise polymer–fluorosurfactant complexes .These surfaces can be prepared either by a multi-step layer-by-layer approach or by direct application of the polymer–fluorosurfactant complex onto the substrate .For both cases, the oil repellency of the polymer–fluorosurfactant coating stems from the low-surface-energy fluorinated tail of the fluorosurfactant being orientated towards the air–solid interface .This localises the hydrophilic fluorosurfactant head groups in the sub-surface region where they are complexed to the hydrophilic groups of the polymer.When water molecules are placed onto the surface, they wick down towards the hydrophilic subsurface resulting in surface wetting .It has been suggested that this happens through defects in the fluorinated layer, whilst oil molecules are too large to penetrate them .Another possible mechanism is water-induced surface rearrangement of the fluorinated chains allowing penetration of the water molecules; whilst in the presence of oils, this rearrangement does not take place, and so the top-most low-surface-energy fluorinated chains repel oil .Early reports of polymer–fluorosurfactant coated surfaces showed little difference between the oil and water contact angles .Improvements in hydrophilicity were subsequently achieved through the utilisation of plasma polymer–fluorosurfactant coatings leading to larger switching parameters—however, this remained a two-step process .Although single-step processes have been reported, these surfaces tend to be 
initially hydrophobic, and it can take several minutes for them to achieve their final hydrophilic state.One notable exception has been fast-switching copolymer–fluorosurfactant surfaces where water wets within 10 s whilst oleophobicity is retained.This oil repellency was improved further through the use of solvent-induced roughening to yield switching parameters in the order of 100°.A comparable switching parameter has been reported by adding nanoparticles to the polymer–fluorosurfactant complex solution mixture—however, oil–water separation experiments take several minutes to allow the water to pass through due to the requirement for very small aperture meshes, therefore this system is not suitable for continuous oil–water separation.Although good initial oil repellency and hydrophilicity have been reported for a layer-by-layer approach where the polymer, fluorosurfactant, and silica nanoparticles are deposited in sequential steps, this is a lengthy process and not well suited to industrial scale-up.In this study, nanocomposite oleophobic–hydrophilic surfaces have been deposited in a single step by using polymer–nanoparticle–fluorosurfactant complexes which display a marked enhancement in the switching parameter.Coating of large-aperture meshes provides for high-efficiency continuous oil–water separation performance, Scheme 1.The incorporation of nanoparticles improves the hardness and enhances the oleophobicity/hydrophilicity of the coatings.The latter is akin to how the roughness of plant leaves can give rise to either hydrophobicity or hydrophilicity depending upon surface functional groups.The constituent cationic polymer poly imparts antibacterial properties.Although polymeric quaternary ammonium–surfactant complexes have previously been utilised for their antimicrobial properties, they have not been developed for oil–water separation to provide multi-functional surfaces.This concept is important in relation to real-world scenarios, where simultaneous oil–water separation and killing of bacteria during filtration is highly desirable for safe human water consumption and pollution clean-up.Aqueous poly was diluted in high-purity water at a concentration of 2% w/v and the solution allowed to shake for 2 h.If particles were to be incorporated into the coating, then these were ultrasonically dispersed for 1 h in the poly solution at various loadings (% w/v of the particle dispersed in the polymer solution).The range of particles investigated is detailed in Table 1.Anionic phosphate fluorosurfactant or amphoteric betaine fluorosurfactant was further diluted in high-purity water at a concentration of 5% v/v.The fluorosurfactant solution was added dropwise in a 1:4 vol.
ratio to the prepared polymer–particle solution whilst stirring leading to the formation of a polymer–particle–fluorosurfactant complex.The precipitated solid complex was collected from the liquid phase and rinsed with high-purity water followed by drying on a hotplate.The obtained dry solid was dissolved at a concentration of 1% w/v in ethanol to provide the coating solution.Glass microscope slides and silicon wafers were used as flat substrates.These were cleaned prior to coating by sonication in a 50%:50% propan-2-ol/cyclohexane mixture, followed by UV/ozone treatment, and finally another sonication step in the propan-2-ol/cyclohexane mixture.Coatings were applied either by solvent casting, or by spray coating using a pressurised spray gun.For the oil–water separation experiments, stainless steel mesh was spray coated.The stainless steel mesh substrates were cleaned prior to coating by rinsing with propan-2-ol.Sessile drop contact angle analysis was carried out on coated glass slide substrates with a video capture system in combination with a motorised syringe.1 μL droplets of ultrahigh-purity water and hexadecane were dispensed for water and oil contact angle measurements respectively.Following dispensation of the probe liquid onto the coated substrate, a snapshot of the image was taken and analysed using the VCA-2500 Dynamic/Windows software.The water contact angle was measured as soon as the droplet was placed onto the surface and again after a period of 10 s—this was done in order to observe any change in the WCA over a short time period due to the "switching" behaviour of these surfaces.The hexadecane contact angle was measured as soon as the droplet was placed onto the surface and it was observed not to vary with time.The reported contact angle measurements were made after rinsing samples with water and drying in air.Switching parameters were determined by calculating the difference between the equilibrium hexadecane and water static contact angles.Captive bubble contact angle analysis was carried out on coated glass slide substrates with the video capture system in combination with a captive bubble attachment dispensing approximately 1 μL air bubbles.Following release of the air bubble onto the coated substrate under water, the droplet was viewed using the VCA-2500 Dynamic/Windows software.Coated silicon wafer substrates were mounted onto carbon disks supported on aluminium stubs, and then coated with a thin gold layer.Surface morphology images were acquired using a scanning electron microscope operating in secondary electron detection mode, in conjunction with an 8 kV accelerating voltage, and a working distance of 8–11 mm.Hardness values were obtained for coated silicon wafer substrates using a microhardness tester fitted with a standard Vickers tip.Five microindentation measurements were made across the surface for each applied force.Oil–water separation experiments were carried out using the coated stainless steel mesh substrates.An agitated mixture of oil and water was poured over the stainless steel mesh.The mesh was either placed horizontally above one beaker or at an incline above two beakers for batch and continuous separations respectively.In order to enhance the visual contrast, Oil Red O and Procion Blue MX-R were added to the oil and water respectively.Gram-negative Escherichia coli BW25113 and Gram-positive Staphylococcus aureus bacterial cultures were prepared using autoclaved Luria-Bertani broth media.A 5 mL bacterial culture was grown from a
single colony for 16 h at 37 °C and 50 μL used to inoculate a sterile polystyrene cuvette containing Luria-Bertani broth.The cuvette was covered with Parafilm and then placed inside a bacterial incubator shaker set at 37 °C and 120 rpm.An optical density OD650nm = 0.4 was verified using a spectrophotometer to obtain bacteria at the mid-log phase of growth.Pieces of non-woven polypropylene sheet were spray coated with either poly–anionic fluorosurfactant complex or poly–3% w/v silica–anionic fluorosurfactant complex solutions, and the carrier solvent allowed to evaporate.Uncoated control samples were washed in absolute ethanol for 15 min and then dried under vacuum in order to make sure they were sterile and clean.At least 4 different batches of each type of coated sample, as well as the control uncoated non-woven polypropylene sheet, were tested for antimicrobial activity.Sterile microtubes were loaded with the uncoated, polymer–fluorosurfactant or polymer–nanoparticle–fluorosurfactant coated non-woven polypropylene sheet.Next, 100 μL of the prepared bacteria solution was placed onto each sheet, and left to incubate at 30 °C for 16 h.Autoclaved Luria-Bertani broth media was then pipetted into each microtube and vortexed in order to recover the bacteria as a 10-fold dilution.Further ten-fold serial dilutions were performed to give 10⁻², 10⁻³, 10⁻⁴, 10⁻⁵ and 10⁻⁶ samples.Colony-forming unit plate counting was performed by placing 10 μL drops from each sample onto autoclaved Luria-Bertani solid agar plates and incubating at 30 °C for 16 h.The number of colonies visible at each dilution was then counted.Oleophobic–hydrophilic surfaces were prepared using either anionic or amphoteric fluorosurfactants in combination with poly, Scheme 1.The oleophobicity of polymer–fluorosurfactant complex surfaces can be attributed to the fluorinated surfactant tails being orientated towards the air–solid interface exposing the low-surface-energy terminal CF3 groups.Consequently, the hydrophilic ionic surfactant head groups and the complexed polymer counterionic groups are buried within the subsurface region.When droplet water molecules come into contact with these polymer–fluorosurfactant surfaces, they are able to diffuse towards these underlying hydrophilic groups via one of two mechanisms: either the water molecules wick down towards the hydrophilic subsurface region due to defects at the air–solid interface, or the hydrophilic subsurface is exposed to the water molecules as a consequence of water-induced molecular rearrangement of the fluorinated chains.Both mechanisms can account for the time-dependent hydrophilicity of the polymer–fluorosurfactant complex surfaces.The oleophobic behaviour can also be accounted for on the basis of either mechanism.In the case of the defect mechanism, the much larger oil molecules are unable to penetrate any film defects, and so only come into contact with the low-surface-energy fluorinated tails.Alternatively, if the mechanism involves a water-induced molecular rearrangement, then the oleophobicity occurs as a result of the fluorinated chains remaining exposed at the air–solid interface when in contact with oil.Hence, the polymer–fluorosurfactant complex surfaces display the observed switching oleophobic–hydrophilic properties.Previously reported polymer–fluorosurfactant complex surfaces have tended to exhibit relatively small switching parameters or display long switching times.Furthermore, the current single-step application methodology is far more straightforward
compared to earlier lengthy layer-by-layer approaches involving multiple steps.Nanoparticle incorporation into these coatings led to an enhancement in the switching parameter by either decreasing the water contact angle or by increasing the hexadecane contact angle—optimally a combination of both, Fig. 1.This improvement in surface oleophobicity and hydrophilicity relative to the nanoparticle-free control samples can be attributed to the impact of surface roughening upon Wenzel and Wenzel/Cassie–Baxter states of wetting respectively.Eventually, a critical nanoparticle loading value is reached beyond which the switching behaviour starts to deteriorate.Prior to a drop in performance, the poly–anionic fluorosurfactant complex system was found to accommodate higher loadings of 7 nm silica nanoparticles compared to the poly–amphoteric fluorosurfactant complex system, and therefore the former was chosen for further investigation.At these optimum nanoparticle loadings, the surface became completely wetting towards water within 10 s, Supplementary Material Table S1 and Table S2.Such hydrophilicity is suitable for anti-fogging applications.Similar trends were observed for both spray coating and solvent casting methods of application.Such incorporation of nanoparticles into coating surfaces mimics nanoscale roughness widely found on plant surfaces for the enhancement of liquid wettability/repellency.A range of other unfunctionalised and functionalised negatively charged nano- and micron-size particles was also found to enhance the switching parameter, Fig. 2.On the other hand, positively charged alumina and zinc oxide nanoparticles performed less well.In the case of alumina nanoparticles, their inclusion at a loading of 3% w/v gave rise to a detrimental effect on the switching parameter stemming from a large rise in the water contact angle.Alkyl functionalised silica nanoparticles showed greater oleophobicity at low loadings compared to unfunctionalised silica nanoparticles—this is probably due to the oleophobicity of the surface alkyl groups.At higher loadings, the alkyl functionalisation of nanoparticles appeared not to provide any significant advantage, Fig.
2.Given that the poly–anionic fluorosurfactant complex system with 3% w/v loading of 7 nm silica nanoparticles displayed the largest switching parameter, this was selected for further investigation.For the superhydrophilic poly–3% w/v silica–anionic fluorosurfactant and poly–1.5% w/v silica–amphoteric fluorosurfactant complex coated substrates, it was found that the air bubble did not release from the needle upon contact with the sample surfaces.Increasing the size of the air bubble until it eventually released from the needle led to the bubble simply rising towards the sample followed by running along the coating surface and off the edge, Supplementary Material Video S1.Hence, the captive bubble contact angle value of 180° correlates to the calculated WCA of 0° from the sessile drop technique .This surface hydrophilicity can be attributed to a water layer being present on the surface—the water layer effectively repels the air bubble preventing it from adhering to the coating surface .The difference observed between the sessile drop and the captive bubble methods for measurements made at t = 0 s is because the timescale to “switch” is about 10 s for the former, whereas the prior immersion of sample into water for the latter has already caused the surface rearrangement—thereby effectively making the captive bubble WCA unchanged between t = 0 s and t = 10 s.Scanning electron microscopy showed that in the absence of silica nanoparticles, the coatings are relatively smooth with any minor roughness features attributable to the spray coating process, Fig. 3.The incorporation of nanoparticles enhances the coating surface roughness for both the poly–anionic fluorosurfactant and poly–amphoteric fluorosurfactant systems.The scale of the surface roughness features is approximately 100–200 nm in size which is consistent with there being encapsulation of the nanoparticles within the polymer–fluorosurfactant complex host matrix.Microindentation measurements showed that for a given indentation force, the hardness improved with rising silica nanoparticle loading, Fig. 4.In the absence of or at low loadings of silica nanoparticles, a large indentation force of 490 mN was sufficient to pierce through the coatings causing the underlying silicon substrate to crack.At low indentation forces, the coatings with 2% w/v and 3% w/v nanoparticle loadings displayed no visible indent.Therefore, a force of 98 mN or 245 mN was employed in order to follow the effect of varying silica loading—both forces showed that the hardness increases with rising silica loading, Fig. 4.Poly–anionic fluorosurfactant complex and poly–3% w/v silica–anionic fluorosurfactant coated horizontal meshes displayed oil–water separation behaviour, Fig. 5.High-purity water passed through both uncoated and coated meshes, whilst oil did not pass through the coated mesh—thereby demonstrating that the coated mesh can separate oil from water with 100% efficiency.By inclining the coated meshes above two beakers, oil–water mixtures could be separated into the respective beakers, Fig. 6 and Supplementary Material Video S3.The small amount of water which passes into the oil beaker is due to some of the water being dragged along by the oil across the mesh as it passes across it, and could be easily removed by repeating the procedure.The oil–water separation is highly reproducible with over 50 coatings having been tested.Similar performance was measured for vegetable cooking oil, Supplementary Material Video S 4.E. 
coli bacteria often found in drinking water supplies and S. aureus bacteria present in seawater are both harmful to human health.The control untreated non-woven polypropylene sheet displayed E. coli and S. aureus bacterial counts of 2.88 ± 0.39 × 10⁹ CFU mL⁻¹ at 10⁻⁶ dilution and 2.70 ± 0.73 × 10⁹ CFU mL⁻¹ at 10⁻⁶ dilution respectively, Fig. 7.Both poly–anionic fluorosurfactant and poly–3% w/v silica–anionic fluorosurfactant complex coated non-woven polypropylene sheets showed high antibacterial activity against the E. coli and S. aureus bacteria tested.The former reduced the number of both E. coli and S. aureus bacteria to zero at 10⁻¹ dilution, whilst the latter exceeded 99.99% killing of E. coli bacteria at 10⁻² dilution and 99.97% killing of S. aureus bacteria at 10⁻³ dilution, Fig. 7.Such utilisation of cationic poly polymers for fluorosurfactant complex formation incorporates the added benefit of antibacterial poly quaternary ammonium centres.These antimicrobial properties arise due to the interactions of the positively charged ammonium group with the negatively charged head groups of phospholipids in bacterial membranes which cause disruption of the membrane leading to cell leakage and eventually cell death.The measured kill rates exceeding 99.99% and 99.97% for poly–3% w/v silica–anionic fluorosurfactant complex coated non-woven polypropylene sheets can be attributed to surface roughness lowering available anchoring points for bacteria attachment.The small difference in bacteria kill rates between E. coli and S. aureus for poly–3% w/v silica–anionic fluorosurfactant complex coated non-woven polypropylene sheets may be due to differences in the outer surface structures of the two species.Multifunctional fast-switching oleophobic–hydrophilic coatings have been prepared using polymer–nanoparticle–fluorosurfactant complexes.These can be deposited in a single step by spraying or solvent-casting.Electrostatic attraction of negatively charged nanoparticles within cationic poly–anionic fluorosurfactant complex films introduces surface roughening which enhances hydrophilicity and oleophobicity as a consequence of Wenzel and Wenzel/Cassie–Baxter wetting states respectively.These surfaces provide high-efficiency continuous oil–water separation.Nanoparticle incorporation also improves coating hardness.The cationic polymer quaternary ammonium centres present within these polymer–nanoparticle–fluorosurfactant complex systems impart antibacterial surface properties (against both E. coli and S. aureus bacteria).Other applications include antibacterial–antifogging surfaces.There are no conflicts of interest to declare.This work was supported by the Engineering and Physical Sciences Research Council and the British Council.S. N. B.-P. was funded by Consejo Nacional de Ciencia y Tecnología, Mexico scholarship reference 409090.J. P. S. B. devised the concept.A. W. R. carried out sample preparation and liquid repellency studies.H. J. C., S. N. B.-P., and G. J. S. performed antibacterial testing.The manuscript was jointly drafted by J. P. S. B. and A. W. R. All authors gave final approval for publication.Data created during this research can be accessed at: https://collections.durham.ac.uk.
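The colony-counting arithmetic behind the kill rates quoted above can be made explicit with a short calculation. The following is a minimal sketch in Python; the helper names and the specific colony counts are assumptions chosen for illustration, not the measured data. It converts a spot-plate count at a given ten-fold dilution into CFU mL⁻¹ and expresses the antibacterial performance of a coated sheet as a percentage reduction relative to the uncoated control.

```python
def cfu_per_ml(colonies, dilution_exponent, plated_volume_ml=0.01):
    """Convert a spot-plate colony count into CFU per mL of the undiluted
    recovery suspension: count / plated volume, scaled back up by the
    ten-fold dilution factor (10 uL drops were plated here)."""
    return colonies / plated_volume_ml * 10 ** dilution_exponent


def percent_kill(cfu_control, cfu_coated):
    """Percentage reduction in viable bacteria relative to the uncoated control."""
    return (1.0 - cfu_coated / cfu_control) * 100.0


# Illustrative (assumed) counts, not the measured data:
control = cfu_per_ml(colonies=29, dilution_exponent=6)  # ~2.9e9 CFU/mL, of the order reported for the uncoated sheets
coated = cfu_per_ml(colonies=3, dilution_exponent=2)    # survivors recovered from a coated sheet
print(f"control: {control:.2e} CFU/mL, coated: {coated:.2e} CFU/mL")
print(f"kill rate: {percent_kill(control, coated):.4f} %")
```

With counts of this order, a surviving population several orders of magnitude below the roughly 10⁹ CFU mL⁻¹ control corresponds to the better-than-99.9% kill rates reported for the coated sheets.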
Bioinspired polymer–nanoparticle–fluorosurfactant complex composite coatings are shown to display fast-switching oleophobic–hydrophilic properties. The large switching parameters (difference between the equilibrium oil and water static contact angles) are attributed to nanoparticle enhanced surface roughening (leading to improvement in hydrophilicity and oleophobicity for optimum nanoparticle loadings). Nanoparticle incorporation also increases hardness of the coatings (durability). Porous substrates coated with these polymer–nanoparticle–fluorosurfactant complex composite coatings are found to readily separate oil–water mixtures under both static and continuous flow as well as displaying antibacterial surface properties against Escherichia coli (Gram-negative bacteria) and Staphylococcus aureus (Gram-positive bacteria). A key advantage of this approach for coating substrates is its single-step simplicity. Potential applications include provision of safe drinking water, environmental pollution clean-up, and anti-fogging.
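To illustrate the roughening argument invoked in the text and summary above, the following minimal sketch (Python; the intrinsic contact angles, roughness ratio, and solid fraction are assumed illustrative values, not parameters fitted to the reported coatings) evaluates the classical Wenzel and Cassie–Baxter relations. It shows how the same increase in roughness can drive an intrinsically hydrophilic surface towards complete water wetting while pushing the hexadecane contact angle higher, which is what enlarges the switching parameter.

```python
import math


def wenzel(theta_young_deg, roughness_ratio):
    """Apparent angle for homogeneous (Wenzel) wetting: cos(theta*) = r*cos(theta_Y).
    Roughness amplifies the intrinsic tendency of the smooth surface."""
    c = roughness_ratio * math.cos(math.radians(theta_young_deg))
    c = max(-1.0, min(1.0, c))  # clamp at the complete-wetting / complete-repellency limits
    return math.degrees(math.acos(c))


def cassie_baxter(theta_young_deg, solid_fraction):
    """Apparent angle for composite (Cassie-Baxter) wetting with trapped air:
    cos(theta*) = f_s*(cos(theta_Y) + 1) - 1."""
    c = solid_fraction * (math.cos(math.radians(theta_young_deg)) + 1.0) - 1.0
    return math.degrees(math.acos(c))


# Assumed intrinsic (smooth-surface) contact angles and texture parameters:
water_smooth, oil_smooth = 40.0, 95.0   # degrees
r, f_s = 1.8, 0.6                       # roughness ratio and solid fraction after nanoparticle loading
print(f"water: {water_smooth:.0f} deg -> {wenzel(water_smooth, r):.0f} deg (Wenzel)")
print(f"oil:   {oil_smooth:.0f} deg -> {wenzel(oil_smooth, r):.0f} deg (Wenzel), "
      f"{cassie_baxter(oil_smooth, f_s):.0f} deg (Cassie-Baxter)")
```

Under these assumed values the apparent water contact angle collapses to 0° while the hexadecane angle rises above 100°, consistent with the order-of-100° switching parameters described for the nanocomposite coatings.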
238
Impact of service sector loads on renewable resource integration
Distributed renewable energy resources are becoming increasingly important to urban energy systems, posing many new technical, organisational and infrastructural challenges.Addressing these challenges requires a deep understanding of system behaviour at neighbourhood and municipality scales.Urban energy system models can serve this purpose.However, most existing models are based solely on households and do not include service sector load profiles.As real urban areas consist of both households and services, such as shops and schools, each with their own energy load profiles, omitting the service sector in energy models is not realistic.The annual demand of the service sector is on par with that of the residential sector in developed countries.However, their load profiles differ considerably.Therefore, urban energy models need to be extended to include the service sector, improving the understanding of the potential of renewable resource integration in cities.This paper proposes a systematic method to devise synthetic load profiles based on a large number of different data sources.Applying the proposed method for the Netherlands, this paper quantifies the impact of the service sector in future urban energy systems with a high penetration of renewables.Three renewable resource integration metrics are compared for a realistic mix of residential and service sector loads, and for residential loads only: mismatch between renewable generation and demand, renewable resource utilisation, and self-consumption.These metrics are first studied for a broad range of solar and wind generation mix scenarios.Second, metrics are compared for different times of the day, days of the week, and weather conditions for a single scenario.This is the first fundamental study that systematically addresses the impact of service sector loads on renewable resource integration in urban areas.The results of this paper are primarily of interest to researchers in urban energy systems, and to decision-makers and practitioners for grid planning, management and operation, for example, to inform decision-making on storage location, demand response programs and grid reinforcement.The service sector, also termed the commercial, business or tertiary sector, comprises a highly heterogeneous group of energy consumers.Although the many definitions of the sector differ, most include non-manufacturing commercial activities and exclude agriculture and transportation.This paper defines the service sector as the collection of non-manufacturing commercial and governmental activities, excluding agriculture, transportation, power sector, street lighting and waterworks.The service sector power demand in developed countries currently accounts for one quarter to one third of the total national power demand, and is thus on par with residential demand.Current estimates indicate that by 2050 the demand shares of the service and residential sectors will increase to 40% each, at the expense of the industry demand, which will account for only 20% of the total national demand.Despite the importance of the service sector in urban demand, most studies on urban energy system models are based on residential load profiles only.Mikkola and Lund are a notable exception.The authors focus on spatiotemporal modelling of urban areas for energy transition purposes.They include service sector loads in their first case study of Helsinki.The service sector demand profiles for Helsinki are based on German profiles, on the assumption
that these profiles are comparable in Finland and Germany.No service sector demand data are referenced by the authors for their second case study of Shanghai.Although a number of other studies consider urban energy systems at the neighbourhood or municipality level, they do not include service sector demand profiles.For instance, Fichera et al. study how the integration of distributed renewables in urban areas can be improved using a complex networks approach.Despite considering an entire urban area, the authors model only residential loads in their numerical case study.Hachem describes a neighbourhood designed to increase energy performance and decrease greenhouse gas emissions and shows that the type of neighbourhood has an effect on local renewable energy utilisation.The author models service sector buildings based on a combination of residential data and commercial building code specifications, but does not use detailed service sector profiles.Alhamwi et al. focus on the geographical component of urban energy system modelling, and include service sector buildings in their work, yet the authors leave the acquisition of detailed temporal service sector profiles unaddressed.These studies illustrate that the importance of the service sector in urban energy systems is increasingly acknowledged, yet that it remains difficult to obtain detailed service sector demand profiles.The publicly available data on service sector demand are primarily concerned with demand characterisation per subsector, or per subsector and end-use.The lack of detailed data is a serious limitation for the assessment of the potential impact of the service sector on renewable resource integration in urban energy systems.This issue is addressed in this paper.Detailed demand data, in combination with detailed generation data, are necessary to realistically assess the impact of interventions designed to increase the integration of renewables in future urban power systems.In traditional power systems, power balance is maintained through generation dispatch, which follows a variable, immutable load.The sizing and operation of dispatchable generation is based on load characteristics such as peak load.Future power systems with a high share of non-dispatchable renewables require different balancing and management approaches, such as demand response and storage.The choice of the best approach or combination of approaches depends on the timing and the extent of mismatches between load and generation.These mismatches depend on the type of load and generation, the time, and the weather.Understanding the interactions between these factors requires detailed demand and generation data.On the generation side, detailed profiles are publicly available, or can be constructed from publicly available weather data.On the demand side, primarily residential profiles are available.These profiles cover only a part of mixed urban demand.Some non-residential profiles are published, including standard load profiles for specific connection types, and country-level load profiles.These profiles are not suitable to model mixed urban demand, nor to assess the role of the service sector.The standard load profiles lack essential metadata for non-residential loads, which thwarts their use for the estimation of demand in real urban areas.The country-level load profiles do not give sufficient insights at a local level and can therefore not be used at municipality and neighbourhood scales.This paper proposes a method to overcome the current demand data scarcity
by devising synthetic service sector load profiles, and combining them with the available residential demand profiles to estimate the demand profiles of mixed urban areas.The synthetic load profiles are constructed for an area of interest and are based on a combination of reference building models of the United States Department of Energy, and a large number of U.S. and local service sector building use data sources, which are used to scale U.S. reference buildings to the local context.The main contribution of this paper is the systematic assessment of the impact of service sector loads on renewable resource integration in urban areas.This contribution includes: a systematic method to devise detailed synthetic service sector demand profiles based on reference building models and building use data; quantitative results showing the impact of service sector loads on renewable resource integration metrics for a broad range of renewable resource penetration scenarios; a novel time and weather dependency classification system that enables systematic assessment of metrics that depend on both time and weather; and quantitative results showing the impact of service sector loads on renewable resource integration metrics for different times of the day, days of the week, and weather conditions.The values for the metrics reported in this paper are specific for the Netherlands.However, the methodology described can be used as a template to assess the impact of the service sector on renewable resource integration in other countries, municipalities, and neighbourhoods.Qualitative conclusions on the impact of the service sector on renewable resource integration in urban areas are assumed to hold for other developed countries as the shape of the service sector load profiles is comparable across countries.The methodology and results are of potential interest to both researchers and practitioners.Improved service sector load profiles can further the research field, for instance through combination with recently developed spatiotemporal urban energy models.Practitioners, such as urban planners, distribution system operators, and aggregators, can apply the results for improved grid planning, operation and management.The remainder of this paper is structured as follows.Section 2 presents the theoretical rationale for explicit consideration of service sector loads.Section 3 describes the systematic approach used to devise synthetic service sector load profiles.Section 4 outlines the methods used for data collection and profile calculation for the Dutch case study.Section 5 provides more details on two renewable resource integration experiments.Section 6 presents the results of these experiments, which are further discussed in Section 7.A final conclusion is given in Section 8.Urban areas are typically a mix of residential and service sector loads.Residential load profiles are more readily available, making them an attractive proxy for urban areas as a whole.However, residential and service sector load profiles differ considerably.Fig.
1 illustrates the difference in load profiles between residential loads-only and mixed residential and service sector loads.This paper hypothesises that the assessment of renewable resource integration in urban areas based on household demand only is misleading.In particular, it hypothesises that substituting realistic, mixed residential and service sector load profiles with only households load profiles leads to significant misestimations of renewable resource integration metrics.In this section, a theoretical intuition supporting this hypothesis is developed for two metrics: mismatch, and renewable energy utilisation.These form the basis for the metrics used in the remainder of this paper.Measured detailed service sector load profiles are scarce and rarely available for a specific area of interest.Therefore, this paper proposes a systematic approach to devise detailed synthetic service sector load profiles for an area of interest based on load profiles available in a reference area, and building use data from both the area of interest and the reference area.Building use data are used to calculate scaling factors.This approach can be validated either for each customer type k separately, if annual demand data for each customer type in the area of interest are available, or lumped for all service sector consumers if only total service sector demand data are available.This paper focuses on the assessment of the impact of service sector loads on renewable resource integration.The Netherlands is chosen as area of interest.The methodology described above is used to devise local synthetic service sector load profiles.These profiles are combined with household load profiles, and solar and wind power generation profiles to create a realistic urban energy model.The influence of load type, and of time and weather on renewable resource integration metrics is studied using a novel simulation model.Load type effects are assessed by comparing two load cases: residential load only, and mixed residential and service sector load.Time and weather effects are studied using a novel time interval classification system.The approach is conceptually shown in Fig. 3.It consists of three steps: data collection, profile modelling, and renewable resource integration experiments.The first two steps, the core of the simulation model, are outlined in this section.The experiments are described in Section 5.Synthetic load and generation profiles are calculated based on a large number of data sources.To ensure spatial and temporal consistency, all calculations are done for the same area and the same period, taking into account official Dutch holidays and daylight saving times.The network is assumed to be a “copper plate”.All resulting profiles have an hourly granularity.Household demand data are obtained from .The average yearly household consumption is assumed to be 3500 kW h .The selected profile describes the average Dutch residential load.The use of this single average profile is assumed to be representative at the scale used in the simulations in this paper since the combined profile of such a large number of similar consumers is expected to regress to the mean profile .Detailed service sector load profiles are modelled based on United States Department of Energy reference building data and a large number of U.S. and Dutch building use data which are used to scale U.S. 
buildings to the Dutch context.The resulting profiles are combined into a single Dutch service sector profile, which together with the household load profile, is used to model mixed urban loads.The calculations used to scale U.S. reference buildings, as well as the data sources are described in detail in the Appendix.An overview is provided in Table 1.The approach can be summarised as follows.The service sector profiles themselves are obtained using the DOE EnergyPlus modelling software .This software builds demand profiles based on the building age, climate data, and the building location.As the simulations assume a future situation, new construction standard is used.To create profiles representative for the Netherlands, Amsterdam climate data are used .Finally, the location match in terms of climate zone is based on both the ASHRAE climate classification and the available U.S. locations for the reference models, yielding Seattle as the closest match for Amsterdam.This location match ensures that adequate heating and cooling requirements are taken into account.Solar and wind power generation are modelled using weather data from the Royal Netherlands Meteorological Institute .This paper relies on the combination of a large number of openly available data sources to model detailed service sector demand profiles.The best validation for this approach is arguably the comparison of the resulting synthetic profiles with statistically representative, real, measured profiles.However, such profiles are currently not publicly available.That is the very issue this paper is seeking to overcome by estimating service sector profiles and showing the importance of the sector for renewable resource integration.The validation used in this paper thus relies on a different approach, with both a quantitative and a qualitative component.The obtained results are compared with cumulative annual Dutch service sector load data, which are openly available but do not suffice to assess the impact of the service sector on renewable resource integration.The Netherlands Environmental Assessment Agency attributes 43.8 TW h of the Dutch annual electricity consumption to the service sector, waste and wastewater treatment, and agriculture and fisheries combined .Solely the service sector consumes 77% of this value , i.e. 33.6 TW h.The Dutch Central Bureau for Statistics reports service sector consumption of 30.6 TW h .The service sector consumption in this paper amounts to 26.9 TW h for the entire Netherlands, i.e. 80–88% of the demand published by respectively PBL and CBS.The discrepancies in published data likely arise from the lack of unified definitions, an issue also raised by other researchers and addressed further in Section 7.This quantitative validation indicates that the service sector profile estimation approach used in this paper can account for a substantial part of the Dutch service sector power demand.The remainder includes unaccounted for subsectors, inaccuracies in subsector share estimations, and load profile deviations.The calculated service sector profiles are based on U.S. reference buildings.It remains an open question whether the use of U.S. buildings in the Dutch context causes deviations from real Dutch demand profiles.Perez-Lombard et al. compared office energy end-use between U.S., Spain and the United Kingdom.End-use differences exist between the three countries.The differences between U.S. 
and the two European countries are however not larger than between the two European countries themselves.A similar qualitative conclusion can be drawn across the entire service sector by comparing the service sector end-use electricity consumption in the U.S. and 29 European countries.This suggests that using U.S. data for the Netherlands does not lead to larger errors than using data from another European country.Although undesirable, the practice of using data from other countries is currently common due to limited service sector data availability.It is important to note here that the shape of the service sector demand profile, with a peak during the day, is similar across developed countries.It differs from the shape of household demand profiles, which typically peak in the evening.This observation qualitatively validates the use of U.S. profiles for the Dutch environment.Metric differences between residential loads-only and mixed loads are analysed for statistical significance using a two-sample t-test.Since multiple scenarios or categories are compared at once, the significance level is corrected using the Holm-Bonferroni correction to control the familywise error rate at 5%.For the first experiment, the correction is made for 121 comparisons.For the second experiment, the correction is made for 150 comparisons.Two simulation experiments are carried out to study the impact of service sector loads on renewable resource integration.This impact is quantified using four metrics.The next paragraph outlines these metrics; the subsequent paragraphs provide details of the two experiments.The following renewable resource integration metrics are used in this paper:Positive mismatch.Positive mismatch accounts for generation excess.It is calculated as the difference between generation and load when generation exceeds load, and is zero otherwise.Negative mismatch.Negative mismatch accounts for generation shortage.It is calculated as the difference between generation and load when load exceeds generation, and is zero otherwise.Renewable energy utilisation.Renewable energy utilisation is the amount of renewable energy which can be used by the coinciding load.It is assumed that whenever renewable energy is available, it is utilised first.Other sources are used only when no renewable energy is available.Self-consumption.Self-consumption is the ratio of renewable energy utilised by the coinciding loads to the total renewable energy generated.Renewable resources considered in this paper are solar PV panels and wind turbines.For both solar PV and wind turbines, the installed generation capacity is varied between 0 MW and 525 MW with steps of 52.5 MW.For the residential load case, 525 MW represents 300% of peak load.For the mixed load case, 525 MW is 367% of peak load, as mixed load has a flatter profile.The considered capacities are comparable to earlier work, where renewable resource capacity of up to 341% of peak load is considered for 2050.In each scenario, the corresponding generation profile is calculated.This generation profile is combined with, on the one hand, the demand profile of residential loads-only and, on the other hand, the demand profile of mixed loads.For each scenario and for each load type, a year-long hourly simulation is run.From the results, annual metrics are calculated and reported.For a single scenario of solar PV and wind turbine penetration, this paper zooms in on the role that time and weather conditions play in the impact of the service sector on renewable resource integration potential
in urban systems with high renewable resource penetration.A novel time and weather dependency classification system is introduced to study the impact of different days of the week, times of the day, and weather conditions.The single scenario of solar PV and wind turbine penetration is obtained as a result of an area-constrained optimisation.In a power system with a high penetration of renewables, not only load variations, which mainly depend on the time of the day and the day of the week, determine the system state, but also weather variations, which govern renewable generation.To account for the future system dependency on both time and weather, this paper proposes a novel time and weather classification system.In this system, each hour of the year is classified according to four parameters: day of the week, time of day, solar power generation and wind power generation.Two categories are distinguished for the day of the week: weekday and weekend.Three categories are distinguished for the time of the day: night, day and evening.Five categories are distinguished for both solar power generation and wind power generation.In both cases, the categories are based on quantiles.In total, 150 time and weather dependent categories are defined.Their frequency of occurrence is summarised in Table 2.The problem at hand is a constrained multi-objective non-linear optimisation problem.The genetic algorithm in Matlab is used to solve this problem.This algorithm relies on a population of possible individual solutions, which evolve to an optimal solution over a number of iterations.In each iteration, the best solutions are used to create solutions for the next iterations which are more likely to be close to the optimal solution.The algorithm terminates when the improvement in solutions falls below a threshold.This section presents the results of the two experiments conducted.The experiments quantify misestimations of renewable resource integration metrics that occur when the service sector is omitted, i.e. when mixed urban loads are represented by residential loads-only.The first experiment addresses misestimations in a broad range of renewable resource penetration scenarios.The second experiment zooms in on the misestimations on different days of the week, times of the day, and weather conditions for a single scenario.Fig. 4 shows annual average differences between residential loads-only, and mixed residential and service sector loads for four renewable resource integration metrics across a broad range of renewable resource penetration scenarios.Scenarios with solar and wind generation capacity of up to 525 MW are considered, i.e. 300% of peak load for the residential loads-only and 367% of peak load for the mixed loads.Fig. 4a and b show respectively the annual average positive and negative mismatch differences between residential loads-only and mixed loads.Positive mismatch represents renewable generation excess, i.e. renewable energy which cannot be used by the local loads.Positive mismatch differences indicate to what extent renewable generation excess is overestimated if residential loads-only are used instead of mixed loads.The positive mismatch difference is zero when solar and wind penetration equals zero, since no renewable power is generated.For all other penetration scenarios, differences increase with increasing solar penetration, while the variation as a function of wind is limited.Overall, Fig. 
4a shows that substituting mixed loads by residential loads-only leads to overestimation of generation excess.Results are statistically significant for solar penetration levels above 73% of peak load, and for wind penetration scenarios below 73% of peak load.Note that these cut-off values are based on the scenario step granularity of 36.5% of peak load.Negative mismatch represents generation shortage, i.e. additional energy to be supplied by non-renewable resources.Negative mismatch differences indicate to what extent generation shortage is overestimated if residential loads-only are used instead of mixed loads.Negative mismatch difference is zero when solar and wind penetration equal zero as no renewable generation is available for either load type.Negative mismatch is larger in case of residential loads-only than in case of mixed loads, leading to negative mismatch differences below zero across all remaining scenarios.Overall, Fig. 4b shows that substituting mixed loads by residential loads-only leads to overestimation of generation shortages.Results are statistically significant for scenarios with solar penetration above 110% of peak load.Fig. 4c shows renewable energy utilisation differences between residential loads-only and mixed loads.Renewable energy utilisation is the amount of renewable energy that is used by the coinciding demand.Renewable energy utilisation differences indicate to what extent the renewable energy utilisation is underestimated if residential loads-only are used instead of mixed loads.Renewable energy utilisation differences follow the same pattern as negative mismatch differences.For all scenarios, renewable energy utilisation is higher for the mixed loads than for the residential loads-only, thus the renewable energy utilisation difference is negative.Overall, Fig. 4c shows that substituting mixed loads by residential loads-only leads to underestimation of renewable energy utilisation.Results are statistically significant for scenarios with solar capacity at or exceeding 147% of peak load, at any wind penetration.Fig. 4d shows self-consumption differences between residential loads-only and mixed loads.Self-consumption is the ratio of renewable energy utilised by the coinciding demand and the total renewable energy generated.Self-consumption differences indicate the extent to which the amount of generated renewable energy that can be used by the coinciding load is underestimated if residential loads-only are used instead of mixed loads.Self-consumption is highest when the penetration of renewable generation is low, it is undefined for zero penetration.If only a small amount of renewable power is generated, any type of coinciding load is sufficiently high to use it entirely.Self-consumption differences have a similar pattern as renewable energy utilisation differences, although differences at low wind penetration scenarios are more pronounced.Overall, Fig. 
4d shows that substituting mixed loads by residential loads-only leads to underestimation of self-consumption.Results are statistically significant for solar capacity scenarios above 73% of peak load and for wind penetration of at most 147% of peak load.Differences in renewable resource integration metrics between residential loads-only and mixed loads are found across a broad range of scenarios.Considering all metrics together, statistically significant results are found in all scenarios except low solar.Note that non-significant results in low solar, low wind scenarios occur because all metrics depend on the presence of renewable generation, which is very low in these scenarios.Overall, this experiment shows significant misestimations of annual average metrics if residential loads-only substitute mixed loads.The relative magnitude of these average annual differences is relatively small, up to approximately 5% of the total annual load.However, the differences between the metrics for residential loads-only and mixed loads vary throughout the year, depending on both time and weather conditions.These variations are assessed in the next experiment.A power system with a high penetration of renewables is highly dependent on both time and weather.To study this dependency, all hours of the reference year are classified using the time and weather classification system introduced in this paper.Each category has four parameters: day of the week, time of the day, solar generation, and wind generation.In total, 150 time and weather dependent categories are analysed.An example of a category is: all weekday night hours with solar generation between 0% and 3% of the installed capacity and wind generation between 0% and 5% of the installed capacity.For each category, average metrics over all hours within that category are calculated and reported.Results are shown in Figs. 5–7.In each figure, the upper row represents weekdays, the lower row – weekends.The columns represent three different times of the day: night, day and evening.Within each subfigure, 25 weather-dependent categories are shown.The three figures show the same metrics as considered for the renewable energy penetration scenarios, with positive and negative mismatches shown on one figure.The results shown are obtained assuming an optimal renewable mix for the mixed loads: 399 MW solar PV and 30 MW wind turbines.Fig. 
5 shows mismatch dependency on time and weather and compares residential loads-only and mixed loads.Positive mismatch indicates renewable generation excess.Negative mismatch indicates renewable generation shortage.During weekdays and on weekend nights, the mismatch is more positive for the residential loads-only than for the mixed loads.In the weekends, during the day and in the evening, the mismatch is more positive for the mixed loads, although the differences are relatively small compared to the weekday categories.The largest differences occur on sunny weekdays, and amount to up to 24% less mismatch between demand and supply in case of mixed loads than in case of residential loads-only.The results obtained through the time and weather classification system can be used to identify critical combinations of time and weather.For instance, 62% of positive mismatches occur during weekdays at daytime when solar generation exceeds 40% of installed capacity, which corresponds to 7% of the time.Most negative mismatches occur during weekdays in the evening with solar generation below 3% of installed capacity, which corresponds to 20% of the time.Statistical significance is not shown in the graph, yet is calculated as described in Section 4.Significant differences between mismatch results for the two load types are found for all data points on weekdays during the day, as well as weekday and weekend evenings for low solar.In other periods, statistically significant differences occur for some categories.The disparity in statistical significance between periods can be attributed to two factors: the number of data points and the relative difference between residential loads-only and mixed loads for a given period.First, as weather patterns are not dependent on the day of the week, weekdays have on average 2.5 times more data points per weather category than weekends.Second, during weekends and during night periods, the difference between residential loads-only and mixed loads is smaller than during other periods as most service sector activities are shut down.Fig. 
6 shows renewable energy utilisation dependency on time and weather and compares residential loads-only and mixed loads.Renewable energy utilisation is the amount of generated renewable energy that can be used by the coinciding loads.Higher renewable energy utilisation is better.The differences in renewable energy utilisation between residential loads-only and mixed loads are most pronounced at high solar generation, both on weekdays and at weekends, and during all times of the day.Wind generation has limited effects as it represents only a small portion of the total renewable generation due to area constraints.At higher solar generation levels, renewable energy utilisation is higher for the mixed loads than for the residential loads-only.The service sector consumption profile is more aligned with the solar power generation profile as both peak during the day.In the weekend, during day and evening hours, the renewable energy utilisation at high solar irradiance levels is higher for residential loads-only.This corresponds to the fact that many service sector loads are minimal in the weekend.The largest differences between the two load cases occur on sunny weekdays, and amount to up to 33% more renewable energy used directly by mixed loads than by residential loads-only.Statistically significant differences are found during weekdays at high solar generation levels for all periods.Most renewable energy utilisation occurs during weekdays at daytime with high solar generation levels, these categories correspond to 7% of the time.Further, 7% of the renewable energy is consumed during night and evening periods with lowest sun and highest wind.Fig. 7 shows self-consumption dependency on time and weather and compares residential loads-only and mixed loads.Self-consumption is the amount of renewable energy utilised relative to the amount generated.As for the mismatch and renewable energy utilisation metrics, during weekdays and on weekend nights the mixed loads performs better than the residential loads-only.During weekend days and evenings the opposite is the case, although differences are again small.As for other metrics, the largest differences are found on sunny weekdays, mixed loads have a self-consumption of up to 32% higher than residential loads-only.Statistically significant differences occur for similar categories as for mismatch.At low solar generation levels and at all wind generation levels, the self-consumption is 100%, meaning that all renewable power generated can be used by the modelled loads.As solar generation increases, self-consumption decreases.During weekdays the differences between the two load types are biggest.In these periods the self-consumption decreases faster for the residential loads-only than for the mixed loads.This result illustrates that modelling only households underestimates the self-consumption of realistic mixed urban areas.The results presented above rely on a renewable resource generation mix obtained by solving an optimisation problem assuming mixed loads.In this paper, the optimisation is constrained by area.This is the binding constraint for the number of wind turbines, regardless of the load type assumed.However, the optimal solar generation capacity changes with the load type.It is 15% lower if residential loads-only instead of mixed loads are assumed.The general trends for time and weather dependency as shown in Figs. 
5–7 remain similar if residential loads-only instead of mixed loads are assumed.However, overall mismatches become more negative, renewable energy utilisation decreases and self-consumption increases.Renewable power integration metrics vary as a function of both time and weather.The results shown rely on the proposed time and weather classification system.Pronounced solar generation dependency is found for all metrics due to the high share of solar PV in the generation mix.Relative metric performance of residential loads-only and mixed loads differs per period.Overall, on weekdays mixed loads lead to lower mismatches and higher renewable energy utilisation.During weekends the contrary is the case.This difference can be attributed to service sector operation hours.Statistically significant differences between residential loads-only and mixed loads are primarily found on weekdays due to a larger number of datapoints per category and a larger difference between the two load type profiles.Overall, results show considerable differences between metrics calculated based on residential loads-only and those based on mixed loads.The integration of renewable energy resources in urban energy systems mandates a detailed understanding of the existing potential for renewable energy utilisation.In real urban areas, demand consists of a mix of residential and service sector loads.Existing urban energy system models primarily consider residential load profiles.As shown in this paper, omitting the service sector leads to misestimations of the potential for renewable energy utilisation in real urban areas.Currently, detailed measured demand data for the service sector are scarce.This paper overcomes this lack of measured load profiles by devising synthetic service sector load profiles through a combination of a large number of different data sources, and uses the obtained synthetic profiles to quantify misestimations of renewable resource integration metrics if service sector is not accounted for in urban areas.The four contributions of this paper are:A systematic method to devise synthetic service sector load profiles.Quantification of renewable resource metric misestimations if the service sector is omitted, for a broad range of renewable resource penetration scenarios.A novel time and weather classification system.Quantification of renewable resource metric misestimations if the service sector is omitted, on different days of the week, times of the day, and weather conditions.Results reported in this paper can be valuable for researchers, practitioners, and decision-makers.More realistic urban demand profiles, based on both households and the service sector, can be used to extend urban energy system models, such as described by .Decision-makers and practitioners can apply the reported results to improve grid planning, operation and management, for instance to guide interventions such as storage location, demand response programs, and grid reinforcement.The appropriate choice of such interventions depends on the timing and the extent of the mismatches between renewable generation and demand.Intervention choices based on misrepresented urban demand profiles, can lead to outcomes suboptimal for the real system.This paper does not seek to determine which grid interventions are the most appropriate, as answering this question requires more detailed data than considered in this paper.Addressing this question is subject of further research.This paper focuses on quantitatively showing the overall importance of 
accounting for the service sector in the transition of urban areas to renewable generation.The next paragraphs discuss the four contributions of this paper.This paper models service sector demand using U.S. commercial reference building models and a combination of a large number of different U.S. and Dutch data sources.It proposes and implements a method to overcome the current lack of openly available, detailed measured service sector demand data for specific areas of interest.This method can considerably improve existing models of urban energy systems.The best validation of the proposed method relies on detailed, measured service sector profiles, the very issue this paper addresses.This is a chicken-or-egg problem.The proposed approach would not be necessary if detailed service sector or local demand profiles are available.This is currently not the case.The results of this paper show that the current approach to approximate urban demand by residential loads-only leads to statistically significant misestimations of renewable resource integration metrics.The method proposed in this paper can be used to estimate realistic urban demand profiles, and thus to improve estimations of renewable resource integration metrics.In this paper the method is applied to an average Dutch urban area.The same approach can be applied to determine urban demand for other municipalities or neighbourhoods, provided sufficient local data are available.Neighbourhood-level urban demand modelling is currently researched by the authors.This paper considers the Netherlands as a case study.It is an open question to what extent the results can quantitatively be generalised to other countries.The service sector composition and its share in the total national demand differ between countries .This is not an issue in itself, the biggest challenge in comparing different regions arises due to inconsistencies in service sector definitions, as also underlined by other authors .Even within a country, different sources provide different values for service sector power consumption.To improve service sector modelling, at least three issues need to be addressed: inconsistent service sector definitions, lack of openly available service sector data in general, and lack of detailed service sector and local load profiles in particular.Qualitatively, our assumption is that the obtained results can be generalised to other developed countries because the shape of the service sector demand profile, with a peak during the day, is similar across developed countries .Based on the results presented in this paper, can be expected that the more important solar generation is in a country’s renewable resource mix, the greater the impact of service sector loads is.Since solar power generation peaks during the day, it matches better with the service sector demand peak than with the household demand peak.Results obtained in experiment 1 show the impact of service sector loads on renewable resource integration across a broad range of renewable resource penetration scenarios.Statistically significant differences between renewable resource integration metrics for residential loads-only and mixed loads are found in all renewable resource penetration scenarios, except in those with high installed wind turbine capacity and low installed solar PV capacity, and those with few renewable resources.Renewable generation scenarios with very high wind and low solar are highly unlikely due to physical constraints.Although the Netherlands currently produces ten times 
more renewable energy from wind than from solar , this trend is unlikely to hold for high renewable resource penetrations.For equal installed capacity, wind turbines require considerably more area than solar panels.For instance, an installed renewable generation capacity of 367% of peak load would cover approximately 30% of the land area, or 20% of the off-shore Dutch Exclusive Economic Zone if wind turbines are used.If solar PV panels are used, the same installed capacity would cover only 1% of the land area.From experiment 1 can be concluded that within the plausible range of scenarios, mixed loads lead to significantly less renewable energy excess, significantly less energy requirements from other non-renewable resources, and thus to a significantly higher renewable energy utilisation and significantly higher self-consumption.Although the future renewable mix is not known, these results show that service sector loads should be taken into account for renewable resource integration assessment in a broad range of plausible scenarios.A renewable power system is highly dependent both on time and weather.Time governs diurnal, weekly and seasonal patterns in demand, and diurnal and seasonal patterns in solar generation patterns.Weather governs both solar and wind power generation, as well as some portion of the demand.Current power system metrics are assessed mainly from a time perspective .The proposed time and weather classification system can contribute to the improvement of urban energy system models as it provides better insights on metric dependencies on time, weather and their interdependencies.This paper proposes a novel time and weather dependency classification system which takes both time and weather into account.This classification system is flexible and can be readily applied to a wide range of dataseries.For the reference case used in this paper, categories are based on time intervals of one hour, full-year data, and five solar and wind energy generation categories.For other purposes, time interval, dataseries size and number of categories can be varied.For instance, the time and weather dependency classification system can be used with statistical data from multiple years to identify critical combinations of time and weather, to plan and manage distribution grid operations accordingly.In the Results section such critical combinations are reported for the reference year 2014.The ability to identify such critical values as a function of time and weather and to assess their likelihood of occurrence is of importance for the design of grid interventions and distribution grid management for power systems with a high share of renewable resources.Statistical analysis of the results obtained using the time and weather classification system shows significant metric differences between residential loads-only and mixed loads in a number of time and weather dependent categories.The most and largest differences are found on weekdays, in particular during sunny periods.These results demonstrate that using residential demand profiles to model mixed urban areas results in statistically significant metric misestimations.Such misestimations can have considerable impacts on, for instance, grid planning, operation and management choices.The reported numerical results are based on the analysis of a single scenario.The following considerations indicate that the trends found can be generalised to other scenarios.First, significant annual differences in metrics are found across a broad range of 
scenarios. Second, the match of solar power generation with service sector power demand is better than with residential demand. Third, the mixed load profile is more constant than the residential profile, making it more likely that wind power generated at a random moment in time is used by mixed loads than by residential loads-only. Therefore, it can generally be concluded from the results obtained in experiment 2 that during periods of high renewable power generation, the differences in metrics between residential loads-only and mixed loads are sufficiently large to necessitate the dedicated and detailed consideration of the service sector. This paper contributes to an improved understanding of future sustainable urban energy systems by showing the importance of including the service sector in energy system models. In existing models, the service sector is often omitted due to the lack of detailed service sector load profiles for a specific area of interest, and the absence of a systematic method to devise them based on the very few available sources. This is the first systematic study addressing the impact of the service sector on renewable resource integration in urban areas. In this paper, a method is developed and implemented to devise synthetic service sector load profiles based on a combination of a large number of different openly available data sources. The obtained profiles are used to quantitatively show that omitting the service sector in urban energy systems leads to statistically significant misestimations of renewable resource integration metrics. The proposed method and obtained results are being used for further research. Currently, the described method is being extended by the authors to the neighbourhood level, to explore the local impact of storage. As residential and service sector loads are not evenly distributed in urban areas, concrete case studies of urban neighbourhoods are expected to provide further valuable local insights. Such insights are of importance for governments, distribution system operators, grid planners and new parties such as aggregators. Future research directions include more extensive refinement and validation of the proposed method using measured service sector profile data, once they become available. Improving the proposed method further contributes to a better understanding of the measures needed to support the transition of cities to renewable resources.
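As a concrete illustration of the metrics compared throughout this discussion, the following sketch computes per-interval mismatch, renewable energy utilisation and self-consumption from hourly demand and generation series. It is a minimal example with made-up profiles, not the paper's model: the function name, the toy load and solar shapes, and the assumption that mismatch is defined as generation minus demand per interval are all illustrative only.

```python
import numpy as np

def integration_metrics(load_kw, generation_kw):
    """Illustrative renewable-integration metrics for hourly series (kW ~ kWh/h).

    Utilised renewable energy is the generation that coincides with demand in
    each interval; self-consumption is utilised energy relative to generated
    energy; mismatch is taken here as generation minus demand per interval.
    """
    load = np.asarray(load_kw, dtype=float)
    gen = np.asarray(generation_kw, dtype=float)

    utilised = np.minimum(load, gen)          # energy used directly, per hour
    mismatch = gen - load                     # >0: excess, <0: shortfall
    utilisation_kwh = utilised.sum()          # renewable energy utilisation
    self_consumption = utilisation_kwh / gen.sum() if gen.sum() > 0 else 1.0
    return mismatch, utilisation_kwh, self_consumption

# Toy example: a daytime (service-sector-like) load peak against a midday solar peak.
hours = np.arange(24)
solar = np.maximum(0.0, 80 * np.sin((hours - 6) / 12 * np.pi))   # kW, daylight only
mixed_load = 40 + 30 * np.exp(-((hours - 13) ** 2) / 18)          # kW, daytime peak

mm, util, sc = integration_metrics(mixed_load, solar)
print(f"utilised: {util:.0f} kWh, self-consumption: {sc:.1%}")
```

Swapping the daytime-peaking load for an evening-peaking residential profile in this toy setup lowers both utilisation and self-consumption, which is the qualitative effect the paper quantifies.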
Urban areas consist of a mix of households and services, such as offices, shops and schools. Yet most urban energy models only consider household load profiles, omitting the service sector. Realistic assessment of the potential for renewable resource integration in cities requires models that include detailed demand and generation profiles. Detailed generation profiles are available for many resources. Detailed demand profiles, however, are currently only available for households and not for the service sector. This paper addresses this gap. The paper (1) proposes a novel approach to devise synthetic service sector demand profiles based on a combination of a large number of different data sources, and (2) uses these profiles to study the impact of the service sector on the potential for renewable resource integration in urban energy systems, using the Netherlands as a case study. The importance of the service sector is addressed in a broad range of solar and wind generation scenarios, and in specific time and weather conditions (in a single scenario). Results show that including the service sector leads to statistically significantly better estimations of the potential of renewable resource integration in urban areas. In specific time and weather conditions, including the service sector results in estimations that are up to 33% higher than if only households are considered. The results can be used by researchers to improve urban energy systems models, and by decision-makers and practitioners for grid planning, operation and management.
239
Aqueous solution discharge of cylindrical lithium-ion cells
Discharge of lithium-ion battery cells is vital for stabilisation during LIB disposal in order to prevent explosions, fires, and toxic gas emission.These are consequences of short-circuiting and penetrating high-energy LIB devices, and can be hazardous to human health and the environment.Explosions, fires, and toxic gas emission may also damage disposal infrastructure, and damaged LIB materials could reduce the material value for recycling and materials reclamation.Indeed, when LIBs are accidentally entrained in lead-acid battery smelting input streams, fires and explosions have been reported .This highlights the risk that high-energy LIBs can pose during waste processing.In the recently published text summarising the conclusions of the publicly-funded German LithoRec projects to develop a commercial LIB recycling process , there is a whole chapter devoted to safe discharge of LIBs .This is needed for both safety and functional reasons, and Hauck and Kurrat outline a number of discharge techniques for different scales, most are a set of different solid electronic techniques, plus the mention of conductive liquids like salt water.Unfortunately the use of conductive liquids is not discussed beyond NaCl solutions.The title of the chapter, “Overdischarging Lithium-Ion Batteries” reflects the authors assumption that over-discharging is necessary for materials reclamation.However, this assumption is not necessarily valid for keeping materials functional, and electrolytic potential windows in aqueous discharge allows a natural control on the minimum achievable discharge voltage.This study was inspired by the large number of studies of disposal of lithium-ion batteries that involve salt-water discharge at the beginning .Despite this widespread usage and suggestions that it is a standard practice, there is little published information on the effectiveness of salt-water discharge.Before 2018, the only two examples from these articles are from Lu et al. and Li et al. .Lu et al. varied the NaCl solution concentration between 1%, 5%, and 10% for discharge of “new batteries, whose state of charge is about 60% and the voltage is about 3.85 V”.Other than these initial electrical states, no further details were given about the LIBs, although the cathode chemistry is almost certainly lithium cobalt oxide as that is the main objective of the study.A rapid drop in cell voltage is observed after as quickly as 7 min for the 10% NaCl solution which is attributed to “the leakage of case at the edge place”.The method of voltage measurement is not made clear, but the rapid drop suggests an unrealistic drop in chemical potential energy, and that the measurement is a superficial one due to poor contact .Li et al. also varied the NaCl concentration between 0%, 5 wt%, 10 wt%, and 20 wt%.They chose to measure the discharge via their own parameter, the “discharging efficiency”, a function linearly linked to open circuit voltage.The cells were “18650.. 
waste laptop batteries" with unspecified chemistries and initial voltage or state-of-charge. The results showed considerably slower discharge with NaCl in Li et al.'s study than for Lu et al. Photos illustrated that corrosion happened for all cells, including apparently pure water, after 24 h in the 30 ml solutions, and the metal concentrations in the residual solution were measured using ICP. “High” levels of aluminium and iron were detected in all cases, and “medium” levels of cobalt, lithium, copper, calcium, and manganese were also measured. Significant quantities of zinc, barium and vanadium were also detected in all cases. All metals are assumed to have come from the 18,650 casings. Confirming the leakage of electrolyte, high concentrations of phosphorus were also measured alongside the corrosion residue, and not detected at all in the case of pure water discharge. Highlighting the timely nature of this research into aqueous discharge are two 2018 publications by Li et al. and Ojanen et al. Li et al. was the first article to mention the use of a salt other than NaCl for cell discharge: sodium sulphate. Ojanen et al. attempted to take a systematic look at different salts as aqueous electrolytes in “electrochemical discharge”: NaCl, NaSO4, FeSO4, and ZnSO4, although the mechanism of discharge involved replacing a resistor with an electrochemical cell in a circuit rather than actually inserting the cell into the liquid solution. The effects of water on batteries, particularly large packs, are also very important from a safety perspective; because of the hazards associated with hazardous-voltage EV packs, a number of studies have been published on that topic in recent years. Hoffman et al. found that pure water was essentially benign in the two cases they looked at, with only very minor voltage drops, but they saw violent discharge in 3% NaCl solutions, including significant heating of the water, although no fires were observed. Spek looked at immersion of a number of full EVs, and saw a range of results from fire to no significant damage. Finally, Xu et al. tried to examine failure mechanisms for HV battery packs, and concluded that electric arc caused by gas breakdown due to the severity of the electrolysis was likely to be the main factor in pack failures during water immersion. Xu et al.
tested a range of NaCl concentrations up to 3.5%, and gradually increased the voltage across two metal contacts until rapid failure occurred due to arcing. As exemplified above, solution discharge is normally thought of as synonymous with NaCl saline solution discharge, which produces hydrogen and chlorine gas when electrolysed as an aqueous solution. However, NaCl is not an ideal solute for discharge of batteries, as chloride ions accelerate aqueous corrosion of steel. In this study we have focussed on two principal considerations for aqueous solution discharge: the discharge rates and corrosion rates. Although optimisation would require a range of concentrations for any given solute, we have kept to a single concentration to make all solutes comparable, and used air conditioning to keep the room at ≈25 °C. Discharge has been measured at fixed time intervals up to 24 h, and the terminal corrosion has also been visually observed at fixed time intervals up to 24 h. In order to be as objective as possible in the evaluation of the salts, the same type of 18,650 LIB cell has been used throughout: the Sanyo UR18650RX – manufacturer's data is given in the supplementary information. A basic inventory of the relative weights of the components is shown in Fig. 1, from a single cell tear-down. When discharging the cell, a very important energy aspect to characterise is the capacity as a function of voltage, which is shown in Fig. 2. The discharge capacity was measured directly using a slow C/50 constant discharge down to zero V. The energy capacity was then calculated by integrating under a plot of voltage vs charge capacity. Fig. 2a depicts the incremental capacity to highlight the voltages at which more charge is available. Two distinct IC peaks can be seen at 3.5 and 3.6 V, with the largest falling around 3.6 V, in line with the nominal voltage specified by the manufacturer. Fig. 2b depicts the energy capacity as a percentage of the maximum capacity, on a logarithmic scale. This helps to clarify the remaining percentage energy capacity at 1 to 3 V. The voltage as a function of energy is also shown, with the axes reversed, in Fig. 2c to further help visualise the remaining energy below 3 V.
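The integration step described above (energy capacity as the area under the voltage versus charge-capacity plot) can be sketched as follows. The numbers are placeholders with a roughly 18650-like shape, not the measured UR18650RX data, and the helper is illustrative only.

```python
import numpy as np

# Illustrative voltage vs discharged-charge points (V, mAh); NOT the measured
# Sanyo UR18650RX curve, just a plausible shape to show the integration step.
charge_mah = np.array([0, 200, 500, 1000, 1500, 1800, 1950, 2000], dtype=float)
voltage_v = np.array([4.2, 4.0, 3.8, 3.6, 3.4, 3.2, 3.0, 2.5])

# Energy capacity is the area under the V-Q curve: E = integral of V dQ.
energy_v_mah = np.trapz(voltage_v, charge_mah)     # V * mAh
energy_wh = energy_v_mah / 1000.0                  # V * Ah = Wh
print(f"total energy ≈ {energy_wh:.2f} Wh")

# Remaining energy fraction below a given voltage, in the spirit of Fig. 2b/2c.
cutoff = 3.0
mask = voltage_v <= cutoff
remaining_wh = np.trapz(voltage_v[mask], charge_mah[mask]) / 1000.0
print(f"energy remaining below {cutoff} V ≈ {remaining_wh / energy_wh:.1%} of total")
```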
The cells were all tested using electrical impedance spectroscopy, charged up to 4.2 V, and weighed before the discharge experiments. A range of aqueous electrolyte solutions were made, all to 5 wt% concentration, using the salts outlined in Table 1. The solutes were all over 95% purity, most over 99%, and were purchased from various commercial chemical suppliers. The solutes were chosen for various reasons, but because feasibility studies showed that corrosion was primarily located on the positive electrode for NaCl solutions, this study focuses on varying the anions. Nevertheless, alternative cations to Na+ were also chosen, with K+ picked for its greater dissociation than sodium, and because some salts with certain halide anions were cheaper. NH4+ was chosen to compare a common 'weak base' cation with the sodium and potassium, which both form strong bases. The solutions were all made in 2 litre plastic bottles, at least in the first instance, the large volume chosen to help keep temperatures more even, both for experimental quality and reproducibility, but also to potentially improve safety. For subsequent tests 1 litre plastic bottles were used. All the experiments were carried out in a well-ventilated, controlled climate of 25 °C, and the temperature was measured in at least one solution on-line throughout the experiments, which showed that the solution temperature was generally 22–23 °C. The official hazard statements are all shown in Table 2, and given that hazards need to be kept to a minimum for brine discharge to be competitive with resistive discharge, where safety is paramount, less hazardous salts are clearly more attractive. Table 2 shows that NaOH, NaNO2, K2CO3, and NH3 have three official hazard statements, whilst NaNO3 and K3PO4 both have two hazard statements. NaHSO4, Na2CO3, KBr, (NH4)2CO3 and NH4HCO3 all have one hazard statement, leaving 15 hazard-free salts. Obviously, these official hazards do not take into account any effects from chemical contamination by electrolytic products and corrosion of the cell terminals. In all cases the pH, conductivity and specific gravity of the brine solutions were measured before and after discharge. The pH was measured using an Oakten EcoTestr pH 2 handheld device, and the conductivity was measured using an Oakten COND 6+. The specific gravity was measured using a variety of analogue hydrometers. In some cases the salt ions may be consumed in the electrolysis, but in most cases it is believed that the salts act as non-consumed electrolytes, with the water electrolysing at both electrodes to generate hydrogen and oxygen. See Section 3 for details on the possible products of theoretical competing reactions. The cells were charged up to 4.2 V, and dropped into the brine baths to start discharge. For each of the salts at least one discharge experiment was carried out where the cells were dropped into the bath with no connections, and removed at 30 minute intervals, for 10 h, to manually measure the cell voltage using a handheld multimeter. These results were used for the main comparison as the contamination risk is kept to a minimum. The cells were then left to complete discharging overnight before being finally removed 24 h after starting the discharge. At 5, 10, and 24 h the cells were all taken out of the solutions to observe the corrosion visually, and photographed using a digital camera. After 24 h of discharge the liquid properties of specific gravity, pH, and conductivity were measured to compare with
the values before immersing the cell in the solutions.As well as photographs and cell voltage, the weight of the cells was measured.This was also measured two weeks later to allow the volatile solvents to evaporate off.Where the electrodes were not completely corroded, electrical impedance spectroscopy measurements were carried out using a BioLogic VMP3 multi potentiostat, with BH-1i cell holders.These results were compared to a single cell discharged via resistors, at various states of charge.The EIS measurements were taken over a range of frequencies from 100 kHz down to 1 mHz, with 10 mV amplitude and nine measurements per logarithmic decade."At the cell's negative terminal, the vast majority of the solutes, particularly the Na+ and K+ ones, evolve hydrogen gas according to Eqs. or depending on the pH of the solution and the balancing equation at the other electrode.For water reduction at the positive battery terminal, Eqs. and are the balancing equations, generating both oxygen gas and electrons for the completion of the circuit, and the potential given is the oxidation potential.The cell potential for water dissociation is −1.23~V and can proceed via a basic or acidic reaction route.Applying a voltage of above 1.23~V will cause water electrolysis, however there are kinetic barriers that manifest themselves as overpotentials for each half-cell reaction .The faster the ions move through solutions the quicker the discharge of the cell, until equilibrium of the components is reached."If gases are lost then the equilibrium will not occur according to Le Chatelier's principle.In practice, at the negative terminal the cation may typically be looked at as providing a competing reduction reaction to water reduction and), but the anions do also demonstrate some capability in this area.The electrolysis of ammonium ions at the cathode could in theory happen according to Eq., but the only aqueous example that could be found in literature comes from metal plating by Berkh et al. who observed the onset of hydrogen production with2SO4 at a more positive potential relative to pure H2O .This suggests that the reduction potential of ammonium is likely to be non-competitive with that of water, which would explain why ammonia electrolysis studies assume the water provides the hydrogen, and focus on ammonia oxidation to aid hydrogen generation , which may be an incorrect assumption.Table 3 gives a good overview of the possible competing cathodic reactions at the negative electrode, and the total Ecell0 for each of these with water electrolysing to generate oxygen at the anode.As all of the anionic reduction reactions involve H+ or OH– species, they are expected to occur significantly only in acidic or basic solutions, respectively.Both nitrate half-equations involve H+ ions, and are at significantly higher potentials than water reduction, meaning they would dominate in acidic environments, producing preferentially NO2 in the case of Eq. 
at a potential 0.16 V higher than the production of NO in the case of Eq.Since they are both competitive with each other, a mix of both gases may be expected to be produced, but both are significantly toxic, and often referred to as NOx.The two competing cathodic reduction reactions for nitrites and) both involve OH– ions which suggest they will only occur in basic solutions, and again have significantly higher reduction potentials than water reduction in basic solutions).Eq., with the highest half-cell potential of +0.15 V, also produces a nitrogen oxide, N2O, otherwise known as nitrous oxide/laughing gas.N2O is far less toxic than NOx for the environment, less flammable, and less dangerous to human health.The final competing reaction is for phosphate ions in Eq.It involves OH– ions and so will only occur in basic solutions, although the potential is significantly lower than for nitrites, and is actually lower than water reduction in Eq., meaning that although it may compete with water reduction, hydrogen production should still dominate.The final set of equations are related to ammonium cation oxidation.Eq. gives an example of direct oxidation to generate nitrogen gas at a slightly more negative potential than water oxidation, which should occur only in basic solutions.Whilst nearly all of the potentials have been acquired from the CRC Handbook of Chemistry and Physics chapter "Electrochemical Series" , the reduction potential of Ammonia in Eq. of −0.77 V was obtained from a different source .Eq. appears to give a corresponding equation for acidic media which is much more competitive, and likely to prevent water oxidation, and again produces nitrogen per mole of oxidation).The final Eq. is a solution-based half equation which should only occur in acidic media again, so could compete with previous equation only if the kinetics inhibit Eq. 
with its more positive potential. All of these equations are also shown in Table 4. A variety of sodium and potassium salts at a fixed concentration of 5 wt% were all compared directly, and the results for these sixteen discharge tests are shown in Table 5. The final cell voltage, cell mass, conductivity, pH and specific gravity are all shown after leaving the fully-charged cell in the solution for 24 h. In Table 5, pH, conductivity and specific gravity have been measured once before discharge and once at the end. In many cases, particularly the sodium and potassium solutions, there is marginal change in the properties between the beginning and the end unless significant corrosion was observed. Two notable exceptions to this are the sodium nitrate and sodium nitrite solutions, where the pH jumped up significantly in both cases, perhaps indicating that OH– ions were not oxidised into oxygen at the positive terminal, and that the NO3− and NO2− ions underwent a significant competing reaction at this terminal. There are fewer ammonium salts, and so observing trends in these single-test data points is risky, but one distinct trend is that all of them exhibit a reduction in specific gravity, most significantly in ammonium bicarbonate. However, this could just reflect significant loss of ammonia gas from a well-dissolved state. Some ammonium solute conductivities seem to change more significantly than for the sodium and potassium solutes, particularly the ammonium phosphates ((NH4)2HPO4 from 33.8 mS·cm−1 to 42 mS·cm−1, NH4H2PO4 from 22.7 mS·cm−1 to 28.7 mS·cm−1), although the other solutes are probably not beyond a realistic error margin. The rate of gas production at the electrodes was also qualitatively assessed, but the differences between high and medium gas production are hard to pinpoint. What was more definite was that all the electrodes seemed to produce significant quantities of gas bubbles at the beginning of electrolysis, with the notable exceptions of the negative terminals of cells in NaNO3 and NaNO2 solutions, and both terminals of the cell in NH3. Fig. 3 shows the voltage plotted as a function of time for the same discharge experiments. As can be seen in the insets, by 10 h only the cell in sodium nitrite has passed below the 3.5 V mark, where the remaining charge capacity drops to below 500 mAh. However, K2CO3 and NaNO3 are below 3.6 V, and a number of salts are under 3.7 V, most notably Na2CO3, which accelerates between 10 and 24 h to overtake NaNO3. However, Fig. 4 shows that the difference in voltage at 24 h is really insignificant in terms of discharge capacity. In terms of energy capacity, this is even less significant, as demonstrated by the differences between Fig.
2a and b.The positive terminals of the cells in the halide solutions and the thiosulphate corrode fast, and they all barely last the first hour before a stable voltage cannot be measured.The cell in NaOH solution appears to last no longer than those in the halide solutions, before the voltage falls negative, whilst K3PO4 lasts beyond 10 h before it gives no stable final 24 h voltage.Table 5 gives some clues as to why NaOH and K3PO4 might have this problem, where their final cell masses are considerably lower than their initial ones.Additionally, the solutions both appeared to give off the sweet scent of the polycarbonate ester solvents from within the cells, indicating that some electrolyte solution has leaked into the aqueous solutions.The NaOH pH is significantly reduced to 11.1 after discharge from the initial value of 12.9, but the K3PO4 pH is not reduced at all.After the NaNO2 and NaNO3 the 5 wt% CO32– solutions clearly discharge the cells fastest according to Fig. 3.Nevertheless, the HCO3−, SO42− and HPO42− solutions come steadily behind for more than one cation, which could just be down to lower conductivities of these solutes.Fig. 4 shows that this is partly true in some cases, although solubility limits of salts like NaHCO3 and Na2HPO4 mean that conductivities above 100 mS.cm−1 might be impossible to achieve for solutions of these salts.Below the chart in Fig. 4 the conductivities of all the solutions are given in three different lines.The top line refers to solutions with particularly high discharge at 10 h after commencing discharge, the bottom line refers to those at a fairly standard rate, and the middle one partly accommodates slightly higher than standard discharge rates, or those that are fully corroded by 10 h.As well as sodium nitrite and nitrate, sodium citrate and ammonia shows noticeably fast discharge for their measured conductivity than others.Whilst sodium sulphate also partly does, the two different measurements made for sodium sulphate are included to highlight that the same solution could give quite different capacity results when reproduced, even though the precise voltages were not very different, the capacity these voltages corresponded to at that point was over 100 mAh apart.Pure ammonia solution showed very odd discharge kinetics, particularly given that a 5 wt% solution has a conductivity of only ≈1 mS·cm−1.Discharge appeared to start very slowly and then accelerate after 5 h, overtaking three solutions with conductivities over 20 mS·cm−1 by 24 h to reach almost 65% total discharge.Accounting for these variations in kinetics is hard to pick out, but an important consideration is the potential window of the redox reactions taking place at the electrodes.Water has a potential window of 1.23 V, and any solutes which have reactions that reduce this window may manage to speed up the relative discharge at lower voltages versus those that just undergo water electrolysis.Possible competing reactions have been explored in the discussion, in Section 3.In the supplementary information a photo shows most of the Na+ and K+ solutions with cells in them approximately 10 min after starting the experiments.This shows how rapidly the steel corrosion by the halide solutions occurs versus the rest, with the exception of sodium thiosulphate, also shown in a separate photo in the supplementary information.Fig. 
5 shows the corrosion of a cell in NaCl solution after 5, 10, and 24 h.The part that corrodes the most, the positive terminal, is shown on the left.The negative terminal in the middle shows some red iron oxide, although how much is corrosion on the terminal itself and how much adsorbed particles is not clear.The image of the 2 l vessel shows how full of this corroded material the aqueous solution was, and also shows how the particles settled over the 24 hour period, although this probably just reflects a reduction in gas formation as electrolysis reduces.Within a minute of discharge commencing in the 2 l, 5 wt% NaCl solution, red corrosive products were being formed, and these images highlight how corrosive NaCl is.With the cylindrical LIB cells, most corrosion occurs at the high voltage positive terminal, and so a matrix of positive terminal photos is shown in Fig. 6, demonstrating the visual corrosion for all 26 solutes at 5, 10, and 24 h after starting the experiments.The negative terminals are shown in the supplementary information in a similar manner.Aside from the rapid corrosion by NaCl, Na2S2O3, KCl, KBr, and KI solutions, a couple of other clear features can be observed from Fig. 6: a number of cells exhibit blackening of the terminal during discharge, the terminal falls off in acidic sodium bisulphate, many terminals have small signs of iron oxide by 24 h, and the blue insulating paper is variously damaged in different salts.In some cases the photo was taken while the cell was still wet meaning that the cell looks shiny, and in some cases, only really at 24 h though, the cell dried without being wiped and so some salt deposits can be seen on the cell.Unfortunately, with the exception of ammonium bicarbonate particularly non-hazardous, mildly alkaline salts comprising of bicarbonate or monohydrogen phosphate anions all seem to cause some form of black deposit on the positive terminal.Whilst this could be a deposit and not any significant corrosion, it is not that positive for future use of the cell.Indeed, the dihydrogen phosphates seem to demonstrate less corrosion despite their acidic pH.Fig. 6 shows some unexpected results, such as lower corrosion on cells in higher alkalinity K2CO3 vs Na2CO3, and no visible major corrosion on some cells where significant mass was lost, those discharged in NaOH, NaNO3, and K3PO4.There are also a number which show virtually no rusting, and others than show clear levels of rusting.These experiments show single runs, and fully conclusive results will require multiple tests, but they give good starting points for understanding what will definitely cause corrosion, and those which are likely to cause minor corrosion at worst.Electrochemical impedance spectroscopy was done on the cells where sufficient contact could be made at both terminals.The cells which had lost significant levels of mass generally showed very irregular impedance measurements, and so are not included here.Those with unstable EIS measurements included the cell discharged in sodium bisulphate solution, despite giving a stable voltage reading after 24 h discharge.For comparison, EIS measurements were carried out on pristine cells at different voltages on discharge from fully charged at 4.2 V.The Nyquist plot of impedance, showing imaginary and real components of impedance, is shown in Fig. 
7. The pristine cell measurements are shown in different colours from 1.33 V up to 4.21 V, and the cells discharged in different solutions are shown in dark solid curves with labels outlining the solute and the 24 h measured voltage. All of the cells seem to plot roughly where they would be expected given their final measured voltage, with the notable exception of ammonia. With a conductivity of only 1 mS·cm−1, the fact that the cell in NH3 discharged to 3.63 V was unexpected, but the EIS measurement would place the final Nyquist plot resistance as typical of a cell between 3.13 V and 2.92 V. As mentioned in the introduction, the vast majority of previously reported studies on aqueous solution discharge use NaCl solution, which might explain the relatively low interest in this process for cell discharge. Amongst the other academic studies, only three give specifically different solutes for solution discharge: Nie et al. used a saturated Na2SO4 solution with iron powder, for 24 h, Li et al. used Na2SO4, and Ojanen et al. used NaSO4, FeSO4, ZnSO4, as well as NaCl. All the other studies were less specific, referring either to 'brine', which could imply sea-water composition, or unspecified electrolytic solutions. Table 5 shows the range of solution pH values, and the mass change result for the only solution with a pH <4 demonstrates that the steel casing is vulnerable to acidic solutions. The nickel-plated steel top of the positive terminal dropped off within 5 h. Although a stable voltage for the cell in NaHSO4 could still be measured after 24 h, a significant mass loss was observed, and no stable EIS measurement could be obtained, hence its absence from Fig. 7. The drop in masses for highly alkaline solutions shows that high pH values are also risky for corrosive results. Given that Na2CO3 and K2CO3 both give 5 wt% solution pH above 11, it is not surprising that some discharge events with these salts appear to have perforated the can. The general conclusion has to be that moderate pH values are desirable to be certain of avoiding can penetration due to H+ or OH– ions, although if the damage is only due to gasket corrosion, this is much less likely to be a risk for pouch cells. Given that the rate of discharge should be strongly linked to conductivity, and corrosion should be at least partly influenced by the rate of discharge, solution conductivity was measured. Conductivity depends strongly on the ionic nature of the compound dissolved in the solution, and its corresponding ability to dissociate into charged ions in order to carry charge. If all the solutes were the salts of strong acids and bases we could expect the conductivity to scale directly with molarity, but not all our solutes are the products of strong acids and bases. As Table 6 shows, theoretical conductivity also varies depending on the chemistry of the solute itself. The theoretical conductivity values for 5 wt% solutions are shown in Table 6, and the deviation of the measured value from the theoretical value is given as a fraction in the term α, with most α values falling between 0.3 and 0.8. The notable exceptions are NaHSO4, which showed a considerably higher measured conductivity than the theoretical value, and NH3, which was significantly lower. The specific gravity values shown in Table 5 hardly vary at all during the electrolysis of one 2 Ah cell in 2 l of solution. Even those which apparently do are possibly still within the bounds of error given the coarseness of the tool used to measure this. The main reason it has been included is that it is a very practical way of getting a quick solution measurement for any upscaled discharging process. For a given solute, specific gravity is a decent proxy of the concentration, and hence a useful measure of how much the electrolyte has been consumed or contaminated through the electrolysis. For the Oakten Cond 6+ device used in this study the cell constant, k, equals 1 cm². However, the path for the ionic transport between the 18,650 terminals is not limited in the same way, meaning that k could well be larger than 1, reducing the overall resistance. Quite large effective solution resistances may therefore be obtained: for k = 1 and length = 6.5 cm these would vary from 87 Ω at 75 mS·cm−1 to 325 Ω at 20 mS·cm−1. As mentioned before, k is likely to be significantly greater than 1, but it is clear that the solution conductivity is a limiting factor to achieving faster discharge rates, and the likelihood of a dangerous short-circuit causing thermal runaway is negligible without a significant reduction in the final solution resistance.
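As a rough sketch of what these effective resistances imply, the following back-of-envelope estimate converts the quoted 87–325 Ω range into an indicative discharge current and time. The assumptions (k = 1 cm² effective cross-section, a 6.5 cm path, a crude driving voltage of the cell voltage minus 1.23 V, and no electrode overpotentials) are illustrative and not measured values; a larger effective k raises the current, while overpotentials lower it, so this is an order-of-magnitude guide only.

```python
def solution_resistance_ohm(conductivity_ms_per_cm, path_cm=6.5, area_cm2=1.0):
    """Effective resistance R = L / (sigma * A) for an assumed simple path."""
    sigma = conductivity_ms_per_cm / 1000.0        # mS/cm -> S/cm
    return path_cm / (sigma * area_cm2)

for cond in (75.0, 20.0):                          # mS/cm, the range quoted above
    r = solution_resistance_ohm(cond)
    # Crude current estimate: (cell voltage - minimum water-splitting potential) / R.
    i_ma = (4.2 - 1.23) / r * 1000.0
    hours = 2000.0 / i_ma                          # time to pass 2 Ah of charge
    print(f"{cond:>4.0f} mS/cm -> R ≈ {r:5.0f} Ω, I ≈ {i_ma:4.0f} mA, ~{hours:.0f} h for 2 Ah")
```

The tens-of-milliamp currents and multi-day timescales this yields are consistent with the slow discharge and negligible short-circuit risk noted above.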
The ionic transport path lengths will be shorter for some pouch and prismatic cells, but could be longer for others. The path length could be shortened for cylindrical cells by deliberately damaging the plastic coating. Multiple cells could be stacked in the same bath to reduce the path lengths between terminals, but if this was not done in a controlled manner then the rate of discharge of the cells in the bath could vary significantly. There are not many studies that have clearly recorded discharge rates for solution discharge. For NaCl, probably the most detailed study is by Li et al., where 18,650s were discharged in different concentrations of sodium chloride solution. The initial voltage is not stated, but given that the maximum 'discharge efficiency' is ≈75%, which must be around 1.23 V, then the likely initial voltage is 1.23 ÷ 0.25 = 4.92 V. This is obviously unrealistically high, but suggests that the cells were originally fully charged, although given that the discharge went on for 24 h, and the results were measured manually with a voltmeter, our experience would suggest that any voltage measurements of 18,650s discharging in NaCl solutions of 5 wt% or higher for longer than a couple of hours are likely to be quite unreliable due to corrosion. The 18,650s were also of unspecified capacity, so a direct comparison of discharge rate is not really possible. Indeed, Li et al. used a much smaller volume of solution, and so the corrosion might increase the conductivity of the solution, perhaps increasing discharge and associated corrosion rates. Ojanen et al. carried out most of their cell discharge experiments in a different manner, where the cell was not in the solution but soldered platinum cables were used. For sodium sulphate, which could be compared to the results in this study, more corrosive electrolytic behaviour was observed despite good Pt catalysis of water electrolysis, reducing the half-cell overpotentials. However, there is some doubt about the sodium sulphate chemical formula in their study. Ojanen et al. did not specify the distance between their Pt wire electrodes, and indeed their photos suggest an uncontrolled distance. They also used a much smaller capacity battery, and so their discharge rates of minimum 10 h for 5 wt% solutions compare unfavourably with the times presented in this study, where the discharge time was similar, but for around three times the capacity. A final study to compare with is Lu et al.
, who looked at discharge in NaCl solutions with times of around 30 min for 5 wt% solutions, but the starting state-of-charge was declared to be 60%, or ≈3.85 V. The capacity of these cells is not clear, nor is the method of determining the voltage, making any comparison difficult. With so many missing parameters, direct comparisons are impossible, but rapid discharging of large capacities cannot be expected without additional engineering, or significant increases in solution conductivity. Corrosion is probably the main consideration during discussions about using aqueous solutions for cell discharge. In some cases rapid corrosion is desired in order to destroy the cell, but in most cases slow corrosion is desired to allow for maximum discharge, allowing safe transportation of undamaged cells, perhaps for reuse, but normally for disposal and materials reclamation. As mentioned in the results section, the alkaline solutions above a pH of 12 appeared to penetrate the can without visible terminal corrosion. Curiously, Table 5 shows that the NaOH pH drops to well below 12 by 24 h of cell discharge, and yet K3PO4, which also starts with a pH above 12 and appears to penetrate the can, exhibits no drop in pH. This may reflect the length of time for which the electrolyte solvent had been leaking out of the can, as the cell appears to be compromised after only 2 h in 5 wt% NaOH solution, but does not cause any irregularities in 5 wt% K3PO4 solution until after 10 h. Another explanation could be that the phosphate anion competes with hydroxyl oxidation at the positive terminal, meaning that OH– ions are not consumed during electrolysis as they would necessarily be for NaOH solutions. The halides and sodium thiosulphate exhibit significant destruction of the positive terminal, and therefore appear to be suitable for cell destruction like the alkaline solutions, although the residue will be much dirtier than one that, presumably, does not corrode the steel but corrodes through the much smaller rubber gasket. Nevertheless, in either case electrolyte solution will leak into the aqueous solution to create a contaminated liquid waste, but a much better contained waste than that produced by the way in which damaged cells are standardly stabilised at the University of Warwick: short-circuiting in a protected room, usually by some form of penetration. Even if a short-circuited cell does not burst into flames, electrolyte and gas will escape into the room in a relatively uncontrolled manner. As mentioned before, the most acidic solution is sodium bisulphate, and it shows considerable damage to the steel positive terminal, in a cleaner but considerably slower manner than the halides and Na2S2O3. Mildly acidic solutions, like the monobasic phosphates, demonstrate very clean discharge at the positive electrode, although this perhaps suggests that some very slow acidic corrosion is taking place. In Fig. 6, these mildly acidic solutions cause less visible corrosion to the terminals than mildly alkaline solutions. Also, for some cells the rusting was less visible immediately after being removed from the solution compared with after they had dried. EIS showed that most cells with no weight loss exhibited inductance and conductance behaviour along the lines of what would be expected from resistively-discharged cells. The only exception to this was ammonia, which exhibited a Nyquist curve more along the lines of <3.1 V when the final measured voltage was 3.63 V.
Whilst exceptionally odd, this only adds to the confusing pattern of discharge that NH3 solution exhibited, suggesting that more in-depth research into NH3 electrolysis could be considerably more complex than for other aqueous solutes. The mild effects of corrosion are not really necessary to quantify for cell disposal, as the main finding that corrosion rates are considerably slower than discharge rates for most solutes will satisfy this requirement. However, if there is any intention to re-use the cell then the choice of solutes will have to be examined more closely to ensure that any mild corrosion will not have longer-term effects on cell performance and safety. Previous literature has also observed corrosion at steel terminals with NaCl solutions, which is why Ojanen et al. used platinum wires to remove electrode corrosion, although Lu et al. suggested that low concentration solutions could reduce the corrosion whilst still discharging the cells. Nevertheless, the problems associated with comparing previous literature results due to a lack of recorded details, outlined in the previous subsection on discharge rates, still apply for corrosion rates. When considering electrolysis of water, the generation of hydrogen is particularly dangerous, especially because the mutual generation of oxygen means that creating an inert environment is difficult. Nevertheless, the hydrogen will not spontaneously combust unless there is at least 4 vol% of H2 in the gaseous mix. 4% represents the lower limit for upward propagation of a flame through the mixture; for horizontal propagation this limit rises to 6%, and for downwards propagation it is as high as 9%. This is nearly independent of the oxygen concentration. Kumar showed that upward propagation of the flame through a gaseous hydrogen mixture shows weak or no dependence on diluent type or concentration, whereas downwards propagation did show relatively significant variation depending on gas type. Given the need to be conservative with health and safety, a strict upper limit of 4% H2 must be observed to ensure safety, which will require a good ventilation system. The generation of alternative gases to hydrogen or oxygen is interesting, but some gas analysis must be carried out before speculating about specific hazards associated with any of them. Although it may be thought that the use of ammonium solutes might reduce hydrogen generation at the negative terminal from water electrolysis, they were still seen to produce gas at both terminals, and although ammonia could in theory be produced at the negative terminal, it is still a gas. That said, ammonia is less flammable and more soluble than hydrogen. Nitrogen gas could also then be generated from electrolysis of ammonia at the negative terminal.
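To put the 4 vol% limit in context, the following back-of-envelope Faraday's-law estimate gives the hydrogen volume produced by one cell. The 30 mA average current and 24 h duration are assumed for illustration (they are not measured values from this study), as is the assumption that all cathodic current evolves H2 with ideal-gas behaviour at 25 °C.

```python
# Back-of-envelope H2 volume from Faraday's law: 2 H2O + 2 e- -> H2 + 2 OH-.
F = 96485.0                      # C/mol, Faraday constant
V_MOLAR_25C = 24.45              # L/mol, ideal gas at 25 degC and 1 atm

current_a = 0.030                # assumed average discharge current, A
duration_s = 24 * 3600.0         # one day of discharge

mol_h2 = current_a * duration_s / (2 * F)
litres_h2 = mol_h2 * V_MOLAR_25C
air_litres_for_4pct = litres_h2 / 0.04   # dilution air needed to stay under 4 vol%

print(f"H2 per cell per day ≈ {litres_h2:.2f} L")
print(f"air needed to keep H2 below 4 vol% ≈ {air_litres_for_4pct:.0f} L per cell per day")
```

On these assumptions a single cell generates well under a litre of hydrogen per day, so modest ventilation easily keeps single-cell baths below the 4% limit, although the requirement scales directly with the number of cells discharged simultaneously.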
This study is a systematic academic analysis of an applied topic, but with it being such an applied topic there is a desire to make some applied recommendations and generalisations. A single type of cylindrical cell has been analysed in this study, and the limiting factors have been the positive terminal, whose geometry will vary to a certain extent between models, and the gasket, which could be made of different materials, but needs to be insoluble and non-reactive to the electrolyte solution. The level of variation between models is unlikely to have a significant effect on the discharge properties in solutions as demonstrated in this article. The geometry of cylindrical Li-ion cells is relatively well standardized to 18,650s, although larger 26,650 and 21,700 geometries, amongst others, are possibly reducing the level of cylindrical standardisation. These should not have a big effect on discharge characteristics either, except prolonging the discharge due to greater capacity. Intact polymer wrapping appears important to prevent a shorter path between the terminals, and an additional step in the discharge process could be shortening the path by cutting into the polymer wrapping. Other geometries will vary according to the packaging materials. Pouch cells with polymer aluminium laminate packaging are likely to be relatively inert to most aqueous solutions, unless considerably acidic or alkaline, but this could cause catastrophic penetration if the polymer layer is removed too rapidly. Pouch cell terminals are normally Ni-plated, and therefore should generally show similar patterns of corrosion as observed in the 18,650 cells in this study, but they could be made of different metals, particularly copper or aluminium, which may corrode at different rates. The terminals are also possibly closer to each other than for cylindrical cells, so some degree of care will be required to ensure safe discharge of pouch cells in aqueous solutions. Prismatic cells vary quite a lot, but most are steel-cased, meaning that the same sort of corrosion may be expected to be observed, but there will also be polymer wrapping and different terminal path lengths to consider. Due to the variability in prismatic geometries, this is probably the hardest format for which to draw general conclusions, but for any cell type, immersion in a relatively low-concentration inert solute would seem advisable. A solution of disodium phosphate or sodium bicarbonate would seem a safe place to start, particularly since they are not highly soluble. With respect to solute choice, this will depend on the requirement of the discharger, but for non-corrosive discharge it appears that there are a wide range of options with weak anions, although use of sodium nitrite appeared particularly attractive for fast and low-corrosion discharge. Perhaps a mixture of solutes may be desired for optimal performance in different scenarios, to optimise the discharge characteristics against the price of the solution, and also perhaps to minimise environmental impacts as far as possible. For destroying damaged cells, a corrosive solution would be desirable. NaCl is an obvious choice given its abundance, but neater options may include those that target the rubber gasket alone, such as alkaline agents like NaOH or K3PO4, although these could result in a less reliable passivation of the cell interior than a solution that could attack the metallic casing, like a stronger acid. However, ensuring that the solution will remove HF (the most dangerous product of the electrolyte-water reaction) might indicate that an alkaline solution would be preferable, but alternative HF-scavenging agents could also be used. As is often the case, a combination of solutes may be the optimal solution when destroying a cell with aqueous solutions. This study has presented evidence on the effectiveness of aqueous electrolyte solutions for discharging a single type of lithium-ion battery cell in a systematic way. Nickel-plated steel cylindrical cells are a relatively common form, and the capacity of ∼2 Ah, although low compared with even some large cylindrical cells, is reasonable for estimating how long larger capacity cells may take to discharge in solutions of the same conductivity. The evidence shows that electrolytic discharge has the potential to be a flexible and safe way to
stabilise a wide range of different types of high-energy cells. The rate of discharge will vary depending on a number of factors, but primarily on the actual solution resistance, which itself depends on both the conductivity of the medium and the distance between the electrodes. The rate also appears to depend strongly on the chemistry, and presumably the competing electrolytic reactions, but unless very concentrated solutions are used it appears that the rate will always be relatively constrained, and the risk of short circuit low. For the range of solutes tested here, a huge range of different corrosive behaviours has been observed, from almost no corrosion at all to complete destruction of the positive terminal. Although the low-hazard, mildly alkaline bicarbonates and the dibasic hydrogen phosphates discharged well, only the cell discharged in ammonium bicarbonate did not show any dark residue on its positive terminal. Indeed, the non-hazardous, mildly acidic monobasic hydrogen phosphates exhibited uniform corrosion-free terminals. Amongst the other solutes, the rate of discharge of the sodium nitrite solution makes nitrites particularly interesting, despite their human toxicity, because nitrites are notably non-corrosive to steels. From a practical perspective, the choice of solute will depend on whether the purpose of stabilisation is to destroy the cell completely, or to simply discharge the cell to a safe level with minimal damage. If someone would like to destroy the cell safely using a solution process, then they will end up with a toxic liquid waste because the leaked electrolyte will react with the water to create HF. If they want to discharge a cell with minimal corrosion, then this is possible for the standard nickel-plated steel cells tested here, but a careful choice of a non-corrosive solute for the specific cells to be discharged is essential to achieve this. A second consideration may be how long the process will take, and certain solutes will not be soluble enough to reach desired conductivities. For refining solution choices, a number of factors will come into play, including cost, availability, and health, safety and environmental impacts. Although not showing the electrolysis product hazards, the hazard list in Table 2 shows the official hazard labels assigned to the solutes used here.
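The practical choice between destructive and non-destructive stabilisation described above can be summarised as a small lookup. The groupings below simply encode the qualitative behaviour reported in this study for 5 wt% solutions; the structure and helper function are illustrative only, not a recommendation engine, and any solute list would need to be extended and validated for other cell formats.

```python
# Toy lookup encoding the qualitative findings reported above (5 wt% solutions,
# nickel-plated steel 18650 cells). Illustrative sketch only.
SOLUTE_BEHAVIOUR = {
    "rapid positive-terminal corrosion": ["NaCl", "KCl", "KBr", "KI", "Na2S2O3"],
    "can penetration without visible terminal damage": ["NaOH", "K3PO4"],
    "fast discharge, low corrosion (toxic solute)": ["NaNO2"],
    "clean, low-hazard discharge": ["NH4H2PO4", "NH4HCO3"],  # monobasic phosphates, ammonium bicarbonate
}

def candidates(goal):
    """goal: 'destroy' for deliberate destruction, 'discharge' for minimal-damage discharge."""
    if goal == "destroy":
        keys = ["rapid positive-terminal corrosion",
                "can penetration without visible terminal damage"]
    else:
        keys = ["fast discharge, low corrosion (toxic solute)",
                "clean, low-hazard discharge"]
    return [solute for key in keys for solute in SOLUTE_BEHAVIOUR[key]]

print(candidates("discharge"))
```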
The development of mass-market electric vehicles (EVs) using lithium-ion batteries (LIBs) is helping to propel growth in LIB usage, but end-of-life strategies for LIBs are not well developed. An important aspect of waste LIB processing is the stabilisation of such high energy-density devices, and energy discharge is an obvious way to achieve this. Salt-water electrochemical discharge is often mentioned as the initial step in many LIB recycling studies, but the details of the process itself have not often been mentioned. This study presents systematic discharge characteristics of different saline and basic solutions using identical, fully charged LIB cells. A total of 26 different ionic solutes with sodium (Na+), potassium (K+), and ammonium (NH4+) cations have been tested here using a fixed weight percentage concentration. An evaluation of possible reactions has also been carried out here. The results show good discharge for many of the salts, without significant damaging visual corrosion. The halide salts (Cl−, Br−, and I−) show rapid corrosion of the positive terminal, as does sodium thiosulphate (Na2S2O3), and the solution penetrates the cell can. Mildly acidic solutions do not appear to cause significant damage to the cell can. The most alkaline solutions (NaOH and K3PO4) appear to penetrate the cell without any clear visual damage at the terminals. Depending on what is desired by the discharge (i.e. complete cell destruction and stabilisation or potential re-use or materials recovery), discharge of individual Li-ion cells using aqueous solutions holds clear promise for scaled-up and safe industrial processes.
240
TP53 mutations induced by BPDE in Xpa-WT and Xpa-Null human TP53 knock-in (Hupki) mouse embryo fibroblasts
The tumour suppressor p53 plays a crucial role in the DNA damage response, garnering the title ‘guardian of the genome’ .A paramount function of p53 is to prevent DNA synthesis and cell division, or to promote apoptosis, following DNA damage, which it performs primarily by regulating a large network of transcriptional targets .Disruption of the normal p53 response by TP53 mutation contributes to transformation by eliminating a key pathway of cellular growth control, enabling the survival and proliferation of stressed or damaged cells.Somatic mutations in TP53 occur in more than 50% of human cancers .The majority of TP53 mutations are missense and occur between codons 125 and 300, corresponding to the coding region for the DNA binding domain .Over 28,000 TP53 mutations from human tumours have been catalogued in the International Agency for Research on Cancer TP53 mutation database, providing a key resource for studying the patterns and frequencies of these mutations in cancer .Interestingly, exposure to some environmental carcinogens can be linked to characteristic signatures of mutations in TP53, which provide molecular clues to the aetiology of human tumours .A useful model for studying human TP53 mutagenesis is the partial human TP53 knock-in mouse, in which exons 4-9 of human TP53 replace the corresponding mouse exons .The Hupki mouse and Hupki mouse embryo fibroblasts have been used for both in vivo and in vitro studies of TP53 mutations induced by environmental carcinogens .TP53 mutagenesis can be studied in cell culture using the HUF immortalization assay.In this assay primary HUFs are first treated with a mutagen to induce mutations.The treated cultures, along with untreated control cultures, are then serially passaged under standard culture conditions, whereby the majority of HUFs will undergo p53-dependent senescent growth arrest, due to the sensitivity of mouse cells to atmospheric oxygen levels.HUFs that have accumulated mutagen-induced or spontaneous mutations that enable bypass of senescence continue to proliferate and ultimately become established into immortalized cell lines.DNA from immortalized HUF clones is then sequenced to identify TP53 mutations.Environmental carcinogens that have been examined using the HIMA include ultraviolet radiation , benzopyrene and aristolochic acid I ; in all cases the induced TP53 mutation pattern corresponded to the pattern found in human tumours from patients exposed to these mutagens.To protect the genome from mutation, several efficient mechanisms exist in cells to repair damage to DNA.One key repair system responsible for removing damage induced by certain environmental carcinogens is the nucleotide excision repair pathway.NER removes several types of structurally distinct DNA lesions including UV-induced photolesions, intrastrand crosslinks and chemically-induced bulky DNA adducts, such as those formed after exposure to polycyclic aromatic hydrocarbons .NER operates in two distinct subpathways: global genomic NER that recognizes lesions that cause local structural distortions in the genome, and transcription-coupled NER that responds to lesions that block the progression of RNA polymerase II on the transcribed strand of transcriptionally active genes.Following damage recognition, a common set of factors are recruited that ultimately incise the DNA 5′ and 3′ to the lesion to remove a 24-32 nucleotide fragment.The undamaged strand serves as a template for replicative DNA polymerases to fill the gap, which is finally sealed by ligation 
. Mouse models deficient in various NER components have been generated not only to study the role of NER in the repair of different types of damage and ascertain how this relates to cancer risk, but also to increase the sensitivity of carcinogenicity studies. For example, Xpa-knockout mice, or cells derived from them, are deficient in both GG-NER and TC-NER. Xpa-Null mice are highly sensitive to environmental carcinogens and exhibit accelerated and enhanced tumour formation after treatment with carcinogens such as UV and PAHs like BaP, compared with wild-type mice. Increased mutation frequencies of a lacZ reporter gene have been measured in tissues from Xpa-Null mice treated with the aforementioned carcinogens, and an increased rate of p53-mutated foci was detected on the skin of Xpa-Null Trp53 mice exposed to UVB. Further, in in vitro studies, cells with reduced or deficient repair capacity were also more sensitive to the lethal or mutagenic effects of DNA damage. Here we have generated an Xpa-deficient Hupki mouse strain with the aim of increasing TP53 mutation frequency in the HIMA. As Xpa-Null cells are completely deficient in NER, we hypothesized that carcinogen-induced DNA adducts would persist in the TP53 sequence of Xpa-Null HUFs, leading to an increased propensity for mismatched base pairing and mutation during replication of adducted DNA. In the present study, primary Xpa-WT and Xpa-Null HUFs were treated with benzo[a]pyrene-7,8-diol-9,10-epoxide, the activated metabolite of the human carcinogen BaP, which forms pre-mutagenic BPDE-DNA adducts (chiefly 10-(deoxyguanosin-N2-yl)-7,8,9-trihydroxy-7,8,9,10-tetrahydrobenzo[a]pyrene; BPDE-N2-dG) that can be removed by NER. BPDE-treated HUFs were subjected to the HIMA and TP53 mutations in immortalized clones were identified by direct dideoxy sequencing of exons 4–9. The induced TP53 mutation patterns and spectra were compared between the two Xpa genotypes and to mutations found in human tumours. BaP was purchased from Sigma-Aldrich. For in vitro treatments, BaP was dissolved in DMSO to a stock concentration of 1 mM and stored at −20 °C. For in vivo treatments, BaP was dissolved in corn oil at a concentration of 12.5 mg/mL. BPDE was synthesized at the Institute of Cancer Research using a previously published method. BPDE was dissolved in DMSO to a stock concentration of 2 mM under argon gas and stored at −80 °C in single-use aliquots. Hupki mice, homozygous for a knock-in TP53 allele harbouring the wild-type human TP53 DNA sequence (spanning exons 4–9) in the 129/Sv background, were kindly provided by Monica Hollstein. Transgenic Xpa+/− mice, heterozygous for the Xpa-knockout allele on a C57Bl/6 background, were obtained from the National Institute for Public Health and the Environment. In the Xpa-knockout allele, exon 3, intron 3 and exon 4 have been replaced by a neomycin resistance cassette with a PGK2 promoter. To generate Hupki mice carrying an Xpa-knockout allele, Hupki+/+ mice were first crossed with Xpa+/− mice. Progeny with the Hupki+/−;Xpa+/− genotype were then backcrossed to Hupki+/+ stock to generate Hupki+/+ animals that were Xpa+/+ or Xpa+/−. Hupki+/+;Xpa+/− and Hupki+/+;Xpa+/+ offspring were thereafter intercrossed to maintain the colony and produce Xpa+/+ and Xpa−/− mice and embryos for experiments. Animals were bred at the Institute of Cancer Research in Sutton, UK and kept under standard conditions with food and water ad libitum. All animal procedures were carried out under license in accordance with the law and following local ethical review. The Hupki and Xpa genotype was determined in
mouse pups or embryos by PCR prior to experiments. To extract DNA for genotyping, ear clips or cells were suspended in 400 μL of 50 mM NaOH and heated to 95 °C for 15 min. Next, 35 μL of 1 M Tris-HCl was added to each sample, followed by centrifugation for 20 min at 13,000 rpm. The supernatant was used for genotyping. Primers and PCR reaction conditions for the Hupki, mouse Trp53 or Xpa alleles are described in Table S1. Female Xpa-WT and Xpa-Null Hupki mice were treated with BaP as indicated below and sacrificed either 24 h or 5 days after the last administration, following treatment regimens published previously. Several organs were removed, snap frozen in liquid N2, and stored at −80 °C until analysis. In the first experiment, three groups of animals were treated orally with 125 mg/kg bw of BaP. Groups 1 and 2 received a single dose and Group 3 was dosed once daily for 5 days. Groups 1 and 3 were sacrificed 24 h after the last administration, and Group 2 was sacrificed 5 days after the last administration. In the second experiment, two groups of animals were treated with 12.5 mg/kg bw BaP. Group 4 was treated with a single dose and sacrificed 24 h later. Group 5 was dosed once daily for 5 days and sacrificed 24 h after the last administration. Matched control mice for each group received corn oil only. DNA adduct formation was assessed as described below. Genomic DNA was isolated from cells or tissue by a standard phenol/chloroform extraction method and stored at −20 °C. DNA adducts were measured in each DNA sample using the nuclease P1 enrichment version of the 32P-postlabelling method. Briefly, DNA samples were digested with micrococcal nuclease and calf spleen phosphodiesterase, enriched and labelled as reported. Solvent conditions for the resolution of 32P-labelled adducts on polyethyleneimine-cellulose thin-layer chromatography were: D1, 1.0 M sodium phosphate, pH 6; D3, 4.0 M lithium formate, 7.0 M urea, pH 3.5; D4, 0.8 M lithium chloride, 0.5 M Tris, 8.5 M urea, pH 8. After chromatography, TLC plates were scanned using a Packard Instant Imager. DNA adduct levels were calculated from the adduct counts per minute, the specific activity of ATP and the amount of DNA used. Results were expressed as DNA adducts per 10⁸ normal nucleotides (an illustrative version of this calculation is sketched at the end of this section). An external BPDE-modified DNA standard was used for identification of BaP-DNA adducts. Xpa-WT and Xpa-Null Hupki mouse embryonic fibroblasts were isolated from day 13.5 embryos of intercrosses of Hupki+/+;Xpa+/− mice according to a standard procedure. Briefly, neural and haematopoietic tissues were removed from each embryo by dissection and the remaining tissue was digested in 1 mL of 0.05% trypsin-EDTA at 37 °C for 30 min. The resulting cell suspension from each embryo was mixed with 9 mL of growth medium (supplemented with 10% fetal bovine serum and 100 U/mL penicillin and streptomycin), pelleted at 1000 rpm for 5 min, and then transferred into a 175-cm2 tissue-culture flask containing 35 mL of growth medium. Cells were cultured to 80–90% confluence at 37 °C/5% CO2/3% O2 before preparing frozen stocks. Fibroblast cultures were genotyped as described above. HUFs were cultured in growth medium at 37 °C/5% CO2 with either 20% or 3% O2, adjusted using an incubator fitted with an oxygen sensor and a nitrogen source. All manipulations conducted outside the incubator were performed at 20% O2. For passaging, cells were detached with 0.05% trypsin-EDTA for 2–3 min, suspended in growth medium and reseeded at the desired cell number or dilution. When required, cells were counted using an Improved
Neubauer Hemacytometer according to the manufacturer's instructions. Mammalian cell culture, including that of HUFs, typically takes place in incubators containing ambient air buffered with 5–10% CO2, which contains a much higher level of oxygen than the concentration to which tissues are exposed in vivo. The mean tissue level of oxygen is variable, but is typically about 3%, and mean oxygen tension in an embryo may be even less. Mouse cells are more sensitive than human cells to growth at 20% O2, whereby they accumulate more DNA damage and respond by entering senescence within two weeks of culture. While this is required for the selective growth of TP53-mutated clones in the HIMA, it also limits the number of primary cells available for experiments prior to initiating an assay. It was shown previously that culture-induced senescence of primary mouse embryo fibroblasts could be inhibited by growing cells in 3% oxygen. To compare the growth of HUFs at atmospheric or physiological oxygen levels, passage 0 primary HUFs were thawed and cultured in either 20% O2 or 3% O2. After 48 h, the cells were trypsinized and counted. Cells were reseeded into 25-cm2 flasks and again incubated at either 20% or 3% O2. Cells were counted every 3–4 days and reseeded at 2.5 × 10⁵ cells/25-cm2 flask for several weeks. Cultures in 3% O2 began to proliferate more rapidly after ∼3 weeks in culture and were subsequently reseeded at 0.8–1.2 × 10⁵ cells/25-cm2 flask. Cultures in 20% O2 began to populate with spontaneously immortalized, faster growing cells after 30–40 days and were subsequently reseeded at 0.8–1.2 × 10⁵ cells/25-cm2 flask. The fold-population increase (PI) was calculated each time cells were counted and was used to calculate the cumulative population increase: CumPI1 = PI1, CumPI2 = PI1 * PI2, CumPI3 = PI1 * PI2 * PI3, etc. The cumulative population increase was then used to calculate the cumulative population doubling: CumPD1 = log2(CumPI1), CumPD2 = log2(CumPI2), etc. (a short calculation sketch illustrating this bookkeeping is given at the end of this section). The crystal violet staining assay was used to determine relative cell survival following BaP or BPDE treatment, compared with control cells. Cells were seeded on 96-well plates at 2.5–5.0 × 10³/well and treated the following day with BaP or BPDE diluted in growth medium to a highest final concentration of 1 μM. BPDE treatment medium was replaced after 2 h with normal growth medium. Treatment was performed in 5 replicate wells per condition at 37 °C/5% CO2/3% O2. At 24 or 48 h following initiation of treatment, cells were rinsed with PBS and adherent cells were fixed and stained for 15 min with 0.1% crystal violet in 10% ethanol. Cells were gently washed with PBS to remove excess crystal violet and allowed to dry. For quantification, the dye was solubilized in 50% ethanol and absorbance at 595 nm was determined using a plate reader. Data are presented as the amount of absorbance in wells of treated cells relative to that of DMSO-treated cells and are representative of at least three independent experiments. The linearity of this method was confirmed for subconfluent HUF cultures. The day prior to treatment, primary HUFs were seeded so as to be sub-confluent at the time of harvest. For BaP treatment, 1.5 × 10⁶ or 1.0 × 10⁶ cells were seeded into 75-cm2 flasks for treatments of 24 or 48 h, respectively. For BPDE treatment, cells were seeded at 2.0–2.5 × 10⁶ cells into 75-cm2 flasks. Duplicate flasks were treated for each condition. BaP and BPDE were diluted in growth medium to a highest final concentration of 1 μM. Cells were incubated with BaP at 37 °C/5% CO2, in either 20% or 3% O2,
or with BPDE at 37 °C/5% CO2/3% O2.Cells grown in medium containing 0.1% DMSO served as control.In the BPDE-DNA adduct removal experiment, treatment medium was removed after 2 h and replaced with fresh growth medium.Cells were harvested following the indicated incubation time and stored as pellets at −20 °C until analysis.DNA adduct formation was assessed as described above.Immortalization of primary Hupki mouse embryo fibroblasts treated with BPDE was performed twice, according to previously published protocols .Frozen Xpa-WT and Xpa-Null primary HUFs were thawed and seeded into 175-cm2 flasks at 37 °C/5% CO2/3% O2 for expansion.After 3 days, cells were trypsinized, counted and seeded at 2.0 × 105 cells/well into 6-well Corning CellBind® plates.Cells were treated the following day with 0.5 μM BPDE or 0.1% DMSO for 2 h.Following treatment, as the cells approached confluence, the HUFs were subcultured on 6-well Corning CellBind® plates at a dilution of 1:2–1:4.After a further 4 days, all cultures were transferred to 20% O2 to select for senescence bypass.Cultures were passaged once or twice more at 1:2–1:4 before the cultures slowed significantly in their rate of proliferation, and began to show signs of senescence.During senescent crisis, cultures were not passaged again until regions of dividing cells or clones had emerged, and were not diluted more than 1:2 until cells were able to repopulate a culture dish in less than 5 days after splitting.Cultures that did not contain dividing cells were passaged 1:1 every 2 weeks until clones developed.When immortal cells emerged from the senescent cultures and expanded into clones, serial passaging was resumed at dilutions of at least 1:2–1:4 for several passages, followed by further passaging at dilutions up to 1:20.Once a culture achieved a doubling rate of ≤48 h and appeared homogeneous, it was progressively expanded from a 6-well plate to larger flasks, frozen stocks were prepared and a portion of cells was pelleted for DNA extraction."DNA was isolated from cell pellets using the Gentra Puregene Cell Kit B, according to the manufacturer's instructions.Human TP53 sequences were amplified from each sample using the human TP53-specific primers and cycling conditions described in Table S2.Amplification products were assessed by electrophoresis on 2% agarose gels in TBE buffer.Band size and intensity were monitored by loading 4 μL of Gel Pilot 100 bp Plus marker onto each gel.To remove primers and deoxynucleoside triphosphates prior to sequencing, PCR reactions were digested with 2 U exonuclease I and 10 U shrimp alkaline phosphatase for 20 min at 37 °C followed by an inactivation step at 80 °C for 15 min.Samples were submitted to Beckman Coulter Genomics for Sanger dideoxy fluorescent sequencing using the sequencing primers indicated in Table S2.Chromas software was used to export the FASTA sequence from the chromatograms which were also visually inspected.FASTA sequences were analyzed by alignment against a human TP53 reference sequence, NC_000017.9 from Genbank, using the Basic Local Alignment Search Tool from the National Center for Biotechnology Information.Variations were assessed using the mutation validation tool available at the IARC TP53 mutation database, and could be classified as either homo-/hemi-zygous or heterozygous.Mutations were confirmed by sequencing DNA from an independent sample of cells from the same clone.All statistical analyses were carried out using SAS v. 
9.3 for Windows XP. The effects of Xpa status and chemical treatment on adduct formation were examined using ordinary least-squares 2-factor Analysis of Variance followed by Bonferroni's post-hoc contrasts. Homogeneity of variance across the treatment groups was examined using the Bartlett test. Alternatively, pair-wise comparisons employed Student's t-test. The effects of Xpa status and chemical treatment on the frequency of TP53 mutant clones or TP53 mutation frequency were analyzed using 2 × 2 × 2 contingency table analysis. The Chi-square test was used to test the null hypothesis that row and column variables are not significantly associated. Odds Ratio values were employed to assess the relative risk of a given outcome for paired levels of the chemical treatment or Xpa genotype. The exact statement in Proc Freq provided exact tests and confidence limits for the Pearson Chi-square and Odds Ratio values (an equivalent open-source calculation is sketched at the end of this section). Since a small proportion of TP53 mutant clones contained more than one mutation, the TP53 mutation response was treated as an ordinally-scaled dependent variable. The effects of Xpa genotype and chemical treatment on mutation response were determined using ordinal logistic regression in SAS Proc Catmod. Pair-wise statistical comparisons of mutation patterns employed a variation on a previously published algorithm, as later modified. Briefly, statistical comparisons of mutation patterns for two conditions were assessed using Fisher's exact test, with P values estimated by Monte Carlo simulation with 50,000 iterations. Hupki mice deficient in NER were generated by crossing the Hupki strain with transgenic mice harbouring an Xpa-knockout allele. The Hupki+/+;Xpa−/− offspring were healthy and did not show any obvious phenotypic differences from Hupki+/+;Xpa+/+ mice within the timeframe of these studies. Likewise, Xpa-WT and Xpa-Null HUFs were morphologically similar. DNA adduct formation after treatment with BaP was initially assessed in vivo. One day following a single BaP treatment at 125 mg/kg bw, DNA adduct levels were significantly higher in three of six tissues examined from Xpa-Null mice compared with their WT littermates, ranging from 1.4- to 3.7-fold. Unexpectedly, all Xpa-Null mice died within 2–3 days of BaP treatment. In the Xpa-WT mice, BaP-DNA adduct levels following a single treatment persisted 5 days later in all tissues except the small intestine, where there was a 2.5-fold decrease. Further, DNA adduct levels greatly increased in Xpa-WT animals that received BaP daily for 5 days. Due to the acute toxicity induced by BaP at 125 mg/kg bw in Xpa-Null mice, animals were treated with a 10-fold lower dose in a subsequent experiment. A single low dose of BaP resulted in detectable DNA adducts in all tissues examined, which exhibited a trend towards being higher in Xpa-Null mice compared to Xpa-WT mice, in line with the results obtained after a single administration of 125 mg/kg bw. Xpa-Null mice were able to tolerate 5 daily treatments with the lower dose of BaP. Interestingly, after 5 daily treatments, DNA adduct levels were about the same in Xpa-WT and Xpa-Null animals. Taken together, these experiments indicate that BaP-DNA adduct removal is dependent on Xpa/NER within 1 day of treatment, although NER fails to remove these adducts following 5 days of BaP exposure even in Xpa-WT mice. Further, the inability of Xpa-Null mice to repair BaP-DNA adducts can result in lethal toxicity if the DNA damage exceeds a certain threshold. Due to the previously reported inhibition of
culture-induced senescence of MEFs at 3% O2, we sought to determine whether the growth of primary HUFs could also be enhanced and/or extended in 3% oxygen.Growth curves were generated over 6–9 weeks of culture in order to establish the growth characteristics of both Xpa-WT and Xpa-Null HUFs in 20% and 3% oxygen.In 20% O2 the primary HUF cultures rapidly proliferated for the first several days in culture.After 5 days the original populations had increased approximately 40-fold, and after 11 days the populations had increased about 200-fold.The proliferation of HUFs in 20% O2 markedly decreased after 11–15 days, as cells began to senesce.The cultures resumed proliferation after 35–45 days as immortal cells emerged.The HUF cultures grew rapidly in 3% O2 for the first 11 days, at a slightly increased rate compared with cells grown in 20% O2, doubling every 24–30 h.After 5 days, the original populations had increased by 70-fold, and after 11 days they had increased by 2100-fold.After this point the cultures temporarily proliferated at a reduced rate and the cells appeared morphologically heterogeneous.By 25 days in culture in 3% O2, cultures were again rapidly proliferating and were homogeneous in appearance.Another set of HUF cultures was grown in 3% O2 for 1 week and then transferred to 20% O2, in order to determine whether cultures grown temporarily at 3% O2 would still be capable of senescence and immortalization at 20% O2.Following transfer to 20% O2, the cultures underwent 2 population doublings in the first 3 days of culture, but then slowed and began to senesce by the next passage.Similarly to cells grown continuously at 20% O2, immortalized cells emerged in the cultures after 35–40 days.The growth curves generated in 20% and 3% O2 indicated a clear growth advantage for primary HUFs grown in 3% O2, at least prior to 2 weeks in culture.However, the impact of 3% versus 20% O2 on the metabolic activation of carcinogens has not been previously examined.Primary Xpa-WT HUFs grown in 20% and 3% O2 were treated with 1 μM BaP for 24 or 48 hr to assess DNA adduct formation.Interestingly, DNA adduct levels were markedly higher in HUFs treated in 3% O2 than in cells treated in 20% O2.After 24 h treatment, a 4-fold higher level of DNA adducts was detected in HUFs treated in 3% O2 and after 48 h treatment DNA adduct levels were 2-fold higher in 3% O2 than in cells treated in 20% O2.Therefore, growing HUFs temporarily in 3% O2 not only provides a substantial increase in cells available for experiments, but may enhance the formation of DNA-reactive metabolites following treatment with environmental carcinogens.From these observations, all subsequent experiments were performed in 3% O2.Previous studies using MEFs from Xpa-knockout mice showed that Xpa-deficient cells are highly sensitive to DNA damage that is normally repaired by NER, including that induced by BaP .Here, we have compared Xpa-Null HUFs with Xpa-WT HUFs for their sensitivity to BaP and its reactive intermediate BPDE.Xpa-Null HUFs were indeed more sensitive to treatment with both compounds, although the difference was more pronounced after 48 h.Following treatment with 1 μM BaP, 63% of Xpa-Null cells had survived after 24 h, but by 48 h BaP-treated Xpa-Null cells were 23% of control.Upon 1 μM BPDE treatment, 41% of Xpa-Null cells had survived after 24 h, but this had decreased to 6% at 48 h.Interestingly, surviving Xpa-Null cells treated with ≥0.5 μM BaP did not resume proliferation, whereas those treated with ≤0.25 μM BaP resumed 
proliferation within 1–2 days of treatment. Xpa-Null HUFs were shown to be highly sensitive to treatment with BaP and BPDE. Next, the level of DNA adducts induced by these compounds was assessed in Xpa-Null and Xpa-WT HUFs. Cells were treated with 0.05–1.0 μM BaP for 24 h or 0.125–1.0 μM BPDE for 2 h. A concentration-dependent increase in DNA adduct formation was found after treatment with both compounds. Interestingly, the Xpa-Null HUFs, despite being deficient in NER, accumulated similar or slightly lower levels of BaP-induced DNA adducts than Xpa-WT HUFs, reaching up to 370 ± 111 adducts per 10⁸ nt at 1 μM BaP versus 513 ± 34 adducts per 10⁸ nt in Xpa-WT HUFs. On the other hand, DNA adduct formation following BPDE treatment was slightly higher in Xpa-Null HUFs than in Xpa-WT HUFs, although the difference was significant only at the highest concentration of BPDE, where DNA adduct levels reached 566 ± 88 adducts per 10⁸ nt in Xpa-Null HUFs versus 475 ± 40 adducts per 10⁸ nt in Xpa-WT HUFs. Additionally, we examined a time course of BPDE adduct formation and removal in Xpa-WT and Xpa-Null HUFs. Previous studies have shown that the half-life of BPDE in cells is ∼12 minutes and peak adduct formation appears to vary from 20 min to 2 h, perhaps depending on cell type and experimental design. Here, HUFs were treated with 0.25 μM BPDE for up to 2 h, and one set of cultures was further incubated in normal medium for 4 h after BPDE was removed. Cells were harvested to assess DNA adduct levels at 30 min, 2 h and 6 h. Longer incubation times were not included to avoid effects caused by proliferation. After 30 min incubation with BPDE, Xpa-Null and Xpa-WT HUFs accumulated the same level of DNA adducts. This initial DNA adduct level progressively declined in Xpa-WT HUFs, by 18% at 2 h and by 30% at 6 h. In Xpa-Null HUFs, however, DNA adduct levels peaked at 2 h, and were similar at 6 h to the levels detected at 0.5 h. These results demonstrate that Xpa-WT HUFs are able to repair BPDE-DNA adducts over time, while the repair capacity of the Xpa-Null HUFs is impaired. Primary Xpa-WT and Xpa-Null HUF cultures were exposed for 2 h to 0.5 μM BPDE and then serially passaged for 8–16 weeks in 20% O2, resulting in one immortalized cell line per culture. Untreated cells of each Xpa genotype were immortalized in parallel. Mutations in the human TP53 sequence of immortalized HUF lines were identified by PCR amplification of exons 4–9 and direct dideoxy sequencing. From untreated HUF cultures, only two spontaneously immortalized lines of each Xpa genotype were found to contain mutated TP53. Treatment with BPDE markedly increased the frequency of TP53 mutations over that observed in untreated cultures. Of the 102 immortalized cell lines derived from BPDE-exposed Xpa-WT HUFs, 16 clones harboured a total of 20 mutations, while 23 immortalized cell lines derived from BPDE-exposed Xpa-Null HUFs harboured a total of 29 mutations. Statistical data analyses initially examined the effect of BPDE treatment on the frequency of TP53 mutant clones, and confirmed a statistically significant effect for both Xpa-WT cells and Xpa-Null cells. Similarly, the analyses showed a significant effect of BPDE treatment on TP53 mutation frequency for Xpa-WT cells as well as Xpa-Null cells. Furthermore, these data suggest a trend for an increased frequency of TP53 mutagenesis in BPDE-exposed Xpa-Null HUFs compared with Xpa-WT HUFs that was confirmed by statistical analyses. More specifically, Odds Ratio values confirmed that Xpa-Null cells are more susceptible to
the effects of BPDE treatment as compared with Xpa-WT cells.However, the increase in the relative risk of TP53 mutation between Xpa-Null and Xpa-WT HUFs is not statistically significant due to the relatively small number of mutants obtained and the consequently low statistical power.Indeed, separate statistical analysis that examined the impact of Xpa status on TP53 mutation frequency for BPDE treated cells only failed to detect a significant effect.Most mutations induced by BPDE occurred at G:C base pairs, predominantly consisting of single base substitutions.The most frequent mutation type was a G:C > T:A transversion, the signature mutation of BaP/BPDE, followed by G:C > C:G transversions, and G:C > A:T transitions.Single or tandem deletions of guanines, leading to a frameshift, were also observed but were more frequent in Xpa-Null clones.Approximately half of the mutations at G:C base pairs occurred at CpG sites.Out of 33 CpG sites between exons 4–9 in TP53, 11 were mutated by BPDE, most commonly resulting in G:C > C:G or G:C > T:A transversions.Of the four mutations found in untreated control cultures, one was a G:C > C:G transversion and three were A:T > C:G transversions.The A:T > C:G mutation type did not occur in BPDE-treated Xpa-WT HUFs but was found in three BPDE-treated Xpa-Null clones.It has been shown previously that DNA damage induced by BPDE is repaired more rapidly if it occurs on the transcribed strand of TP53 compared with the non-transcribed strand .This is thought to explain the strand bias of G to T mutations in TP53 found in lung tumours of tobacco smokers, where mutations are preferentially found on the non-transcribed strand .In contrast, in TC-NER-deficient cells mutations are biased in favour of the transcribed strand .Indeed, here we found an increased number of BPDE-induced mutations on the transcribed strand in Xpa-Null HUFs relative to Xpa-WT HUFs.Statistical analysis to examine the influence of Xpa status on the transcribed and non-transcribed strand BPDE-induced TP53 mutation frequencies, respectively, showed a statistically significant effect for the former, but not the latter.More specifically, Xpa status had a significant effect on the frequency of BPDE-induced mutations on the transcribed strand.Moreover, the Odds Ratio confirmed a 6-fold increase in the average likelihood of BPDE-induced transcribed strand mutations for Xpa-Null cells compared to Xpa-WT cells.No such effect of Xpa status was observed for BPDE-induced TP53 mutations on the non-transcribed strand.All mutation types detected on the non-transcribed strand of Xpa-Null clones were also found on the transcribed strand, with the exception of A:T > T:A.The majority of BPDE-induced TP53 mutations were missense.Additionally, three nonsense, three silent, one splice and five frameshift mutations were induced, most of which occurred in Xpa-Null clones.All of the silent mutations occurred in clones that harboured a second mutation.Most of the missense mutations found in the immortalized HUF clones could be classified as ‘non-functional’, or defective in transactivation activity, as determined by a comprehensive yeast functional study .This indicates, as shown previously, that loss of transactivation activity is an important aspect of senescence bypass by p53 inactivation .However, eight missense mutations were classified as ‘partially functional’ or functional, whereby these p53 mutants retained some or all of their transactivation activity.Notably, all but one of the PF/F mutants occurred in clones 
that also contained a second mutation, suggesting that partial loss of p53 transactivation function is not sufficient for senescence bypass.We also examined how sequence context and the presence of methylated CpG sites influenced the pattern of G:C base pair mutations induced by BPDE.In Table S3 mutations of each type were sorted by the bases 5′ and 3′ to the mutated base.For G:C > A:T transitions, mutations occurred at CpG sites, at GG pairs, or at G bases with a 5′ or 3′ T.For G:C > C:G transversions, mutations occurred at CpG sites or at GG pairs.For G:C > T:A transversions, most mutations occurred at CpG sites, and the remaining mutations arose at GG pairs or in a 5′T-G-T3′ context.Similarly, deletions at G:C base pairs occurred either at CpG sites or at GG pairs.In the case of mutations occurring at GG pairs, the second G was 5′ or 3′ to the mutated base.A total of 46 unique BPDE-induced mutations were detected in the sequenced exons, occurring in 38 codons overall.Three codons were mutated in both Xpa-WT and Xpa-Null clones, but unique mutations were induced in each case.Mutations were found in codons for two key residues that make direct contact with the DNA, three that support the structure of the DNA binding surface and one that is important for coordinating Zn binding.The mutations identified in BPDE-exposed HUF clones were compared with TP53 mutations found in human cancer across all tumour types, as well as specifically in lung cancer from smokers and non-smokers, using the IARC TP53 mutation database, version R17.All but five of the 46 TP53 mutations found in BPDE-exposed HUFs have been detected in at least one human tumour.Mutations that were infrequently or not found in human tumours included silent mutations, frameshifts, and mutations that resulted in a partially functional mutant p53 protein.Of the six hotspots most frequently mutated across all cancer types, mutations were induced by BPDE at each, with the exception of R175.Further, BPDE also targeted two lung cancer-specific hotspots, codons V157 and R158.All of these hotspots, with the exception of codon 249, contain CpG sites.At codon 157, BPDE induced one G:C > T:A mutation at the 1st position and one G:C > A:T mutation at the 3rd position.At codon 158, BPDE induced a G:C base pair deletion at the 1st position and two G:C > T:A mutations at the 2nd position.Codons 157 and 158 are more frequently targeted in smokers’ lung cancer compared with cancer overall, with G:C > T:A transversions predominating.In cancer overall, G:C > T:A transversions are also the most common mutation type at codon 157, but are less frequent at codon 158, where G:C > A:T transitions at the second position are more common.No mutations at codons 157 or 158 have ever been detected in spontaneously immortalized HUFs.The most frequently mutated TP53 codons in cancer overall and in smokers’ lung cancer are 248 and 273.Likewise, these two codons were hotspots for BPDE-induced mutations in the current HIMA.In cancer overall and nonsmokers’ lung cancer, the most frequent mutation type at codon 248 is G:C > A:T, and indeed two of the mutations induced by BPDE were G:C > A:T transitions at the 2nd position.Additionally, one G:C > T:A mutation at the 2nd position was detected in the BPDE-treated HUFs.G:C > T:A transversions at the 2nd position in codon 248 are much more frequent in smokers’ lung cancer compared with all cancer.Notably, mutation at codon 248 has not been detected in untreated, spontaneously immortalized HUFs.With regards to codon 273, 
BPDE-induced mutations included one G:C > T:A transversion at the first position, one G:C > C:G transversion at each of the 1st and 2nd positions, and two G:C > T:A mutations at the 2nd position.The most common mutation type found in human cancer at codon 273 is G:C > A:T; G:C > T:A transversions at the 2nd position are much more frequent in smokers’ lung cancer."G:C > C:G mutations at codon 273 occur in 1-2% of cancer overall and 3-5% of smoker's lung cancer.BPDE-induced TP53 mutations were compared further with mutations detected in previous HIMAs, including HUFs treated with BaP, 3-NBA, AAI, UV and MNNG and untreated controls.Seven codons mutated by BPDE were also mutated in cells treated with BaP, and six identical mutations were induced by both compounds.Codons 157 and 224 were not mutated in HIMAs using other compounds.One mutation at codon 158 was induced by AAI, one mutation at codon 248 was induced by UV, and codon 273 was targeted once each by 3-NBA and AAI.We have generated an NER-deficient Hupki model by crossing the Hupki mouse with an Xpa-deficient mouse.We hypothesized that Xpa-deficiency would increase the sensitivity of the Hupki model to DNA damage normally repaired by NER and thereby increase the frequency of carcinogen-induced TP53 mutations in immortalized HUFs.Xpa-WT and Xpa-Null mice and HUFs were treated with the ubiquitous environmental carcinogen BaP, or its reactive metabolite BPDE, which form DNA adducts that have been shown to be repaired by the NER pathway.We found that Xpa-Null Hupki mice and HUFs were more sensitive than their Xpa-WT counterparts to the DNA damage induced by BaP or BPDE, exhibiting pronounced mortality at the highest doses tested.Further, we observed a bias for BPDE-induced mutations on the transcribed strand of TP53 in immortal clones derived from Xpa-Null HUFs, although TP53 mutation frequency overall was not significantly increased in Xpa-Null HUFs compared to Xpa-WT HUFs.Although BaP- and BPDE-induced DNA adduct levels were generally similar between Xpa-WT and Xpa-Null HUFs, Xpa-Null cells were less able to survive the DNA damage.This suggests that the sensitivity of Xpa-Null cells was not due to retention of more DNA adducts.The sensitivity of Xpa-Null HUFs to BaP and BPDE is more likely caused by the blockage of RNAPII by BPDE-N2-dG adducts in actively transcribed genes.The persistence of DNA lesions on the transcribed strand of active genes in TC-NER-deficient cells is a strong trigger for apoptosis induction .It has been observed that Xpa-Null cells, deficient in both GG- and TC-NER, undergo apoptosis after DNA damage induced by carcinogens such as UV or BaP, whereas Xpc-Null cells, deficient only in GG-NER, do not, although this may be cell-type specific .TC-NER-deficient cells are unable to repair RNAPII-blocking lesions; subsequent induction of apoptosis appears to occur during replication, possibly due to collision of DNA replication forks with stalled transcription complexes during S phase .Xpa-Null Hupki mice were also highly sensitive to treatment with BaP; while treatment was well tolerated by Xpa-WT Hupki mice, Xpa-Null mice died within 2–3 days of receiving the highest dose tested.This sensitivity to genotoxins has been shown previously for Xpa-Null mice with WT Trp53 after exposure to UV, 7,12-dimethylbenzanthracene, BaP and 2-amino-1-methyl-6-phenylimidazopyridine .As discussed above for HUFs, the sensitivity of Xpa-Null mice to these carcinogens is likely due to TC-NER deficiency and blockage of RNAPII by unrepaired DNA 
adducts.Xpc-Null mice, deficient only in GG-NER, do not share the same sensitivity .One day following a single treatment with BaP, several tissues analyzed from Xpa-Null Hupki mice had a higher level of BPDE-N2-dG adducts compared with Xpa-WT mice.When the animals were treated with 5 daily doses of BaP, however, similar DNA adduct levels were detected in Xpa-WT and Xpa-Null mice.This suggests that GG-NER is activated initially following BaP treatment in Xpa-WT mice, but is unable to deal with continuing damage.Interestingly, when BaP was previously tested in Xpa-Null and Xpa-WT mice with WT Tp53, 9 weeks of treatment were required before DNA adduct levels in Xpa-Null mice surpassed those of Xpa-WT mice ; in that experiment DNA adduct formation was not assessed following a single treatment.Our results and those of others suggest that GG-NER kinetics of BPDE-N2-dG adducts in NER-proficient mice is dose- and time-dependent.It is apparent that further investigations are required to explain these observations.In addition to increased sensitivity to BPDE-N2-dG adducts, Xpa-Null HUFs also exhibited enhanced BPDE-induced mutagenesis on the transcribed strand of TP53 compared with Xpa-WT HUFs following BPDE treatment.These data further suggest that Xpa-Null HUFs are unable to repair BPDE-N2-dG adducts on the transcribed strand; adducts that do not induce apoptosis may be converted to mutations.While the number of immortal Xpa-WT and Xpa-Null clones harbouring TP53 mutations on the non-transcribed strand was nearly the same, 5.5-fold more Xpa-Null clones contained mutations on the transcribed strand compared to Xpa-WT clones.Further, the number of additional mutations on the transcribed strand induced by BPDE in Xpa-Null HUFs was equal to the overall increase in TP53 mutations in Xpa-Null HUFs compared to Xpa-WT cells.This, and the accompanying statistical analyses, suggests that the increase in BPDE-induced TP53-mutagenesis in Xpa-Null HUFs compared to Xpa-WT cells can be primarily be accounted for by the inability of Xpa-Null HUFs to repair damage on the transcribed strand.It is unclear why Xpa-deficiency did not also increase TP53 mutagenesis on the non-transcribed strand.It is known that repair of BPDE-DNA adducts is slower on the non-transcribed strand compared to the transcribed strand in the TP53 gene of normal human fibroblasts, creating a bias of mutations on the non-transcribed strand in NER-proficient cells .Despite the relative inefficiency of GG-NER compared to TC-NER, BPDE-DNA adduct removal from the bulk of the genome has been shown, to varying extents, in multiple studies.The amount of removal within 8 h of BPDE exposure ranged between 5 and 60% in normal human fibroblasts , to 50% in V79-XEM2 cells , to 75% removal in A549 lung carcinoma cells .In the current study, we found that Xpa-WT HUFs removed 30% of BPDE-N2-dG adducts within 6 h of treatment.It is not known what percentage of BPDE-N2-dG adducts may have persisted in the HUF genomes beyond this time-point.Few studies have compared BaP/BPDE-induced mutagenesis in NER-proficient and NER-deficient cells.Xpa-Null mouse embryonic stem cells treated with BaP exhibited a higher rate of Hprt mutations than their WT counterparts, although the Xpa-Null cells also had a higher rate of spontaneous mutagenesis .Further, more Hprt mutations were induced by BPDE in an NER-defective Chinese hamster ovary cell line relative to a WT line .On the other hand, in vivo, similar mutation frequencies at a lacZ reporter gene were detected in the liver 
and lung of BaP-treated Xpa-Null and Xpa-WT mice; lacZ mutation frequencies did increase in the spleens of Xpa-Null mice, but only after 13 weeks of BaP treatment .Thus, the impact of NER-deficiency on mutagenesis resulting from BPDE-N2-dG adducts may be cell-type specific or dependent on the target gene of interest and whether or not the gene is subject to TC-NER.In agreement with previous studies, the majority of BPDE-induced TP53 mutations occurred at G:C base pairs in both Xpa-WT and Xpa-Null HUFs, with G:C > T:A transversions being the predominant mutation type .A high percentage of the mutations at G:C base pairs occurred at CpG sites; G:C > C:G and G:C > T:A transversions were more common at these sites than G:C > A:T transitions.Further, we found that BPDE induced mutations at several sites that are hotspots for mutation in cancer overall, or smokers’ lung cancer specifically.Codons 157, 158 and 273 were also mutated in prior HIMAs with BaP-treated HUFs .The pattern and spectrum of TP53 mutagenesis can be influenced by a number of factors.In previous studies DNA adduct formation by BPDE was enhanced at methylated CpG sites in TP53 hotspot codons 157, 158, 245, 248, and 273 on the non-transcribed strand ; the precise mechanism underlying this phenomenon is not yet understood.It has been proposed that the methyl group of 5-methylcytosine allows increased intercalation of BPDE at methylated CpG sites and that this increase in BPDE intercalative binding subsequently results in increased covalent interaction .Others have suggested that the methylation of cytosine enhances the nucleophilicity of the exocyclic amino group of the base paired guanine .All of the CpG sites in Hupki TP53 are methylated .Interestingly, codon 179, which is a mutation hotspot in smokers’ lung cancer but does not contain a CpG site or a G on the non-transcribed strand, was not mutated by BPDE in our study and was not targeted by BPDE in normal human bronchial epithelial cells .On the other hand, codon 267, which is infrequently mutated in lung cancer but does harbour a CpG site, was mutated by BPDE in two HUF clones and exhibited pronounced BPDE-DNA adduct formation in NHBE cells .Our data provide additional support for the idea that certain TP53 mutation hotspots act as selective BPDE binding sites.Additional factors such as sequence context, efficiency of lesion repair, and fidelity of translesion synthesis polymerases also play important roles in TP53 mutagenesis.We found that a common context for BPDE-induced single base substitutions or deletions at G:C base pairs was GG dinucleotide sequences; mutation hotspots for BPDE-N2-dG adducts have previously been found in such sequences .Sequence context likely influences adduct conformation which may result in different sequence-dependent removal rates of the lesion and also control mutagenic specificity.Furthermore, the TP53 mutations ultimately observed in human cancers are strongly influenced by functional selection for mutants that have a deleterious impact on the normal function of p53 or that acquire gain-of-function properties .For example, many mutation hotspots occur at codons for amino acids that are essential for DNA contact or structure of the DNA binding domain; mutations at these sites create a mutant protein that lacks the ability to transactivate the normal suite of p53 target genes.With the exception of codon 175, all of these hotspots were mutated by BPDE in our study.Further, most of the missense TP53 mutations detected in our study were 
classified as non-functional and, with one exception, the mutations that retained some functionality occurred only in clones that also harboured a non-functional mutation.Taken together, the pattern and spectrum of mutations generated in this study indicate that similar factors influence TP53 mutagenesis in the HUF immortalization assay and human cancer, further supporting the strength of this model for assessing the effects of carcinogens on this tumour suppressor gene.We also showed that the replicative capacity of primary HUFs could be extended by culturing the cells at 3% O2.After 11 days of culture, the population increase of HUFs grown at 3% O2 was 10-fold higher than that of HUFs grown at 20% O2.The enhanced growth permitted by temporary culture in 3% O2 provides a substantial increase in cells available for further experiments.To select for TP53-mutated cells, HUFs must eventually be transferred to 20% O2, where the ability of cells to bypass senescence serves as the selection pressure for mutants.Importantly, we found that untreated HUFs grown at 3% O2 for one week were still able to senesce when transferred to 20% O2, and immortal variants that bypassed senescence developed in a similar timeframe to cells maintained at 20% O2.Parrinello et al. found that MEFs cultured at 3% O2 for more than 15 population doublings lost their propensity to senesce at 20% O2, which they speculated may be due to a mutagenic event or adaptive response .It may therefore only be beneficial to culture HUFs for 1–2 weeks at 3% O2.In addition to a clear growth advantage, we found that HUFs accumulated a higher level of DNA adducts at 3% O2 relative to 20% O2 following treatment with BaP.We did not determine the mechanism for this in our study, but future work could examine the expression/activity of enzymes required for BaP activation at 3% O2 and 20% O2.Recent work by van Schooten et al., using the human lung carcinoma cell line A549 treated with BaP, has shown that the level of DNA adducts and the gene expression of CYP1A1 and CYP1B1 was increased under hypoxic conditions, while gene expression of UDP-glucuronosyltransferase detoxifying enzymes UGT1A6 and UGT2B7 decreased .The authors concluded that the balance of metabolism of BaP shifts towards activation instead of detoxification under low oxygen conditions.These results corroborate our findings that altered oxygen levels can influence the metabolic activation of compounds such as BaP.Although we were unable to detect a significant increase in BPDE-induced TP53 mutations overall in Xpa-Null HUFs compared to Xpa-WT HUFs, perhaps a divergence in mutation frequency would be evident at the genome-wide level.Less than 25% of HUF clones were immortalized by TP53 mutation, thus other genes are clearly involved in senescence bypass by HUFs and could also be targeted by BPDE .Recently, exome sequencing of DNA from HUFs treated with various mutagens was used to extract genome-wide mutational signatures of mutagen exposure .This study demonstrates that a single mutagen-treated immortal HUF clone harbours hundreds of single base substitutions at the exome level, likely consisting of both immortalization driver mutations and passenger mutations.Therefore, whole-genome sequence analysis of BPDE-treated Xpa-WT and Xpa-Null clones may allow differences in mutation frequency in the context of the genome to be detected that are not observed in an assay for a single gene.Furthermore, beyond simply increasing mutation frequencies, Xpa-Null HUFs may be useful in the 
future to enhance our understanding of the role of NER in shaping carcinogen-induced mutagenesis of both TP53 and the genome.
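For reference, the 32P-postlabelling quantification described in the Methods (adducts expressed per 10⁸ normal nucleotides) can be illustrated with a minimal calculation sketch. The function below is an illustration only: the input numbers are invented, and the conversion factor of roughly 3240 pmol of nucleotides per microgram of DNA is a commonly used assumption rather than a value taken from this study.

# Illustrative relative adduct labelling (RAL) calculation for 32P-postlabelling data (Python).

def adducts_per_1e8_nt(adduct_cpm, atp_specific_activity_cpm_per_pmol, dna_ug,
                       pmol_nt_per_ug_dna=3240.0):
    """Return the DNA adduct level expressed per 10^8 normal nucleotides."""
    pmol_normal_nt = dna_ug * pmol_nt_per_ug_dna
    ral = adduct_cpm / (atp_specific_activity_cpm_per_pmol * pmol_normal_nt)
    return ral * 1e8

# Invented example: 150 cpm in adduct spots, ATP specific activity 3000 cpm/pmol, 10 ug DNA
print(round(adducts_per_1e8_nt(150, 3000, 10.0), 1), "adducts per 10^8 nucleotides")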
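The cumulative population-increase and population-doubling bookkeeping used for the HUF growth curves (CumPI and CumPD in the Methods) can likewise be written as a short script; the fold-increase values below are invented for illustration.

# Illustrative cumulative population doubling calculation (Python).
import math

fold_increases = [4.0, 3.5, 2.8, 2.2]   # invented fold-population increases per passage

cum_pi = 1.0
for passage, pi in enumerate(fold_increases, start=1):
    cum_pi *= pi                    # CumPI_n = PI_1 * PI_2 * ... * PI_n
    cum_pd = math.log2(cum_pi)      # CumPD_n = log2(CumPI_n)
    print(f"passage {passage}: {cum_pi:.1f}-fold cumulative increase, "
          f"{cum_pd:.2f} cumulative population doublings")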
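Finally, the contingency-table comparisons described in the statistical methods could be approximated in an open-source environment. The sketch below uses scipy's fisher_exact and chi2_contingency as stand-ins for the SAS procedures; the treated counts follow the Xpa-WT figures reported above, the untreated totals are invented, and the exact confidence limits and ordinal logistic regression of the original analysis are not reproduced.

# Illustrative 2 x 2 contingency analysis of TP53-mutant clone frequency (Python).
from scipy.stats import chi2_contingency, fisher_exact

# Rows: BPDE-treated vs untreated cultures; columns: TP53-mutant vs non-mutant clones.
# Treated counts follow the reported Xpa-WT data (16 mutant clones of 102);
# the untreated totals are assumed for illustration only.
table = [[16, 86],
         [2, 48]]

odds_ratio, p_fisher = fisher_exact(table)
chi2, p_chi2, dof, _expected = chi2_contingency(table)

print(f"odds ratio = {odds_ratio:.2f}, Fisher exact P = {p_fisher:.3f}")
print(f"chi-square = {chi2:.2f} (df = {dof}), P = {p_chi2:.3f}")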
Somatic mutations in the tumour suppressor gene TP53 occur in more than 50% of human tumours; in some instances exposure to environmental carcinogens can be linked to characteristic mutational signatures. The Hupki (human TP53 knock-in) mouse embryo fibroblast (HUF) immortalization assay (HIMA) is a useful model for studying the impact of environmental carcinogens on TP53 mutagenesis. In an effort to increase the frequency of TP53-mutated clones achievable in the HIMA, we generated nucleotide excision repair (NER)-deficient HUFs by crossing the Hupki mouse with an Xpa-knockout (Xpa-Null) mouse. We hypothesized that carcinogen-induced DNA adducts would persist in the TP53 sequence of Xpa-Null HUFs, leading to an increased propensity for mismatched base pairing and mutation during replication of adducted DNA. We found that Xpa-Null Hupki mice, and HUFs derived from them, were more sensitive to the environmental carcinogen benzo[a]pyrene (BaP) than their wild-type (Xpa-WT) counterparts. Following treatment with the reactive metabolite of BaP, benzo[a]pyrene-7,8-diol-9,10-epoxide (BPDE), Xpa-WT and Xpa-Null HUF cultures were subjected to the HIMA. A significant increase in TP53 mutations on the transcribed strand was detected in Xpa-Null HUFs compared to Xpa-WT HUFs, but the TP53-mutant frequency overall was not significantly different between the two genotypes. BPDE induced mutations primarily at G:C base pairs, with approximately half occurring at CpG sites, and the predominant mutation type was G:C > T:A in both Xpa-WT and Xpa-Null cells. Further, several of the TP53 mutation hotspots identified in smokers' lung cancer were mutated by BPDE in HUFs (codons 157, 158, 245, 248, 249, 273). Therefore, the pattern and spectrum of BPDE-induced TP53 mutations in the HIMA are consistent with TP53 mutations detected in lung tumours of smokers. While Xpa-Null HUFs exhibited increased sensitivity to BPDE-induced damage on the transcribed strand, NER-deficiency did not enhance TP53 mutagenesis resulting from damage on the non-transcribed strand in this model.
241
Evidence for subsolidus quartz-coesite transformation in impact ejecta from the Australasian tektite strewn field
Quartz is one of the most common minerals in Earth's continental crust.Under shock metamorphism it displays a wide range of effects including mechanical twins, planar fractures, planar deformation features, diaplectic glass, and lechatelierite.The study of shock metamorphism of quartz, and high-pressure silica polymorphs, i.e. coesite and stishovite, is therefore relevant to defining the physical conditions attained during the majority of hypervelocity impacts of cometary or asteroidal bodies on Earth, as well as quartz-rich surfaces elsewhere in the Solar System.Coesite is rare at the Earth’s surface but can occur in exhumed deep-seated metamorphic rocks such as kimberlites.On the other hand, coesite is a fairly common product of impact cratering and indeed it is one of the most important and reliable indicators of shock events.Coesite was synthetized for the first time by Coes and later discovered in nature by Chao et al. in sheared Coconino sandstone deposits at the 1.2 km diameter Barringer Crater.This mineral has been the subject of numerous studies seeking to understand how silica polymorphs react under sudden and extreme P-T gradients.These studies include computational simulations, shock recovery experiments, and the analysis of impact rocks from craters ranging in size from the 45 m diameter Kamil crater in Egypt to the 24 km diameter Ries crater in Germany and the 300 km diameter Vredefort structure in South Africa.In endogenic geological processes, which typically involve equilibrium reactions and timeframes ranging from years to millions of years, coesite forms from quartz at pressures between ∼3 and ∼10 GPa.In impactites, coesite is preserved as a metastable phase in crystalline rocks that experienced peak shock pressures above ∼30–40 GPa, and in porous sedimentary rocks shocked at pressures as low as ∼10 GPa.Moreover, coesite associated with impact events shows a characteristic pervasive disorder or polysynthetic twinning, both developing along composition planes.The solid-state transition of quartz to both coesite and stishovite are reconstructive transformations.This means that covalent SiOSi bonds between silica tetrahedra must break before the new framework can reassemble.Coesite has a pseudo-hexagonal framework that preserves silicon in tetrahedral coordination, while stishovite has a rutile-like packing with silicon in six-fold octahedral coordination.Reconstructive transformations are slow and hence it is generally believed that such subsolidus transformations do not occur either in shock experiments or in natural impacts even if pulse durations up to seconds are expected - depending on projectile dimensions and impact velocity.Consequently, there is a general consensus that coesite within impactites originates by crystallisation from a dense amorphous phase during shock unloading, when the pressure release path passes through the coesite stability field.The precursor amorphous phase may be a silica shock melt or a highly densified diaplectic silica glass.Here we present evidence for direct solid-state quartz-to-coesite transformation in shocked coesite-bearing quartz ejecta from the Australasian tektite/microtektite strewn field, which is the largest and youngest on Earth.These findings contradict current models for coesite formation yet are consistent with recent results from the Kamil crater, the smallest coesite-bearing impact crater reported so far.The coesite-quartz intergrowths in shocked quartz arenite from the Kamil crater suggest a quartz-to-coesite 
transformation that takes place during localized shock-wave reverberation at the beginning of the pore collapse process, documenting the production of localized pressure-temperature-time gradients in porous targets, as predicted by numerical models in the literature.Tektites are relatively homogeneous silica-rich glass bodies formed by melting of terrestrial surface deposits during the impact of an extraterrestrial body."They are up to several tens of centimetres in size and are found scattered over large areas of the Earth's surface called strewn fields. "The Australasian tektite/microtektite strewn field covers ∼15% of the Earth's surface, with a minimum lateral extent of 14,000 km.Australasian tektites occur on land from southeast Asia over much of Australia and Tasmania.Microtektites have also been found in the surrounding ocean basins in deep-sea sediments, and in Victoria Land, Antarctica.Despite its size and young age, location of source crater for the Australasian strewn field is still debated.Many authors suggest that a >30 km diameter crater should be located somewhere in Indochina to explain abundance, petrographic and geochemical trends in microtektite distribution.However, a new hypothetical location of the crater in the arid area of Northwest China, most probably in the Badain Jaran or Tengger deserts on the Alxa Plateau, has been proposed on the basis of geochemical and isotopic data.Due to the lack of field evidence of a source crater, other authors have proposed that the strewn field was generated by a low-altitude airburst of an impacting comet.The nature of the impact target rocks that generated the Australasian strewn field is also an open question.Major and trace element analyses on tektites suggest that tektites might be the result of the mixing between at least two different rocks, such as quartz-rich sandstone and shale.Taylor and Kaye suggested a sedimentary target rock, showing strong similarities in the comparison of major and trace element abundances between tektites and terrestrial sandstones, such as graywacke-subgraywacke-arkose.Mineral inclusions found in layered tektites from Indochina with x-ray diffraction and energy dispersive X-ray analysis indicate a fine-grained sedimentary target.More recently, Glass and Koeberl and Mizera et al., studying the ejecta found in the Australasian microtektite layer and Australasian tektites respectively, proposed that the parent material of the Australasian tektites/microtektites was a fine-grained, quartz-rich sedimentary deposit, possibly loess.A surface or near-surface sedimentary deposit was also suggested by 10Be cosmogenic nuclide analysis.There is thus a general consensus on a porous sedimentary parent material.Unmelted and partly melted impact ejecta can be found together with classical glassy microtektites in the AAMT layer.These ejecta particles consist of rock fragments, which may contain coesite, rarely stishovite, and other high-pressure phases like TiO2-II and reidite and white, opaque grains consisting of a mixture of quartz, coesite, and stishovite.Impact ejecta were first recognised in the >125 µm size fraction in 7 out of 33 microtektite-bearing deep-sea sediment cores obtained within 2000 km from Indochina.The discovery of these high-pressure phases provided a strong support for the impact cratering origin of tektites/microtektites and to the hypothesis that the crater is located in the Indochina area.More recently, impact ejecta in the AAMT layers were also discovered in the Ocean Drilling Program 
1143A core, in the central part of the South China Sea, and in the Sonne-95-17957-2 and ODP-1144A cores, from the central and northern part of the South China Sea, respectively.The shocked ejecta particles associated with the AAMT layer studied in this work are from two deep-sea sediment cores both located within 2000 km of Indochina: ODP site 1144A and Sonne Core SO95-17957-2.The ejecta particles are from the >125 µm size fraction of a sediment sample from Core 37X, Section 6, 66-67 cm depth at ODP Site 1144; and from a sample from a depth of 806 cm in Core SO95-17957-2.For extraction procedures see Glass and Wu.They include 569 and 141 rock fragments and mineral grains from the ODP-1144A and SO95-17957-2 cores, respectively.All the particles were first characterized in terms of shape, size, color, transparency, and luster, using a ZEISS Stemi 2000 stereomicroscope equipped with an Axiocam Camera.Seventy particles ranging in size from 150 µm2 to 500 µm2 were selected for field emission gun scanning electron microscopy and Raman micro-spectroscopy.Twenty of these particles have a pure silica composition, four of which show evidence of shock metamorphism and were thus embedded in EpoFix resin, sectioned and polished for additional Raman analysis and FEG-SEM study.Five electron-transparent focused-ion beam lamellae were cut and extracted from one ∼250 µm2 size particle that has a high abundance of micro-to-nanometre scale shock features.Backscattered electron images were obtained at the Centro Interdipartimentale di Scienze e Ingegneria dei Materiali of the University of Pisa using a FEG-SEM FEI Quanta 450 operating at 10 mm working distance, 15 kV beam acceleration and 10 nA probe current.In order to identify and discriminate quartz and coesite, and to select the best sample areas for the extraction of FIB lamellae, a preliminary Raman survey was carried out using a Jobin-Yvon Horiba XploRA Plus equipped with an Olympus BX41 microscope, a grating with 1200 grooves/mm, and a Peltier-cooled charge-coupled device detector.The samples were analysed with a 532 nm solid-state laser using a 100× objective lens with a numerical aperture of 0.90.The output laser power at the sample was ∼6 mW.Wavelength calibration was performed using the first-order phonon band of a silicon wafer at ∼520 cm−1, with a wavenumber accuracy of 0.3 cm−1 and a spectral resolution of 1.5 cm−1.The calibration was further improved using a sample of quartz before and after each session.The system was operated in confocal mode, resulting in a spatial resolution of ∼361 nm.Spectra were collected through three acquisitions with single counting times up to 120 s.Electron-transparent lamellae were prepared for transmission electron microscopy at the Kelvin Nanocharacterisation Centre of the University of Glasgow using a dual beam FIB FEI 200TEM FIB, following the procedure described in Lee et al.TEM and electron diffraction studies were carried out at the Center for Nanotechnology Innovation@NEST of the Istituto Italiano di Tecnologia using a ZEISS Libra operating at 120 kV and equipped with a LaB6 source and a Bruker EDS detector XFlash6T-60.TEM images were recorded by a TRS 2k × 2k CCD camera.Scanning-transmission electron microscopy images were recorded by a high-angle annular dark-field detector.ED data were acquired by an ASI Timepix detector, able to record the arrival of single electrons and deliver patterns that are virtually background-free.Three-dimensional ED data sets were obtained by rotating the
sample along the tilt axis of the TEM goniometer using the procedure described by Mugnaioli and Gemmi.3D ED acquisitions were performed in angular steps of 1° and for tilt ranges up to 90°.Due to the small size of the quartz and coesite crystals and the similar contrast of these phases in STEM images, the crystal position was tracked after each tilt step in TEM imaging mode.Both single-pattern and 3D ED data were acquired in nano-beam electron diffraction mode after inserting a 10 µm C2 condenser aperture, in order to have a parallel beam of about 300 nm on the sample.Extremely mild illumination was used to avoid any alteration or amorphization of the sample.3D ED data were reconstructed and analysed using the ADT3D program and specially written Matlab routines.Phase/orientation maps were acquired using the precession-assisted crystal orientation mapping technique, which has been implemented on a Zeiss Libra TEM through a Nanomegas Digistar P1000 device.The maps were obtained by scanning an area of the sample with a nanometric beam probe, while recording the diffraction patterns in precession mode.The patterns were collected by filming the fluorescent screen of the TEM with a fast optical CCD during the scanning procedure.A cross-correlation routine, which compares these patterns with a database of patterns generated by taking into consideration the possible phases present in the sample, allows the phase determination and the indexing of each pattern.The collected maps cover areas of 4 μm2 with a spatial resolution of 20 nm.All crystalline impact ejecta with a pure silica composition are colourless, translucent to opaque white, sometimes with yellow and red stains.Particles that show evidence of a shock metamorphic overprint are typically 150–300 µm2 in size, as discussed below.They have a subangular to subrounded shape and are partially covered by a fine-grained micaceous matrix, which is similar to the micaceous fraction of the ‘rock fragments’ reported by Glass and Koeberl.These particles mostly consist of a mixture of coesite and quartz in variable proportions, the latter with multiple sets of PDFs.They are pristine, with no evidence of the secondary processes that typically affect shock metamorphic rocks, such as hydrothermal activity and post-shock thermal overprint, or even the deep-sea marine alteration that they could have suffered at their sampling sites.FEG-SEM imaging coupled with Raman spectra suggests that the ejecta range from almost pure PDF-bearing quartz with traces of coesite to coesite grains with PDF-bearing quartz in variable amounts.They are also associated with micrometre-sized anhedral Ti-oxides and Fe-S phases, as observed on their external surface and in section.FEG-SEM analyses also revealed PDFs, the first reported from the Australasian tektite strewn field, with at least three diffuse cross-cutting sets in all shocked silica particles.The particle selected for high-resolution investigation falls within the description given above.However, it shows an impressively high abundance of shock features compared to the other particles.The PDFs are so well developed throughout the sample that they are already obvious on the external surface.Within each set, PDFs occur as parallel, narrow and closely-spaced features.Raman micro-spectroscopy indicates that the portions of the particles that lack PDFs are dominated by coesite, whereas the PDF-containing parts show spectra diagnostic of both coesite and quartz – with different peak intensities – suggesting fine-grained coesite plus quartz
intergrowths.The presence of coesite can also be discerned from BSE image contrast, where coesite produces a slightly lighter grey back-scatter signal than quartz, and this characteristic has been highlighted in Fig. 3 by enhancing the contrast.Five FIB lamellae were extracted from interface areas between PDF-bearing quartz and coesite and analysed by transmission electron microscopy and electron diffraction.Each lamella consists of an assemblage of quartz and coesite, in variable ratios.The two phases can be easily distinguished by ED, and subordinately by their different contrast in TEM imaging.Quartz shows well-developed sets of PDFs.At the TEM scale, PDFs are normally slightly open.They may be empty or filled with a low-contrast scoriaceous amorphous material.Where dominant, quartz shows a uniform crystallographic orientation in individual FIB cuts, indicating that each lamella, and probably the whole particle, was a single quartz crystal of the parent rock.Coesite ranges in size from ∼500 nm down to a few nanometres, occurring as aggregates of very inhomogeneous crystals.Coesite grains may have a rounded or elongated habit and, where suitably oriented, show planar contrast features.3D ED analysis indicates that such features are the result of twinning and planar disorder along planes.Coesite grains do not show any evident preferential or reciprocal iso-orientation.Amorphous material was detected only as scoriaceous filling in open PDFs.No appreciable amorphous or ‘glassy’ volume was detected by TEM inside the FIB lamellae.A diffuse porosity occupies the intergranular areas.It is not obvious if such porosity is a primary feature of the particle, possibly associated with volume reduction from the transformation of quartz into coesite and subsequent volume expansion during pressure release, or an artefact of sample preparation.Despite the finding of traces of stishovite in a few shocked-quartz grains from the AAMT layer at Site ODP-1144A, we found no evidence of this mineral using ED or Raman micro-spectroscopy.As outlined in the Introduction, it is generally believed that coesite in impact rocks forms by the crystallisation of an amorphous phase.This process would take place during the decompression stage either from a silica melt with short-range order and silicon in fourfold coordination, or through a solid-state transformation of diaplectic silica glass.Both models are based on direct observations of natural and experimental non-porous rocks and on theoretical considerations: in non-porous rocks, coesite only occurs in association with amorphous silica material; coesite cannot be produced in shock experiments, possibly due to the short pressure-pulse lengths reached in laboratory conditions; direct quartz-to-coesite transformation is reconstructive and hence is presumed to be too time-consuming to take place during compression in impact cratering events, because the complete collapse of the crystal structure to glass in the solid state is assumed to be the only possible response to rapid shock compression.On the other hand, the subsolidus direct quartz-to-coesite transformation has recently been proposed for the shocked quartz arenite from the Kamil crater to explain the coesite-quartz intergrowths in the so-called symplectic regions.Similar features were first described by Kieffer et al.
in the Coconino sandstone from Barringer crater.The iso-orientation of quartz in the FIB lamellae indicates that the particle ‘1144A_350’ originated as a single quartz grain in the target rock.The direct contact between quartz and microcrystalline coesite and the sawtooth-like geometry of the quartz-coesite interface indicate direct quartz-to-coesite transformation.Moreover, the euhedral habit of the coesite grains is consistent with a solid-state transformation, with coesite growing at the expense of quartz.As the polycrystalline coesite domains have sets of planar features that are in textural continuity with PDFs in the adjacent quartz relics, they are interpreted to form at the expense of PDF-bearing quartz.Thus, coesite must postdate PDF formation in the quartz precursor, and the involvement of a liquid intermediate phase during the quartz-to-coesite transformation can be ruled out.However, it must be noted that the 15–25 GPa required for PDF formation in the studied grain is higher than that required for the formation of coesite under equilibrium conditions.In order to reconcile this discrepancy, Folco et al. suggested that the quartz-to-coesite transformation takes place during decompression and subsequent pressure amplification due to localized shock-wave reverberation connected to the pore collapse process in porous rocks.Such a mechanism may also be favoured by heterogeneities in the protolith, such as grain boundaries, fractures, inclusions or dislocations, which would also enable the target material to experience high pressure for a longer time.This would provide sufficient time for the subsolidus quartz-to-coesite transformation and could be the dominant mechanism of coesite formation in porous quartz-bearing target rocks, including the postulated parent rocks of the Australasian tektites.It is important to emphasize that during impact cratering events, minerals abruptly adjust to the extreme pressure-temperature conditions imposed by the passage of shock waves at supersonic velocity.Variations in pressure/temperature conditions and phase transitions connected with shock metamorphism occur in timeframes that are orders of magnitude shorter than those of typical geological processes, and non-equilibrium conditions are the rule rather than the exception.This implies metastability, so that at a given time the P-T coordinates and the mineralogical paragenesis may not match the equilibrium phase diagram.Time plays a pivotal role, and the resulting rock resembles the product of an incomplete reaction.A short reaction time, like that expected for impact metamorphism, therefore results in rocks where phases and shock features like quartz, PDFs, coesite and possibly stishovite can coexist in a non-equilibrium assemblage.This consideration may thus explain why coesite is found in high abundances in rocks that experienced shock pressures far above its equilibrium field.Nonetheless, in the conventional view, owing to the rapid shock compression the only possible phase transition of quartz is the collapse of the crystal lattice to glass in the solid state, i.e.
diaplectic glass at 30–35 GPa.Based on these statements, it would appear that because the quartz-to-coesite transformation is reconstructive, it is presumably too time-consuming to take place during the compression stage of impact cratering events.However, there are many examples of subsolidus polymorphic transitions in impact cratering events, despite their reconstructive nature.These processes usually occur via diffusionless mechanisms.Probably the best-known example is the graphite-to-diamond transformation.Other common solid-state polymorphic transitions include the formation of reidite from zircon and of two different high-pressure polymorphs of rutile, namely TiO2-II and akaogiite.Some of these generally acknowledged phase transitions induced by shock metamorphism are structurally more striking than the quartz-to-coesite transformation.For example, in the graphite-to-diamond and rutile-to-akaogiite transformations, C and Ti change their atomic coordination state, from 3 to 4 and from 6 to 7, respectively.All these phase transitions are diffusionless-type transformations – to which twinning also belongs – in which the crystal structure is distorted through a cooperative movement of atoms or through shear, without long-range diffusion.The quartz and coesite structures are both based on a network of SiO4 tetrahedra, where silicon is similarly coordinated with 4 oxygen atoms.The actual mechanism that allows the transformation from one structure to the other is not evident, because there are many possible rearrangements that may lead from the quartz network to that of coesite.Moreover, a very fast change in P-T conditions, like the one triggered by impact events, may allow phase transition pathways different from those inferred from the results of static anvil-cell experiments.Finally, the pervasive planar disorder and polysynthetic nano-twinning typical of impact-formed coesite may play a role in accelerating the rate of coesite formation and reducing the activation energy required.The model described here explains the formation of coesite at the lower shock pressures and shorter durations typical of shock wave propagation scenarios, thus accounting for its presence in materials that did not experience melting – which requires shock pressures higher than 30 GPa for porous rocks.A decrease in the peak pressure required for coesite formation provides fundamental constraints on the physical conditions attained during impacts in quartz-bearing porous target rocks, and a re-evaluation of peak shock pressure estimates may be necessary.This work provides electron diffraction evidence for the direct subsolidus quartz-to-coesite transformation in shocked coesite-bearing quartz ejecta from the Australasian tektite/microtektite strewn field.This transformation postdates PDF formation in the quartz precursor.This is in contrast with previous studies, mostly based on observations from crystalline rocks, which suggested that impact-formed coesite is the product of rapid crystallization from silica melt or diaplectic glass during shock unloading, when the pressure release path passes through the coesite stability field.The quartz-to-coesite transformation model proposed here is based on the ED study of a few samples only.A future detailed ED investigation of both crystalline and porous target rocks from different impact structures may provide additional insight into quartz-coesite relations and phase transition paths, helping to understand whether the direct subsolidus quartz-to-coesite transformation is specific to porous target rocks or whether it is the norm in
impact events regardless of the target type.The preservation of fine quartz-coesite textural and microstructural relationships, such as those observed in this work, depends on the extent of the post-shock thermal overprint commonly observed in shock metamorphic rocks.Pristine features, such as those found in the ejecta particles from the Australasian microtektite layer studied here, represent an excellent opportunity to investigate the mechanism and kinetics of the direct subsolidus quartz-to-coesite transformation during shock metamorphic events.This work shows the potential of the emerging 3D ED method for the structure characterization of materials available only as sub-micrometre-sized grains, thereby opening a new perspective in shock metamorphic studies, given the micro-to-nanometre scale of shock metamorphic features and their defective nature.Interestingly, by using very mild illumination conditions, complete and high-resolution data can be collected on phases that normally deteriorate rapidly in high-resolution TEM mode.Likewise, the PACOM technique enables reliable phase/orientation maps with a spatial resolution down to 2 nm when used with a field emission gun, which is well below the 20–50 nm achieved with EBSD and similar to the spatial resolution achieved by TKD.Also, whilst yielding less precise orientation measurements than the Kikuchi lines used in EBSD, spot diffraction patterns are less affected by the distortion induced by high dislocation densities.Therefore, PACOM is particularly suited to investigating strongly plastically deformed materials like the shocked silica ejecta studied here.
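As an aside on the template-matching step that underlies the PACOM maps described above, the short Python sketch below illustrates how each experimental diffraction pattern can be assigned a phase (here quartz or coesite) by normalised cross-correlation against a bank of simulated templates. The correlation-index formula follows the form commonly used in precession-assisted crystal orientation mapping; the pattern arrays, template bank and helper names are illustrative placeholders rather than data or software from this study.

```python
import numpy as np

def correlation_index(pattern, template):
    """Normalised cross-correlation between an experimental diffraction
    pattern and a simulated template (both 2D intensity arrays)."""
    p = np.asarray(pattern, dtype=float).ravel()
    t = np.asarray(template, dtype=float).ravel()
    return float(np.dot(p, t) / np.sqrt(np.dot(p, p) * np.dot(t, t)))

def assign_phase(pattern, template_bank):
    """template_bank: iterable of (phase_name, orientation, simulated_pattern).
    Returns the best-matching (phase, orientation, correlation index)."""
    return max(
        ((phase, orientation, correlation_index(pattern, template))
         for phase, orientation, template in template_bank),
        key=lambda match: match[2],
    )

def build_phase_map(patterns, template_bank):
    """patterns: array of shape (ny, nx, h, w), one diffraction pattern per
    scanned pixel (hypothetical data). Returns a (ny, nx) map of phase names,
    e.g. 'quartz' or 'coesite'."""
    ny, nx = patterns.shape[:2]
    phase_map = np.empty((ny, nx), dtype=object)
    for iy in range(ny):
        for ix in range(nx):
            phase, _, _ = assign_phase(patterns[iy, ix], template_bank)
            phase_map[iy, ix] = phase
    return phase_map
```

In practice the reliability of each assignment is usually judged from the ratio between the two highest correlation indices; that refinement, and the orientation indexing itself, are omitted here for brevity.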
Coesite, a high-pressure silica polymorph, is a diagnostic indicator of impact cratering in quartz-bearing target rocks. The formation mechanism of coesite during hypervelocity impacts has been debated since its discovery in impact rocks in the 1960s. Electron diffraction analysis coupled with scanning electron microscopy and Raman spectroscopy of shocked silica grains from the Australasian tektite/microtektite strewn field reveals fine-grained intergrowths of coesite plus quartz bearing planar deformation features (PDFs). Quartz and euhedral microcrystalline coesite are in direct contact, showing a recurrent pseudo iso-orientation, with the [11¯1]* vector of quartz near parallel to the [0 1 0]* vector of coesite. Moreover, discontinuous planar features in coesite domains are in textural continuity with PDFs in adjacent quartz relicts. These observations indicate that quartz transforms to coesite after PDF formation and through a solid-state martensitic-like process involving a relative structural shift of {1¯011} quartz planes, which would eventually turn into coesite (0 1 0) planes. This process further explains the structural relation observed between the characteristic (0 1 0) twinning and disorder of impact-formed coesite, and the 101¯1 PDF family in quartz. If this mechanism is the main way in which coesite forms in impacts, a re-evaluation of peak shock pressure estimates in quartz-bearing target rocks is required because coesite has been previously considered to form by rapid crystallization from silica melt or diaplectic glass during shock unloading at 30–60 GPa.
242
Environmentally enriched pigs have transcriptional profiles consistent with neuroprotective effects and reduced microglial activity
Environmental enrichment paradigms are widely used in laboratory rodents as a method to study the effects of environmental influences on brain development and function.EE utilises multiple methods of environmental modification to facilitate cognitive and motor stimulation in a species-appropriate manner.In rodents this is often achieved through the use of novel objects, running wheels, nesting material and group housing.The beneficial effects of EE on physiology and brain function in rodents are well documented.EE paradigms have been shown to have a neuroprotective effect through the inhibition of spontaneous apoptosis and through increased cell proliferation and neurogenesis in the hippocampus and prefrontal cortex.Indeed, EE has been shown to limit neurodegeneration in models of aging, and can rescue some behavioural and physiological brain effects of early life stress.Current evidence suggests this is most likely precipitated by the mechanisms underlying neuronal plasticity, due to increased expression of trophic factors and immediate early genes in the brains of EE animals.Expression of neurotrophins has also been proposed as a contributory factor in mood disorders, in particular stress-induced depression and anxiety.While the main focus has been around expression of brain-derived neurotrophic factor, there are a number of other trophic factors and IEGs with a role in stress resilience and mood.Neuronal growth factor mRNA has been shown to be reduced in the dentate gyrus in response to restraint stress in rats.Environmental deprivation during early life in rats reduces IEG expression in the prefrontal cortex both as a chronic effect, in response to social interaction, and after chronic social defeat stress in mice.Levels of the IEG Arc mRNA in the frontal cortex and hippocampus of rats exposed to predator scent stress were reduced in those animals which displayed an extreme anxiety-like behavioural response, and in the dentate gyrus Arc mRNA expression levels also correlated with levels of circulating corticosterone.Interestingly, depressive-like behaviour in rodents induced by social isolation in early life not only results in decreased expression of a number of IEGs involved in synaptic plasticity, including Bdnf, Egr-1 and Arc, but also in increased microglial activation.Microglia are specialised macrophages of the central nervous system that play a role in neuromodulation, phagocytosis and inflammation.There is increasing evidence to suggest that microglial-mediated brain inflammation is a contributing factor to a number of mental health and nervous system disorders, such as multiple sclerosis, Alzheimer’s disease, schizophrenia, and major depression.While the exact mechanism by which microglia contribute to these disorders is not yet known, psychological stress has been shown to result in increased levels of microglia in the prefrontal cortex of rats, while early life stress, in the form of maternal deprivation, has also been shown to lead to increases in microglial motility and phagocytic activity that persist into adulthood.Anti-inflammatory drugs such as minocycline produce antidepressant effects in rodent models.Several anti-psychotic medications have also been shown to block microglial activation.The physiological and behavioural effects of a number of nervous system disorders can be modulated by EE in rodent models, and EE is now widely used as an experimental tool in studies of neurodegeneration and neural development.Due to its beneficial effects on
behaviour, EE has long been used as a method to improve the welfare of production animals, by providing environmental manipulations that allow the animal to behave in a more species-typical way.However there has been little work on the effects of EE on the brain in a large animal model, such as the pig.The aim of the present study was to investigate the effects of EE on gene expression within the frontal cortex of young pigs with the aim of identifying potential gene expression signatures of a healthy brain in a large animal model, and to potentially identify early signatures of neural ill-health in the absence of abnormal behaviours.All work was carried out in accordance with the U.K. Animals Act 1986 under EU Directive 2010/63/EU following ethical approval by SRUC Animal Experiments Committee.All routine animal management procedures were adhered to by trained staff and health issues treated as required.All piglets remaining at the end of the study were returned to commercial stock.Post-weaning behavioural observations were carried out on litters from six commercial cross-bred mothers; the boar-line was American Hampshire.Artificial insemination was performed using commercially available pooled semen.Litters were born within a 72 h time window in free-farrowing pens which allowed visual and physical contact with neighbouring litters.No tooth resection was performed and males were not castrated.In line with EU Council Directive 2008/120/EC tail docking is not routinely performed.Weaning occurred at 24–27 days of age.At weaning piglets were weighed, vaccinated against Porcine Circoviral Disease and ear tagged for identification.Post-wean diet was in the form of pelleted feed provided ad libitum.At 4 days post weaning, 8 piglets per litter were selected as being as close to the litter average weight as possible while balancing for sex.These 8 piglets per litter were split into sex balanced quartets to give final group sizes of 4 piglets per matched group.One group was then housed in an enriched pen and one in a barren pen, all in the same room.Post weaning pens did not allow for physical or visual contact between neighbours.All pens contained feed troughs ∼80 cm in length and piglets had access to one water nipple per pen.All piglets were marked on the back with spray marker paint to allow individual identification.B pens measured 1.8 m × 1.8 m with concrete floors and rubber matting in one corner to provide some thermal insulation.Kennels were provided for the first 10 days to enhance thermal comfort as B housed piglets had no opportunity to use straw to control their thermal environment as the EE pigs could.EE pens measured 3.6 m x 1.8 m also with concrete flooring and with large quantities of straw covering the floor.The same rubber matting was also provided in the enriched pens.Piglets in the six EE pens were provided with additional enrichment once daily in the form of a plastic bag filled with straw.This bag was placed in the EE pens after morning husbandry was complete.At the same time the gate of the matched B pen was approached to account for the stimulating effect of human presence.The bag was removed at the end of each day and a new bag used on the following day.The bag was open at one end to allow straw to be extracted by the piglets and was the only source of fresh straw given to the EE pigs daily.The bag was first introduced at day 5 post weaning and thereafter daily.Pen cleaning and enrichment order was randomised daily.In accordance with the Defra Code of Recommendations for 
the Welfare of Livestock , temperature within the room was automatically controlled at 22 °C and artificial lighting was maintained between the hours of 0800–1600, with low level night lighting at other times.Piglets were digitally recorded in their home pen using Sony LL20 low light cameras with infra-red and a Geovision GV-DVR.Two cameras were set up per EE pen, one at the rear and one at the front to provide maximal coverage, and one camera per B pen.Piglets were observed in their home pens on five days between the ages of 41 and 53 days.Piglets were all observed on the same days during the same time periods to negate any time of day effects on sampling.The behaviour scores are the cumulative total of how many times that behaviour occurred in the timeframe within the pen.Observations began after morning husbandry was complete, 15 min prior to the time point when the additional enrichment was provided to the EE pigs, and finished 4 h after the provision of additional enrichment.Behaviours were recorded by scan sampling with a frequency of one minute for the total 4.25 h duration.The observer used a 10 s window to aid in accurate identification of the behaviour occurring at the start of every scan sample.A condensed ethogram based on previous work in enrichment and play in pigs was used.All behaviours were recorded as frequencies.One observer completed all video analysis to remove any reliability issues relating to multiple observers.The full dataset was condensed to give one data point for each behaviour per hour period per day for each pen being the cumulative recordings for all piglets in that pen in that observation period.The introduction of the bag is time-point 0.Analysis was performed on these mean values for each period per pen averaged across the 5 observation days.Analysis of the behavioural data was performed in Minitab version 17.3 using a generalized linear model with treatment as a fixed effect and time point as a random effect.Sex, litter of origin and pen were included in the model as covariates for the behaviour.At 54 days of age one male piglet per pen was sedated at one hour after the introduction of the additional enrichment stimulus, and one male per pen at 4 h post enrichment stimulus.An intramuscular injection was used with a combination of Domitor, Ketamine, Hypnovel and Stresnil.Piglets were left for 15 min to allow sedation to take full effect before being euthanized via intercardial injection with Euthatal for brain tissue collection.Male brain tissue was sampled to be consistent with other EE studies, mainly performed in rodents.Piglet brains were removed whole and dissected over dry ice.For continuity all dissections were performed by a single researcher using http://www.anatomie-amsterdam.nl/sub_sites/pig_brain_atlas for reference, utilising both parasagittal and rostrocaudal views.Frontal cortex was placed in RNAlater and agitated on a tube rotator for 15 min to assist in the perfusion of RNAlater through the tissue.Samples were frozen at -20 °C until required.Frontal cortex samples in RNAlater were removed from -20 °C and allowed to reach room temperature.RNA extraction was performed using the Qiagen RNeasy lipid tissue kit as per manufacturer’s instructions.80 mg of tissue was used from each sample and a TissueRupter used for homogenization.Extracted RNA was stored at -20 °C until required.Samples were quantified and quality checked using a 2200 Tapestation.Poly A + library preparation and RNA sequencing of the prepared samples was carried out by 
Edinburgh Genomics Next Generation Sequencing Service using the Illumina HiSeq 4000 platform, generating >30 million strand-specific 75 bp paired-end reads per sample.RNA-seq reads were processed using the high-speed transcript quantification tool Kallisto, version 0.42.4, generating transcript-level expression estimates both as transcripts per million and estimated read counts.Kallisto quantifies expression by building an index of k-mers from a set of reference transcripts and mapping the reads to these directly.For this purpose, we used the combined set of cDNAs and ncRNAs from annotation Sscrofa11.1.Transcript-level read counts were then summarised to the gene level using the R/Bioconductor package tximport v1.0.3, with associated HGNC gene names obtained from Ensembl BioMart v90.The tximport package collates Kallisto’s output into a count matrix, and calculates an offset to correct for differential transcript lengths between samples.The count matrix was imported into the R/Bioconductor package edgeR v3.14.0 for differential expression analysis, with the ‘trimmed mean of M values’ method used to normalise gene counts.Genes with expression level estimates lower than 2 TPM in one or both treatment groups were filtered out as noise.After filtering lowly expressed genes, the final dataset comprised 21,971 genes.Genes were considered to be differentially expressed when they returned a between-group fold change greater than ±1.2, with a p-value <0.02.These thresholds for fold change and p-value were chosen specifically for this study, to minimise noise with regard to identifying a conservative set of significantly differentially expressed genes.A fold-change threshold of ±1.2 reduces the likelihood of calling natural variation in expression as differential expression, as does an alpha level more stringent than the conventional 0.05.The sequencing data generated for this project is deposited in the European Nucleotide Archive under study accession number PRJEB24165.Gene set enrichment analysis was performed on the differentially expressed gene lists using Panther.db v13.0 with default parameters.To assess the statistical significance of enrichment-induced variation in gene expression for small groups of genes with a common function, we used a randomisation test as described previously.Two sets of gene signatures, derived from previous human studies, were first obtained for IEG genes and microglial genes, and filtered to remove those genes not annotated in pigs.Subsets of x genes were drawn at random s = 10,000 times from the set of all genes for which a fold change was quantified, where x is the number of signature genes.We calculated q, the number of times the set of signature genes had a higher or lower median fold change than the randomly chosen subset.Letting r = s − q, the p-value of this test is (r + 1)/(s + 1); an illustrative sketch of this procedure is given at the end of the main text.These expression values were entered into the network analysis tool Graphia Professional.A sample-to-sample analysis was performed using a correlation coefficient threshold of 0.99 to assess the impact of individual variation on all gene expression profiles.A gene-to-gene network was also created from the whole dataset using a correlation coefficient threshold of 0.90.The nodes were clustered using the Markov Clustering algorithm with an inflation value of 2.2.This algorithm identifies tightly coordinated sub-structures within the overall network.An analysis of the percentage of active behaviours in EE vs.
B housed piglets found a treatment × time-point effect.The coefficients show this difference to be attributable to an increased proportion of active behaviours in the EE piglets at one hour post provision of the enrichment stimulus.Piglet sex was not observed to have an effect on the active behavioural response to enrichment.A housing treatment × time effect was observed in locomotor behaviour, interaction with the enrichment stimulus and interaction with other pen components, such as rubber matting.These effects peaked at one hour post provision of the enrichment stimulus to the EE pens with greater performance of locomotor behaviour and interaction with the enrichment stimulus, and at two hours post enrichment stimulus for interaction with other pen objects in the B pens.A greater proportion of lateral lying in B housed pigs was observed at one hour post enrichment stimulus.An effect of housing treatment alone was observed in nosing of pen-mates, with more nosing occurring in the B housed piglets.An effect of time alone was found in feeding behaviour, nosing of the pen and a trend towards lateral lying.No effect of time or treatment was observed in any of the other behaviours recorded.Means and S.E.M. for behaviours are shown in Table S1.There was no difference between treatments in piglet growth during the pre-weaning period, prior to the start of the study (test statistic = 0.027, p = 0.873).Piglet growth was observed to differ between environments (test statistic = 5.68, p = 0.038) during the study period, with EE piglets having on average a higher average daily gain (ADG) than B piglets.Complete lists of differentially regulated genes are provided in Table S2.Initial analysis revealed that for a large majority of genes, expression in the frontal cortex was unaffected by the environmental enrichment.When all 21,971 genes were included in a network analysis, members of the same litter showed similar expression patterns, regardless of treatment or time.This was particularly noticeable for litters L3, L4 and L6.Using the full gene set there was no association with treatment or time.A set of 349 genes was up-regulated after one hour of environmental enrichment but appeared to return to baseline after four hours.A second set of genes was down-regulated at both one hour and four hours in the EE cohort.The composition of these two gene sets is discussed below.Gene set enrichment analysis at one hour post enrichment highlights an over-representation of genes involved in synaptic transmission among those up-regulated in the EE animals, and an over-representation of genes involved in apoptosis and cellular defence among those down-regulated in the EE animals.Further analysis of the gene lists revealed that approximately 8% of those genes upregulated at 1 h in the EE animals are known IEGs, while 15% of those genes down-regulated in the EE animals at 1 h are related to microglial processes.At four hours post enrichment no gene sets were found to be overrepresented using the Panther GSEA overrepresentation test.Significant p-values from randomisation tests would suggest these signatures do not occur by chance in EE animals.In a network analysis using only the differentially expressed genes, the EE samples were close together in a relatively tight group, while the B samples were more dispersed, but still located together with little overlap with the EE samples.In addition there was no relationship with litter, indicating that for the differentially expressed gene set the effect of environment was stronger than any intrinsic difference based on genetics or early
life effects.The expression profiles of the differentially expressed genes were analysed to create a gene-to-gene network.As might be expected there were two clear elements in the network, comprising one element where the overall gene expression level was higher in the EE animals and a second element where the level was lower in EE animals.Similar results were seen when using the one hour, four hour or complete set of differentially expressed genes.When the DEGs at one hour were used to construct a network graph, cluster 1 contained microglia-associated genes such as CSF1R, IRF8 and P2RY12 alongside various connective tissue genes including several collagen genes and FBN1.Cluster 4, where the average expression was increased in EE samples, contained IEGs such as ARC, several EGR genes, FOS and IER2.Cluster 2 contained largely neuronal genes which were not differentially expressed between environments.Cluster 3 contained additional connective tissue genes with high levels in four of the B individuals.Clusters 5 and 6 represent groups of genes that were increased in a single individual.The gene lists for each cluster and average expression profiles for clusters where treatment differences were shown are presented in Table S3 and Fig. S1.The profiles for the four hour samples were largely determined by high or low expression in a single or small number of individuals.Many of the genes in this list were non-coding RNAs or unannotated genes.When both time points were included in the analysis, cluster 3 primarily contained IEGs and showed a pattern in which the one hour EE samples had higher expression than the one hour B samples, but at four hours the two groups had similar expression.One individual in the B group had a spike of IEG expression at four hours, although most of the four hour B samples showed lower IEG expression than the EE samples.The EE piglets in this study displayed more active behaviour (including social and object interaction) than B piglets, but only in the first hour after the additional enrichment stimulus was provided.This was not affected by sex.This suggests that the piglets’ behavioural response to enrichment in this experiment was largely defined by the period that the additional enrichment stimulus remained novel.After the 4 h of observation the piglets had tended to remove some or all of the straw from the enrichment stimulus and reduced the physical integrity of the bag, which may have reduced the novelty value of the stimulus.Despite the larger space allowance and fresh bedding provided daily to the EE piglets, their overall activity profile was not different from that of the B housed piglets outside of the first hour post additional enrichment.Overall total activity levels were lower in this study than in previous studies, but the proportion of active time spent interacting with the enrichment stimulus or the pen environment was higher than previously reported.Within the time spent active there were some effects of housing on specific behaviours.B housed piglets displayed more nosing of pen-mates over the course of the observation, not dependent on time.This maladaptive behaviour is known to occur more frequently in piglets when the opportunities to use the snout to forage and explore are not provided.Despite no differences in time spent feeding being observed between treatments, piglets in the EE environment experienced higher growth rates than those in the B environment.Enriched housing has previously been shown to increase body weight and growth rate in grower pigs but not in
piglets at such a young age.A recent meta-analysis has found that the provision of straw bedding is a significant factor in the increased average daily gain observed in EE housed pigs, though there has been little research to determine why this may be the case.In rodents, increased brain weight in the absence of increased body growth has been observed in environmentally enriched conditions.Given the short time frame of this study we would not expect to observe significant differences in brain weight; however, over a longer period this may well be a contributory factor to the observed weight gain.The gene expression changes observed in this study mirror the behavioural differences, with most genes differentially expressed at one hour post the provision of the additional enrichment stimulation and few genes differentially expressed at four hours post this point.The observed increase in genes involved in synaptic transmission at the one hour time point in EE animals supports previous work describing a transient increase in cell activity in the hippocampus of rats following environmental enrichment.This increase in synaptic transmission is likely to be the mechanism by which EE leads to increased brain weight, greater cortical depth, increased neurogenesis and increased synapse formation.The set of transiently-induced genes at one hour shows significant overlap with recent curated sets of IEGs defined in stimulated neurons based upon single cell RNA-seq.Increases in expression of neurotrophic factor proteins have previously been observed in multiple brain regions of rats housed in enriched cages and were observed as gene expression changes in the frontal cortex of the piglets in the current study, although not all growth factors fall within the gene set enrichment analysis.Neuronal growth factors are considered to be regulators of cell growth and survival in both the developing and adult nervous system and have been shown to acutely potentiate synaptic plasticity.Experience-dependent plasticity, the process by which synaptic connections are strengthened or weakened, or indeed formed or eliminated, in response to experience, is a well-established mechanism.The intracellular scaffold protein and regulator of membrane trafficking GRASP has multiple protein-interacting domains allowing a wide range of protein assemblies to be trafficked at the synapse.GRASP is known to be involved in dendritic outgrowth and in synaptic plasticity through its interaction with group I and II metabotropic glutamate receptors.Reductions in GRASP protein expression have been observed in the post mortem prefrontal cortex of schizophrenia patients, with no correlation to illness duration or history of medication.Similarly, decreases in mGluR2 have been observed in multiple psychiatric conditions including schizophrenia and addiction, while treatment with the antidepressant medication imipramine has been shown to increase mGluR2 protein expression in the hippocampus of wild-type rats but not to increase mRNA expression.Another G-protein coupled receptor upregulated in the EE pigs, GPR3, has been shown to be involved in anxiety-related behaviours, with Gpr3-/- mice displaying increased anxiety-like behaviours that could be rescued with the use of the anxiolytic diazepam.Upregulation of these G-protein coupled receptors and their targets may suggest that EE piglets in the current study were experiencing reduced levels of anxiety and a more positive affective state compared to their B counterparts.Given the lack of sex
difference in behavioural activity observed in this study, and no evidence of overlap between our list of DE genes and those shown to be sexually dimorphic in neonatal rodent cortex, we would propose that this effect would be consistent across both males and females.An unexpectedly large proportion of the set of genes downregulated in the brains of the pigs exposed to environmental enrichment is clearly associated with microglia.Numerous recent studies have identified microglia-associated genes in mice and humans; however, there has been little work focused on the pig.The set of genes down-regulated at one hour post the additional enrichment stimulation included generic myeloid markers such as PTPRC and CD68, as well as most well-documented microglial markers.These findings suggest that microglia have altered transcriptional activity in EE relative to B pigs.With our study design it is impossible to determine whether the EE or B animals are the baseline, so any changes in expression are relative; however, irrespective of whether microglial activity is greater in B or reduced in EE animals, this result suggests that a relative reduction in activity may have benefits for brain health.While reduced transcriptional activity could arise due to a selective reduction in microglia number, in rodents the number of microglia is relatively stable over time.Experimental depletion of microglia in rat neonates results in lifelong decreases in anxiety-like behaviours and increased locomotory behaviour, while increases in microglial numbers and activation states have been observed in stress-induced depression models.In human patients, increased microglial activation has been observed using PET scanning in major depressive disorder, and in histology samples from the brains of depressed suicides.This suggests that B housed animals may be experiencing greater anxiety and/or lower mood than their EE housed counterparts.This may provide part of the functional link as to why pigs housed in enriched environments display a more optimistic judgement bias than those in barren conditions.Current evidence suggests that microglial gene expression is not sexually dimorphic at the equivalent developmental stage, and thus while only male brains were sampled in this study, this may be extrapolated to include females with relative confidence.Interestingly, at the four hour time-point there is a high proportion of 5S ribosomal and spliceosomal RNAs identified as differentially expressed, which may be adding noise to the dataset at the later time-point.As observed in the network analysis, inter-individual variability in gene expression was greater at 4 h, perhaps due to the varying rates of decay of the effect of the additional enrichment stimulus, or due to the animals no longer having a behavioural ‘focus’.This is consistent with the behaviour analysis, which indicated that the benefit from the additional enrichment experience was only felt while the object remained novel.This may contribute to the noise in the data at four hours, as animals may ‘cope’ with the lack of behavioural stimulation in different ways, or may be being stimulated by other factors within the home pen.The current study would suggest the benefits of enrichment may occur in a pulse-like fashion, with bursts of behavioural activity being followed by a return to behavioural baseline, these bursts mirroring the active engagement with the behavioural opportunities offered by the additional enrichment stimulus.It would be of great interest to
determine if these bursts of activity alter the structure and function of the brain in a way that facilitates long term developmental change.Confirmation of the longer term effects of EE on microglial numbers and activation are required to determine if the observed gene expression changes are an indicator of lower neuronal health in B housed animals or if they are transient expression changes with little long term consequence.In this study, behaviour was mainly organised around the time point when a daily enrichment stimulus was provided.This resulted in a ‘pulse’ of active behaviour that was mirrored by temporal changes in gene expression profiles in the frontal cortex.These gene expression changes primarily affected genes involved in synaptic plasticity, neuroprotection and the immune response, with a substantial fraction of known microglial signature genes showing relative down-regulation with enrichment.Our analyses suggest that relative to piglets in barren environments, those in enriched environments may experience reduced anxiety, increased neuroprotection and synaptic plasticity, and an immune response consistent with reduced inflammatory challenge.
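To make the signature-enrichment randomisation test described in the methods concrete, the following Python sketch draws random gene subsets of the same size as a curated signature and compares median fold changes, returning the empirical p-value (r + 1)/(s + 1). The gene names and fold-change values below are hypothetical placeholders; only the test logic reflects the procedure described above, which in the study itself was applied to the IEG and microglial signatures.

```python
import numpy as np

def signature_randomisation_test(fold_changes, signature_genes, s=10_000, seed=1):
    """Empirical test of whether the genes in `signature_genes` have a higher
    median fold change (EE vs. B) than `s` randomly drawn gene sets of the
    same size; p = (r + 1)/(s + 1), where r is the number of random sets whose
    median is at least as high as the signature median."""
    rng = np.random.default_rng(seed)
    all_fc = np.asarray(list(fold_changes.values()), dtype=float)
    sig_fc = np.asarray([fold_changes[g] for g in signature_genes], dtype=float)
    sig_median = np.median(sig_fc)

    # q = number of random subsets with a lower median than the signature
    q = 0
    for _ in range(s):
        subset = rng.choice(all_fc, size=sig_fc.size, replace=False)
        if np.median(subset) < sig_median:
            q += 1
    r = s - q
    return (r + 1) / (s + 1)

# Hypothetical usage with toy values (flip the comparison above to test for
# down-regulated signatures such as the microglial gene set):
toy_fc = {f"gene{i}": fc for i, fc in
          enumerate(np.random.default_rng(0).normal(0.0, 0.2, 5000))}
toy_signature = [f"gene{i}" for i in range(40)]
p_value = signature_randomisation_test(toy_fc, toy_signature)
```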
Environmental enrichment (EE) is widely used to study the effects of external factors on brain development, function and health in rodent models, but very little is known of the effects of EE on the brain in a large animal model such as the pig. Twenty-four young pigs (aged 5 weeks at start of study, 1:1 male:female ratio) were housed in environmentally enriched (EE) pens and provided with additional enrichment stimulation (a bag filled with straw) once daily. Litter-, weight- and sex-matched controls (n = 24) were housed in barren (B) conditions. Behaviour was recorded on alternate days from study day 10. After 21 days, RNA-sequencing of the frontal cortex of male piglets culled one hour after the enrichment stimulation, but not those at 4 h after stimulation, showed upregulation of genes involved in neuronal activity and synaptic plasticity in the EE compared to the B condition. This result is mirrored in the behavioural response to the stimulation, which showed a peak in activity around the 1 h time-point. By contrast, EE piglets displayed a signature consistent with a relative decrease in microglial activity compared to those in the B condition. These results confirm those from rodents, suggesting that EE may also confer neuronal health benefits in large mammal models, through a potential relative reduction in neuroinflammatory processes and an increase in neuroprotection driven by an enrichment-induced increase in behavioural activity.
243
Broccoli by-products improve the nutraceutical potential of gluten-free mini sponge cakes
The incidence of gluten-sensitivity disorders caused by allergic and immune reactions is on the rise.Celiac disease and other gluten-related disorders are frequently accompanied by nutritional deficiencies that contribute to chronic conditions.Patients with celiac disease are also at a higher risk of lymphoma, in particular enteropathy-associated T-cell lymphoma.A gluten-free diet is the only approved method of treating gluten intolerance.However, GF products are characterized by lower palatability and nutritional value than gluten-containing foods, and their nutritional and sensory properties can be improved with supplements containing functional ingredients.Unfortunately, food products enriched with nutrients and biologically active compounds are often more expensive.By-products from fruit and vegetable processing can be used as additional sources of nutrients and functional ingredients without increasing production costs.Epidemiological studies have demonstrated that a diet rich in cruciferous vegetables, including broccoli, can reduce the risk of cancer and cardiovascular diseases.Brassica vegetables, including cabbage, cauliflower and broccoli, contain glucosinolates, a large group of sulphur-containing glucosides with chemopreventive properties.Unhydrolysed GLS are not biologically active.The endogenous enzyme myrosinase,which hydrolyses GLS into several biologically active isothiocyanates and indoles is released when plant tissue is damaged by crushing or chewing.Sulforaphane is the most widely studied isothiocyanate which is a degradation product of glucoraphanin, the main GLS in broccoli.Numerous in vivo and in vitro studies have demonstrated that isothiocyanates and indole derivatives exhibit chemopreventive activity against cancer.Broccoli is also a good source of polyphenolic compounds with high antioxidant activity, and it could play a significant role in the prevention of diseases associated with oxidative stress, such as cardiovascular and neurodegenerative diseases as well as cancer.Polyphenols demonstrate multidirectional antioxidant activity.They remove free radicals and reactive oxygen species, they act as complexing agents for iron and copper, they inhibit the activity of enzymes involved in the formation of reactive oxygen species, and block enzymatic and nonenzymatic lipid peroxidation.Most people consume only broccoli florets which account for around 30% of the vegetable’s biomass.For this reason, research studies generally focus on florets, whereas information about the nutritional properties of other broccoli parts is generally limited.Only several authors have described the nutritional composition and antioxidant activity of broccoli by-products.Broccoli by-products and broccoli florets have similar chemical composition, and they are rich sources of GLS, polyphenols, dietary fibre, proteins and other nutrients.Literature data suggest that broccoli leaves could constitute a functional food additive.The chemical composition and antioxidant potential of bioactive compounds found in broccoli leaves have to be analysed to validate their functional properties.The incorporation of broccoli leaves into functional foods could improve the quality of GF diets and facilitate the management of vegetable processing wastes.Therefore, the aim of this study was to evaluate the effect of broccoli leaf powder on the content of biologically active compounds and the antioxidant capacity of GF mini sponge cakes.2,2′-Azinobis diammonium salt, 
6-hydroxy-2,5,7,8-tetramethylchroman-2-carboxylic acid, potassium persulphate, 2,4,6-tri-s-tiazine, methanol, were purchased from Sigma Aldrich.Sinigrin, n-hexane and acetonitrile were purchased from Merck.Ferric chloride was provided by Fluka.ACW and ACL kits for the photochemiluminescence assay were received from Analytik Jena AG.All other reagents were from POCh,.Water was purified with a mili-Q-system.Mature leaves of broccoli were generously donated by GEMIX.Leaves without signs of mechanical damage were selected, washed and blanched in hot water for 1 min to inactivate enzymes hydrolysing biologically active compounds.The petioles and the main midribs were removed.Leaf blades were freeze-dried and ground to produce broccoli leaf powder with particle size ≤0.60 mm.The powder was stored in a refrigerator in a tightly closed container for further use.Broccoli leaf powder was incorporated into mini sponge cakes in the following proportions: control – 0%, B1 – 2.5%, B2 – 5%, B3 – 7.5% by replacing an equivalent amount of potato and corn starch in the standard formulation of GF mini sponge cakes.The ingredients were combined in a 5-speed KitchenAid Professional K45SS mixer in a stainless steel bowl.Dough portions of 30 g were placed in paper cups and baked at 180 °C for 25 min.Baked GF mini sponge cakes were cooled, freeze-dried, ground into a fine powder, and stored in the refrigerator in a tightly closed container until analysis.The dry matter content and proximate chemical composition of GF mini sponge cakes, including protein and ash content, were determined with the use of standard methods.The dry matter content of GF mini sponge cakes was determined at 66.27 in control, 66.37 in B1, 66.40 in B2, and 66.77 in B3.The protein content of GF mini sponge cakes was determined at 8.46 in control, 8.48 in B1, 8.59 in B2, and 9.75 in B3.The ash content of GF mini sponge cakes was determined at 0.52 in control, 1.30 in B1, 1.41 in B2, and 1.91 in B3.The GLS content of GF mini sponge cakes was determined after degreasing.Samples of 2 g of freeze-dried GF mini sponge cakes were vortexed with 5 mL of n-hexane for 30 s.They were centrifuged for 15 min at 3500 rpm, and the supernatants were removed.Lipids were extracted in triplicate.Degreased powder was dried under a stream of nitrogen until n-hexane was completely removed.The content of GLS in GF mini sponge cakes and BLP was analysed according to the method described in the Official Journal of the European Communities.Briefly, 500 mg of degreased sponge cakes lyophilisates or 200 mg of BLP were extracted with 70% boiling methanol.The isolation, desulphation and HPLC separation of GLS were carried with the use of the methods described by Ciska, Honke, and Kozłowska.Separation was performed in an HPLC system with an autosampler and the SPD-M20A DAD detector.The compounds were separated in the LiChrospher® 100 RP-18 column with a flow rate of 1.2 mL/min.Desulpho-GLS was separated in a gradient of water and 20% acetonitrile as previously described.Glucosinolates were identified following their UV spectra in comparison to the available literature.The presence of glucoiberin and glucoraphanin was additionally confirmed with the analysis of respective degradation products using 7890A gas chromatograph coupled with 5975C mass selective detector, 7683B auto-injector and data station containing the NIST/EPA/NIH Mass Spectral Library.UV spectra of compounds eluted as a third and fourth GLS in both BLP and GF mini sponge cakes were not found in the available 
literature, therefore these compounds were not identified and they are presented in the tables as unidentified GLS 1 and unidentified GLS 2, respectively.Sinigrin was used as the external standard, and the GLS content of the samples was calculated using the relevant relative response factors as follows: 1.07 for glucoiberin; 1.07 for glucoraphanin; 1.00 for unidentified GLS; 0.95 for gluconasturtiin; 0.28 for glucobrassicin; 0.28 for 4-hydroxy-glucobrassicin; 0.25 for 4-methoxy-glucobrassicin; 0.2 for neoglucobrassicin.Total phenolic content was determined with the use of the Folin-Ciocalteu reagent based on the method described by Horszwald and Andlauer with some modifications.Methanol extracts were obtained from GF mini sponge cakes and BLP with 1 mL of 67% methanol.Ultrasonic vibration and vortexing were repeated three times, and the samples were centrifuged for 10 min at 13,000 rpm at 4 °C.The above step was repeated five times, and the supernatants were collected into a 5 mL measuring flask.Methanol extracts were prepared in triplicate.The TPC assay was performed in microplates, and aliquots of 15 μL of methanol extracts were placed in microplate wells.Subsequently, 250 μL of the Folin-Ciocalteu reagent was added, and the mixture was incubated in the dark for 10 min at room temperature.Then, 25 μL of 20% sodium carbonate was added to each well, and the mixture was incubated for 20 min.The microplate was shaken automatically before readings, and absorbance was measured at λ = 755 nm with the Infinite M1000 PRO plate reader.Gallic acid was used for standard calibration, and the results were expressed in mg of gallic acid equivalents per gram of dry matter of GF mini sponge cakes.The Trolox Equivalent Antioxidant Capacity assay was performed based on the method described by Horszwald and Andlauer.The ABTS•+ solution was diluted with a 67% aqueous solution of methanol to an absorbance of 0.70 ± 0.02 at 734 nm.10 μL of standard methanol extracts and blanks prepared in the TPC assay were placed in microplate wells.The reaction and time measurements were started upon the addition of 270 μL of the ABTS•+ solution.The reaction was carried out at 30 °C in the dark for 6 min.After the reaction, absorbance was measured at 734 nm with a microplate reader.Trolox was used for standard calibration, and the results were expressed in μmol Trolox g−1 DM of BLP or GF mini sponge cakes.The Ferric Reducing Antioxidant Potential assay was performed according to the method proposed by Horszwald and Andlauer with some modifications.Briefly, 50 μL of the methanol extract prepared in the TPC assay was placed in microplate wells.Subsequently, 275 μL of a freshly prepared FRAP reagent (a mixture of 2,4,6-tripyridyl-s-triazine (TPTZ) and ferric chloride solutions in 0.3 mM acetate buffer, pH 3.6) was injected, and absorbance was measured after 5 min at λ = 593 nm.Trolox was used for standard calibration, and the results were expressed in μmol Trolox g−1 DM of BLP or GF mini sponge cakes.A photochemiluminescence assay was performed to measure the antioxidant capacity of GF mini sponge cake extracts against superoxide anion radicals generated from the luminol photosensitizer under exposure to UV light in the Photochem apparatus.Antioxidant activity was analysed with ACW and ACL kits according to the manufacturer’s protocol.For ACW, a 50 mg sample was extracted with 1 mL of water, and for ACL – a 50 mg sample was extracted with 1 mL of the MeOH and hexane mixture.The assay was carried out according to the procedure described by Zieliński, Zielińska, and Kostyra.The
concentration of the extract solution was adjusted to ensure that the generated luminescence was within the range of the standard curve.Antioxidant capacity was calculated by comparing the delay time of the sample with the Trolox standard curve, and it was expressed in µmol Trolox g−1 DM.A sensory evaluation of GF mini sponge cakes was conducted as previously described in Krupa-Kozak et al.Briefly, 30 panellists evaluated GF mini sponge cakes according to a hedonic scale.Each panellist was asked to rate the experimental sponge cakes for overall quality on a 9-point hedonic scale based on colour, aroma, taste and texture.The panellists were asked to briefly justify the scores awarded to the evaluated samples.The results are presented as mean values for each sample.The scores were grouped as follows: dislike, neutral, like.The results were expressed as the percentage of scores given to each sample.All experiments and analytical measurements were performed in triplicate.The results were processed by one-way analysis of variance.The significance of differences between the samples was determined by Fisher’s LSD test at p < 0.05.Broccoli leaf powder was characterized by high protein and nutrient content, determined as ash content, and it was a rich source of bioactive compounds.Nine GLS, including two aliphatic, four indole, one aralkyl and two unidentified GLS were found in BLP.Total GLS content was estimated at 5 μmol g−1 DM.The predominant GLS in BLP were glucobrassicin and glucoraphanin, which accounted for 30% and 25% of total GLS, respectively.In BLP, indole GLS were the predominant fraction, whereas other GLS accounted for 40% of total GLS.Broccoli leaf powder was characterized by a high content of total phenolics with multiple antioxidant properties.Lipophilic compounds were characterized by the highest O2− radical scavenging ability determined in the PCL-ACL assay, and the lowest ferric reducing ability determined in the FRAP assay.The influence of the BLP additive on GLS composition in GF mini sponge cakes is presented in Table 3.In general, total GLS content increased proportionally to an increase in BLP content.A positive correlation was noted between GLS content and BLP content, and the increase in the GLS content of B1, B2 and B3 was statistically significant.The GLS profile of fortified GF mini sponge cakes was identical to the GLS profile of BLP, but GLS composition was somewhat different.In contrast to BLP, unidentified GLS 1 was the predominant GLS in GF mini sponge cakes.The unidentified GLS 1 content of GF mini sponge cakes was determined at 2.91 μmol 100 g−1 FW in B3, where it accounted for 19% of total GLS, whereas the content of this compound in BLP was determined at only 5%.The concentrations of glucobrassicin and glucoraphanin, the major GLS fractions in BLP, was also high, accounting for 16% and 19% of total GLS in B3, respectively.The concentrations of the analysed GLS generally increased with an increase in BLP content, excluding unidentified GLS 1,4-hydroxy-glucobrassicin and gluconasturtiin.B2 was characterized by a higher average content of unidentified GLS 1 and 4-hydroxy-glucobrassicin and lower gluconasturtiin content than B3, but the observed differences were not statistically significant.The proportions of indole and other compounds changed in response to an increase in the unidentified GLS 1 content of fortified GF mini sponge cakes.In all GF mini sponge cakes fortified with BLP, indole GLS accounted for 29–33%, while other GLS accounted for 67–71% of total 
GLS.The addition of BLP to GF mini sponge cakes increased the antioxidant capacity of the analysed product.In comparison with control, the TPC of BLP-fortified GF mini sponge cakes increased significantly from 67% in B1 to 115% in B3.The highest TPC of approximately 1 mg GAE g−1 DM was determined in B3.TPC values were positively correlated with the BLP content of fortified samples, but the observed correlations were not statistically significant.The antioxidant capacity of all fortified GF mini sponge cakes was significantly higher in comparison with the control sample.Moreover, the antioxidant capacity of GF mini sponge cakes increased in a dose-dependent manner with an increase in the BLP content of each formulation, and was positively correlated with BLP content.As expected, the GF mini sponge cake with the highest BLP content was characterized by the highest antioxidant activity in all assays.In general, the fortification of sponge cakes with BLP had the greatest influence on antioxidant capacity in the PCL-ACW assay.Antioxidant capacity increased 13.2-fold even when starch was replaced with 2.5% BLP, whereas in B3, PCL-ACW increased 37-fold relative to control.A positive linear correlation was also observed between antioxidant capacity and TPC.The influence of BLP on the overall quality of experimental GF mini sponge cakes and consumer preferences is shown in Fig. 2.In general, the overall quality of the non-fortified control cake received significantly higher scores than fortified GF mini sponge cakes, regardless of their BLP content.According to the panellists, the colour, aroma and taste of the control cake were typical of sponge cake.However, taking into account the distribution of consumer preferences, sample B1 where starch was replaced with 2.5% BLP differed distinctively from the remaining BLP-supplemented GF mini sponge cakes.Sample B1 received the highest score from nearly 50% of panellists who described it as palatable, soft and characterized by an intriguing vivid green colour.Most samples with higher BLP content received the lowest scores on account of their dark green colour, hardness, intense taste and aroma of broccoli.The potential of fruit and vegetable processing wastes to enhance the nutritional value of food products has been widely reviewed.Fruit and vegetable by-products are very good sources of dietary fibre and phytochemicals.Numerous authors have remarked on the usefulness of broccoli by-products, in particular leaves, in the production of functional foods.Despite the above, broccoli leaves are rarely added to food products.Minimally processed broccoli leaves were recently used as a source of bioactive compounds in a new beverage.To the best of our knowledge, broccoli leaves have never been used to fortify a bakery product.Lifestyle diseases such as atherosclerosis are associated with oxidative stress.Considerable research has been devoted to the fortification of food products with antioxidant additives.The chemical composition and nutraceutical properties of a by-product have to be analysed before it can be incorporated into food.The BLP used in our study was characterized by higher protein content than the broccoli by-products examined in other studies.The nutritional content of broccoli is determined by variety and growing conditions.In our study, BLP was freeze-dried, and the preparation method could have additionally affected its nutritional properties.The analysed BLP was a good source of bioactive compounds, in particular GLS.Its qualitative and 
quantitative characteristics were similar to those described by other authors with glucoraphanin and glucobrassicin as a dominant aliphatic and indole GLS, respectively.Numerous authors have identified phenolic compounds in broccoli, including in broccoli leaves.Our results are consistent with the findings of Hwang and Lim who demonstrated that the TPC of broccoli leaves can range from 5.38 to 13.10 mg GAE g−1 DM across varieties.The presence of bioactive compounds contributed to the high antioxidant capacity of BLP.High antioxidant activity of broccoli leaves was also reported by other authors.Guo et al. compared the antioxidant capacity of different parts of broccoli plants, including flowers, stems and leaves, and observed that leaves and edible parts had similar antioxidant properties to the edible parts.Their results suggest that broccoli by-products can be effectively used as food supplements.In our study, BLP was characterized by very high PCL-ACL values.PCL-ACL provides information about the ability of lipophilic compounds, including fat-soluble vitamins and carotenoids, to quench O2− radicals.However, the content of these molecules in BLP was not analysed in our study, therefore, further experiments are required to examine the profile and activity of lipophilic compounds in broccoli leaves.Despite the above, the results of studies investigating the content of these compounds in broccoli florets and the high PCL-ACL values determined in BLP in our experiment suggest that leaves are an equally abundant source of lipophilic compounds as florets.The usefulness of food supplements should also be evaluated based on the stability and health benefits of bioactive compounds after processing.In our study, the stability of bioactive compounds was estimated in fortified GF mini sponge cakes based on their GLS content.The predicted values of GLS were also calculated.As expected, GLS was present in GF mini sponge cakes fortified with BLP.Surprisingly, the total GLS content of the experimental sponge cakes was 3 and 2 times higher than the predicted values for B1 and B3, respectively.The above observation could be attributed to the degradation of plant tissue during thermal processing and the release of partially bound GLS from cell walls.A similar increase in GLS content was reported by other authors in short-boiled Brussels sprouts and microwaved red cabbage.However, if the observed increase in the GLS content of GF mini sponge cakes had resulted only from higher extractability, the ratio of the predicted value to GLS content should be identical for B1, B2, B3.Meanwhile, the increase observed in B1 was higher than in B2 and B3, which could be attributed to the presence of synergistic interactions between bioactive compounds and food ingredients.This observation was confirmed by Sęczyk, Świeca, Gawlik-Dziki, Luty, and Czyż who found lower predicted values of bioactive compounds in wheat pasta fortified with parsley.The interactions between sucrose and GLS were analysed by Xu et al. who noted that the addition of sucrose to broccoli florets inhibited GLS degradation during storage.In our study, sugars accounted for 14% of the sponge cake formulation; therefore, interactions between sucrose and bioactive compounds were possible.Giambanelli et al. 
also reported that different additives, such as potato starch, corn starch or onion powder, could exert protective effects on GLS.The cited authors emphasized that the ratio of GLS to food components is a crucial consideration.The thermal degradation of GLS was inhibited when the ratio of broccoli powder to corn/potato starch was 1:9, whereas no changes were observed when the above ingredients were added in equal amounts.The interactions between GLS and the food matrix in bakery products require further examination.Unlike in BLP, GF mini sponge cakes were characterized by variations in the content of individual GLS, which could have resulted from differences in the thermal stability of GLS.The predicted values of the analysed GLS were generally higher than the experimental values, and they were lower only for indole compounds, glucobrassicin and their methylated derivatives.The results of other studies indicate that indole compounds are the most thermally labile GLS.In contrast, Hanschen, Rohn, Mewis, Schreiner, and Kroh reported that indole GLS containing a hydroxyl group are less stable.The above authors emphasised that the stability of GLS may also be influenced by other factors, including pH and water content.Glucoraphanin was one of the predominant GLS in GF mini sponge cakes.In B1, glucoraphanin concentration was almost 2-fold higher than the predicted value, whereas in B2 and B3, its content was somewhat lower but still 55 and 28% higher than predicted.High glucoraphanin content can be attributed to its stability during processing.Ciska et al. demonstrated that glucoraphanin was the most thermally stable compound in boiled Brussels sprouts.In our study, the internal temperature of GF mini sponge cakes did not exceed 100 °C.Similar results were reported by Hanschen, Plazt, et al. who found that glucoraphanin was stable at lower temperatures.Foods with high glucoraphanin content could deliver health benefits.Sulforaphane, an isothiocyanate that is produced during glucoraphanin degradation, has been linked with the anticarcinogenic effects of Brassica vegetables.Even if myrosinase, an enzyme responsible for GLS hydrolysis, is inactivated during blanching, gut microbiota are able to degrade GLS and release isothiocyanates.The effects of phenolic-rich additives with high levels of antioxidant activity were recently analysed in bread.Broccoli florets were used as a source of phenolics and antioxidant compounds to improve the quality of bread, cereal bars and beverages.There are no published studies analysing the applicability of broccoli leaves in food products.In our experiment, BLP has been used for the first time to fortify GF mini sponge cakes.Food is a heterogeneous matrix with various chemical properties, which is why we relied on a wide range of methods to determine the antioxidant capacity of the examined products, including Trolox equivalent antioxidant capacity, ferric reducing antioxidant power and the photochemiluminescence approach.As expected, the addition of BLP increased the antioxidant activity and TPC of GF mini sponge cakes.The greatest increase in antioxidant capacity was noted in the PCL-ACW assay.This parameter provides information about the antioxidant activity of hydrophilic compounds, in particular phenolics and vitamin C. 
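The measured-to-predicted comparisons referred to above and below can be illustrated with a simple dilution calculation. The sketch below is not taken from the study: it assumes that the predicted content of a fortified cake is simply the BLP content scaled by the BLP share of the formulation (2.5, 5.0 and 7.5%), ignores the fresh-weight/dry-matter conversion, and uses only the ~5 μmol g−1 DM total GLS content of BLP quoted earlier as input.

```python
# A minimal sketch (not the authors' code) of the dilution-based "predicted" value
# for a BLP-fortified cake and of the measured-to-predicted ratio discussed in the
# text. Only the total GLS content of BLP (~5 umol per g DM) comes from the text;
# everything else is a simplifying assumption.

BLP_TOTAL_GLS = 5.0  # umol total GLS per g DM of BLP (order of magnitude from the text)

def predicted_content(content_in_blp, blp_fraction):
    """Expected content per g of cake DM if BLP were only diluted by the other
    ingredients (no losses, no extra release during baking, DM basis only)."""
    return content_in_blp * blp_fraction

def measured_to_predicted(measured, predicted):
    """Ratio > 1 suggests improved extractability or a protective matrix effect;
    ratio < 1 suggests losses during baking (as seen for the PCL-ACL values)."""
    return measured / predicted

for label, frac in [("B1", 0.025), ("B2", 0.050), ("B3", 0.075)]:
    pred = predicted_content(BLP_TOTAL_GLS, frac)
    print(f"{label}: predicted total GLS ~ {pred:.3f} umol per g cake DM")
```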
Thermal processing did not affect TPC or the antioxidant activity of GF mini sponge cakes.Similar results were reported by Barakat and Rohr who analysed the antioxidant capacity of broccoli bars subjected to different thermal treatments and concluded that baking did not exert a significant effect on TPC or antioxidant capacity in the ABTS assay.This observation suggests that phenolic compounds are thermally stable.In contrast, the antioxidant capacity of lipophilic compounds in GF mini sponge cakes was lower than expected and accounted for 15%, 19% and 22% of the predicted PCL-ACL values in B1, B2 and B3, respectively.It can be speculated that lipophilic compounds were thermally degraded during baking.Fat-soluble vitamins and non-vitamin carotenoids are destroyed during thermal processing.The results of this study indicate that BLP is a good source of nutrients, in particular proteins and minerals, and bioactive compounds with potential health benefits.Our findings suggest that BLP can be a valuable supplement for GF mini sponge cakes.Gluten-free products often have low nutritional value, and the addition of BLP can compensate for that deficit.Broccoli leaf powder enhanced the antioxidant activity of GF mini sponge cakes mainly due to its high content of bioactive compounds.Interestingly, the matrix of GF mini sponge cakes had a protective effect on the stability of GLS compounds with anti-carcinogenic properties.Thus, the supplementation of GF cakes with BLP could be an effective strategy for delivering chemopreventive compounds to the human body.The results of the sensory evaluation indicate that BLP should be added in moderate amounts to preserve the desirable sensory attributes of GF mini sponge cakes, including colour, aroma and taste.The addition of 2.5% BLP as a starch substitute resulted in an optimal improvement in the nutraceutical potential of GF cakes without compromising their sensory quality.The results of our study indicate that the incorporation of broccoli by-products enhances the nutritional value and the health benefits of GF products.
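The group comparisons reported throughout these results relied on one-way ANOVA followed by Fisher's LSD test at p < 0.05, as stated in the methods. A minimal sketch of that procedure is given below; the triplicate values are invented placeholders, not data from the study.

```python
# One-way ANOVA on triplicate measurements followed by Fisher's LSD at p < 0.05
# (a sketch of the statistics described in the methods; the numbers are invented).
import numpy as np
from scipy import stats

groups = {
    "control": np.array([0.45, 0.47, 0.46]),   # e.g. TPC in mg GAE g-1 DM (placeholder)
    "B1":      np.array([0.75, 0.78, 0.76]),
    "B2":      np.array([0.88, 0.90, 0.91]),
    "B3":      np.array([0.98, 1.01, 1.00]),
}

f_stat, p_val = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.1f}, p = {p_val:.2e}")

# Fisher's LSD: pooled within-group mean square error and its degrees of freedom.
data = list(groups.values())
n = len(data[0])                                # replicates per group (equal n)
df_error = sum(len(g) - 1 for g in data)
mse = sum(((g - g.mean()) ** 2).sum() for g in data) / df_error
lsd = stats.t.ppf(1 - 0.05 / 2, df_error) * np.sqrt(2 * mse / n)
print(f"LSD (p < 0.05) = {lsd:.3f}")

names = list(groups)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        diff = abs(groups[names[i]].mean() - groups[names[j]].mean())
        verdict = "significant" if diff > lsd else "n.s."
        print(f"{names[i]} vs {names[j]}: |mean difference| = {diff:.3f} -> {verdict}")
```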
This study describes the successful development of new gluten-free (GF) mini sponge cakes fortified with broccoli leaves. The aim of this study was to evaluate the effect of broccoli leaf powder (BLP) on the content of biologically active compounds and the antioxidant capacity of GF mini sponge cakes. Broccoli leaf powder was a good source of nutritional components, including proteins and minerals, as well as bioactive compounds such as glucosinolates and phenolics. Glucosinolate content was higher than expected, which points to a synergistic interaction between bioactive compounds and the food matrix. The incorporation of BLP into GF mini sponge cakes significantly (p < 0.05) increased their antioxidant capacity. The overall sensory acceptance of GF mini sponge cakes was affected by increasing BLP content. The addition of 2.5% BLP as a starch substitute resulted in an optimal improvement in the nutraceutical potential of GF cakes without compromising their sensory quality.
244
Leaching and transport of PFAS from aqueous film-forming foam (AFFF) in the unsaturated soil at a firefighting training facility under cold climatic conditions
Aqueous film-forming foam containing per- and polyfluoroalkyl substances has been used extensively since its first development by the 3M, Ansul and National Foam companies in the mid-1960s.AFFF has surface-tension lowering properties and spreads rapidly across the surface of hydrocarbon fuels, cooling the liquid fuel by forming a water film beneath the foam, resulting in superior firefighting capabilities.The use of AFFF has resulted in PFAS and especially perfluorooctanesulfonic acid contamination of soil, groundwater, surface waters and biota worldwide.There is growing concern about negative consequences for the environment and human health from the use of and exposure to PFAS.The persistency, bioaccumulation and toxicity of long-chain PFASs define them as persistent organic pollutants, and those of most concern are listed as substances of very high concern by the European Chemicals Agency.The fate of PFAS released to the soil environment is primarily dependent on infiltration and sorption to the solid matrix.Partitioning coefficients for PFOS and other PFASs to soil and sediments have been thoroughly investigated and reported in the literature.Recent studies have shown that sorption to the air-water interface can be a major contributor to the retention of PFOS and PFOA under partially saturated conditions.The Norwegian Aviation Organization has made an inventory of PFAS contaminated soil, groundwater, surface water and biota at several firefighting training facilities throughout Norway.The studies have shown widespread leaching of PFAS from the soil to nearby water courses and exposure of biota in both fresh water and marine environments.This study focuses on one specific FTF where AFFF containing PFOS was used extensively from the early 1990s until it was phased out in 2001 and replaced by fluorotelomer-containing AFFF.All use of PFAS-containing firefighting foams was banned at the airport in 2011.The aviation authorities have raised the question of how much of the PFOS is still present in the source zone after 15 years without PFOS application.Can these residual levels form a long-term source, potentially contaminating the groundwater at the site?To answer this question, extensive field studies at the FTF site were performed to map the present PFAS contamination situation in the unsaturated soil profile and the groundwater.To reconstruct the initial release scenario of AFFF used at this site, unsaturated column studies were performed under environmentally relevant experimental conditions.This model study allowed field observations with an unknown contamination history to be compared with well-controlled exposure of a similar soil.The objective was to improve our understanding of the contamination history and the potential transport and attenuation processes governing PFAS behavior at this site.To our knowledge this is the first study where the PFAS contamination history at an FTF site has been studied in a controlled unsaturated column experiment using a complete AFFF mixture.The FTF investigated in this work has a total size of 25,000 m2 and was established in 1989–1990.Extensive use of AFFF at the site started after the airport opened for civil aviation in 1998, but the area had previously been used for firefighting training activities.There are six firefighting training platforms at the FTF, each designed with membranes and a collection system for fuel, water and firefighting foam to protect the underlying groundwater.Most of the liquids used at the site have been collected and discharged
to the local sewage system.Foam and water have also spread to the soil outside the training areas, due to wind and the increased spraying range of modern firefighting engines.Soil and groundwater sampling in 2008 revealed PFAS contaminated soil outside the collection areas and leaching to the groundwater 4 m below the surface.The amount of AFFF spread outside the collection areas and the water infiltration rate has varied across the FTF.In some AFFF source zones, there has been extensive use of water during firefighting training activities, resulting in high infiltration, while in other parts of the FTF infiltration has been limited to the yearly precipitation, which is dominated by snow melting.The yearly precipitation in the area of the FTF is approximately 800 mm.The soil at the site is a uniform medium fine sand over the whole unsaturated zone down to the groundwater at 4 m depth, with an organic carbon content below 1%.The hydraulic conductivity at the site ranges from 10−3 to 10−5 m/s.The water balance developed by Jørgensen and Østmo for the area, showed that 50% of the annual precipitation is lost due to evapotranspiration and close to 60% of groundwater recharge is occurring during a 3–5 week long snowmelt period in spring, while the remainder of the annual infiltration occurs during the autumn months.It was estimated that water infiltrates through the unsaturated zone during the snowmelt period with a mean vertical velocity of 20 cm/day at 20% saturation.French et al. showed that the mean vertical velocity of infiltrating water in the springtime snowmelt period was approximately 5.2 mm per mm infiltration at an estimated saturation level between 18.5 and 20.8%.Infiltration was lower during the autumn and the vertical pore water velocity in the soil was estimated to be 7.7 mm/mm.In the summer months precipitation was balanced by evapotranspiration with no net infiltration.Soil and groundwater investigation was performed at the FTF site in 2016, and included soil sampling at 80 locations in the unsaturated zone around the firefighting training platforms.Trial pits were excavated for sampling of the soil profile.All equipment was rinsed with methanol before sampling each point.Samples were collected from 0 to 1 m, 1–2 m, 2–3 m and from 3 m and down to the groundwater level at 4 m. 
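The recharge and infiltration figures quoted above for the site can be tied together with a simple piston-flow estimate. The sketch below is an interpretation rather than the authors' calculation; in particular, it reads the reported 18.5–20.8% saturation of French et al. as a volumetric water content.

```python
# Back-of-envelope water balance for the FTF (a sketch, not from the source).
# Piston-flow assumption: vertical travel per mm of infiltrated water ~ 1 / theta_w,
# where theta_w is the volumetric water content of the sand.

annual_precip_mm = 800.0
recharge_mm = 0.5 * annual_precip_mm        # ~50% of precipitation lost to evapotranspiration
snowmelt_mm = 0.6 * recharge_mm             # ~60% of recharge during the snowmelt period
autumn_mm = recharge_mm - snowmelt_mm
print(f"Autumn recharge ~ {autumn_mm:.0f} mm")
# -> 160 mm, matching the autumn infiltration used later in the retardation estimate.

for theta_w in (0.185, 0.208):              # reported saturation range, read as theta_w
    print(f"theta_w = {theta_w:.3f}: {1.0 / theta_w:.1f} mm travel per mm infiltration")
# -> ~4.8-5.4 mm/mm, bracketing the 5.2 mm/mm reported for snowmelt by French et al.

for weeks in (3, 5):                        # 3-5 week snowmelt period
    rate = snowmelt_mm / (weeks * 7)        # mm/day infiltration during snowmelt
    v_cm_per_day = rate * 5.2 / 10.0        # cm/day vertical pore-water velocity
    print(f"{weeks} weeks of snowmelt: ~{rate:.1f} mm/day, ~{v_cm_per_day:.1f} cm/day")
# -> roughly 3.6-5.9 cm/day, of the same order as the snowmelt velocities used
#    later for the field retardation estimates.
```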
Soil was transferred into sampling bags and stored at 4 °C in the laboratory before being shipped to a commercial laboratory for analysis.A total of 288 soil samples were analyzed for PFAS content.Groundwater was sampled at 5 pumping wells installed as part of a pump and treat remediation system down gradient of the site to intercept the plume spreading from the FTF.A total of 19 sampling campaigns were performed during 2016.The samples were stored in HDPE bottles at 4 °C before being shipped to a commercial laboratory for PFAS analysis.Pristine soil from an area close to the FTF, with similar texture and mineralogy, was collected and used to construct columns for the infiltration experiment.Acrylic glass columns with an inner diameter of 14 cm were equipped with a water drain at the bottom, consisting of a coarse metal grid covered by a fine stainless steel filter to prevent the sand from washing out.The columns were packed with sand by adding layers of 6 cm, compacting each layer with a 900 g weight dropped four times until each column had a total length of approximately 1 m.The total weight of the sand added to each column was recorded.The sand was analyzed for grain size distribution, water content, total organic carbon and background levels of PFAS.The columns were placed in a temperature controlled room at 10 °C.A water reservoir at the top of each column was used for infiltration.The reservoir emptied through 25 needles to ensure uniform distribution over the whole surface area of the column, imitating natural infiltration of rain droplets.The soil surface in the columns was approximately 30 cm below the outlet of the needles.The flow through the columns was gravity driven to simulate field conditions.Leachate was collected and sampled at the bottom of the columns in plastic buckets.The experiments were performed at low and high infiltration rates of 4.9 mm and 9.7 mm per day for respectively 14 and 7 weeks.A total of 477 mm water was added in both the low and high infiltration experiment.The water was weighed and manually poured into the top reservoir and allowed to drip on the soil, 3 times per week.Non-reactive tracer experiments were performed in the columns to estimate the unsaturated water transport through the columns at low and high infiltration rates by adding a solution of NaCl in the top reservoir during steady-state water infiltration at the stated rates.Leachate was sampled 3 times per week and analyzed for temperature, pH, and electrical conductivity.AFFF concentrate, containing PFOS as the main PFAS, from the same supplier as assumed to have been used historically at the site was used in the experiments.The AFFF concentrate was mixed with Milli-Q water in a 1:100 ratio and whipped to a stable foam, which was then applied to the soil surface of both columns.The 1:100 diluted AFFF concentrate was analyzed to determine the exact PFAS composition.When applied, the AFFF was a dense foam.24 h after the first infiltration of water, the foam had disappeared from the surface, but foam bubbles could still be observed in the voids of the upper part of the soil columns.The infiltration period was 14 and 7 weeks respectively for the low and high infiltration experiments.Leachate samples were collected 3 times per week for the low infiltration experiment, and daily for the high infiltration experiment.For leachate volumes <50 ml, samples were combined with the sample from the subsequent day to ensure sufficient sample size for analytical requirements.The samples were stored at 4 °C 
before shipping to a commercial laboratory for PFAS analysis.After the infiltration experiments were completed, the soil was extracted from the columns in 5 cm intervals and analyzed for PFAS content.The water content was measured in the soil samples after the experiment to quantify and confirm the unsaturated conditions in the soil columns.The samples were stored at 4 °C before shipping to a commercial laboratory for analysis.The list of target PFAS analyzed varied between the field study and the laboratory experiments.The field samples were analyzed for 12 PFAS compounds for soil and groundwater.In the column study 30 PFAS compounds were analyzed in soil and 23 PFAS compounds were analyzed in leachate and the AFFF foam.For a complete overview of the compounds analyzed, see Appendix A and C. Soil and leachate analyses were carried out at the accredited laboratory Eurofins GfA Lab Service GmbH, using method DIN 38414-S14, based on acetonitrile extraction followed by analysis using liquid chromatography coupled with mass spectrometry for soil samples and DIN 38407-F42, and quantification using LC/MS-MS for leachate samples.Following the NRT experiments, the infiltration reservoirs were thoroughly cleaned, and all needles changed.No material that could influence PFAS sorption behavior was used when handling samples from the AFFF experiment.HDPE bottles were used to store samples until analysis.Reference samples of the soil used in the columns and water used for infiltration were analyzed to determine the background levels of PFAS.PFAS analysis was carried out at an accredited laboratory.Internal isotopically labelled standards were added to all soil and leachate samples prior to PFAS analysis.PFAS identification was based on retention time and molecule or fragment ions and quantification was carried out by comparison with the internal isotopically labelled standards.Analytical detection limits varied from 0.2–1 μg/kg for the respective PFAS in soil and was 0.3 ng/l for each PFAS in leachate.PFAS concentrations found in the soil samples at the FTF varied from <0.3 to 6500 μg/kg.PFOS accounts for 96% of the Σ 12-PFAS analyzed in the soil samples and is therefore focused on in the following discussion of source zones with high and low AFFF impact.In source zones with low impact of AFFF, the soil sampled from 0 to 1 m depth revealed concentrations of PFOS in the range of 100 to 900 μg/kg.The soil samples from 1 to 2 m depth contained much lower concentrations of PFOS from <0.3–70 μg/kg.The PFOS concentrations at sampling locations with low AFFF impact are given in Fig. 3a.The soil was not analyzed below 2 m due to low PFOS concentrations in the soil samples from 1 to 2 m.The PFOS concentrations in the unsaturated soil in source zones with high impact of AFFF are shown in Fig. 3b. Concentrations of PFOS in the soil from 0 to 1 m were in the range of 500 to 3000 μg/kg.The soil from 1 to 2 m revealed the highest PFOS concentrations, ranging from 1000 to 6500 μg/kg.From 2 to 3 m the PFOS concentrations ranged from 1000 to 3500 μg/kg, with a further reduction 1000 to 1200 μg/kg at 3–4 m depth.Differences between the low and high impacted areas might indicate that the attenuation and transport processes of PFOS differ at these sites.The concentration of PFOS in the soil profiles are in the same order of magnitude as reported previously at FTF sites in Sweden, Australia and the US.In France Dauchy et al. 
took 44 soil cores at an FTF belonging to an abandoned refinery, where activities probably ceased in 1984.The median concentrations in the most heavily impacted areas were 8701 and 12,112 μg/kg, respectively.The highest concentrations were found in the top 1 m, and 50% to >99% of the PFAS content was identified as fluorotelomers, dominated by 6:2 fluorotelomer sulphonamide alkylbetaine.Perfluorosulfonates represented <1 to 46% of the PFAS in the soil, dominated by PFOS.PFOS precursors generally accounted for <1% of the quantified PFAS.The groundwater concentration varies between the 5 pumping wells.The yearly average concentration of PFOS in the wells was 22 μg/l, accounting for 71% of the Σ 12-PFAS concentration.The average concentrations of PFHxS and 6:2 FTS were 2.9 μg/l and 2.7 μg/l, respectively, accounting for 9 and 8% of the Σ 12-PFAS concentration in the groundwater.Other PFAS detected in the groundwater represented approximately 12% of the Σ 12-PFAS quantified in the groundwater.The PFAS concentration in the groundwater varies during the year, as can be seen from the high standard deviations in the yearly average concentrations presented in Fig. 4b.Filipovic et al. reported similar values at a Swedish airport, with PFOS concentrations up to 42 μg/l representing >80% of the 4 PFAS analyzed.At a US military site a maximum of 78 μg/l PFOS was found, as well as PFHxS, PFHxA, 6:2 FTS, PFOA, PFBS, and 120 μg/l PFPeA.Other studies have reported considerably lower groundwater concentrations.Dauchy et al. found PFAS concentrations in the range of 4 to 8277 ng/l, where perfluorosulfonates represented 16 to 100% of the Σ 34-PFAS, dominated by PFHxS and PFOS.The remainder were perfluorocarboxylic acids, 6:2 FTS and 6:2 FTAB.The non-reactive tracer tests in the unsaturated column study showed the first breakthrough of the tracer after 28 and 18 days at the low and high infiltration rates, respectively.A maximum electrical conductivity was measured in the leachate after 56 and 28 days, as shown in Fig. 5.The results showed average water flow velocities during the low and high infiltration experiments of 1.7 cm/day and 3.2 cm/day, respectively.The observed pore water velocities in the columns are of the same order of magnitude as estimated for snow melting by French et al.This shows that the hydrological behavior of our re-packed columns is representative of the field conditions at the site.PFOS is the dominant constituent of the Σ 23-PFAS determined in the AFFF used in the column experiments, with a concentration of 100 mg/l after dilution, accounting for 90% of the Σ 23-PFAS amount applied.The other PFASs detected in amounts larger than 0.5% of the Σ 23-PFAS amount in the diluted concentrate are PFHxS, contributing 6.3%, PFBS with 1.2%, PFHpS with 1.4%, PFHxA with 0.7% and PFOA with 0.8% of the Σ 23-PFAS amount in the diluted AFFF.The PFAS concentrations measured in the diluted AFFF concentrate are presented in Appendix A.The concentrations of PFAS detected in the column leachate for the low infiltration experiment are shown in Fig. 6a.
PFBS shows a first breakthrough after 14 days, reaches a maximum concentration of 53 μg/l after 56–63 days and decreases thereafter.It seems to move through the column without retardation, at a rate similar to that observed for the NRT.Subsequently a breakthrough and decrease of PFHxA, PFPeA and PFHpA are seen.PFHxS, by definition a long-chain PFAS, is not attenuated to the same degree as other long-chained PFASs in the column soil.PFHxS is showing a breakthrough after 35 days and reaches a maximum concentration of 130 μg/l at the end of the experiment at 98 days.100% of the initial amount of PFBS added to the column has been detected in the leachate at the end of the experiment while 45% of PFHxS and 29% of PFHpS added has leached through the column.PFOS was not detected over 15 ng/l during the experiment.6% of the Σ 23-PFAS in the AFFF applied had leached through the column at the end of the experiment, but only 0.006% of the PFOS in the applied AFFF.PFAS concentrations detected in the leachate for the high infiltration rate experiment are shown in Fig. 6b.The first detection of any PFAS was after 7 days but the onset of a breakthrough can be observed from 14 days and onwards.PFBS, PFHxA and PFHxS seem to breakthrough simultaneously.However, PFHxS is the most dominant compound in the column leachate thereafter, similar to the low infiltration experiment.The maximum PFHxS concentration in the leachate is 71 μg/l after 47 days.PFBS and PFHxA reach an apparently stable concentration at 17 and 15 μg/l respectively.Considerably lower than observed in the low infiltration column.PFOS concentrations are difficult to observe in Fig. 6b, but PFOS was detected in the leachate after 21 days at a concentration of 20 ng/l, which continued to increase to 2200 ng/l at the end of the experiment.This is a major difference between the two experiments since PFOS was not detected above 15 ng/l in the low infiltration experiment.87% of the total PFBS amount had leached through the column at the high infiltration rate in contrast to 100% at low infiltration.PFHxS leaching was comparable in both experiments at 47% of the added amount.In contrast only 2% of PFHpS leached out at high infiltration compared to 29% at low infiltration.The total amount of PFOS leached in the low infiltration experiment was 0.006% while it was 0.05% in the high infiltration experiment.For the sum detected PFAS in the leachate the amount leached was slightly lower for the low infiltration experiment with 5.89% compared to 5.93% in the high infiltration experiment of Σ 23-PFAS added.PFAS analysis of the soil in the column at the end of the experiment showed that short-chain PFAS were less attenuated than long-chained PFAS, as presented in Figs. 7a and 8a. 
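The cumulative leaching percentages given above follow from a simple mass balance between the mass applied with the AFFF and the mass recovered in the leachate. The sketch below only illustrates that bookkeeping: the PFOS concentration (100 mg/l) and the 6.3% PFHxS share of the diluted AFFF are taken from the text, whereas the applied volume of foam solution and the weekly leachate series are hypothetical placeholders.

```python
# Mass-balance sketch for the leaching percentages quoted above (not the authors' code).
total_diluted_ug_per_l = 100e3 / 0.90                   # Sum 23-PFAS in the 1:100 diluted AFFF (ug/l)
pfhxs_applied_conc = 0.063 * total_diluted_ug_per_l     # ~7,000 ug/l PFHxS in the diluted AFFF

applied_volume_l = 0.15                                 # hypothetical volume of foam solution applied
applied_mass_ug = pfhxs_applied_conc * applied_volume_l

# Invented weekly leachate series (ug/l) shaped like the observed breakthrough,
# with ~0.52 l of leachate per week at 4.9 mm/day over the ~154 cm2 column area.
weekly_conc = [0, 0, 0, 0, 10, 30, 60, 80, 100, 110, 120, 125, 130, 130]
weekly_vol = [0.52] * len(weekly_conc)

leached_ug = sum(c * v for c, v in zip(weekly_conc, weekly_vol))
print(f"PFHxS fraction leached ~ {leached_ug / applied_mass_ug:.0%}")
# -> on the order of the 45% reported for PFHxS in the low infiltration experiment.
```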
PFOS was the most retarded PFAS at low infiltration, with concentrations ranging from 0.21 to 1700 μg/kg in the soil and a maximum concentration detected at 30 cm depth in the soil columns.Previous column studies have not analyzed the remaining PFAS concentrations in the soil matrix.However, similar retention patterns have been reported, where short-chain PFAS are less retarded in the soil than long-chain PFAS.The PFAS analysis of the soil in the column at the end of the high infiltration experiment showed, similar to the low infiltration experiment, that short-chain PFAS were attenuated less than long-chained PFAS.The PFOS concentration in the soil ranged from 7.4 to 1000 μg/kg, with the highest concentrations observed in a zone at 22–32 cm depth.The highest PFOS concentration was approximately 60% of the maximum concentration in the low infiltration experiment.This might be a consequence of a reduced pore water concentration at the high infiltration rate, resulting in reduced sorption to the soil matrix.The presence of PFOS in the column leachate is indicative of reduced sorption and increased vertical transport.The retardation factors for PFOS in the unsaturated zone at the FTF are estimated based on an assumed average vertical water velocity from snowmelt infiltration of 4.9–7.5 cm/day during 3 weeks.In addition, autumn infiltration of 160 mm of precipitation, with a vertical displacement of 7.7 mm per mm of infiltration, results in an estimated yearly vertical pore water transport of approximately 2.3–2.8 m.For the low impact areas, PFOS is mainly found down to 1 m depth.Assuming PFOS use ended 15 years ago, this results in an average yearly vertical PFOS transport rate of 6.7 cm/year during the 15 years that have passed.From these data a retardation factor of 33–42 can be estimated.For the high impact areas, the yearly average vertical velocity might be higher due to the extra water added during firefighting activities.Assuming the same average vertical water velocity as for the low impact areas, vertical transport of PFOS down to 2 m is observed, resulting in a PFOS transport rate of 13.3 cm/year and a retardation factor of 16–21 for PFOS at the high impact areas.In the soil columns, the vertical distance travelled by the center of mass of PFOS was 25.5 and 27 cm for the low and high infiltration experiments, respectively.This results in mean PFOS transport rates of 0.26 cm/day and 0.55 cm/day, respectively.The NRT showed average vertical water velocities of 1.7 cm/day and 3.2 cm/day for low and high infiltration, respectively.This gives retardation factors of 6.5 and 5.8 at the low and high infiltration rate, respectively.Based on these retardation factors and assuming a mean volumetric water content of 20%, apparent distribution coefficients for PFOS of 4.0–5.1 l/kg and 1.9–2.5 l/kg can be estimated for the low and high impacted areas at the FTF, respectively.Apparent KD values of 0.8 and 0.7 l/kg for the column studies with low and high infiltration, respectively, can be estimated with the same volumetric water content as for the field estimates.These KD values are in the range of values reported for PFOS in the literature, compiled in Appendix B. Zareitalabad et al.
reports KD values for the sorption of PFOS to various soils and sediments in the range of <1 to 35.3 l/kg.For the sandy soil in our study, a value similar to that reported for Ottawa sand, 2.8 l/kg, could be expected.The lower retardation factors for PFOS in the column experiments compared to those estimated for field conditions at the FTF might be explained by the fact that the column infiltration was continuous over a relatively short time period, whereas infiltration under field conditions is intermittent.At the FTF there has only been net infiltration in the unsaturated zone during annual snowmelt and autumn precipitation.Long periods with stagnant pore water are observed during the summer and winter months, when the saturation is at field capacity and no groundwater recharge occurs.The annual changes in the saturation level of the soil pores can have an effect on the attenuation processes for PFOS in the unsaturated soil under field conditions.Both sorption kinetics and the potential effect of disequilibria complicate the comparison of the field and column study results, but were not studied in this work.Brusseau showed that retention processes other than sorption to the solid phase can influence PFAS transport.Air-water interface adsorption alone accounted for >50% of the total retention observed in these studies.The adsorption of PFOA to the air-water interface during transport in unsaturated porous media was further investigated by Lyu et al., whose experiments showed that adsorption to the air-water interface was a significant source of retention, contributing approximately 50–75% of the total observed retention of PFOA.The potential contribution of this process to PFOS retention requires further attention.The use of AFFF at the FTF in this study has resulted in PFAS and especially PFOS contamination of the soil.In the high impacted areas, soil contamination down to the groundwater level at 4 m below the surface was observed.Unsaturated column studies with AFFF applied to pristine soil, performed to better understand the contamination history, showed that at low infiltration rates PFOS was not detected in the column leachate, while at high infiltration rates PFOS was detected in increasing concentrations up to 2200 ng/l during the experimental period.Estimated retardation factors for PFOS in the field were 33–42 and 16–21 for the low and high impacted areas, compared to 6.5 and 5.8 for the low and high infiltration column studies.The leaching of PFAS from source zones in the unsaturated zone at this site can represent a long-term risk for contamination of the groundwater and transport to nearby surface water bodies.Better insight into the retention processes in the unsaturated zone is essential to achieve a more accurate prediction of leaching rates and to improve risk assessment and remediation design at PFAS contaminated sites.
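The retardation factors and apparent KD values discussed above can be reproduced from the numbers given in the text with the short script below. It is a reconstruction rather than the authors' calculation: the dry bulk density of ~1.6 kg/l needed to convert R into an apparent KD via R = 1 + (ρb/θw)·KD is an assumption, and the column KD values come out slightly lower than the quoted 0.8 and 0.7 l/kg, presumably because a different bulk density or the measured column water contents were used.

```python
# Reconstruction of the retardation-factor and apparent-KD estimates (a sketch,
# not the authors' code). Pore-water velocities, PFOS travel distances and the
# 20% volumetric water content come from the text; the dry bulk density is an
# assumed value needed for R = 1 + (rho_b / theta_w) * KD.

THETA_W = 0.20      # volumetric water content (-)
RHO_B = 1.6         # assumed dry bulk density (kg/l)

def apparent_kd(R, theta_w=THETA_W, rho_b=RHO_B):
    return (R - 1.0) * theta_w / rho_b          # l/kg

# Field: 3 weeks of snowmelt at 4.9-7.5 cm/day plus 160 mm of autumn infiltration
# travelling 7.7 mm per mm infiltrated -> annual pore-water transport in cm.
annual_water_cm = (4.9 * 21 + 160 * 0.77, 7.5 * 21 + 160 * 0.77)
for label, v_pfos in (("low impact (1 m / 15 yr)", 100.0 / 15),
                      ("high impact (2 m / 15 yr)", 200.0 / 15)):
    R = [v / v_pfos for v in annual_water_cm]
    print(f"field, {label}: R = {R[0]:.0f}-{R[1]:.0f}, "
          f"KD = {apparent_kd(R[0]):.1f}-{apparent_kd(R[1]):.1f} l/kg")

# Columns: tracer-derived water velocity vs velocity of the PFOS centre of mass.
for label, v_w, depth_cm, days in (("low infiltration", 1.7, 25.5, 98),
                                   ("high infiltration", 3.2, 27.0, 49)):
    R = v_w / (depth_cm / days)
    print(f"column, {label}: R = {R:.1f}, apparent KD = {apparent_kd(R):.1f} l/kg")
```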
The contaminant situation at a Norwegian firefighting training facility (FTF) was investigated 15 years after the use of perfluorooctanesulfonic acid (PFOS) based aqueous film forming foams (AFFF) products had ceased. Detailed mapping of the soil and groundwater at the FTF field site in 2016, revealed high concentrations of per- and polyfluoroalkyl substances (PFAS). PFOS accounted for 96% of the total PFAS concentration in the soil with concentrations ranging from <0.3 μg/kg to 6500 μg/kg. The average concentration of PFOS in the groundwater down-gradient of the site was 22 μg/l (6.5–44.4 μg/l), accounting for 71% of the total PFAS concentration. To get a better understanding of the historic fate of AFFF used at the site, unsaturated column studies were performed with pristine soil with a similar texture and mineralogy as found at the FTF and the same PFOS containing AFFF used at the site. Transport and attenuation processes governing PFAS behavior were studied with focus on cold climate conditions and infiltration during snow melting, the main groundwater recharge process at the FTF. Low and high water infiltration rates of respectively 4.9 and 9.7 mm/day were applied for 14 and 7 weeks, thereby applying the same amount of water, but changing the aqueous saturation of the soil columns. The low infiltration rate represented 2 years of snow melting, while the high infiltration rate can be considered to mimic the extra water added in the areas with intensive firefighting training. In the low infiltration experiment PFOS was not detected in the column leachate over the complete 14 weeks. With high infiltration PFOS was detected after 14 days and concentrations increased from 20 ng/l to 2200 ng/l at the end of the experiment (49 days). Soil was extracted from the columns in 5 cm layers and showed PFOS concentrations in the range < 0.21–1700 μg/kg in the low infiltration column. A clear maximum was observed at a soil depth of 30 cm. No PFOS was detected below 60 cm depth. In the high infiltration column PFOS concentration ranged from 7.4 to 1000 μg/kg, with highest concentrations found at 22–32 cm depth. In this case PFOS was detected down to the deepest sample (~90 cm). Based on the field study, retardation factors for the average vertical transport of PFOS in the unsaturated zone were estimated to be 33–42 and 16–21 for the areas with a low and high AFFF impact, respectively. The estimated retardation factors for the column experiments were much lower at 6.5 and 5.8 for low and high infiltration, respectively. This study showed that PFOS is strongly attenuated in the unsaturated zone and mobility is dependent on infiltration rate. The results also suggest that the attenuation rate increases with time.
245
Analysis of working parameters for an ammonia-water absorption refrigeration system powered by automotive exhaust gas
The automotive internal combustion engine is of primary importance because of its ample utilisation, economic and social relevance, as well as its impact on gas emissions worldwide .In terms of energy consumption, one of the disadvantages of ICEs is their low efficiency, normally less than 40%, with approximately one third of the total energy being released as heat in the exhaust gases .As such, several studies have recently been developed with the objective of recovering energy wasted in ICE systems .Unlike conventional vapour compression refrigeration systems, which require an active power source, an absorption refrigeration system can be driven by the waste heat generated from an ICE or other sources .As such, studies conducted in the late 1980s and 1990s attempted to investigate whether energy from automotive exhaust gas heat could be recovered to power on-board refrigeration systems.These studies demonstrated the theoretical viability of the intended applications .More recent investigations have furthered experimental analysis and have suggested new possibilities for ARS .Recently, in order to increase performance of ARSs powered by exhaust gas heat in search of viable working applications, some researchers have directed their efforts toward investigating control systems that allow the amount of power diverted from the exhaust gases to be regulated .In the present work, an experimental investigation was conducted on the operating parameters for an ammonia-water absorption refrigeration system, powered by exhaust gas heat generated from an automotive ICE, with the objective of assessing the requirements for adequate working conditions for the system.A commercial absorption refrigerator with 0.215 m3 capacity, originally designed to operate with the heat supplied by an LPG burner, was adapted to be powered by the exhaust system of an automotive ICE, as in previous investigations .A FIAT 1.6 l, 8-valve, four-cylinder automotive ICE with multipoint electronic fuel injection was used in the experiments.The engine featured a compression ratio of 9.5:1, 86.4 mm bore, and 67.4 mm stroke.In order to monitor speed and torque during the tests, the engine was mounted on a Heenan & Froude G4-1 hydraulic dynamometer equipped with a Transducer Techniques MPL-500 load cell and a Magneti Marelli magnetic speed sensor.In order to divert the exhaust gases to the ARS, two step-motor control valves were installed according to the scheme presented in Fig. 1.When valve V1 is completely open and valve V2 is completely closed, exhaust gas flow is directed entirely to the generator element of the ARS.More details on the control system employed to regulate TG can be found in a previous report .The exhaust gas temperature before and after heat exchange with the ARS generator are labelled in Fig. 1 as TIN and TOUT, respectively.The temperatures TIN, TOUT, and TG were monitored using type K thermocouples.The temperature of the evaporator element of the ARS was monitored using a Texas TMP100 digital sensor.The variation over time of the exhaust gas temperature, generator temperature, and evaporator temperature are presented in Fig. 2 for the set TG points of 270, 240, 200, and 180 °C, respectively.The engine working conditions in each case are also shown in Fig. 2.These initial tests were performed for a limited time of 40 min, except for TG = 270 °C in which the system ran for only 24 min.It is worth noting that for TG = 200 °C, Fig. 
2c, the system goes through an initial transient regime before achieving steady state after close to 10 min of operation.This behaviour was found to be consistent for the other set temperature points and is not presented in the remaining cases simply because the acquisition system was adjusted to start registering data after stabilisation.Table 1 presents the following data: the average, maximum, and minimum TG, the minimum attained TE, and the average TIN-TOUT values registered as a function of the TG set point.By analysing the results presented in Table 1 and Fig. 2, it is possible to notice that the ARS did not perform satisfactorily with the reference generator temperature set at 270 °C.At this temperature, an oscillation in TE of only 1 °C was noticed between approximately 2 and 20 min.A similar situation was observed for TG = 240 °C.Under this condition, although the temperature at the evaporator initially decreased to approximately 13 °C after 10 min runtime, it gradually increased to 14.6 °C after 40 min.When the system was operated with TG = 200 °C, TE, initially at 23 °C, dropped after approximately 16 min runtime and remained low, reaching a final temperature of 1.1 °C.Finally, for the reference TG = 180 °C, a small decrease in TE was noticed.Although the drop in TE appears consistent, it is much less significant than that observed for the set 200 °C TG point.The results obtained for the 180 °C reference temperature indicate that the energy supplied by the exhaust gases, in this case, was insufficient for adequate performance of the ARS.It is possible to conclude that the superior performance observed at 200 °C was caused by a balance in the amount of exhaust heat transferred to the ARS.The reason for this behaviour is that 200 °C is close to the designed operating temperature for this system .In addition, while at 180 °C insufficient heat is available to power the ARS, at higher temperatures the low performance is attributable to excessive heat supply, which prevents adequate refrigerant condensation in the absorption system.Thus, the temperature of the water-ammonia mixture at the evaporator is too high, and no heat can be removed from the refrigerator, congruent with previous observations .Since the reference temperature of 200 °C exhibited the best performance among the analysed cases, long term test runs were then performed with this set temperature point.The results are presented in Fig. 3, which shows the temperature evolution for TIN, TOUT and TG, the temperature evolution for TE and the interior of the refrigerator, the evolution of relative humidity, and different calculations of the heat transfer rate.In Fig. 3a, it is clear that the system is sensitive to oscillations in exhaust temperature, but overall it is possible to achieve temperature control by diverting the exhaust gas flow.After an initial transient period, the average TG registered was 201 °C.Fig. 3b shows an initial sharp drop in TE: from close to 25 °C down to approximately 4 °C in the first 20 min of operation, and after 240 min, a minimum temperature of −13 °C was registered in the evaporator.The minimum temperature observed at the centre of the refrigeration chamber was 0 °C, while in the lower portion of the refrigeration chamber the minimum temperature was 4 °C after 240 min.Different plots showing the evolution of heat transfer rates are presented in Fig. 3d.
Heat transfer rates were calculated from the temperature differences between the internal and external surfaces, based on a previous model formulated for the present system .The measured values showed the best agreement with the average of the heat transfer rates, as indicated in Fig. 3d.The results reveal a tendency for stabilisation of both the temperatures in the refrigerator and the heat transfer rate.The evolution of the instantaneous cooling capacity and the instantaneous heat transfer rate from the energy source are presented in Fig. 4a, while the calculated COP itself is presented in Fig. 4b.Since the engine was operated at constant speed and torque throughout the test, the supplied heat remained virtually constant during the tests at approximately 650 W.The cooling capacity gradually increased until a final value of 32 W was observed after 240 min of operation.A notably low COP was obtained, with a maximum value of approximately 0.05 after 240 min of operation, as shown in Fig. 4b, probably because the system was originally designed to be operated with an LPG burner and is therefore not optimised for the present condition.Additionally, Srikhirin et al. report that a diffusion absorption system operating with an NH3·H2O·H2 working fluid has a COP in the range from 0.05 to 0.2.The COP results obtained here are therefore in accordance with those recorded in the literature.In the present work, an ammonia-water ARS was powered by the exhaust gas waste heat from an automotive ICE, and a control system was used in order to regulate the amount of heat transferred to the generator of the ARS.As a consequence, it was possible to assess the performance of the system for different generator temperatures.Effective refrigeration was only noticed for the set 200 °C reference point, and for this operational condition further long term tests allowed determination of the system's COP.The system was found to exhibit low performance.This is a likely consequence of the fact that the ARS was not designed for operation with heat supplied by automotive exhaust gases.
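The COP figure quoted above follows directly from an energy balance over the generator and evaporator. The sketch below is illustrative only: the exhaust-side mass flow and temperature drop are hypothetical values chosen to land near the reported ~650 W, and the specific heat is an assumed mean value for exhaust gas; only the 32 W cooling capacity is taken from the text.

```python
# Energy-balance sketch for the reported COP (not the authors' calculation).
cp_exhaust = 1060.0    # J/(kg K), assumed mean cp of exhaust gas
m_dot = 0.010          # kg/s, hypothetical exhaust mass flow through the generator
delta_T = 61.0         # K, hypothetical average T_IN - T_OUT across the generator

q_supplied = m_dot * cp_exhaust * delta_T     # W, heat diverted from the exhaust
q_cooling = 32.0                              # W, cooling capacity after 240 min (from the text)

print(f"Heat supplied to the generator ~ {q_supplied:.0f} W")
print(f"COP = Q_cooling / Q_supplied ~ {q_cooling / q_supplied:.3f}")
# -> approximately 0.05, consistent with the value reported here and with the
#    0.05-0.2 range cited for diffusion absorption systems.
```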
In the present work, an experimental investigation was performed to determine effective working parameters for an ammonia-water absorption refrigeration system powered by waste heat from the exhaust gases of a vehicular internal combustion engine. The automotive exhaust system was connected to the generator of a commercial absorption refrigerator originally intended to operate with the heat generated from an LPG burner. In an attempt to increase performance, a closed-loop exhaust gas flow control system was designed and implemented that allowed the generator temperature to be maintained at pre-determined values. Thus, a series of tests was performed with varying generator temperatures (180, 200, 240, and 270 °C) while monitoring engine torque, speed, and temperature at different points of the system. Using this methodology, it was found that the system is highly sensitive to the generator temperature, and satisfactory performance was only noted for the set value of 200 °C. Under this operating condition, after 240 min test runs, minimum temperatures of -12.5 and -0.6 °C were obtained, respectively, at the evaporator element and interior of the absorption refrigerator, while the maximum coefficient of performance (COP) registered was almost 0.05.
246
Data related to dislocation density-based constitutive modeling of the tensile behavior of lath martensitic press hardening steel
The strain hardening behavior during tensile deformation of a lath martensitic press hardening steel was described using a dislocation density-based constitutive model.The Kubin–Estrin model was used to describe strain hardening of the material from the evolution of coupled dislocation densities of mobile and immobile forest dislocations.Two models with different parameter values are presented, and the results include stress–strain curves and the evolution of mobile and forest dislocation density with strain, calculated by the models.The parameter values used for modeling are presented in a table.A cold-rolled 0.35 wt% C PHS was used .The tensile samples were austenitized and then quenched to room temperature in order to produce a fully martensitic microstructure.The specimens were tested in tension in an electromechanical universal testing machine using a strain rate of 10^-3 s^-1.The experimental true stress-strain curve of the as-quenched PHS is shown in Fig. 1.Here, σ0 represents the contributions from the Peierls stress and solid solution strengthening, M is the Taylor factor, G is the shear modulus, b is the magnitude of the Burgers vector and ρ is the total dislocation density.The equation for σ0 derived by Rodriguez and Gutierrez yielded 201 MPa considering the chemical composition of the investigated PHS.The present work did not consider solid solution hardening by carbon.Using the block size of 500 nm, the second term in the equation was estimated to be 424 MPa.The average initial forest dislocation density in the PHS was estimated to be 2.21×10^15 m^-2 by subtracting the sum of contributions from the first term, i.e. 201 MPa, and the packet size strengthening term, i.e. 424 MPa, from the experimental YS, 1354 MPa.The estimated dislocation density is in reasonable agreement with the measured dislocation density of a Fe-0.4 wt%C martensitic steel, i.e. 1.42×10^15 m^-2, reported by Morito et al. .In these equations, C1 specifies the magnitude of the dislocation generation term, with forest obstacles acting as pinning points for fixed dislocation sources.C2 takes into account the mobile dislocation density decrease by interactions between mobile dislocations.C3 describes the immobilization of mobile dislocations assuming a spatially organized forest structure.C4 is associated with dynamic recovery by rearrangement and annihilation of forest dislocations by climb or cross slip.C2 and C4 account for thermally activated mechanisms such as cross-slip and climb .The parameters C1, C2, C3 and C4 used in the present work are listed in Table 1.The parameters in the original Kubin–Estrin model were chosen based on typical FCC metals and alloys .In the present work, much higher values of C3 were used to describe the high initial work hardening of the PHS as compared to the value in the original Kubin–Estrin model.The numerical values of the parameters were G=81.6 GPa, b=0.248 nm and M=3.067.Two models were analyzed.In the first model, i.e. model 1, high values of C2 and C4 were used since BCC metals and alloys generally have higher cross-slip activity as compared to FCC metals and alloys.As shown in Fig. 1, the experimental flow stress is much higher than the flow stress calculated by model 1.In the second model, i.e. model 2, lower values of C2 and C4 were used in order to match the experimental and calculated flow stresses.However, neither model could describe the high initial work hardening rate shown in the experimental flow curve.
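For illustration, the back-calculation of the initial forest dislocation density described above can be reproduced with a short Python sketch based on a Taylor-type flow stress partitioning, sigma_YS = sigma_0 + sigma_block + M*alpha*G*b*sqrt(rho).This is not the authors' code; the Taylor constant alpha is not stated in the data article, so alpha = 0.25 is assumed here purely for illustration, and with that assumption the sketch returns the reported value of about 2.21×10^15 m^-2.

    # Values reported in the text above
    sigma_ys = 1354e6      # experimental yield stress, Pa
    sigma_0 = 201e6        # Peierls stress + solid solution term, Pa
    sigma_block = 424e6    # block/packet size strengthening term, Pa
    M = 3.067              # Taylor factor
    G = 81.6e9             # shear modulus, Pa
    b = 0.248e-9           # Burgers vector magnitude, m

    alpha = 0.25           # assumed Taylor constant (not given in the article)

    # sigma_ys = sigma_0 + sigma_block + M*alpha*G*b*sqrt(rho)  ->  solve for rho
    delta_sigma = sigma_ys - sigma_0 - sigma_block
    rho = (delta_sigma / (M * alpha * G * b)) ** 2
    print(f"Initial forest dislocation density: {rho:.2e} m^-2")  # ~2.2e15 m^-2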
The data presented in this article are related to the research article entitled “On the plasticity mechanisms of lath martensitic steel” (Jo et al., 2017) [1]. The strain hardening behavior during tensile deformation of a lath martensitic press hardening steel was described using a dislocation density-based constitutive model. The Kubin–Estrin model was used to describe strain hardening of the material from the evolution of coupled dislocation densities of mobile and immobile forest dislocations. The data presented provide insight into the complex deformation behavior of lath martensitic steel.
247
Data on evolution of intrinsically disordered regions of the human kinome and contribution of FAK1 IDRs to cytoskeletal remodeling
Data reported here are related to the article entitled “Structural Pliability Adjacent to the Kinase Domain Highlights Contribution of FAK1 IDRs to Cytoskeletal Remodeling” .Six figures and nine tables are presented in this article.The figures illustrate the function of IDRs in FAK1 and their effects on cytoskeletal remodeling.The tables provide the raw data utilized to build PPI networks.Evolutionary scores of IDRs, kinase domains, and whole kinases are also reported in tables.A single Microsoft Excel file is provided, with one table on each of the nine sheets.We predicted intrinsic disorder in the human kinome using Pondr-FIT software .PONDR-FIT is an artificial neural network-aided meta-predictor of disordered residues.PONDR-FIT combines the output of seven individual disorder predictors to increase the confidence of disorder prediction by an average of 11% as compared to the individual predictors .PONDR-FIT utilizes the following amino acid characteristics to predict disordered residues: amino acid composition, amino acid sequence complexity, amino acid position specific scoring matrices, hydrophobicity and net charge of the amino acid sequence, and pairwise interaction energy between amino acids of a given protein.We considered the residues with disorder scores of ≥0.5 to have structure breaking propensities, or, as we call them, intrinsically disordered residues, as previously described .A long disordered region with a stretch of at least 25 such amino acids constituted an IDR in our analysis.Previously reported disorder prediction of 504 kinases was used to calculate the fraction of total disordered amino acids in each kinase .An amino acid labeled with a disorder score of 0.5 or greater was considered as contributing to protein disorder.The total number of disordered amino acids was divided by the total number of amino acids present in a given kinase to calculate % DO in the kinome.KINOMErender , a visualization tool for overlaying annotations on a phylogenetic tree of protein kinases, annotated the kinome dendrogram with % DO for each of the 504 kinases.We excluded proteins without confirmed kinase domains in the UniProt database.The kinome dendrogram illustration was reproduced courtesy of Cell Signaling Technology, Inc.Intrinsic disorder prediction of FAK1 and its orthologs was performed using Pondr-FIT , IUPred-L , IUPred-S , VSL2 , VSL3 , VLXT , Espritz , PrDOS .The relative rates of evolution for the proteins and their domains were calculated as described by Kathiriya et al. .Core analysis of 36 kinases was performed using Ingenuity Pathway Analysis as described by Kathiriya et al. .A network of cellular migration was identified as a significantly enriched function of the 36 kinases.Disease and functional enrichment was performed as described by Kathiriya et al. .Experimentally validated protein-protein interaction data of the 36 kinases and of FAK1-interacting proteins were assembled using manual data curation and various software tools, as described previously .The PPI network was constructed and visualized using Cytoscape .Network analysis was performed to identify topologically significant hubs from the PPI networks using the Network Analyzer and CentiScaPe plug-in tools .Further, canonical signaling pathways associated with IDR-interacting proteins of the FAK1 interactome were enriched.
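The disorder criteria described above (per-residue disorder score >= 0.5, IDRs defined as runs of at least 25 consecutive disordered residues, and % DO as the disordered fraction of a protein) can be illustrated with the minimal Python sketch below.This is not the authors' code; the per-residue scores are assumed to already be available as a list, for example parsed from PONDR-FIT output, and the function names are hypothetical.

    from typing import List, Tuple

    DISORDER_THRESHOLD = 0.5   # residues with score >= 0.5 treated as disordered
    MIN_IDR_LENGTH = 25        # minimum run of disordered residues to call an IDR

    def percent_disorder(scores: List[float]) -> float:
        """Fraction of residues predicted disordered, expressed as % DO."""
        disordered = sum(1 for s in scores if s >= DISORDER_THRESHOLD)
        return 100.0 * disordered / len(scores)

    def find_idrs(scores: List[float]) -> List[Tuple[int, int]]:
        """Return (start, end) residue indices (0-based, inclusive) of IDRs."""
        idrs, start = [], None
        for i, s in enumerate(scores):
            if s >= DISORDER_THRESHOLD:
                if start is None:
                    start = i
            else:
                if start is not None and i - start >= MIN_IDR_LENGTH:
                    idrs.append((start, i - 1))
                start = None
        if start is not None and len(scores) - start >= MIN_IDR_LENGTH:
            idrs.append((start, len(scores) - 1))
        return idrs

    # Example with synthetic scores: 30 disordered residues flanked by ordered ones
    scores = [0.2] * 10 + [0.8] * 30 + [0.3] * 10
    print(percent_disorder(scores))   # 60.0
    print(find_idrs(scores))          # [(10, 39)]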
We present data on the evolution of intrinsically disordered regions (IDRs) taking into account the entire human protein kinome. The evolutionary data of the IDRs with respect to the kinase domains (KDs) and kinases as a whole protein (WP) are reported. Further, we have reported the post-translational modifications of FAK1 IDRs and their contribution to cytoskeletal remodeling. We also report the data used to build a protein-protein interaction (PPI) network of primary and secondary FAK1-interacting hybrid proteins. Detailed analysis of the data and its effects on FAK1-related functions has been described in “Structural pliability adjacent to the kinase domain highlights contribution of FAK1 IDRs to cytoskeletal remodeling” (Kathiriya et al., 2016) [1].
248
A proposed mechanism for material-induced heterotopic ossification
The skeleton provides mobility, support for the organs, and has a very important esthetic function.Therefore, a loss of skeletal integrity can have dramatic consequences, such as a reduction in life expectancy , and poor well-being following disfigurement .Various therapeutic approaches have been proposed for the treatment of large bone defects, but all present important drawbacks.Indeed, surgical techniques such as the Ilizarov technique , the Masquelet technique , or vascularized flaps are challenging for the patient and the surgeon, require several surgical steps spread over months, and may lead to complications .Bone morphogenetic proteins have generally been associated with positive clinical outcomes .However, several limitations have also been reported, including additional material costs and excessive BMP dose leading to potential inflammation/edema .In fact, the European Commission withdrew its approval of the “InductOs®” product in 2015 .Lastly, tissue engineering strategies including bone marrow extraction , platelet-rich plasma , and other regenerative biomaterials have all displayed limited regenerative potential.Therefore, the reconstruction of large bone defects remains a prominent clinical challenge requiring alternative approaches.In 1969, Winter and Simpson described the formation of bone in a polyhydroxyethylmethacrylate sponge implanted subcutaneously in pigs after 6 months .Since then, various metals , composites, and ceramics have demonstrated their ability to trigger bone formation in heterotopic sites, which researchers sometimes refer to as “intrinsic osteoinduction” .This should not be confused with “osteoinduction”, which refers to the “induction of undifferentiated inducible osteoprogenitor cells that are not yet committed to the osteogenic lineage to form osteoprogenitor cells” .In 2010, Yuan et al. showed that a BMP2 product, autograft, and a synthetic β-tricalcium phosphate (β-TCP) had equivalent potential for new bone formation in an orthotopic sheep model at 12 weeks.In 2017, a prospective study of lumbar interbody fusion rates in humans reported near-equivalence between a 95% β-TCP-5% hydroxyapatite product and a BMP2 product .Unfortunately, “intrinsic osteoinduction” may happen months or years after implantation and is considered unpredictable in various animal models.In addition, the mechanism of intrinsic osteoinduction remains unknown , which prevents any rational design of osteoinductive biomaterials more sophisticated and potent than those currently reported.The mechanism of intrinsic osteoinduction is often related to the release of calcium and phosphate ions.Ripamonti et al. speculated, for example, that calcium ion release plays a key role in angiogenesis and stem cell differentiation.For Habibovic et al.
, the release of calcium and phosphate ions must be followed by the precipitation of a “biological apatite layer”, which can then bind or adsorb osteogenic proteins .The aim of this study is to demonstrate that it is in fact not the local accumulation/release but the local consumption/depletion of calcium and phosphate ions through apatite formation that is at the origin of intrinsic osteoinduction.With this very simple conceptual change, which is supported by theoretical and experimental evidence, it is possible to explain a number of currently unexplained findings, such as why materials devoid of calcium phosphates like metals and polymers are sometimes osteoinductive, why intrinsic osteoinduction takes weeks to months to occur, and why ectopic bone formation happens first in the core of implanted materials .It could also provide an explanation for heterotopic ossifications.Prior to going into the details of the newly proposed mechanism, the most important observations that have been made in the past 50 years on intrinsic osteoinduction are recapitulated.The mechanism that is generally considered to explain intrinsic osteoinduction is then critically assessed.Lastly, the new proposed mechanism is explained and discussed with an attempt being made to relate material-induced and material-free heterotopic ossification.The observations made over the past 50 years in the field of intrinsic osteoinduction underline the importance of physical, chemical, and biological factors for this currently unexplained phenomenon:The formation of a biomimetic apatite layer on the material is a pre-requisite , but not a determinant for intrinsic osteoinduction .Intrinsic osteoinduction happens first on the surface of pores present in the core of a material, and then spread toward the periphery .This contrasts with osteoconduction, which starts first at the periphery and then spreads into the material .Intrinsic osteoinduction is more often seen in large animals than in small animals .The scaffold architecture plays a very important role for intrinsic osteoinduction.Bone is generally found in concavities rather than convexities, and an increase of microporosity positively affects osteoinduction .Intrinsic osteoinduction does not depend on the chemical composition because it has been observed in polymers , metals , calcium phosphate-polymer composites , and calcium phosphates.However, calcium phosphates are particularly prone to induce bone formation .Ingrowth of blood vessels into the material is a necessary but not sufficient condition for intrinsic osteoinduction.Intrinsic osteoinduction is a very slow process: bone formation may take a few weeks up to one year to occur .This is in contrast with the very rapid woven bone formation in bone defect healing .Even though both calcium and phosphate ions are considered to play a key role in intrinsic osteoinduction , a large number of studies point out to the importance of Ca ions and the Ca sensing receptor .Cartilage formation has been observed and suggested to occur during intrinsic osteoinduction , but it is generally admitted that intrinsic osteoinduction provokes an intramembranous ossification with the formation of woven bone and then lamellar bone.Macrophages and osteoclasts are considered to play an essential function in intrinsic osteoinduction.Intrinsic osteoinduction is a complex process involving physical, chemical, and biological factors.This was already recognized by Yamasaki in 1990 .In 1991, Ripamonti underlined the importance of surfaces, and 
speculated that “circulating or locally produced growth and inducing factors, or both,” adsorbed on the scaffold and then were released during “mesenchymal-tissue invasion” leading to “the differentiation of bone”.This concept has only slightly evolved over the years.It is generally assumed that growth factors are incorporated into the biomimetic apatite layer that forms, prior to bone formation, through a “continuous dissolution-precipitation” process .Differentiation of MSCs is then provoked by surface topography, or by the action of inflammatory cells resulting in the release of growth factors, calcium and phosphate ions .Additionally, it has been speculated that Ca and phosphate ions can better accumulate in the material cores, in materials with higher surface area, and in concave versus convex pores .Unfortunately, this mechanism does not explain a number of experimental findings, such as why materials devoid of calcium phosphates like metals and polymers are sometimes osteoinductive, why intrinsic osteoinduction takes weeks to months to occur, and why ectopic bone formation happens first in the core of implanted materials .Also, the involvement of endogenous growth factors put forward by Ripamonti is questioned by the fact that ossification in intrinsic osteoinduction does not proceed endochondrally, as often observed with growth factors, but intramembranously .In fact, there is a general agreement that the mechanism of intrinsic osteoinduction is unknown or at best unclear .One popular mechanism used to explain intrinsic osteoinduction is based on the assumption that there is at some point during implantation a release of calcium and phosphate ions, thus leading to supra-physiological calcium and phosphate concentrations.These supra-physiological concentrations are assumed to drive stem cells into the osteogenic lineage.The opposite seems to be much more likely.This statement is based not only on in vitro and in vivo data, but also on thermodynamic considerations.Indeed, calcium and phosphate concentrations decrease in cell culture media in contact with osteoinductive materials due to apatite precipitation .In vivo, the formation of a biomimetic layer, which by definition consumes calcium and phosphate ions, is a pre-requisite for intrinsic osteoinduction .Thermodynamically, physiological fluids are supersaturated toward hydroxyapatite at pH 7.4 .According to Bohner and Lemaître , the supersaturation of serum is in the range of 10^1.4, which means in a very crude approximation that 96% of all calcium and phosphate ions (i.e. roughly 1 − 10^-1.4 ≈ 0.96) could precipitate to reach the chemical equilibrium between hydroxyapatite and serum.In healthy human beings, soft tissue mineralization does not occur due to the presence of nucleation and growth inhibitors such as proteins, citrate or Mg ions.However, as stated by Posner , this metastable state can be disturbed by a local increase of supersaturation, by the neutralization of bone mineral inhibitors, or by providing substances which create nucleation sites or remove barriers to these sites.The implantation of a bone substitute falls in the latter category because it may act as a nucleation site for the precipitation of apatite crystals, thus triggering mineralization via so-called heterogeneous precipitation.This is the “bioactivity” concept introduced by Kokubo .This concept is illustrated by the observation that the in vivo weight of a sintered hydroxyapatite block continuously increases during implantation into soft tissue .Interestingly, several
studies show that an increase of in vitro bioactivity leads to an increase in intrinsic osteoinduction .Osteoclast-mediated resorption is often invoked to explain the release and thus accumulation of calcium and phosphate ions in the core of osteoinductive materials.However, a number of studies have reported the absence of resorption of scaffolds prior to bone formation .Also, cell culture studies of MSCs have shown that differentiation of MSCs into the osteogenic lineage may occur in the absence of osteoclasts .Additionally, there is contradictory evidence about the role of osteoclast stimulation/inhibition in osteoinduction .Furthermore, woven bone, the first bone seen during intrinsic osteoinduction, is known to start forming before osteoclastic resorption .Finally, β-TCP has a lower osteoinductive potential than hydroxyapatite and biphasic calcium phosphate, even though it is more resorbable .Similarly, α-tricalcium phosphate, which is more soluble than β-TCP and prone to form hydroxyapatite in vivo, is not osteoinductive .To summarize, it appears likely that intrinsic osteoinduction is not triggered by osteoclasts.However, osteoclasts may still play an important role in the events leading to intrinsic osteoinduction, as discussed hereafter.Based on the above, we propose that intrinsic osteoinduction is not caused by an accumulation of, but by a reduction in, the local calcium and/or phosphate ion concentration.This is possible if the local consumption of calcium and phosphate ions due to the precipitation of carbonated apatite is larger than the supply of these ions by diffusion and convection processes.The statement about the local depletion in calcium and phosphate concentration is supported by both direct and indirect evidence.We therefore propose the following paradigm for intrinsic osteoinduction: a material is osteoinductive if (C1) it mineralizes in vivo ; (C2) it is porous; (C3) the pores are large enough to allow blood vessel ingrowth and cell transport into the core of the material (the minimum pore interconnection size can be inferred to be well below 50 μm ); and (C4) blood supply is insufficient to maintain physiological calcium and/or phosphate ion concentrations.The proposed mechanism is not markedly different from the concept of “biological apatite formation” proposed by Habibovic et al.
except that it excludes the concept of “dissolution” and “calcium release” prior to apatite precipitation.It does neither include, nor exclude the involvement of growth factors or inflammation .With this mechanism, it becomes possible to understand the observations “a” to “g” mentioned herein:The formation of a biomimetic apatite layer on the material is a pre-requisite , but not a determinant for intrinsic osteoinduction .Indeed, not all bioactive materials fulfill condition C4.This is in particular the case at the surface of bioactive materials.Intrinsic osteoinduction happens first inside the pores of a material, and then spreads to the periphery .As explained above, condition C4 is more easily met in the core of porous implants.Intrinsic osteoinduction is more often seen in large animals .It is well known that small animals have a faster metabolism than large animals and therefore a higher blood supply per volume .As such, it is more difficult for small animals to fulfill condition C4.This effect is reinforced when the implanted material volume is reduced in small animals.Condition C4 implies that the scaffold architecture must play a very important role for intrinsic osteoinduction.Indeed, condition C4 is more likely to be met in concavities rather than in convexities, and when the material surface is enlarged, for example with an increase of microporosity .An increase of macropore size or porosity is also detrimental to intrinsic osteoinduction.All materials that trigger the formation of a biomimetic apatite layer can be osteoinductive, thus explaining why polymers , metals , calcium phosphate-polymer composites , and calcium phosphates have all been observed to possess intrinsic osteoinduction.Since apatite compounds like HA and BCP already contain hydroxyapatite crystals, they are particularly prone to fulfill condition C1 and hence trigger intrinsic osteoinduction .Ingrowth of blood vessels into the material is a necessary but not sufficient condition for intrinsic osteoinduction because not all vascularized tissues are remote enough to fulfill condition C4.Intrinsic osteoinduction is a very slow process: bone formation may take a few weeks up to one year to occur .This is related to the time it takes for the biomimetic apatite layer to form and for blood vessels to grow into the scaffold and to bring the cells that will eventually trigger the osteoinductive response.Apatite layer formation may easily take weeks to months to occur.For example, the ISO 23317:2014 standard testing the “in vitro evaluation for apatite-forming ability of implant materials” recommends to perform the test during 4 weeks.Even though the bioactive layer would form spontaneously, it is likely that intrinsic osteoinduction would only start after a few weeks because blood vessels ingrowth is in the order of a few hundred micrometers per day.For example, Nomi et al. showed that blood vessels may take 1–2 weeks to penetrate a 3-mm-thick scaffold.Osteoblasts and woven bone have similar motility/ingrowth rates.We would like to point out a few interesting studies and explain how the observations made in these studies can be explained with the proposed mechanism.First, Fukuda et al. 
observed that elongated pores promoted more new bone formation and closer to the surface when the diameter was reduced.Since thinner cylindrical pores have a higher surface-to-volume ratio, more apatite should precipitate per volume, thus leading to higher ionic gradients, and using our mechanism, to earlier and closer to the pore orifice bone formation.Considering also that bone formation spans over a period of 2–3 months , there is more bone in smaller pores and the maximum bone fraction is closer to the pore orifice.In another study, Wang et al. observed that a decrease in granule size promoted osteoinduction.In fact, they showed that smaller granule agglomerates had a smaller permeability, which is a good pre-requisite for the fulfillment of condition C4.They also showed that the smallest granules had no blood vessel ingrowth and accordingly no osteoinductive response.Shifting from physical to chemical factors, Tang et al. stated that the propensity for calcium phosphates to trigger intrinsic osteoinduction is in the following order: BCP > β-TCP > HA ≫ α-TCP.α-TCP is soluble in physiological conditions, which means that it continuously releases calcium and phosphate ions.As such, the poor osteoinduction of α-TCP cannot be explained by the currently most accepted osteoinduction mechanism, i.e. the accumulation of Ca and/or phosphate ions, but can be easily understood under the newly proposed mechanism.Interestingly, α-TCP is “bioactive”, i.e. it readily transform into calcium-deficient hydroxyapatite.The reaction occurs at the external surface of α-TCP and is controlled by the diffusion of ions through the growing CDHA layer .Contradictory results have been published on the intrinsic osteoinduction of β-TCP.It is sometimes considered to have limited or no osteoinduction , and in other cases be very osteoinductive .This may be related to the inconsistent bioactivity of β-TCP, which is a pre-requisite for osteoinduction.TEM studies of implanted β-TCP scaffolds did not reveal any formation of a bioactive layer prior to bone bonding .However, it is known that β-TCP bioactivity can be triggered by autoclaving , and a number of heterotopic implantation studies revealing the intrinsic osteoinduction of β-TCP have used autoclaved β-TCP samples .Therefore, we speculate that β-TCP osteoinduction varies according to its surface properties.A number of cells have been considered to play an important role in intrinsic osteoinduction, such as stem cells , macrophages , osteoclasts , and pericytes .The material-driven osteoinduction mechanism proposed herein cannot elucidate the biological mechanisms leading to intrinsic osteoinduction, but it underlines the potential role of calcium, phosphate, carbonate, hydronium ions, or a combination thereof).Since a high local calcium concentration provoked by the action of osteoclasts is generally considered to trigger osteoinduction , two studies looked at the importance of osteoclasts and the Calcium-Sensing Receptor on osteoinduction.They showed that blocking CaSR does indeed inhibit the osteoinductive response .This suggests that CaSR and accordingly the calcium concentration is involved in intrinsic osteoinduction.Nevertheless, the Ca dose-dependency of this effect has not been elucidated yet.According to Baird and Kang , “heterotopic ossification is defined as the process by which trabecular bone forms outside of the skeletal structure, occupying space in soft tissue where it does not normally exist”.From that respect, it is similar to intrinsic 
osteoinduction, with the exception that it occurs in the absence of any implanted material.Prevalence may reach 50–90%, but the condition remains mostly asymptomatic .Three causes have been identified, namely neurological, genetic, and traumatic .Interestingly, there are many similarities between intrinsic osteoinduction and trauma-induced heterotopic ossification: the mechanism of heterotopic ossification is unclear ; it appears weeks to months after the traumatic or neurological event ; it starts with edema/swelling (it is interesting to point out that Hung associated edema formation with hypocalcaemia); precipitation of hydroxyapatite crystals occurs prior to ossification ; size matters: the greater the trauma, the more likely it is that heterotopic ossification will develop .It is therefore tempting to speculate that the same mechanism proposed here to explain intrinsic osteoinduction might also be involved in heterotopic ossification.Conceptually, the first step of heterotopic ossification would be the precipitation of apatite crystals in soft tissue .This could be provoked by the release of matrix vesicles resulting from cell apoptosis and/or the high calcium concentration prevailing at an injury site .These crystals would then grow and retrieve calcium and phosphate ions from the local environment.The initial mineralization is likely to occur within a few days after injury , but since apatite crystals are nano-sized, no signals would be observed by CT and MRI .Provided blood supply is insufficient to maintain physiological calcium and/or phosphate ion concentrations, the low calcium and/or phosphate ion concentrations would then trigger a biological response eventually leading to heterotopic ossification.The difference in ossification pathways between intrinsic osteoinduction and heterotopic ossification could be related to the presence/absence of a hard surface and/or differences in mechanical stability.Indeed, the mechanism of bone repair switches from endochondral to intramembranous ossification when fractures are treated by osteosynthesis rather than casts.Experts in the field agree that chemical, physical, and biological aspects are involved in intrinsic osteoinduction.The physical and chemical aspects have been discussed herein.In this last section, we would like to address the biological aspects.During the process of intrinsic osteoinduction, stem cells are differentiated into osteoblasts.This simple statement hides many unanswered questions related to the origin of the stem cells and the signal triggering the differentiation of stem cells into the osteoprogenitor lineage .Regarding the first question, Song et al.
presented evidence that “stem cells can migrate from bone marrow through blood circulation to non-osseous bioceramic implant site to contribute to ectopic bone formation in a canine model”.However, this does not exclude other stem cell origins, for example pericytic or endothelial .For Ripamonti , these stem cells would then be driven into the osteoprogenitor lineage by their interactions with growth factors present on the surface of the osteoinductive bone substitutes.The growth factors could be either adsorbed on the surface, secreted by local inflammatory cells, or integrated into the apatite layer formed on the material prior to the osteoinductive response .A critical parameter would be the amount of proteins present on the surface .However, this explanation is questioned in the literature because the mechanism of bone formation is generally different in intrinsic osteoinduction compared to BMP-related osteoinduction.Calcium ions have also been discussed as chemotactic agent for bone marrow progenitor cells or pre-osteoblasts .Currently, most efforts to understand stem cells differentiation in the context of intrinsic osteoinduction are dedicated to the role of the immune system, as described in more details hereafter.In order to fully appreciate the complexity of cell interactions during biomaterial integration, it is important to note that immune cells are the first cell-type in contact with implanted biomaterials .For many years, basic research focused on the ability of bone-forming osteoblasts to differentiate on various bone biomaterial surfaces.More recently however, it has been markedly determined that in fact macrophages and immune cells play a pivotal and substantial role demonstrating key functions during bone formation and remodeling .Nevertheless, despite these essential findings convincingly showing the key role of macrophages during biomaterial integration , little information is available concerning their response to biomaterials with the majority of investigation primarily focused on their role during foreign body reactions.Today it is known that macrophages demonstrate extremely plastic phenotypes with the ability to differentiate toward classical pro-inflammatory M1 or tissue regenerative M2 macrophages.Macrophages play an important function during bone formation , play a key role during heterotopic ossification in various injury-related disorders , are highly implicated during calcification of arterial tissues , and associated with the heterotopic ossification of implanted biomaterials into soft tissues .We therefore hypothesize that in each of the above-mentioned scenarios, changes in physiological calcium and/or phosphate levels within the local micro-environment may be a driving factor associated with bone-induction.It is now understood that macrophages are the major effector cell during biomaterial integration where they are indispensable for osteogenesis.Various knockout models have demonstrated that a loss of macrophages around inductive BCPs entirely abolishes their ability to form ectopic bone formation, thus confirming the potent role of immune cell modulation during osteogenesis .Ongoing research has further shown that macrophage deletion between days 0 and 3 following biomaterial implantation completely attenuates the ability for BCP grafts to induce ectopic bone formation, yet their later deletion has no effect on the graft’s osteoinductive potential.It is also known that the fate of biomaterials is determined rapidly during the integration process.While 
the behavior of macrophages in calcium-rich and calcium-poor environments and their ability to polarize under various physiological conditions remain completely unstudied, future research aimed at better characterizing their function around biomaterials causing Ca/PO4 depletion/accumulation may be key toward our understanding of osteoinductive events.Prior to these discoveries, however, complex studies from basic research have revealed the dynamic interactions between bone tissues and the immune system .Over a decade has passed since it was revealed that macrophages are the main effector cell responsible for dictating bone formation .Although early bone fracture healing experiments were characterized by infiltration of inflammatory cells, most preliminary research focused primarily on their secretion of various cytokines and growth factors important for the inflammatory process, including cell recruitment and neovascularization .Although macrophages in general were implicated as contributors to inflammation, a series of experiments later revealed their essential roles in bone repair.This is best exemplified in a study by Chang et al. that showed that by simply removing macrophages from primary osteoblast cultures, a 23-fold reduction in mineral deposition was observed .Interestingly, in vivo depletion of OsteoMacs by various knockout systems has also been shown to markedly reduce bone formation .Initial basic research studies classified macrophages into 2 specific cell types, classical M1 pro-inflammatory macrophages and M2 tissue resolution/wound healing macrophages.In response to classical pro-inflammatory stimuli such as lipopolysaccharides, M1 macrophages secrete a wide array of pro-inflammatory cytokines including TNF-alpha , IL-6 and IL-1β .M2 macrophages typically produce factors including transforming growth factor β , osteopontin , 1,25-dihydroxy-vitamin D3 , BMP-2 and arginase, all of which are implicated in tissue-repair processes .The plasticity of macrophages suggests that their trophic role in bone tissues is highly regulated by changes to the microenvironment.While their study remains in its infancy with respect to their ability to contribute toward biomaterial integration, it remains logical to assume that under various non-physiological conditions such as low/high calcium/phosphate levels, they would be keen regulators able to respond accordingly.Future studies are therefore urgently needed to better understand the role of macrophages under the above-mentioned scenarios.One interesting yet rarely reported phenomenon in the bone biomaterial field is the implication of macrophages in the heterotopic ossification of various pathologies.For instance, macrophages have been highly implicated in the development of atherosclerosis.Atherosclerotic plaque contains high levels of IFN-gamma, a T-helper 1 cytokine that is a known inducer of the classically activated M1 macrophage.Interestingly, resident macrophages found in arteries are known to contribute to ectopic bone formation in and around vascular tissues, an area where bone should otherwise not form .Similarly, heterotopic ossification is a common complication of the high-energy extremity trauma sustained by victims of armed conflict returning from war, where some reports demonstrate rates as high as 60% in cases of military combat trauma and limb amputation .Pre-clinical animal models have further demonstrated that HO is precipitated in burn victims, implicating the role of inflammation in both processes .In combination with these findings, it has also been shown that
macrophage depletion reduces osteophyte formation in osteoarthritic models and that macrophages are key players in various other bone loss disorders .The combination of these findings strongly suggests the implication of macrophages in the heterotopic induction of bone.One interesting finding coming from the atherosclerosis field was that it was originally thought that all macrophages involved in atherosclerotic plaque were of the classical M1 phenotype.However, in 2012 Oh et al. demonstrated that it was instead M2 macrophages that were primarily activated by endoplasmic reticulum stress .This disease highlights the extreme plasticity of these cell types and their ability to sense changes to the local micro-environment.While the entire mechanism driving heterotopic ossification is not fully understood, researchers now attempt to quantify inflammatory cytokines implicated in the pathogenesis of HO via marker detection in either blood or urine as a potentially useful diagnostic tool for the early detection of pathological HO .In summary, a material is osteoinductive if: (C1) it mineralizes in vivo; (C2) it is porous; (C3) the pores are large enough to allow blood vessel ingrowth and cell transport into the core of the material; and (C4) blood supply is insufficient to maintain physiological calcium and/or phosphate ion concentrations.This paradigm shift describing the osteoinductive phenomenon is not only able to address past results for the first time, but in particular can elucidate a number of unexplained results, such as why materials devoid of calcium phosphates like metals and polymers are sometimes osteoinductive, why intrinsic osteoinduction is so slow, and why ectopic bone formation happens first in the core of implanted materials.Even though the biological mechanism involved in intrinsic osteoinduction remains unclear, this new paradigm will favor the design of much more potent biomaterials by following these proposed guidelines.Interestingly, the similarities between intrinsic osteoinduction and trauma-related heterotopic ossification suggest that the present paradigm could also be involved in trauma-related heterotopic ossification.
Repairing large bone defects caused by severe trauma or tumor resection remains one of the major challenges in orthopedics and maxillofacial surgery. A promising therapeutic approach is the use of osteoinductive materials, i.e. materials able to drive mesenchymal stem cells into the osteogenic lineage. Even though the mechanism of this so-called intrinsic osteoinduction or material-induced heterotopic ossification has been studied for decades, the process behind it remains unknown, thus preventing any design of highly potent osteoinductive materials. We propose and demonstrate for the first time that intrinsic osteoinduction is the result of calcium and/or phosphate depletion, thus explaining why not only the material (surface) composition but also the material volume and architecture (e.g. porosity, pore size) play a decisive role in this process.
249
Lifetime instabilities in gallium doped monocrystalline PERC silicon solar cells
The majority of today's solar cells are made from p-type silicon wafers with boron as the electrically active dopant.The excess charge carrier lifetime in silicon can decrease under illumination, leading to reduced solar cell efficiencies, and this light-induced degradation (LID) can occur in several ways.An important form of LID occurs in boron doped silicon, in which recombination centres form in a way related to the boron and grown-in oxygen levels.This degradation mechanism has been studied for almost 50 years and the large body of literature in this area has been recently reviewed .Another effect occurs in thermally processed wafers and is referred to as light and elevated temperature induced degradation (LeTID).LeTID has been observed in multicrystalline silicon , float-zone silicon and Czochralski silicon .It involves an initial lifetime degradation which typically recovers over time, with degradation and recovery rates depending on thermal history.The physical origin of LeTID is unclear, but similarities across the range of material types suggest a common mechanism .Other forms of LID occur in silicon which is contaminated with metals, such as copper .Workarounds for boron-oxygen related LID exist .Today's commercial boron doped silicon solar cells are exposed to a so-called regeneration process by annealing them at the end of their fabrication at high temperature combined with either high illumination intensity or high current density in the dark.From the experience of the industrial co-author of this paper, the majority of large manufacturers in China apply the second annealing method and often warrant a degradation rate smaller than 0.5% relative per year over 30 years in their modules.This means that an initially 20% efficient PERC module degrades to no lower than 17% absolute efficiency in 30 years.After the warranty runs out and the module is depreciated, many modules are expected to deliver power for another 10–20 years with one further replacement of inverters, which makes the after-warranty phase economically viable.If the degradation rate stays the same, such modules will deliver more than 16% absolute or 15.5% absolute after another 10 or 20 years, respectively.The lower range of warranties given by various manufacturers is, in the example of the 20% PERC module, 16% absolute after 25 years.For these reasons, better stabilisation of efficiency has very positive economic and environmental consequences.An alternative route to stabilise efficiency could be provided by doping with a different Group III element, with the aim of improved lifetime stability.Aluminium doping is not likely to be viable due to the strong recombination activity of aluminium-oxygen complexes .Indium doped silicon has been used to make passivated emitter and rear cell devices which are reported to be stable under illumination .Unfortunately, indium's acceptor level is moderately deep relative to the valence band edge, and this means that at room temperature it is not fully ionized .The un-ionized indium acts as a recombination centre and the variation in effective doping level with temperature may be problematic in cell optimisation.Gallium is the most promising of the alternative Group III dopants, and has been demonstrated to be viable from an industrial perspective .Lifetimes in gallium doped monocrystalline silicon wafers are reportedly stable under low-temperature illumination, regardless of ingot position and oxygen levels .Gallium doped passivated emitter and aluminium back surface field monocrystalline cells are
reported to give stable efficiencies which are similar to initial efficiencies of cells processed under the same conditions made from boron doped substrates.Although early work reported lifetime in gallium doped mc-Si was stable under illumination , recent work has shown that fired mc-Si PERC devices degrade but to a lesser extent than boron doped cells .Fired gallium doped mc-Si lifetime samples are also found to experience LeTID-like behaviour, with the degradation being slower than in the boron case .Compared to boron doped silicon, there are relatively few published fundamental studies of what determines the lifetime in gallium doped silicon, but the formation and dissociation of FeGa pairs is known to be an important issue where Fe is present .Copper contaminants also induce LID in gallium doped wafers, but to a lesser extent than in boron doped wafers .To the best of our knowledge, no studies have been published which determine whether monocrystalline gallium doped wafers and cells change with low-temperature processing.This paper provides the results of experiments into the bulk lifetime behaviour of gallium doped monocrystalline silicon wafers and completed PERC devices made from gallium doped monocrystalline substrates.We first briefly report results for as-grown wafer material, showing lifetimes can be influenced by low-temperature annealing and illumination due to dissociation of metastable defects.With the knowledge of how to control the influence of metastable defects, we then study commercially processed PERC devices, and compare our results to PERC devices produced with the same fabrication process using boron doped substrates with the addition of a final stabilisation step.Using a proxy non-contact method based upon photoluminescence imaging to characterise the cell properties, we find that Ga PERC devices not annealed after fabrication are stable within 5%, but if annealed at 200 °C–300 °C they exhibit a noticeable level of degradation.Ga PERC devices without stabilisation show considerably better stability than the destabilised B PERC devices.Finally, we then strip processed cells at various stages of degradation to link the cell level changes to changes in bulk lifetime in the substrate.Experiments were performed on commercial 15.6 cm × 15.6 cm PERC solar cells fabricated from either Ga and B doped-orientation Czochralski silicon substrates, with an identical fabrication procedure on the same fabrication line.The cells were taken from a standard manufacturing line, processed in a standard way with no unusual processing steps.All cells were therefore fired after screen printing.The wafer resistivity and thickness range were 1.2–1.5 Ωcm and 140–150 μm, respectively.The B PERC devices were stabilised with the dark-current procedure described in the introduction, where the temperature is not measured nor controlled very precisely, but the procedure has been optimized with extensive field tests.In contrary, the Ga PERC cells were not exposed to any after-treatment.Experiments were also performed on ‘as-received’ 190 μm thick 156 mm diameter 1.7 Ωcm Ga doped-orientation Cz-Si wafers, which were similar to those used in the Ga PERC devices.For control purposes, 360 μm thick 2 Ωcm phosphorus doped n-type float-zone silicon samples were used to monitor surface passivation stability under illumination, and n-type material was chosen as it is less susceptible to bulk lifetime instabilities due to metastable metal-acceptor pairs and boron-oxygen LID often found in p-type silicon.In 
this study, it was sometimes necessary to strip the PERC devices back to their underlying substrate and to passivate the surfaces using either a room temperature passivation treatment or aluminium oxide deposited by atomic layer deposition.Temporary passivation schemes can enable the lifetime to be measured without thermal processing which can otherwise modify the material under investigation, so where this is a requirement we use superacid-derived passivation .For the as-received Ga doped samples, superacid-derived passivation was performed using bis(trifluoromethane)sulfonimide (TFSI) from Sigma-Aldrich in anhydrous hexane using a procedure described in detail in Ref. .For the stripped PERC devices, a slightly modified superacid-derived passivation method was required due to the remaining effects of the metallisation on the surface, as follows:Dip in 1% HF for 1 min to remove the native oxide.Immersion in a silver etch solution consisting of NH4OH and H2O2 in the ratio 1:1 for 10 min at ~75 °C.Immersion in ~50% HF for 5 min.Standard clean 1 consisting of de-ionized H2O, NH4OH, H2O2 in the ratio 5:1:1 for 10 min at ~75 °C.Dip in 1% HF for 1 min.Aqua regia metal etch consisting of HCl and HNO3 mixed in the ratio 3:1.This was left to react for 15 min before adding the samples, which were etched for 15 min.Dip in 1% HF for 1 min, followed by SC 1, followed by a dip in 1% HF.Standard clean 2 consisting of DI H2O, HCl, H2O2 in the ratio 5:1:1 for 10 min at ~75 °C.Dip in 1% HF for 1 min.Etch in 25% tetramethylammonium hydroxide for 30 min at ~80 °C.Dip in 1% HF for 1 min, followed by SC 2 and a DI H2O rinse.Soak in a mixture of 1% HF and 1% HCl diluted with DI H2O for 10 min, then pull dry the samples from the solution ready for superacid treatment.Dip in TFSI-hexane mixture for ~60 s in a glovebox with an ambient relative humidity of ~25%.ALD Al2O3 surface passivation was used when longer term stability was required.For this, the surface preparation procedure involved a dip in HF, immersion in a silver etch solution for 10 min (as above), immersion in ~50% HF for 5 min, an SC1 clean for 5 min, a dip in HF, a TMAH etch at 80 °C for 10 min, a dip in HF, an SC2 clean for 10 min, and a final HF dip.Samples were pulled dry from the final HF dip and were immediately transferred to a Veeco Fiji G2 ALD system where they were rapidly held under vacuum to prevent surface oxidation.Al2O3 was deposited at 200 °C using a plasma O2 source and a trimethylaluminium precursor for 300 cycles to give films ~30 nm thick.The samples were then turned over and the same deposition conditions were used to deposit Al2O3 on the other surface.To activate the passivation, a post-deposition anneal in air was performed in a clean tube furnace at either 420 or 460 °C for 30 min.Annealing other than for passivation activation was performed in one of two ways.As-received wafer samples were first subjected to a cleaning procedure, which included a rinse in DI water, a 2% HF dip, an SC 2 clean, a DI water rinse, followed by another 2% HF dip.They were then annealed in a clean tube furnace in air at 200–500 °C, followed by cleaning as part of the surface passivation procedure.PERC samples could not be cleaned due to the metallisation and were annealed in a standard box furnace in air at 200–300 °C.Effective lifetime measurements of passivated samples were made by transient and generalised photoconductance methods using a Sinton WCT-120 lifetime tester which uses a Quantum Qflash X5d-R flash lamp with an infrared pass filter.This was calibrated using a
recently introduced method .To monitor the performance of processed cells after thermal and illumination treatments, a proxy method was developed using PL imaging.This was performed in a BT Imaging LIS‐L1 system in which a 630 nm light emitting diode array is used to illuminate the samples.As illustrated in Fig. 1, the method involved selecting an approximate 2 cm × 2 cm region of interest away from the edges of a 5 cm × 5 cm sample and recording the total number of PL counts after a 1 Sun exposure for 0.1–0.5 s. Care was taken to monitor the exact same region after each illumination step.The ROI was always scratch free and a significant region around the ROI was also ensured to be scratch free as lateral conduction could affect the PL signal.The advantage of this method is that it is contactless and so the sample does not get damaged as a consequence of a large number of measurements, which would be much more difficult to achieve with a contacted cell measurement considering today's fine-line screen printing of the 12 narrow busbars.A disadvantage is that the method provides only a relative measurement of performance of a given device sample, with comparisons between devices difficult because of differences caused by spatial variation in PL counts of the original 156 mm × 156 mm PERC device and varying levels of shading within the ROIs due to the metallisation.Illumination experiments on as-received wafer samples were performed using the PL imaging system with a 1 Sun exposure applied for 10 s at room temperature.Light soaking experiments on PERC samples were conducted by placing samples on hotplates to maintain the sample temperature at 75 °C when illuminated with a halogen lamp.One Sun equivalent illumination was achieved by adjusting the lamp height until a power density of ~1000 W m^-2 was measured using an Amprobe Solar-100 meter.We first report lifetime results for samples extracted from as-received gallium doped Cz-Si wafers.These results, which are shown in Fig. 2, include the effect of low-temperature annealing on lifetime, and the stability data are important in establishing the consistency of the measurements used at the cell level later in the paper.Fig. 2 shows the impact of low temperature annealing on the effective lifetime at an excess carrier density, Δn, of 10^15 cm^-3.For these experiments, room temperature superacid-derived surface passivation was used, as other surface passivation processes such as ALD Al2O3 require elevated temperatures which will change the bulk lifetime under investigation as discussed in Ref. .The surface recombination velocity for superacid-derived passivation is higher than for our ALD Al2O3 passivation, and is typically around 1 cm/s .The effective lifetimes in Fig. 2 are lower than they would be with our Al2O3 passivation, but it remains possible to distinguish changes in the bulk lifetime which occur only due to thermal effects.All lifetime measurements for Fig. 2 were made without intense illumination, with sufficient time left after any annealing for any metastable recombination-active defects to have returned to their equilibrium state.The squares in Fig. 2 denote lifetimes measured prior to annealing.The average as-received effective lifetime is 439 μs, with the lowest measurement being 423 μs and the highest 460 μs.Differences in lifetime relative to the mean are within ±5% and so can be assumed to be the same given the typical reproducibility of lifetime measurements and the surface passivation scheme used .The circles in Fig.
2 denote lifetimes after low-temperature annealing, and in all cases the lifetime has increased as a result of the thermal processing.A transition in lifetime occurs between 300 °C and about 380 °C, and this may be due to the annealing out of recombination-active defects, with the improvement being less pronounced at 500 °C.Lifetime instabilities at low processing temperatures have been found to occur in a range of silicon material types.For example, lifetime increases in n-type Cz-Si between 300 and 400 °C have been linked to the annealing out of vacancy-oxygen related defects .Studies on p-type and n-type float-zone silicon have also shown an increase in lifetime with annealing up to 320 °C and a decrease in lifetime at 450 °C .Thus, whilst lifetime changes occur in gallium doped Cz-Si at low temperatures, there is no evidence that this is due to the gallium and it seems more likely that it is related to reconfiguration of grown-in crystal defects as happens in materials grown with other dopants.Much stronger transitions around 350 °C have been specifically associated with gallium in electron irradiated Cz-Si , but the likely vacancy concentration in that material is much higher than in ours and in other materials used in the silicon photovoltaics industry, and so the effects are most likely different.Fig. 2 shows room temperature injection-dependent effective lifetime data for an ALD Al2O3 passivated Ga doped sample.The data are for a sample which had been pre-annealed at 400 °C to maximise lifetime based on Fig. 2 results, but the subsequent Al2O3 activation anneal at 460 °C probably overwrites this anyway.Importantly, the effective lifetime is strongly affected by illumination, with the peak value increasing from 1100 μs to 1930 μs when 1 Sun is applied for 10 s.The implied maximum power point and implied open circuit condition are indicated on Fig. 2 as if this wafer were a finished PERC device.These values were determined using Trina Solar's detailed numerical model for the cell type investigated in this paper.The annealing would only increase Voc but not Vmpp, hence it would lower the fill factor and leave cell efficiency unaffected.Fig. 2 shows the kinetics of the lifetime change at Δn = 10^15 cm^-3.Illumination increases the lifetime substantially and the lifetime decays back down to the pre-illuminated value over a period of a few hours.The effect can be cycled.The lifetime changes which occur in Ga doped silicon upon illumination such as those in Fig. 2 are reasonably well understood in terms of the un-pairing and re-pairing of the FeGa pair .Once dissociated by the illumination, the pairs re-form via room temperature diffusion of interstitial iron.The “after illumination” lifetime curve in Fig. 2 is therefore at least partly determined by recombination at dissociated Fei+ and Ga-, and the “before illumination” curve is partly determined by recombination at the FeGa pair.The characteristic cross-over in the lifetime curves for this situation is expected at around 3 × 10^13 cm^-3 at room temperature .Whilst this relatively low injection level was not reliably achieved with our measurement set-up with the high lifetimes after illumination, extrapolation of the curves from higher injection shown by the dotted line on Fig. 2 is consistent with this cross-over level.The large range of possible lifetimes measured in the same sample in Fig.
2 and highlights the need for a consistent methodology when studying Ga PERC devices.In the remainder of this paper, unless noted otherwise, the measurements are therefore reported after illumination with the FeGa pairs dissociated.Fig. 3 plots the normalised PL intensity from our proxy method for B PERC and Ga PERC samples versus light soaking time at 1 Sun intensity and 75 °C.Note the vertical axes necessarily have different scales, as the magnitude of the effects observed in the B PERC is considerably larger than in the Ga PERC case.The four samples used for each dopant were cleaved from a single PERC device.To establish the impact of dark annealing on the LID characteristics, each cell sample was processed differently with one subjected to no anneal, and the others subjected to anneals at 200 °C, 250 °C or 300 °C for 30 min in air in a box furnace.Firstly, it is important to note that the B PERC devices have undergone a dark-current stabilisation step in order to mitigate boron-oxygen LID .In contrast, the Ga PERC devices have not undergone any final stabilisation process, as literature results have indicated that gallium doped silicon is immune to LID .In order to establish if Ga PERC devices are indeed immune to LID, we have performed a side-by-side comparison between B PERC and Ga PERC devices after light soaking, whereby the primary difference is the dopant in the base silicon material.For the B PERC devices shown in Fig. 3, it is evident from the green triangles that the stabilisation process has indeed mitigated the onset of degradation for the sample which has not undergone a dark anneal in our laboratory.Interestingly, we find the normalised PL intensity exceeds unity after 10 h of illumination.It is difficult to ascertain the source of this increase definitively, however Sperber et al. have attributed a similar increase to improved surface passivation .Whether it be an improvement in surface or bulk lifetime, an increase in performance under light soaking is a pleasing observation, and reflects the positive steps made over the past decades to improve and stabilise p-type silicon.In contrast, when the stabilised B PERC samples were dark annealed prior to light soaking, we observe a degradation in PL signal followed by a recovery, and thus reminiscent of a LeTID-like signature, i.e. a reduction in PL intensity followed by a complete recovery .For the 250 °C and 300 °C samples, the recovery quickly transitioned into an overall improvement in performance relative to the first data point in the respective sample, as evidenced by the normalised PL intensity exceeding unity.With our proxy technique it is not possible to ascertain whether there is an absolute improvement relative to the sample which had not been dark annealed however.Notably, the low-temperature dark anneal has destabilised the previously stabilised B PERC devices, and the extent of the degradation increases with increasing dark annealing temperature.The source of destabilisation is unclear at this time.Importantly, Fig. 3 helps validate our PL proxy method, as the results resemble trends typically observed in lifetime samples which undergo LeTID .Turning our attention to the Ga PERC results in Fig. 
3, a clear observation is that all samples subjected to light soaking do degrade to some degree, although it is noted that these samples have not been stabilised in the same way as the B PERC devices.The extent of the degradation for Ga PERC is much reduced compared to the corresponding dark annealed B PERC devices.The reduced normalised PL value for the very first measurements in Fig. 3 suggests that associated FeGa pairs in the Ga PERC devices reduce the bulk lifetime, as demonstrated in Fig. 2 and at the substrate level.However, for subsequent measurements, the prolonged light soaking results in a higher proportion of the FeGa pairs being in the dissociated state, and thus their impact on the bulk lifetime is reduced under the conditions used.As in the case for the B PERC samples, subjecting the Ga PERC samples to a 30 min dark anneal does induce a LeTID-like degradation curve when subject to light soaking, and this becomes more pronounced with higher annealing temperatures.Again, it is unclear why a dark anneal would trigger such a LeTID curve, or why a higher annealing temperature would induce a stronger degradation effect, however it is evident that Ga PERC devices are not completely immune to LeTID.Furthermore, the similarity in degradation curves observed for both Ga and B PERC devices, may indicate the source of the degradation originates from a process induced defect, e.g. SiNx:H dielectrics and firing , rather than a dopant related defect as supported by the work of Chen et al. .In the context of Ga PERC devices, the most important data set is that relating to the un-annealed PERC sample in Fig. 3.In contrast to the stabilised B PERC device in Fig. 3, the Ga cell is not completely stable, and we do see a slight degradation after ~20 h of illumination, which is expected to recover by 1000 h. Providing the degradation does indeed not continue, it is still evident that the Ga PERC device is advantageous over B PERC, as it does not require an additional stabilisation process, and as such, could add potential cost savings to the manufacturing of high efficiency PERC devices and to PV utilities.Although a dark anneal was required to invoke a substantial level of degradation in the Ga PERC devices, it does mean that future processing sequences may trigger the same LeTID signature during cell operation.This issue will not be as big an issue for manufacturers as permanent LID, but it is a factor which should be considered in the development of solar cell structures involving gallium doped substrates.The results obtained using the PL proxy method in Fig. 3 show a clear degradation in the Ga PERC devices, but they do not prove that the degradation is caused by a reduction in bulk lifetime, as is known to be the case for boron doped lifetime samples .Other effects could be occurring, such as a deterioration in the surface passivation.We have therefore performed an experiment to demonstrate that bulk lifetime degradation is occurring in the Ga PERC case, and the results are shown in Fig. 4.The experiment used two 5 cm × 5 cm samples cleaved from the same 15.6 cm × 15.6 cm PERC device.Both samples were dark annealed at 300 °C for 30 min to trigger the strongest case of LeTID-like behaviour shown by the orange squares in Fig. 
3.The samples were then characterised by the PL imaging proxy method to determine their pre-LID values, and to ensure the samples were similar.One sample was stored in the dark, while the other was subjected to light soaking at 1 Sun and 75 °C for ~100 h, which according to Fig. 3 will give rise to substantial degradation.Both samples were re-characterised using the PL imaging proxy method.Fig. 4 shows the results of the tests, with an 11% reduction in PL observed.The metallisation and diffusions were then stripped away from the surfaces of the two samples, and the effective lifetimes were measured with the room temperature superacid-derived passivation treatment .The effective lifetime data are shown in Fig. 4 and show practically the same reduction between the two samples at an excess carrier density corresponding to 1 Sun illumination as determined using the reference cell in the lifetime tester.From the results in Fig. 4, it is evident that bulk lifetime degradation is causing the PERC degradation, and the similar quantitative reductions in PL or effective lifetime suggest it is the dominant effect.Fig. 4 shows the injection-dependent lifetime and its value at 1 Sun of these passivated samples with a comparison to the intrinsic lifetime limit of Richter et al. .Our final set of experiments aims to understand the degradation of bulk lifetime in gallium doped silicon wafers which have been processed into PERC devices then stripped prior to illumination.Our experimental design is illustrated schematically in Fig. 5, and this methodology enables us to report effective lifetime values rather than PL proxy data.Stable surface passivation is needed for these experiments, and ALD Al2O3 deposited at 200 °C and activated by a 420 °C anneal was used.The lifetime results are shown in Fig. 6.Importantly, it is first noted that the surface passivation is stable under illumination, evidenced by the FZ n-type control data in Fig. 6, and thus any degradation observed can be attributed to a reduction in the bulk lifetime.Secondly, the 250 °C dark anneal prior to stripping the PERC devices has made no difference to the light soaking trends, suggesting the ALD process has changed the bulk lifetime characteristics, and therefore annihilated any prior defect the 250 °C dark anneal activates, as observed in Fig. 3.In the case of the stripped B PERC devices, the degradation characteristics have completely changed compared to those shown for the unstripped B PERC device in Fig. 3.The trend shown in Fig. 6 now closely resembles that of the boron-oxygen defect, suggesting that the effects of the stabilisation process has been undone.That is, there is an initial fast degradation, followed by a much slower one whereby the bulk lifetime does not recover unless subject to a dark anneal .For the stripped Ga PERC devices, the lifetime remains stable, and no degradation can be observed, consistent with previous reports on gallium doped silicon .Therefore in contrast to Fig. 3, the same trends could not be observed, indicating the ALD process has changed the bulk lifetime characteristics and its corresponding LID signature.Finally, it is noted that the constant lifetime in ALD Al2O3 passivated stripped Ga PERC material in Fig. 6 provides further evidence that the passivation on p-type material is stable under illumination, which needs to be demonstrated because it has been suggested that Fermi level position might affect the surface behaviour .The results shown in Fig. 
6 demonstrate that extreme care must be taken when using dielectric passivation structures to analyse the bulk lifetime of stripped PERC devices.Low temperature annealing, can significantly change the bulk lifetime characteristics and thus its susceptibility to LID.We examined the degradation trends of B and Ga monocrystalline PERC devices when exposed to 1 Sun illumination at 75 °C for >1000 h.In the “as-produced” state which included a stabilisation step, the B PERC device indeed remained stable, or underwent a slight relative improvement.In the case of the as-produced Ga PERC device, we observed a slight degradation in performance which is expected to recover by 1000 h.When the B and Ga PERC devices were subjected to a dark anneal at 200, 250 or 300 °C prior to light soaking, we observed significant changes in the LID characteristics of the cells.All previously stabilised B PERC devices underwent a fast degradation followed by a complete recovery, characteristic of LeTID, indicating the low-temperature dark anneal has undone the effects of stabilisation.A similar trend was also observed for Ga PERC devices which had not been subjected to a stabilisation treatment, showing LeTID can also occur in Ga doped silicon.After stripping degraded Ga PERC devices, it was determined that the cause of this degradation is a deterioration in the bulk lifetime, consistent with that found previously for boron doped samples.When the Ga and B PERC cells were stripped and passivated with ALD Al2O3, we observed a complete change in the degradation characteristics, i.e. no degradation for Ga, and boron-oxygen-like degradation for B.This indicates that dielectric passivation is not adequate to diagnose bulk lifetime degradation causes in stripped PERC devices.It is evident that Ga PERC is advantageous over B PERC, as it does not require an additional stabilisation process, and, as such, could add potential cost savings to the manufacturing of high efficiency PERC devices and to PV utilities.However, manufacturers looking to substitute B PERC with Ga PERC need to be aware that any future fabrication processes could make Ga PERC solar cells susceptible to degradation, and thus care must be taken.Nicholas E. Grant: Conceptualization, Methodology, Investigation, Writing - Original Draft, Visualization, Supervision, Funding acquisition.Jennifer R. Scowcroft: Investigation.Alex I. Pointon: Investigation.Mohammad Al-Amin: Investigation.Pietro P. Altermatt: Resources, Writing - Review & Editing.John D. Murphy: Conceptualization, Resources, Data Curation, Writing - Original Draft, Visualization, Supervision, Project administration, Funding acquisition.Data published in this article can be freely downloaded from https://wrap.warwick.ac.uk/129659/.
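The bookkeeping behind the PL imaging proxy used throughout this study (a fixed scratch-free region of interest, total counts per exposure, and normalisation to the first measurement of the same sample) is simple enough to express in a few lines of code. The sketch below is a minimal illustration only, written in Python with NumPy; the image dimensions, ROI coordinates and synthetic data are assumptions for demonstration and are not taken from the measurements reported above.

```python
import numpy as np

# Hypothetical ROI for a 5 cm x 5 cm sample imaged at roughly 0.1 mm per pixel:
# an approximately 2 cm x 2 cm scratch-free window away from the sample edges.
ROI = (slice(150, 350), slice(150, 350))  # (rows, columns) in pixels

def roi_counts(pl_image, roi=ROI):
    """Total PL counts inside the fixed region of interest."""
    return float(np.asarray(pl_image, dtype=np.float64)[roi].sum())

def normalised_pl(pl_images):
    """Normalise the ROI counts of each image to the first (pre-treatment) image,
    reflecting the relative nature of the proxy measurement."""
    counts = np.array([roi_counts(img) for img in pl_images])
    return counts / counts[0]

if __name__ == "__main__":
    # Stand-in data: three synthetic 500 x 500 pixel "images" whose ROI
    # intensity first falls (degradation) and then recovers.
    rng = np.random.default_rng(0)
    base = rng.uniform(900.0, 1100.0, size=(500, 500))
    images = [base, base * 0.92, base * 1.01]
    print(normalised_pl(images))  # approximately [1.0, 0.92, 1.01]
```

Because each value is expressed relative to the first exposure of the same ROI, the output is only meaningful within one sample, which matches the relative nature of the proxy measurement described above.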
Gallium doped silicon is an industrially viable alternative to boron doped silicon for photovoltaics, and is assumed to be immune from light-induced degradation. We have studied light soaking for >1000 h of industrially fabricated passivated emitter and rear cell (PERC) devices formed from monocrystalline gallium and boron doped substrates, with cell properties monitored using a non-contact photoluminescence imaging proxy method. As-fabricated stabilised boron doped cells did not degrade or underwent a slight improvement, whereas as-fabricated gallium doped cells which had not been intentionally stabilised experienced a slight (~5%) deterioration which then recovered. When PERC devices were subjected to a 200–300 °C dark anneal before light soaking, significant differences in the cell degradation signatures were observed. Degradation characteristic of light and elevated temperature induced degradation (LeTID) was observed for boron and gallium PERC solar cells, with the onset of degradation taking longer, and the severity being less, with gallium. Investigation of stripped gallium PERC devices with room temperature surface passivation revealed bulk lifetime degradation correlates with the cell-level degradation. When the cells were stripped and passivated with aluminium oxide, a complete change in the degradation behaviour was observed, with no degradation occurring in the gallium case and boron-oxygen-like degradation observed for boron. This indicates that dielectric passivation is not suitable for lifetime degradation diagnosis in stripped cells. Gallium retains advantages over boron doping with stabilisation processes not generally required, but manufacturers need to be aware of possible low-temperature lifetime instabilities when developing future fabrication processes, such as for passivated contact structures.
BIM application to building energy performance visualisation and management: Challenges and potential
The service sector accounts for 20% of UK energy consumption , with UK government targets for reduction of CO2 emissions of at least 60% relative to 2006 levels by 2050 .One means of achieving this goal is through projected improvement of energy efficiency throughout the architecture, engineering and construction industry, via reduction in building energy demand.However, to achieve this, both effective design and operation must be facilitated.The recent mandate for BIM implementation on publicly funded projects in the UK is a contributor to this target , driving the development of efficient buildings through improved design coordination, and management of design and operations information .Application of BIM to most aspects of building design and operation has been explored in depth since its emergence as an umbrella term for the processing of data describing a building.Not least of which in building performance design, simulation and optimisation, where publication trends show an exponential growth in recent years on the topic of BIM and building performance .In an industry still attempting to close the recognised performance-gap between predicted and measured building performance , methods of assisting in this process are encouraged , and where BIM is conveniently present as a platform on which to develop these.Yalcinkaya and Singh identified performance assessment and simulation as a target of BIM application, with energy management a growing trend within those areas.In contrast, its application to building performance management during operation is limited in favour of process optimisation, information querying and retrieval.Much emphasis is placed on the effective handover of information suitable for facilities management use via model view definitions and export from design models , supported by development of open exchange formats .While useful and necessary for efficient management of building and its systems , accessibility to information does not necessarily mean that information will be utilised, nor does it guarantee effective performance management .This paper aims to identify the barriers in linking BIM with in-use building performance management.The development of a prototype method of linking BIM and monitored performance data follows a building through handover, occupation and commissioning to explore those barriers and discuss the potential requirements for data structuring and specification to support BIM as a performance management tool.The paper argues that the attribution of data within a design-based BIM environment must be such that the end-user can access and utilise it in conjunction with non-integrated data sources, and demonstrates a novel method of linking BIM and building performance data for FM use in exploring operational efficiency.Subsequent sections review existing work in this area and describe a case-study in which a BIM and performance link is created, detailing the technical, behavioural and methodological barriers in its development and application.The definition of BIM adopted here is as a systematic process of the management and dissemination of holistic information generated throughout building design development and operation.Several definitions of what it means are available for BIM in various contexts , fundamentally describing the exchange, interpretation and utilisation of meta-data surrounding a CAD model, supporting multiple functions for various stakeholders in a construction and operations process.Reducing the gap between predicted and 
actual building performance is an area where much effort has been targeted.The reasons for this gap have been identified by Way and Bordass , who suggest frequent energy audits and continuous commissioning can optimise operational efficiency.Use of BIM as a platform on which to enable this has been explored by Dong et al. , who demonstrated its potential while suggesting the need for more effective data management to support it.Implementing BIM as a performance management tool has not yet been adopted beyond research, potentially a result of the numerous barriers in place to application of BIM in complex and error-prone processes; though its potential has been identified suggesting further investigation of how to meet this challenge.Lack of BIM outside design environments is representative of the slow uptake in adoption of new technologies throughout the AEC sector .Disparity between the schools of thought on adopting BIM for niche purposes is demonstrated between Kiviniemi , who proposes the client as the driver for adoption of BIM, and Howard and Björk , who suggest responsibility of the designer in sharing development to drive further utilisation.Each are valid, yet both demonstrate the lack of effective implementation regardless of supply and demand, as comprehensive examples detailing the use of BIM for energy management do not yet exist.Cao et al. found that use of BIM to enable effective collaboration between design disciplines is a primary driver behind its adoption, with mandates and strategies worldwide aiming to increase AEC industry productivity.A by-product of these are expected to be the creation of more efficient buildings, as a result of increased design optimisation through exploration and evaluation of design options .Application of BIM for building performance management has been explored by Srinivasan et al. and Göçer et al. , approaching implementation in different ways, yet both encountering issues of BIM integration with operational information environments.Summarising their findings and those of Codinhoto et al. in the context of FM activities, the initial barriers facing effective application are:Limited coordination between the design and operator in defining the provision of data to support operational management;,Information management standards in building operation falling behind those in building design;,Focus placed on asset maintenance issues by information providers, rather than the performance related optimisation of those assets;,A lack of real cases where BIM application is demonstrated in a replicable form; and,The absence of detailed guidance in how BIM could be best utilised to support ongoing building performance optimisation.The divide between research and practice for implementation BIM in “real-world” cases as noted by Codinhoto et al. 
is indicative of the challenges facing building operators in making the best use of the tools and data now available to them.Within the realms of academic research where many variables can be controlled and accounted for, the lack of repeatability of novel applications for BIM under controlled conditions results in limited feasibility in non-research settings.Slow uptake of BIM in these areas is proof of the remaining barriers to overcome prior to effective use by the wider industry.Application of BIM to managing building performance information during both design and operation is a potential end-goal for its post-construction use .Previous examples of this have been demonstrated to benefit the buildings end-user through reduction of errors, lead times and cost in design and construction ; however, practical application in the optimisation of building energy performance is less widespread.Design stage attempts to utilise information stored in a BIM environment primarily use the interoperability functions supported by modelling in a common design environment, supporting re-use of information to reduce data duplication in multiple discipline modelling tools.Che et al. , demonstrate the use of BIM in this manner, prompting the development of exchange methods between BIM and energy performance simulation tools , and the analysis of predicted performance , using BIM as a platform from which to gather information for building performance design and optimisation.Subsequently, the likelihood for performance disparity also increases due to difficulty in accurate modelling of complex high-performance features, ineffective use of those optimisations and inaccuracy of predictions ."BIM's capacity for extensible meta-data attribution to modelled objects has been used as a means of storing and potentially managing asset information , describing the composition of the represented buildings systems and operation .Most examples of BIM used for managing operational energy performance are generally simulation based, monitoring to predict in-use performance and identify deviation from predictions .The transition between design and operation is a crucial period for familiarising users with new systems and their-use, enabling more efficient building operation.Ineffective handover can increase energy consumed and occupant dissatisfaction , where BIM may be utilised to improve existing processes .Government mandated and voluntary schemes targeting these have been implemented as part of the BIM adoption process, but further guidance is required in the application of BIM during operation in context with operational performance improvements .Application of BIM as an information management platform relies on its capacity for storage and structuring of information.The modelling of objects and attribution of meta-data has been shown to enable the creation of datasets used in the management of assets post-construction .The same environment federating multiple discipline designs has also been used to store maintenance information , for storage of system operation documentation , and demonstrating BIM as an environment through which meta-data could be accessed and exchanged .Use as a data aggregation tool and its widespread adoption represents a paradigm shift from conventional document based design and operation, towards model and database style management of building data .The AEC industry has only recently been required to apply methods used in database handling for the processing of large amounts of information.Such 
concepts applied extensively to information architectures during the mid-1990s propagating the data infrastructure underpinning the information age are now being applied to construction via big-data analysis and BIM .Haas et al. demonstrates how accessing disparate information from a wide range of sources can be achieved via exchange protocols schema ), where relational systems rely on middle-ware to support specific functionality interpreting and utilising the related information effectively.But data relation during both design and operation is challenging due to the technical difficulties in linking disparate systems and requirement for common standards .Discordant with the availability of information describing new buildings designed using BIM, the majority of buildings for which performance improvements could be made were designed and built prior to use of 3D modelling.These lack the comprehensive models necessary to support a performance management tool .Further guidance is required for BIM enhanced performance management in this area, outside the context of asset management and maintenance, simulation and fault detection.CIBSE and ASHRAE provide guidance for the efficient management of building performance, predominantly specifying operational methods, rather than the standard to which operations are measured.Several standards have been proposed to address the need for a common information standard between operational performance management and building design modelling.For example, Project Haystack provides a Building Management System data model for the structuring of information related to equipment performance management and oBIX , which specifies a method of communicating information generated during operation via simple web-based exchanges However, these do not fully meet the requirements for integrated or relational information environments between a building and its representative BIM; instead, they apply modern methods of information structuring and exchange to the existing fields of BMS communications.Additional formats for the storage of building information outside BIM environments include Green Building XML, which is an open schema for information from BIM to be interpreted by energy modelling tools.This too could be considered the bridge between the two areas of modelling and operation with scope for time-series performance inclusion within it.However, this too faces limitations due to its ‘flat-file’ format which cannot account for the amount of data generated during operational building management .The method presented here demonstrates a potential means of linking a BIM model with monitored and recorded building performance data.Care has been taken to limit reliance on proprietary software where possible to establish non-platform specific requirements for such a method.The findings here are for building designers and operators to use in determining the effective generation and handling of performance describing information in and around BIM environments, and the utilisation of this data in the ongoing performance management of the building it describes.The method presented follows the latter-stages of design development, and subsequent handover and operation of a 30,500 m2, 3000 person non-domestic office building completed in 2013.Developed prior to widespread BIM implementation, the information available describing the building, its constituent systems and performance were held in disparate un-federated design models and documentation from various disciplines, 
representative of the majority of data describing buildings in-use currently .Design specifications aimed for extensive monitoring and environmental control for energy use reduction, in conjunction with high-resolution measurement of space, system and equipment performance.Monitoring was achieved using a BMS, recording information from thousands of sensors throughout the building, storing results in a Structured Query Language database.The choice of this building was due to accessibility of its design and operational data, and its status as an occupied building without detailed BIM documentation made it representative of many buildings for which such information is also unavailable.Practice-led research was used to identify the barriers in-place for widespread BIM application to building performance monitoring, through development of as-built models for simulation, creation of a performance attributable and accessible BIM model and an interface between these environments and monitored performance information.The development of a simple method of linking BIM to this data used throwaway prototyping, a subset of the rapid application method for the development of software.A simple working model of the process of creating, managing and linking design and operations performance data is created quickly, to demonstrate practicality without robustness testing , the feasibility of which is then discussed in context with feedback from producers and users of this dataset for a holistic review of BIM implementation as a performance management platform.The tools used in the development of a prototype BIM and performance monitoring link are described in Table 1.These are typical of commonly used software platforms used during building performance design and operation, with the exception of the BMS front end and Python, which were the means through which data interoperability and interpretation was achieved."The primary method of collecting information describing the case-study building was via document review, utilising drawings developed by the design team and supporting documentation to provide a comprehensive background to the building's composition and intended performance.Potential for bias from selective information survival is present ; however, given the need for creation of a model and further investigation of the building, potentially incomplete, inaccurate and disorganised information could be disregarded.With increasing information generation during design and operation an inevitable result following more widespread use of digital modelling techniques , the amount of information being generated using BIM requires effective management,Predicted building performance data was mainly generated prior to the developed design; setting the standard to which the designed building should perform for specification of systems and operating methods.The information gathered from the document review used the most recent design simulations, updated to include major changes in the buildings operational methodology and utilisation to generate a more accurate model.No comprehensive BIM models existed of the case-study building, with only partial architectural and structural models available.Accurate recreation of the entire building would have taken significant time, therefore a simplified representation of spaces and systems was chosen as a demonstrative BIM environment to which building performance data could be attributed and utilised.Space and system meta-data describing performance characteristics such as 
the maximum expected lighting, heating, cooling and small power loads for each space were taken from the simulated performance model and attributed to their respective spatial objects.This process used scripts written in Dynamo to interpret output from the simulation, using space names as shared attributes for coordination and transfer.Revit was used to access data within the partial models and was subsequently chosen as the platform in which to store the building design performance data, using its extensible meta-data attribution capabilities commonly employed for these means .Since Codinhoto et al. identified that access to data stored within BIM environments is a factor in reducing adoption in FM, accessibility has increased through development of tools interoperating between BIM authoring platforms ; however, a gap between the data generated during design and use remains that could be overcome using basic data management.Dynamo was used to extract basic building geometry and performance related information from the Revit BIM environment into a JavaScript Object Notation lightweight data-interchange format capable of interpretation via the development language used.Utilising a non-standard format for extracting and processing data from the BIM environment distinguishes the non-platform specific barriers to wider implementation of BIM from its authoring software.Dynamo was also used to attribute predicted performance data to the design model as meta-data describing spatial and system performance.The more widely used IFC format was also considered as an appropriate carrier for this information, but given the limitations in extract from Revit into this format and potential loss of data , the alternative was created to avoid these errors and specify exact data to be included in output of a lightweight and platform agnostic format.A relational database is the industry standard method for recording, storing and managing large databases of time-series information related to the operation of a buildings systems, and its performance.The existing BMS present in the case-study building utilised an SQL system.This comprised over 3000 sensors reporting continuous performance via the BMS interface into a Microsoft SQL Server 2008 back-end for storage of historic data, following industry design guidance specifying such capability .This form of monitoring enables the in-situ BMS to control HVAC equipment and identify faults .The amount of data recorded, while dependent on resolution, detail, data type and data recording methodology remains a limiting factor in the linking of live and historical performance between BIM and operational buildings.Gerrish et al. 
showed that while attribution of historic performance data directly into BIM formats is possible, it is infeasible given the amount of data potentially collected, and the computational capacity required for handling these datasets for which BIM is unsuitable.Numerous issues were identified in the commissioning and operation of the on-site BMS prior to research application, most notably the inefficiency of data update, formatting and querying.Database management techniques are essential for handling large, frequently updated datasets; however, the method implemented in the case-study building demonstrated several key faults and barriers to extraction, interpretation and analysis:Redundancy in database structuring meant update transaction were inefficient, reducing system performance;,Lack of indexing in any of the recorded logs meant querying of historical performance took far longer than necessary.Provision of indexes to support efficient access would need to balance the memory requirements of that index, the speed of its update when monitoring thousands of meters simultaneously and the method of querying to access the indexed data ; however, given the often required process of extracting historical performance for meter subsets by FM this is a requirement for effective metering;,Incorrect commissioning of the BMS resulted in numerous gaps in recorded data following system down-time; and,Access to data was constrained by security concerns over the network access rights for the BMS.These issues severely hindered the development of a prototype BIM and performance linking method; however, enough historical data was gathered from the BMS to provide a dataset suitable for testing and evaluation of its potential.Given access rights to the BMS either remotely or locally, live performance could also be used; however, this would require significant modification to the BMS back-end to support efficient querying of recorded information.Upon extraction of spatial and systems performance data from the BMS, preliminary review of key meter groups displayed several errors.The sources of these were identified as incorrect installation and commissioning with the BMS, inadvertent modification following FM and maintenance activities and faults occurring as part of ongoing use.Outlier detection, removal and interpolation was used to clean the raw data and provide more suitable data for analysis.Causes of erroneous collected data were identified through manual exploration of the major meter groups present in the BMS database:The limitations of the poorly set-up database from which performance data was collected were evident, and indicated a major barrier to the effective use of monitored performance data, due to poor database design and implementation.Following error removal, the cleaned data could have been reinstated in an SQL database; however, to increase performance of the prototype method an alternative Hierarchical Data Format format was chosen.Choice of this format over a conventional database such as SQL was ease of storage, efficiency of access to structured time-series data, portability of the data recorded, speed and accessibility using the development language .While likely unsuitable for implementation in the wider AEC industry, this approximates the required performance and accessibility of a building performance database interacting with data from a BIM environment.Pandas was used for all time-series performance data exploration and error removal, including extraction from the original SQL 
database and storage in the HDF5 format.Visualisation of historical performance was created using Matplotlib .The data held within the JSON file specifying spatial and system design performance is used as the basis for indicating performance outside expected levels.The names of each space in the design model and BMS monitored zones were matched to map between individual space performance.Visualisation of live and historic performance for each space was made possible using an interactive Python environment to produce a user query-able environment in which performance could be monitored (Fig. 4a). Various data visualisation and analysis methods were developed using this link, including selection of monitored variable and time-span, and historical summaries to indicate trends.A dashboard was created to indicate performance outside predicted levels, linking live and historic monitored performance data with predicted performance values stored within the JSON BIM proxy.The tools developed here represent the basic elements of a BIM linked building performance management system, developed so that the technical encumbrances in implementing such a system could be understood and evaluated in context with those providing and utilising the information generated therein.In addition to practice-led research of the technical requirements for such a system, the psychological and processual barriers currently inhibiting implementation were also investigated.Following development and application of a basic methodology for linking design-specification BIM models with monitored building performance data, semi-structured interviews were undertaken to understand the user-based issues that must be addressed in implementing BIM as a performance management enabling tool.Semi-structured interviews were chosen as the method for gathering forthright responses and feedback to the proposed methodology, and for raising points from the users' perspective around their experiences and knowledge in the context of BIM application to building performance design and management.Harrell and Bradley , Barriball and While note that data collection in this manner may be suitable for smaller study samples and provide an in-depth contextual response.Interviewees consisted of a member from each of the building's design, commissioning and operation teams, framed by the interviewer's experiences in studying the building's operation and management since completion.
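To make the cleaning step described above concrete, a minimal sketch of outlier removal and time-based interpolation for a single BMS meter log is given below, using pandas as in the prototype. The meter values, logging interval, rejection threshold and file name are illustrative assumptions rather than the case-study system's actual schema, and writing to HDF5 requires PyTables.

```python
import pandas as pd

def clean_meter_series(raw: pd.Series, window: str = "1D", n_sigma: float = 4.0) -> pd.Series:
    """Remove spikes and fill gaps in a single BMS meter log.

    Readings outside a rolling mean +/- n_sigma * rolling std envelope are
    treated as outliers; these and any logging gaps are replaced by
    time-based interpolation.
    """
    s = raw.sort_index()
    rolling = s.rolling(window, min_periods=10)
    outliers = (s - rolling.mean()).abs() > n_sigma * rolling.std()
    return s.mask(outliers).interpolate(method="time")

if __name__ == "__main__":
    # Hypothetical extract of one temperature meter from the BMS back-end.
    idx = pd.date_range("2015-01-01", periods=96, freq="15min")
    raw = pd.Series(20.0, index=idx)
    raw.iloc[40] = 400.0             # spike from incorrect commissioning
    raw.iloc[60:65] = float("nan")   # gap following system down-time
    cleaned = clean_meter_series(raw)
    # Store the cleaned series in HDF5 (requires PyTables), as in the prototype.
    cleaned.to_frame("supply_temp_C").to_hdf("bms_clean.h5", key="meters", format="table")
```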
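The linking and exceedance-flagging step itself can be sketched in a similarly compact form. The JSON structure, key names and 15-minute logging assumption below are hypothetical stand-ins for the BIM proxy and monitored dataset described above; only the general approach, matching spaces by shared name and comparing monitored values against attributed design limits, reflects the method.

```python
import json
import pandas as pd

def load_design_limits(json_path: str) -> pd.Series:
    """Read the BIM-derived JSON proxy and return a design small-power limit (W) per space."""
    with open(json_path) as f:
        spaces = json.load(f)["spaces"]                    # assumed JSON structure
    return pd.Series({s["name"]: s["small_power_W"] for s in spaces})

def flag_exceedances(json_path: str, hdf_path: str) -> pd.DataFrame:
    """Compare monitored space power (one column per BMS zone) against design limits."""
    limits = load_design_limits(json_path)
    monitored = pd.read_hdf(hdf_path, key="space_power")   # DataFrame indexed by timestamp
    common = monitored.columns.intersection(limits.index)  # match spaces by shared name
    exceeding = monitored[common].gt(limits[common], axis=1)
    summary = pd.DataFrame({
        "design_limit_W": limits[common],
        "peak_measured_W": monitored[common].max(),
        "hours_above_limit": exceeding.sum() * 0.25,       # assumes 15-minute logging
    })
    return summary.sort_values("hours_above_limit", ascending=False)
```

A dashboard of the kind described above would simply render this summary and highlight spaces whose time above the design value exceeds a chosen threshold.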
"Interviewees were: a mechanical, electrical and plumbing engineer from the design team responsible for performance design and specification of conditioning plant; a commissioning engineer responsible for installation sign-off; and the building manager responsible for the building's energy consumption and optimisation.These roles were chosen due to their holistic familiarity with the building from design completion to current operation, reducing the potential for knowledge loss through changing roles.Each interview was conducted in-person at the interviewees place of work with consent given for anonymised publication of responses.Interview responses were transcribed and categorised into themes above, enabling the grouping of topics by common areas for discussion.A method based on that proposed by Clarke and Braun was adopted, generating likely themes from the topics proposed followed by further categorisation and interpretation of overarching response themes about the topics discussed below.The same questions were asked of each interviewee, from which responses were thematically categorised in context with respondent role in producing and utilising information in BIM environments.These questions, and the themes of discussion included:Provision: How building performance information is given to the building operator;,Utilisation: How that information being utilised and what commissioning activities are undertaken to meet expected performance;,Challenges: The challenges that have arisen during the commissioning and operation of the building, and how these challenges change with the provision of BIM-based information; and,Potential: The future of building performance management.Respondent role is indicated, allowing perspective and their responsibilities in managing building performance to be attributed to response.Each theme is grouped into issues pertaining to the process, skills and technology-based issues within, indicating the barriers and opportunities for BIM application to building performance management.Economic reasons for understanding and optimising building energy performance were the primary drivers indicated by all interviewees; however, the way in which that economy was achieved differed per respondent.The CE and ME noted direct cost savings of efficient operation and energy reduction strategies, with indirect benefit from government financial incentives for low-carbon fuel sourcing.These responses suggest the requirements for greater visibility of financial benefits as a reason for adopting potentially energy saving methods.EM: “long term thinking is easily dispensed with to make short term cash savings”; and, "EM: “it's ignored the bills turn up higher than expected”.Provision of information by designers, and its use by building operators underpins the capability of any building operating methodology.Interviewees agreed upon the current processes used to generate and utilise that information as a major barrier to applying BIM throughout design and operation.The ME and EM indicated the lack of interest in meeting design specified performance by FM, even with clear provision of that information.However, there were instances where design specification was communicated poorly, resulting in inefficient performance.Issues such as these could be addressed through more effective communication of design intent, where the EM suggests that the time-scale in which buildings are developed impacts that communication."CE: “We’ve tried to do in more recent projects, but it never feels 
finished, there's always just something missing. "There's so much of it done in BIM, but then it just stops and the final bits don’t get added”;",EM: “Quite often I think the values are used, which aren’t representative of normal operation, they just state the acceptable limits.I’ve seen put into in the log book as the setpoints to be used!,; and,EM: “I think in large complex buildings, the time-scale is so significant.If it takes 4 years to design and build, technology has moved on in that time.Open standards have the potential to be adapted in that process … and keep pace with technology”.The skills-requirements for those interpreting the information handed over in any format determines its potential for interpretation.Lack of skills in interpreting information in non-traditional handover formats was seen as both a challenge and an opportunity by the ME.In conjunction with the skills of stakeholders in providing and handling information describing building performance, the technologies used were identified as a source of some of the issues facing effective communication of design intent.During performance design the ME noted that information transfer between design platforms resulted in the need to manually recreate information.Current technologies, processes and skills impinge the ability of each stakeholder in the design process to provide relevant operational information outside the format in which it is created.CE: “ struggle to give the building operators the information they need.Handing a complete model to the FM would be good, but we’ll still need to handover files because the FM might not be able to get things out of a model”;,ME: “The process we go through is something we shouldn’t give away.We need to make sure our intellectual property isn’t given away with the BIM”;,CE: “At the time it was still when design was CAD with Excel sheets.The whole idea for the building was way back in 2005, with design starting late 2008.There were quite a few things that didn’t end up in it, lots of ideas and plans, a bit of 3D stuff to improve coordination, but beyond that it was standard CAD”; and,EM: “There are very few identical buildings."I think that's a large part of the problem; a BIM for my building needs to be specific for my building, and I need to be able to access it”.Limited use of information provided for building energy performance management was earlier identified as barrier to effective implementation of BIM as a supporting tool, but the activities undertaken by FM could be enhanced with more effective access to relevant performance data.Identifying relevant information is the first hurdle, with all interviewees responding as such."ME: “there's too much detail there that we need to simplify”;",EM: “the quality management of the building process and installation is significant.The cost of investigating and checking the problem is often more than the energy cost, and is overlooked.It all adds up, and there are much bigger fish to fry in terms of system optimisation.We have trouble understanding what systems there actually are”.The skills of those using the datasets created during design were seen to be lacking by the CE, who noted that reaction to performance issues were only a result of faults indicated on the BMS.Resources for scheduled and predictive maintenance do not preclude inefficient application, with the EM noting that other similar high-performance buildings showed a reduction in operational efficiency over time as a result of lack of skills for identifying performance 
deficiency from these resources.The lack of defined responsibility in who owns, and can act upon monitored performance data was indicated by each interviewee.Behavioural challenges remain a significant barrier to new technology and process adoption , demonstrated by the interviewees as reluctance to take on additional responsibilities beyond contractual obligations."A previous experience of the ME included an anecdote where upon being asked where the BMS was, the FM responded “what's that?”",as it had been hidden in a cupboard while the building was being controlled manually.CE: “it depends on their appointment.Anything beyond routine maintenance just isn’t done."They’re contracted to run the building and fix what goes wrong and that's it”; and", "ME: “it's not in their contract, and if its not there they won’t do it. "The client assumes that because it's being maintained, it's being run efficiently and optimised, but that doesn’t happen”.Splitting the challenges in implementing BIM for use in building energy performance management into process, skill and technology-based issues, the following themes were identified:The complexity of building design, handover and operation processes contribute to the difficulty in applying new methods of working, and understanding how to best apply an energy management based BIM tool in this process.EM: “if we were to go forward on implementing BIM, I’d need to procure someone with the right expertise.How would I write a specification for that?,Do we just ask people ‘Do you know how to do it?’,but we can’t check that”;,CE: “one of the main barriers is how complicated we tend to make things.Models we make are way more complicated than how the buildings run.And that might mean the wouldn’t understand it fully and ”; and,EM: “fixing things takes time, and the more parties involved, the more time it takes."There's a lot of bureaucracy in the whole process, and for less tangible things like energy it's more difficult”. 
"A combination of skills-based issues were noted, where the ability to correctly utilise a BIM model significantly impacted it's effectiveness as an information management tool.Both design and operation side interviewees stated some distrust in whether current job roles offered the correct skills to handle information in this format.ME: “ has been used on other projects, but one of the problems I found was that the information inside it wasn’t put in in the right way.It was there, but as it wasn’t scheduled we couldn’t get to it”;, "EM: “It takes an expert to run a performance analysis, but our clients don’t have that capability, they employ a consultant to do that, but some of the tools included in make it seem like it's a simple task”; and",ME: “a lot of the mentions of BIM seem to be by people who think they ought to mention it, and don’t necessarily know what they mean or anyone else means when they mention it”.The EM noted apprehension over the benefits of BIM, requiring greater demonstration of previous outcomes.Clarification of potential benefits for how it facilitates information management and utilisation could potentially drive implementation further than publication of these benefits alongside guidance documents:EM: “My fear is that a lot of the potential benefits of BIM are exaggerated.What would be helpful would be to have some clear definitions, standards and guidelines, you could say BIM and I could say BIM and we know we mean the same things”; and,EM: “In terms of how we move forwards, how BIM could help us understand our building needs to made real and visible.The invisibility of energy is a major problem, and making it visible to occupants means we have a chance, and where I hope clever and appropriate BIM could help”.The potential benefits a BIM supported method of building energy performance information management and visualisation could provide require more cohesive information management standards.The CE mentioned experiences where, with the requisite skills and input from those responsible for the delivery and use of that information, its effective use could be experienced more readily.Addressing the responsibility issue, the EM suggested that overcoming the lack of tangibility in energy performance by using an integrated model and management system could potentially bring occupants to account for their impact.However, they also stated that responsibility for new process implementation was currently ill defined in current job roles.Standards for practical interface with information environments are yet to be developed, representative of the significant changes required for integration of these capabilities in the building handover and operation processes.ME: “what you need is a target, and you can aim for that from project inception.It should be client driven; the designers must be capable of achieving that target.The contractor must then deliver to that target, and will result in a really efficient building”;, "EM: “there's a desire to contract out responsibility for being the occupant of a building, either as an organisation or an individual.We need to be more explicit about optimising, but we need relevant standards for how to do that”; and,ME: “being able to use a design model to check against monitored performance would be great.If changes were made in the building, these could be checked against design specifications and flag up a compliance or performance clash”.Previous methods of using BIM for identifying performance deficiencies neglect their wider 
application to the variable circumstances across the construction industry.The technical and methodological challenges facing implementation of BIM in this way are discussed, using the prototype methodology described previously and interview responses to identify key barriers.Technical challenges were identified during the development of a link between BIM and monitored building performance Additional issues were raised by the interviewees whose experiences provide a real-world perspective on challenges to consider.A balance between the specification of detail for effective building performance management, and the manageability of that information requires consideration of its purpose and the capabilities of those utilising it.If there is too little modelled data, the number of potential uses for it are reduced, and effort may be required at a later date to recreate usable information manually.If the information provided to the building operator is extensive there is greater scope for its utilisation; however, this is contingent on the format and structuring of that information if the end-user is to be able to extract from it what they require.Information overload is an evolving issue in BIM implementation , with additional work required in interpreting it for FM purposes.Jylhä and Suvanto recognise this via poor documentation, contributing to the paradox of there being too little information available, yet what information there is to use is irretrievable amidst a mass of non-indexed files.Management of information for further utilisation denotes a key deficiency in current BIM and FM tools.Its classification can be achieved using existing schemas; however, standards only specify the development of design information, while incorporation of operational building data into a BIM model is limited.Creation of a single method for structuring all information related to a buildings design, handover and life-cycle is an enormous undertaking, for which existing formats such as IFC may have some capacity, but holistic implementation of this is limited .Instead, specific data management systems for handling the information describing a building and its performance are required, separating the large continuously changing monitored data from more static and periodically updated FM information.Managing each data type in its own environment is practical, but separation necessitates exchange mechanisms and means of access for which standardisation is not available.Supporting the technological capacity to link a BIM model with a BMS must be the capability of the user to manage and maintain that system.Beyond the availability of a model describing a building to the FM, upkeep and maintenance of that model is unlikely to be completed; just as the drawings and records of non-digital FM documentation weren’t.Analogous to the availability of information, accessibility is an intrinsic part of its effective utilisation.Handover of documentation in the form in which it was authored, is yet to be adopted from designer to FM for numerous reasons , of which accessibility is a major limiting factor.Non-standardised extraction and interpretation of information as demonstrated in the method presented, is representative of the challenges facing utilisation of BIM models for purposes other than design.The need for creation of a proxy format from which data could be accessed shows that while possible, the time taken and effort to extract this information would be infeasible in most building handover and operation 
processes.Commercial tools to access this information directly are available; however, these incur costs in purchase and user training, and the time required for integration into an FM process for which its purpose is not yet defined.While accessibility and availability of information underpin the potential for its utilisation, its accuracy defines how well it represents the building or system it depicts.For performance management, accuracy is essential effective interpretation, and where links with existing datasets describing performance and the building must be pertinent.Methodological challenges in energy performance management using BIM are ancillary to the technical barriers.However, these represent the major limitations placed on its use for this purpose.The methodological challenges identified by the interviewees primarily concerned the procedures in place, and the responsibilities and skills of those managing the information generated during design and operation.The capability of those responsible for the operation of a building to interact with and make sense of information stored in non-traditional formats impacts the potential for that person to improve building performance.If understanding the building is the first step in its optimisation, employing those with the skills to interpret information, and communicate that clearly to those who can make operational changes is a logical necessity.Provision of information without transfer of the methodology in which it was generated is a subject under close review in BIM implementation.The designers who provide that information must make it accessible without losing their intellectual property, just as the users of that information must not misinterpret design intent and incorrectly operate their building.While not strictly a capability issue, the contractual arrangements of FM was shown to preclude the optimisation of building systems and energy performance.Several interviewees indicated deficiencies in employment contracts for those responsible for building maintenance, wherein specification of duties beyond upkeep was overlooked as it was assumed optimisation was an integral part, which it was not.The methods with which information is recorded and exchanged currently do not best support utilisation of BIM in procedures outside building design.Collaboration between designer and operator at handover is limited to the seasonal commissioning and exchange of basic information, building on documentation created without the needs of the end-user fully considered.Design intent is not indicated with the transferred information, leading to misinterpretation while compilation of this alongside additional documents giving context may alleviate these issues.For example, a design setpoint may indicate an maximum possible value, but could be interpreted as a target value to which the building is commissioned.The lack of standard methods for both performance monitoring and provision of performance data containing BIM models reduce the possibility of using BIM as a performance management tool.Individually, these can be addressed using open exchange formats; but given variability in the construction industry of FM requirements, building operating methodologies and technologies, developing a new standard for such a broad spectrum is infeasible.Instead, methods of interfacing existing data infrastructures may be more suitable.Data management during design and operation must be more carefully considered to support effective use of it for novel 
purposes, and the ability to use it to inform better building performance management.Without a standard form or structure, the time and cost of sorting and structuring that data to make it usable are too great for this to be implemented effectively.Specification of data management systems during building operation must account for access to that data, and provide efficient handling of potentially large datasets.The IT sector is well versed in managing such feats, but the AEC industry is behind in its application of database administration to BIM and other data collection platforms.As handover of a building to its occupant or operator is beginning to include models, efficient handover and access mechanisms must be developed to support management of the information being communicated.Recent communication protocols provide a method for achieving this, but uptake of these, amongst other new technologies, remains low.The reasons for this, discussed previously, add to the existing issues of project complexity.These include: preventing holistic implementation of new tools and processes; project rather than organisation orientation reducing the capacity for ideas to be shared between projects with changing members; and disparity between the client and developer, whose contrasting objectives must balance the client's demands against the scope and scale of the developer's fee.Against the background of BIM as a standard working process, the mindset of designers and operators must change and adapt to the impacts new technology is having on their roles.During design, FM and building owners must give guidance on their expectations of information delivery, while designers must have the skills to deliver these requirements.Moving beyond simple handover of models and files, the responsibility for the upkeep of these must also be defined, without which dependent systems and understanding of how the building operates become ineffectual.Widespread application of BIM for purposes outside design development is unlikely to happen without corresponding and relatable standards for information management in the areas to which it is applied.Addressing the barriers identified here would simplify this process, and enable more effective utilisation of design and operational data in ongoing performance management.The question remains: how can information describing a building's performance be standardised in such a way as to enable the automated application of tools to give an accurate representation of where energy is being used?And how could this be supported within a common data environment using BIM?
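The kind of BIM–BMS link discussed above can be illustrated with a short sketch.The example below is a minimal illustration only, not the prototype developed in this work: it assumes that a space schedule exported from the BIM model and a set of BMS meter readings share a common zone identifier, and all file names and column names are hypothetical.

```python
# Minimal sketch (not the prototype described above): joining BIM-derived space
# metadata with BMS meter readings via a shared zone identifier, then deriving a
# simple energy-intensity indicator per zone. File and column names are hypothetical.
import pandas as pd

# Hypothetical export from the BIM model: one row per space/zone.
bim_spaces = pd.read_csv("bim_spaces.csv")          # zone_id, name, floor_area_m2, design_setpoint_c

# Hypothetical BMS export: half-hourly meter readings tagged with the same zone_id.
bms_readings = pd.read_csv("bms_readings.csv",
                           parse_dates=["timestamp"])  # timestamp, zone_id, energy_kwh

# Aggregate metered energy per zone, then join it onto the BIM metadata.
energy_by_zone = (bms_readings
                  .groupby("zone_id", as_index=False)["energy_kwh"]
                  .sum()
                  .rename(columns={"energy_kwh": "total_energy_kwh"}))
linked = bim_spaces.merge(energy_by_zone, on="zone_id", how="left")

# Normalised performance indicator: energy use per unit floor area, ranked by zone.
linked["kwh_per_m2"] = linked["total_energy_kwh"] / linked["floor_area_m2"]
print(linked.sort_values("kwh_per_m2", ascending=False).head())
```

Even this trivial join presupposes agreed identifiers and structured exports on both sides; their absence in practice is precisely the data management barrier identified above.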
This paper evaluates the potential for use of building information modelling (BIM) as a tool to support the visualisation and management of a building's performance; demonstrating a method for the capture, collation and linking of data stored across the currently disparate BIM and building management system (BMS) data environments. Its intention is to identify the barriers facing implementation of BIM for building designers and operators as a performance optimisation tool. The method developed links design documentation and metered building performance to identify the technological requirements for BIM and building performance connection in a real-world example. This is supplemented by interviews with designers and operators identifying associated behavioural and methodological challenges. The practicality of implementing BIM as a performance management tool using conventional technologies is established, and the need for more effective data management in both design and operation to support interlinking of these data-rich environments is recognised. Requirements for linking these environments are proposed in conjunction with feedback from building designers and operators, providing guidance for the production and sourcing of data to support building performance management using BIM.
251
Regulation of Kv4.3 and hERG potassium channels by KChIP2 isoforms and DPP6 and response to the dual K+ channel activator NS3623
About 10 distinct potassium channels participate in the repolarization of cardiac action potentials.However, how they map into the net AP repolarizing current is complicated; in the ventricles, the rapid and slow delayed rectifier K+ currents influence AP repolarization over plateau voltages, whilst the inward rectifier K+ current is involved in both setting the resting potential and mediating the final repolarization phase of the AP.The transient outward K+ current, Ito, contributes to phase 1 repolarization but will also affect later repolarization phases of the AP by modifying the time- and voltage-dependent recruitment of other K+ currents as well as L-type Ca2+ current.In addition, Ito will affect NCX current via effects on ICa,L dependent Ca2+ release as well as via Ca-dependent inactivation of ICa,L.Native Ito has components with fast and slow recovery kinetics and KCND2 and KCND3 underlie Ito,f, while KCNA4 is responsible for Ito,s.The normal physiological behaviour of many cardiac K+ channels appears to require both pore-forming and accessory subunits to be co-expressed and associated.Native Ito,f channels require interactions between α-subunits and K+ Channel Interacting Protein 2 (KChIP2) β-subunits, but other proteins such as DPP6 and members of the KCNE family may also modulate the current.Two splice variants of KChIP2 were discovered by RT-PCR cloning and the shorter form KChIP2S was identified as the predominant isoform in human heart.Additional expression cloning from human revealed another splice variant of KChIP2, KChIP2.2, which was 32 amino acids shorter than KChIP2.1 and which, like KChIP2.1, also increased Kv4.2 channel cell-surface expression and slowed inactivation.KChIP2.1 and KChIP2.2 are produced by alternative splicing from the KChIP2 gene removing exons 3 and 2+3 to produce isoforms of 252 and 220 amino acids, respectively.The AP depolarization also activates IKr, which plays a key role in determining action potential duration.Recordings at physiological temperature from recombinant hERG channels expressed in mammalian cells closely approximate native IKr.The accessory subunits of native IKr channels have been a matter of some debate as hERG can co-assemble with both KCNE1 and KCNE2 and clinically observed mutations in these subunits can influence hERG current and the channel's pharmacological sensitivity.The potassium channel regulatory protein KCR1 has also been shown experimentally to influence drug sensitivity of hERG channels and the possible interaction with other K+ channel regulatory units is uncertain.Recent data suggest that hERG channel current magnitude is influenced by Kv4.3 co-expression, but no such information exists for Ito beta subunits.KChIP2 has recently been identified to act as a core transcriptional regulator of cardiac excitability.That KChIP may also affect other K+ channels is suggested by the observation that KChIP knockdown in myocytes that do not express Ito increases action potential duration.While this did not seem to be associated with a detectable change in IKr, the naturally low level of expression of KChIP2 in guinea pig leaves open the possibility of interactions at higher levels of expression.The promiscuous nature of KChIP2 interactions is further underscored by data demonstrating altered L-type Ca current magnitude in murine myocytes lacking KChIP2 and a direct interaction between KChIP2 and Cav1.2.Such interactions could have important ramifications for drug design in the future and need evaluation.Small
molecule activators of cardiac K+ channels that both increase repolarizing current and prolong post-repolarisation refractoriness have the potential to offer novel antiarrhythmic actions .Furthermore, activators of Ito may restore early repolarization and inhibit dyssynchronous Ca2+ release that occurs consequent to loss of the early repolarization notch of the AP in heart failure .A prototypical Ito activator, NS5806, has been shown to enhance native ventricular Ito in dog and rabbit and it also increases recombinant Kv4.2 and 4.3 currents .The agonist effect of NS5806 on Kv4.x channels requires the presence of KChIP2.1 .Canine atrial Ito is augmented by NS5806 to a much smaller extent than ventricle, whilst rabbit atrial Ito is paradoxically inhibited by the compound .NS5806 also has an off-target effect of atrial-selective Na channel inhibition , and a related compound, NS3623, has been reported to be a dual activator of IKr and Ito and to increase repolarization reserve in cellular and multicellular canine preparations.NS3623 was originally described as a chloride channel inhibitor but it also activates hERG/IKr .Paradoxically, NS3623 had no significant effect on Kv4.3 channels expressed in Xenopus oocytes at 30 µM , despite effects on epicardial AP notch, J wave amplitude and increasing Ito in canine preparations .These conflicting results led us to hypothesize that the lack of sensitivity of Kv4.3 to NS3623 could be due to the absence of KChIP2/DPP6 in the aforementioned oocyte experiments or that the augmentation of Ito might be due to some other mechanism that affects Kv4.3 current density rather than an effect on the channel per se.The present study had three aims: first, to compare the gating properties of Kv4.3 co-expressed with either KChIP2.1 or KChIP2.2 with and without DPP6.Second, to examine the effects of NS3623 on Kv4.3 and hERG, in both the presence and absence of KChIP2 and DPP6 in a mammalian cell expression system.Finally, to examine the possibility that KChIP2.1, KChIP2.2 and DPP6 expression may affect Kv11.1 by transfecting each β-subunit in a stably transfected mammalian cell line which expresses hERG channels.The cDNA constructs coding for human short Kv4.3 isoform 1 precursor and KChIP2 variant 2 , herein designated as KChIP2.2 were kindly provided by Professor Robert Bähring.The cDNA constructs coding for human DPP6 variant 1 and KChIP2.1 were synthesized and sequenced by GenScript.HEK 293 cells transfected with Kv4.3 or a stable HEK 293 cell line expressing wild-type hERG channels were used.Additional K+ channel β–subunits were also transfected into the cells.Cells were maintained at 37 °C in a humidity controlled incubator with 5% CO2 atmosphere and cultured in Dulbecco’s modified Eagle’s medium supplemented with 10% heat inactivated fetal bovine serum, 1% non-essential amino acids and 50 μg/mL Gentamicin.In the case of hERG-expressing HEK 293 cells, the medium was further supplemented with 400 μg/ml G418 selection antibiotic.Prior to transfection, cells were plated for 48 h onto 12-well plates using a non-enzymatic agent before transfection.Transfection reactions were prepared using OPTI-MEM® I.For Kv4.3 experiments 0.3 μg of DNA was transfected along with either 1 μg of a GFP construct or KChIP2.1/2.2.In experiments where a KChIP2 isoform and human DPP6 were co-transfected, 0.8 μg of each DNA was transfected and compared to a “control” condition, in which 1.6 μg of GFP alone was transfected.For hERG experiments, a matching concentration of GFP DNA was used, 
in order to have an equivalent DNA concentration across all conditions.Lipofectamine™ 2000 transfection agent was used at a 2:1 ratio with DNA.The culture medium was replaced 5 h after transfection.After 24 h, cells were collected and re-plated onto 13 mm round glass coverslips, pre-treated for 4 h with 200 μg/mL Poly-D-Lysine.Patch-Clamp recordings started 3 h after plating.The compound NS3623 was purchased from Santa Cruz Biotechnology Inc. and dissolved in DMSO to a final concentration of 10 mmol/L.Individual aliquots were frozen at −20 °C and thawed for use on the day of the experiment.The DMSO stock was diluted to a final concentration of 0.1% in extracellular solution to give a 10 μmol/L NS3623 solution, as used previously.All reagents to prepare solutions were purchased from Merck KGaA or Sigma-Aldrich.The extracellular solution contained (in mmol/L): 138 NaCl, 4 KCl, 1 MgCl2, 2 CaCl2, 10 HEPES, 10 Glucose and 0.33 NaH2PO4.The intracellular solution used for Kv4.3 recordings was based on that from a previous study of Kv4.3.The patch pipette solution contained (in mmol/L): 90 KAspartate, 30 KCl, 10 NaCl, 1 MgCl2, 5 EGTA, 5 MgATP, 10 HEPES, 5.5 Glucose.The intracellular solution for hERG recordings was similar to that used in prior studies from our laboratory, containing (in mmol/L): 130 KCl, 1 MgCl2, 5 EGTA, 5 MgATP, 10 HEPES.Patch pipettes were fabricated from borosilicate glass capillaries using a P-87 puller.Data were acquired and recorded with Clampex 10.3 software using an Axopatch 200 amplifier and Axon Digidata® 1322A.Data were digitized at 20 kHz during all voltage protocols and a bandwidth of 2 kHz was set on the amplifier.For whole-cell recordings, cells were continuously perfused with extracellular solution at 33 ± 1 °C.Access resistance was always below 5 MΩ and series resistance was typically compensated by ∼40%.Voltage protocols are described in the "Results" section.Patch-Clamp recordings were analysed using Clampfit 10.3.Statistics and graphs were prepared using Excel Professional Plus 2013 and Prism 7 for Windows.Statistical significance was assessed by applying a non-parametric test when comparing two different conditions within a group and analysis of variance (one-way ANOVA) when comparing three or more groups.A two-way ANOVA was used when multiple comparisons between different groups were necessary.In all cases, a p value of less than 0.05 was required for statistical confidence.Values are expressed as mean ± standard error of the mean.Initial experiments examined the effect of both KChIP2.2 and KChIP2.1 on Kv4.3.The voltage protocol involved square voltage steps from −60 to +40 mV in 10 mV increments from a holding voltage of −80 mV, as illustrated in the inset to the top panel of Fig. 1A. Co-expression of Kv4.3 with either KChIP2 isoform produced a robust increase in outward currents when compared to Kv4.3 alone.For Kv4.3, current density at +40 mV was 490 ± 108 pA/pF, which increased to 833 ± 107 pA/pF and 1043 ± 88 pA/pF for cells expressing KChIP2.2 and KChIP2.1, respectively.In addition, we evaluated the effect of expressing both KChIP2 isoforms with human DPP6 as the presence of both accessory subunits may be necessary to recapitulate native Ito.The presence of DPP6 reduced the augmentation of Kv4.3 current produced by KChIP2.1, resulting in both KChIP isoforms having the same agonistic effect on Kv4.3 in the presence of DPP6 as shown in Fig.
1C and detailed in Table 1.Fitting a bi-exponential decay function to the Ito inactivation time course showed that expression of KChIP2.1 consistently increased both the fast and slow time constants compared to Kv4.3 alone.Values for tau fast were 11.16 ± 0.89 ms vs 21.24 ± 2.88 ms, while slow time constant values were 84.7 ± 4.9 ms vs 146.4 ± 23.9 ms.Interestingly, co-expression of KChIP2.2 increased the Kv4.3 slow time constant to 132.8 ± 17.3 ms, but not the fast component.Finally, addition of DPP6 with KChIP2 isoforms opposed the changes in Kv4.3 inactivation due to KChIP2 expression alone.The time-course of recovery of Ito from inactivation was measured using a protocol consisting of two square voltage pulses to +40 mV with varying interpulse intervals, Δt, as illustrated in the inset to Fig. 1E.A plot of the fraction of recovered current against inter-pulse interval was generated and fitted with a mono-exponential function whose time constant characterized the rate of recovery.The expression of KChIP2 isoforms decreased Kv4.3 τrec from 46.10 ± 6.19 ms to 5.88 ± 0.55 ms and 11.99 ± 1.87 ms in the presence of KChIP2.2 and KChIP2.1, respectively.Co-expression of DPP6 with KChIP2.1/2.2 isoforms had only small effects on τrec compared to KChIP alone.Recently, 5 μmol/L NS3623 has been reported to increase native Ito in canine ventricular myocytes at membrane potentials above +10 mV.In addition, recovery from inactivation was shown to be slightly faster in the presence of NS3623.We therefore examined the effect of 10 μmol/L NS3623 on Kv4.3 currents at +30 mV, with Kv4.3 expressed alone or in the presence of KChIP2 isoforms with and without DPP6.Representative current traces are shown in Fig. 2A.In all cases, application of NS3623 resulted in an increase in current magnitude, but to an extent that depended on isoform co-expression.The greatest increase in current was seen in cells expressing Kv4.3 alone and co-expression with KChIP2.1/DPP6 resulted in a significantly reduced agonism.Application of NS3623 in the presence of either KChIP2.2 or KChIP2.1 led to a ∼35% increase in the Kv4.3 current fast inactivation time constant.Without these subunits, the time constant was not detectably altered by NS3623.When DPP6 was also expressed with KChIP2.1/2.2, the Kv4.3 current fast inactivation time constant was further increased.The opposite effect was observed for slow time constants.The KChIP2-mediated increase in time constants was generally reversed by 10 μmol/L NS3623 application.Furthermore, values from all conditions, with the exception of KChIP2.1, showed a significant acceleration of the slow component of inactivation.Nevertheless, the overall effect of NS3623 was greatest in the presence of KChIP2.x co-expressed with DPP6, as shown by the change in current integral.Recovery of Ito from inactivation was slowed by NS3623 in all our Kv4.3 expression conditions by a factor of 3–4.However, the current from cells expressing accessory subunits still recovered much faster than that from cells expressing Kv4.3 alone.The possible effects of KChIP2.1/2.2 and DPP6 on hERG current (IhERG) were evaluated with a standard hERG voltage clamp protocol.Both end-pulse and tail currents showed the expected electrophysiological characteristics.Current densities in the absence of any accessory subunit were 51.6 ± 4.5 pA/pF and 100.4 ± 7.5 pA/pF for IEnd Pulse and Itail, respectively and current magnitudes were not changed by co-expression with KChIP2.1/2.2 or DPP6.We also examined the normalised voltage dependence of IhERG when applying 2-s-long
voltage steps from a Vh of −80 mV to potentials between −40 and +60 mV.Each test pulse was followed by a repolarization step to −40 mV.As is typical for IhERG, current increased with progressive depolarization up to 0 mV.Further depolarization to test potentials above +10 resulted in current decline, as indicated by the region of negative slope on the end-pulse I-V relation.Tail current activation upon repolarization to −40 mV followed a sigmoidal activation pattern which could be fitted by a Boltzmann function.Examination of the Boltzmann half-activation voltage and slope parameters showed essentially no change with KChIP2.1/2.2 and DPP6.The effect of KChIP2.1/2.2 and DPP6 co-expression on IhERG rectification properties and deactivation time course was examined using the voltage protocol shown in Fig. 4Ai and test pulses were applied at 12 s intervals to ensure full recovery between pulses.Fig. 4Aii shows representative currents elicited by this protocol and Fig. 4B shows the resulting I-V relation.IhERG exhibited a voltage-dependence that was very similar to previous studies.Accessory subunit expression had no measurable effect on the resulting currents.In all cases, the fully activated I-V relation was maximal with a repolarization step to ∼30 mV.Likewise, all groups showed a similar reversal potential around −85 mV with no significant changes detected.The deactivation time-course was calculated from the tail currents at each repolarization potential for each cell by fitting a bi-exponential function, giving fast and slow components.Similar to the lack of effect of subunit co-expression on the fully activated I-V relation, both components of deactivation were unaffected by the addition of KChIP2.1/2.2 or DPP6.Activation time course of IhERG was evaluated using an “envelope of tails” protocol .The protocol consisted of a variable duration pre-pulse to +20 mV followed by a repolarizing step to −40 mV to de-activate IhERG and produce a tail current.Itail amplitudes were then fitted by a monoexponential function to give time constants for activation which were compared across different expression conditions.Neither co-expression with KChIP2.1 nor DPP6 altered the activation time-course.However, KChIP2.2 co-expression resulted in a ∼35% increase in the rate of IhERG activation.To examine the time dependence of recovery from inactivation we used a 2 s long depolarizing step to +40 to activate IhERG current, followed by a variable length repolarization step to −40 mV to allow the channels to recover from inactivation.This was followed by a 20 ms test step to +40 mV to probe the extent of recovery from inactivation.Currents were normalized to the maximal current seen during the second +40 mV steps.No significant differences in the rate of IhERG recovery were observed, although KChIP2.2 co-expression resulted in a significantly faster recovery time-constant compared to KChIP2.1.Although KChIP2.1/2.2/DPP6 co-expression had only small effects on the individual kinetic parameters of IhERG, the acceleration of activation by KChIP2.2 and other kinetic interactions during the dynamic AP might produce some summative effect.We therefore examined potential modulation of IhERG by KChIP2.1/2.2/DPP6 accessory subunits under AP clamp as the most physiological stimulus, using a digitised human ventricular AP waveform as the voltage command .The AP waveform as well as an exemplar IhERG record from a cell expressing only hERG are shown in Fig. 
6A and Table 2.Consistent with prior studies, IhERG current slowly increased during the plateau phase before quickly increasing in amplitude once the repolarization phase of the action potential started, with a maximal peak at ∼−40 mV.Current dramatically declined during terminal repolarization.As expected from the above data, only small differences in the activation profile are seen in the normalized instantaneous I-V relation for all conditions, specifically at voltages more depolarized than −40 mV.Maximal currents occurred late in repolarization in all conditions, with a mean membrane potential of −35.58 ± 0.94 mV in hERG-only expressing cells.The voltages in the other groups were: −32.10 ± 0.89 mV, −35.63 ± 2.68 mV and −35.91 ± 1.11 mV.No significant changes due to KChIP2.1/2.2/DPP6 co-expression were detected.In addition to the effect of NS3623 on Ito in ventricular cardiomyocytes, this compound also activates hERG ion channels in ex vivo preparations.To confirm the activity of NS3623 against IhERG and to explore the involvement of the KChIP2.1/2.2 isoforms or DPP6 in the compound-mediated response, we used our "standard" IhERG voltage step protocol before and after adding 10 μmol/L NS3623.Representative IhERG traces in control and with 10 μmol/L NS3623 are shown in Fig. 7A and a plot of IhERG tail amplitudes over time is shown in Fig. 7B.The results show that IhERG is quickly, and reversibly, activated by NS3623.An increase in IhERG in response to NS3623 was always observed when KChIP2.1/2.2 and DPP6 were co-expressed and the degree of enhancement of IhERG was independent of KChIP2.x and DPP6 co-expression.To our knowledge, this is the first study to investigate a potential modulatory role of the accessory subunits KChIP2.1, KChIP2.2 and DPP6 on recombinant hERG channels and to evaluate the dual potassium channel opener NS3623 on recombinant Kv4.3 channels.In addition, we have examined the effect of the 220-amino-acid isoform KChIP2.2 as well as KChIP2.1 on Kv4.3 and its response to NS3623, both of which are expressed in human cardiac tissue.It is important to point out that we have used expression of GFP protein as a reporter to identify cells expressing the ancillary subunits of interest.Although this approach does not guarantee that all constructs are expressed within the same cells, it was reassuring that the current density-voltage plots in Fig.
1B and C showed distinct patterns depending on whether KChIP2.1 or KChIP2.2 was co-transfected with Kv4.3 or with the presence/absence of DPP6.Furthermore, for Kv4.3 experiments, we have used substantial replicate ‘n’ numbers, so we are confident that our measurements should reflect successful transfection.Expression of both KChIP2.1 and KChIP2.2 resulted in a larger Kv4.3-mediated outward current than Kv4.3 alone.Qualitatively, our results are in reasonable accord with previous studies assessing KChIP2.1 and Kv4.3 expressed in HEK 293 cells , CHO cells and Xenopus oocytes , although our data show that KChIP2.2 increases current magnitude more weakly than KChIP2.1.Importantly, addition of DPP6 reduced the difference in current density produced by the KChIP2.1/2.2 isoforms by reducing the current density which was increased by KChIP2.1 co-expression while leaving KChIP2.2 current density largely unaltered.The latter result is similar to the finding that co-expression of DPP6 with KChIP2L had no significant effect on current density compared to KChIP2L alone .This suggests that DPP6 is not only a chaperone for Kv4.3 expression levels but can also stabilize the properties of the Kv4.3/KChIP2.x complex across different KChIP2.x isoforms.It is unclear whether Ito inactivation follows a mono- or a bi-exponential time course but in our experiments, mono-exponential fits did not properly describe Kv4.3 inactivation.The rate of Kv4.3 current inactivation was slowed by accessory subunit co-expression in our experiments and a larger effect was observed in cells co-expressing Kv4.3/KChIP2.1 compared to Kv4.3/KChIP2.2.Whilst KChIP2.1 slowed inactivation, in cells expressing KChIP2.2 the increase was smaller and the change in fast component change was no longer statistically significant.Our fast time constant values are slightly slower than reported by Lundby et al. for experiments in a CHO expression system at 37 °C but this may be explained by the temperature difference between our studies .Our data differs from that reported in Xenopus oocytes , suggesting that both the cell system and experimental temperatures may differentially affect biophysical properties.Co-expression of either KChIP2 isoform with DPP6 did not significantly change inactivation time constants compared to Kv4.3 alone, suggesting DPP6 opposes the kinetic increase mediated by KChIP2.x.A similar result has been previously reported for the KChIP2.1 isoform in oocytes .However, these results differ from an earlier study in CHO cells where expression of KChIP2.2 and KChIP2.2+DPP6 showed similar time constants .The recovery from inactivation showed the expected acceleration following expression of either KChIP2 isoforms or KChIP2/DPP6 co-expression, although our time constants are faster than reported in previous studies.While our results show no significant difference between KChIP2.2 and KChIP2.2+DPP6, Radicke et al. 
reported an accelerated rate of recovery when DPP6 is co-expressed with KChIP2.2.However, their rate in CHO cells is considerably slower than we observed in HEK293 cells and this difference is not explainable by the temperature difference between our studies because the Q10 is ∼2 .In contrast to a recent report demonstrating that interactions between recombinant Kv4.3 and hERG channels can result in an increase in IhERG density , co-expression of KChIP2.2, KChIP2.1 or DPP6 along with recombinant hERG channels resulted in no change in the magnitude of IhERG.The voltage and time-dependent properties of IhERG were generally unaltered although KChIP2.2 co-expression appeared to accelerate hERG activation.Furthermore, direct examination of IhERG profile under AP voltage clamp showed no significant difference in the profile of current activation with and without Ito accessory unit co-expression.Some recent studies have also investigated the effects of KChIP2 in guinea pig ventricular myocytes, which lack a functional Ito but still expresses KChIP2 .Knocking-down KChIP2 expression in guinea pig ventricular myocytes leads to AP prolongation which is not due to changes in IKr and IKs but an increase in Cav1.2 calcium channel expression .Thus although KChIP2.x may be considered a “master regulator of cardiac repolarization” IhERG appears to escape this regulation.Hansen and colleagues characterized the effect of NS3623 on IhERG in Xenopus oocytes, reporting: an increase in IhERG amplitude; a rightward shift in the voltage-dependence of inactivation and a slower onset of inactivation .Finally, evaluation of a mutant lacking inactivation showed that the compound did not further augment IhERG.Since no accessory subunits were co-expressed with hERG in that study, their results support the idea of a direct interaction between NS3623 and hERG protein.Further studies in guinea pig showed NS3623 shortens the AP and decreases the appearance of extrasystoles in perfused hearts as well as reversing drug-induced QT prolongation, supporting the idea of some therapeutic potential for this compound .Our results show that Ito-modulating β-subunits do not influence the agonism of IhERG by NS3623, consistent with both the lack of modulatory effects of KChIP2/DPP6 on the channel and a direct α-subunit action.In relation to the effect of NS3623 on Kv4.3, our results show that the compound augments current amplitude without accessory subunit co-expression in contrast to findings by Hansen and colleagues using Xenopus oocytes, who failed to detect Kv4.3 activation .The reason for this discrepancy is not clear, but may be related to the different expression systems and recording conditions.The concordance of our findings with the recent report by Calloe et al. 
using canine heart preparations highlights the importance of using mammalian cell expression systems for Kv4.3 studies.The rapid response to NS3623 which augments Kv4.3 current is consistent with acute effects of NS3623 on Ito leading to an increased epicardial action potential notch seen in canine left ventricular wedge preparations.Similarly, effects on current kinetics are also consistent with the primary modulatory effects of NS3623 being on ion channel function rather than trafficking/expression.Integrated charge transfer during current activation at +40 mV in the presence of NS3623 was increased by ∼50% by KChIP2.1/2.2 expression and this was further increased by DPP6 co-expression to ∼100%.Thus the addition of these subunits significantly augments the effect of NS3623, and this raises the possibilities that there are multiple interaction sites for NS3623 and that its effects may differ between regions of the heart depending on the level/composition of subunit expression.This notion is consistent with a prior study of the effects of the related compound NS5806 on canine Ito; these were greatest in epi- and mid-myocardial cells, which had the highest levels of KChIP2.Whether such differential expression and response could be beneficial for treatment of disease should be considered in future studies.Of course, as is the case for all expression studies, it is not possible to rule out the possibility that the NS3623 response may be altered by differing relative expression levels of Kv4.3 and KChIP2/DPP6 or that other accessory subunits not studied here may contribute to the Kv4.3 response in native tissue.Despite structural similarities, our data show that NS3623 binding sites must, in some ways, be different from the binding site for NS5806 since stimulation of Kv4.3 current by NS5806 requires the presence of KChIP2.Our results show that, although KChIP2 isoforms and DPP6 proteins are not required for Kv4.3 activation by NS3623, their presence influences the compound's effects on the rate of current decay, dominating the overall effect as reflected by total charge transfer.In addition, the change in kinetics depends on β-subunit combinations, with no change in the slow inactivation component when only KChIP2.1 is expressed.This observation suggests KChIP2 influences the gating effect of NS3623 on Kv4.3.Whether this effect is the result of the drug having multiple binding sites on Kv4.3 and/or accessory subunits is unclear.Experiments with the KChIP3 isoform have shown NS5806 can bind at a hydrophobic site within the C terminus, modulating the interaction between KChIP3 and Kv4.3.NS5806 has an additional trifluoromethyl group and bromine compared to NS3623.It is remarkable how such a small molecular difference can lead to such different mechanisms of action.Further studies are therefore needed to identify the sites of interaction between NS3623 and Kv4.3 and to locate the sites that mediate the differential drug effects on current amplitude and kinetics.Elucidation of the underlying molecular basis of the differences between NS3623 and NS5806 could lead to new therapeutics being developed from these prototype drugs.At this juncture, we can only suggest that NS3623 either binds to both Kv4.3 and KChIP2 and/or that there are two binding sites on Kv4.3 where one of them inhibits KChIP2 binding.The study was funded by a programme grant from the Medical Research Council U.K. JCH acknowledges a University of Bristol research fellowship.
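The inactivation and recovery analyses reported above rest on exponential curve fitting, carried out in the study itself with Clampfit as stated in the Methods.The sketch below is a generic illustration of the bi-exponential fit used to extract fast and slow inactivation time constants; the data are synthetic and the parameter values are only loosely based on the reported results.

```python
# Illustrative bi-exponential fit of a decaying current trace (synthetic data),
# of the kind used above to extract fast and slow inactivation time constants.
# This is not the study's analysis pipeline; values are for demonstration only.
import numpy as np
from scipy.optimize import curve_fit

def biexp_decay(t, a_fast, tau_fast, a_slow, tau_slow, c):
    """Two exponential components plus a steady-state offset."""
    return a_fast * np.exp(-t / tau_fast) + a_slow * np.exp(-t / tau_slow) + c

# Synthetic 'current' (arbitrary units) sampled at 20 kHz for 500 ms, loosely
# mimicking the reported tau_fast of ~11 ms and tau_slow of ~85 ms for Kv4.3.
t = np.arange(0.0, 500.0, 0.05)                     # time in ms
rng = np.random.default_rng(0)
i_obs = biexp_decay(t, 600.0, 11.0, 300.0, 85.0, 20.0) + rng.normal(scale=10.0, size=t.size)

# Fit with rough initial guesses; bounds keep amplitudes and time constants positive.
p0 = [500.0, 10.0, 200.0, 100.0, 0.0]
bounds = ([0.0, 0.1, 0.0, 1.0, -np.inf], [np.inf, 1e3, np.inf, 1e4, np.inf])
popt, _ = curve_fit(biexp_decay, t, i_obs, p0=p0, bounds=bounds)
print(f"tau_fast = {popt[1]:.1f} ms, tau_slow = {popt[3]:.1f} ms")
```

A mono-exponential recovery fit, as used for τrec above, follows the same pattern with a single exponential term.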
Transient outward potassium current (Ito) contributes to early repolarization of many mammalian cardiac action potentials, including human, whilst the rapid delayed rectifier K+ current (IKr) contributes to later repolarization. Fast Ito channels can be produced from the Shal family KCNDE gene product Kv4.3s, although accessory subunits including KChIP2.x and DPP6 are also needed to produce a near physiological Ito. In this study, the effect of KChIP2.1 & KChIP2.2 (also known as KChIP2b and KChIP2c respectively), alone or in conjunction with the accessory subunit DPP6, on both Kv4.3 and hERG were evaluated. A dual Ito and IKr activator, NS3623, has been recently proposed to be beneficial in heart failure and the action of NS3623 on the two channels was also investigated. Whole-cell patch-clamp experiments were performed at 33 ± 1 °C on HEK293 cells expressing Kv4.3 or hERG in the absence or presence of these accessory subunits. Kv4.3 current magnitude was augmented by co-expression with either KChIP2.2 or KChIP2.1 and KChIP2/DPP6 with KChIP2.1 producing a greater effect than KChIP2.2. Adding DPP6 removed the difference in Kv4.3 augmentation between KChIP2.1 and KChIP2.2. The inactivation rate and recovery from inactivation were also altered by KChIP2 isoform co-expression. In contrast, hERG (Kv11.1) current was not altered by co-expression with KChIP2.1, KChIP2.2 or DPP6. NS3623 increased Kv4.3 amplitude to a similar extent with and without accessory subunit co-expression, however KChIP2 isoforms modulated the compound's effect on inactivation time course. The agonist effect of NS3623 on hERG channels was not affected by KChIP2.1, KChIP2.2 or DPP6 co-expression.
252
Vector field statistics for objective center-of-pressure trajectory analysis during gait, with evidence of scalar sensitivity to small coordinate system rotations
Center of pressure trajectories detail the dynamic interaction between the foot and ground, and have been widely used to characterize gait mechanics in both health and disease.They are typically analysed first qualitatively and then statistically, through the extraction of a number of scalar parameters like planar orientation and maximum displacement.One problem with COP trajectory parameterization is that a large number of scalars – on the order of 50 – exist for describing even single COP trajectories, and many additional scalars exist for describing multiple COP trajectories.Since different studies tend to report different parameters, multi-study comparisons and meta-analyses are difficult.A potentially more serious problem is that ad hoc scalar extraction can bias statistical analysis via unjustified focus on particular coordinates and/or temporal windows.The purpose of this study was to demonstrate how vector field statistics can be used to more objectively analyse COP trajectories.The method stems from statistical parametric mapping, an applied statistical technique used to detect signals in spatiotemporal continua.We use previously collected plantar pressure data to test the null hypothesis that walking speed does not affect the COP trajectory, both to clarify trends in those data and to corroborate vector field COP results with independently reported walking speed effects.Since coordinate system definitions can affect COP interpretations, we also conduct a coordinate system sensitivity analysis.Ten male subjects provided informed consent and performed 20 trials of each of slow, normal and fast walking.Plantar pressures and walking speed were recorded using a Footscan 3D system and a ProReflex system, respectively.PP data were spatially normalized using optimal scaling transformations to align the average PP distribution's principal axes with the measurement device's coordinate system.COP trajectories were linearly interpolated to 101 values.The data were fitted to two different statistical models: a paired t test and linear regression.Analyses of these two models were found to produce qualitatively identical interpretations, so for simplicity only the former is presented below.Although our only formal hypothesis test was a single vector field test, we also separately analyzed COP scalars to emphasize the pitfalls of trajectory simplification.Specifically, we extracted the two scalars that appeared to be most affected by walking speed: rx at time = 70% stance, and ry at time = 55%.A Šidák threshold of p = 0.0253 corrected for the two tests.Each COP trajectory was regarded as a single vector field r(q) = [rx(q), ry(q)], where q represents time.Within-subject mean r trajectories were estimated for each subject and for both slow and fast walking, yielding the jth subject's fast–slow difference trajectory: Δrj(q) = rj,fast(q) − rj,slow(q).The paired Hotelling's T2 test statistic trajectory was computed as: T2(q) = J m(q)ᵀ W(q)⁻¹ m(q), where m(q) and W(q) are the mean and sample covariance matrix of the J subjects' difference trajectories at time q.Statistical inference was conducted by calculating the T2 threshold above which only α = 5% of T2 trajectories would be expected to traverse, if the null hypothesis were true, and if the underlying COP data were generated by a random process with the observed 1D smoothness.Following thresholding, exact p values were computed for each supra-threshold cluster based on their temporal extent.Last, post hoc t tests were conducted on rx and ry using the identical procedure, with a Šidák threshold of p = 0.0253.Additional details regarding this inference procedure are provided in Supplementary Material.COP trajectories were
rotated in the xy plane in increments of 0.5° between −15° and +15°.Sensitivity to these rotations was evaluated using the post hoc null hypothesis rejection decision for the rotated rx trajectories.Walking speed produced no qualitative COP change in the xy plane, but fast walking appeared to medialize the COP over 60–80% stance and anteriorize the COP over 50–70% stance.Scalar extraction analysis yielded p < 0.001 and p = 0.003, respectively.Vector field results agreed with the medialization trend over 65–80% stance via a post hoc test on rx.Post hoc analysis also agreed with the anteriorization trend over 65–90% stance, but this effect failed to reach significance at the instant of scalar analysis.Last, vector field analysis revealed an effect not detected in scalar analyses: a more posterior COP at heel contact in fast vs. slow walking.Coordinate system sensitivity analysis found that the medialization effect reduced in magnitude with external foot rotation.Effect significance disappeared for external rotations greater than 5°.The current ry results agree with independent findings of increasingly posterior heel contact and increasingly rapid transfer on to the forefoot with walking speed.Nevertheless the scalar results disagreed regarding the existence of anteriorization at time = 55%, and this disagreement persisted in supplementary analyses using a different statistical model.The reason is that scalar extraction analysis fails to account for both vector covariance and multiple tests across the time series.By observing the mean ry curve before choosing our scalar, we effectively conducted 101 tests, but then chose to report only one, without considering vector covariance.The current scalar and vector analyses agreed regarding speed-related medialization, but this effect disappeared for a small coordinate system rotation on the order of 5°, in agreement with previous reports of coordinate-system dependence in COP results .This sensitivity finding is practically relevant because: laboratory equipment may be oriented manually, motion generally does not parallel the laboratory coordinate system , and foot posture is variable between trials.All factors conspire to imply that near-threshold COP results should be interpreted cautiously, and preferably with accompanying sensitivity analyses.Vector field analyses via SPM account for both vector covariance and multiple comparisons across the trajectory and are therefore more objective than scalar extraction analysis.A second advantage of SPM is analysis efficiency.Whereas scalar parameterizations of COP trajectories can lead to tabulated results for on-the-order-of 50 different parameters , SPM efficiently focusses on just a single parameter, the vector field r.This focus on r is consistent with most studies’ null hypotheses, which implicitly pertain to a single entity: the COP trajectory as a whole.To justify scalar extraction one would have to devise explicit a priori hypotheses regarding each extracted scalar.In summary, this study has shown that temporally normalized COP trajectories can be analyzed in their original form using SPM, and that ad hoc scalar simplification is generally biased because it fails to account for both vector covariance and multiple comparisons across time.This study also confirms previous reports of COP coordinate system sensitivity, implying that vector statistics are better suited to generalized COP analyses.
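The paired Hotelling's T2 trajectory described above can be computed directly from arrays of temporally normalized COP trajectories, as sketched below on synthetic data.The sketch covers only the test statistic; the random field theory thresholding used for inference in this study is not reproduced here (implementations are available, for example, in the spm1d package).

```python
# Minimal sketch of the paired Hotelling's T2 trajectory for COP vector fields.
# Each subject's trajectory is a (Q x 2) array of (rx, ry) vectors over Q time nodes.
import numpy as np

def paired_hotellings_T2(r_fast, r_slow):
    """r_fast, r_slow: arrays of shape (J, Q, 2); returns a T2 trajectory of shape (Q,)."""
    d = r_fast - r_slow                      # within-subject difference vectors, (J, Q, 2)
    J = d.shape[0]
    d_mean = d.mean(axis=0)                  # mean difference at each time node, (Q, 2)
    T2 = np.empty(d.shape[1])
    for q in range(d.shape[1]):
        W = np.cov(d[:, q, :], rowvar=False)               # 2 x 2 sample covariance at node q
        T2[q] = J * d_mean[q] @ np.linalg.solve(W, d_mean[q])
    return T2

# Synthetic example: 10 subjects, 101 time nodes (0-100% stance), small systematic shift.
rng = np.random.default_rng(1)
r_slow = rng.normal(size=(10, 101, 2))
r_fast = r_slow + 0.4 + rng.normal(scale=0.5, size=(10, 101, 2))
print(paired_hotellings_T2(r_fast, r_slow).max())
```

Post hoc t tests on the rx and ry components follow analogously, with a Šidák-corrected threshold as described above.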
Center of pressure (COP) trajectories summarize the complex mechanical interaction between the foot and a contacted surface. Each trajectory itself is also complex, comprising hundreds of instantaneous vectors over the duration of stance phase. To simplify statistical analysis often a small number of scalars are extracted from each COP trajectory. The purpose of this paper was to demonstrate how a more objective approach to COP analysis can avoid particular sensitivities of scalar extraction analysis. A previously published dataset describing the effects of walking speed on plantar pressure (PP) distributions was re-analyzed. After spatially and temporally normalizing the data, speed effects were assessed using a vector-field paired Hotelling's T2 test. Results showed that, as walking speed increased, the COP moved increasingly posterior at heel contact, and increasingly laterally and anteriorly between ~60 and 85% stance, in agreement with previous independent studies. Nevertheless, two extracted scalars disagreed with these results. Furthermore, sensitivity analysis found that a relatively small coordinate system rotation of 5.5° reversed the mediolateral null hypothesis rejection decision. Considering that the foot may adopt arbitrary postures in the horizontal plane, these sensitivity results suggest that non-negligible uncertainty may exist in mediolateral COP effects. As compared with COP scalar extraction, two key advantages of the vector-field approach are: (i) coordinate system independence, (ii) continuous statistical data reflecting the temporal extents of COP trajectory changes. © 2014 Elsevier B.V.
253
What are the limits to oil palm expansion?
Palm oil production has boomed over the last decades, driven by increasing use as frying oil, as an ingredient in processed food and non-edible products, and more recently in biodiesel production.Most observers expect this trend to continue in the coming years, though probably at a slower pace than in the last decade.The share of palm oil in global vegetable oil production has more than doubled over the last twenty years, today representing more than 30% and outstripping soya oil production.Reasons for this strong expansion include the substantially higher oil yield of palm oil compared to other oilseeds – over four and seven times greater than rapeseed and soy, respectively – and its lower price, which has made it the primary cooking oil for the majority of people in Asia, Africa and the Middle East.Schmidt and Weidema estimate that palm oil is today the "marginal oil", i.e. future increases in demand for vegetable oils will be primarily satisfied by palm oil rather than by other vegetable oils.This resulted in an expansion of the global oil palm planting area from 6 to 16 million hectares between 1990 and 2010, an area which now accounts for about 10 percent of the world's permanent cropland.Malaysia and Indonesia have been the epicenter of this dynamic development: in these two countries planted area has increased by 150% and 40%, respectively, over the last decade, and together they currently represent over 80% of global palm oil production.As global demand increases and available land becomes increasingly scarce in the traditional production centers, governments of developing and emerging countries such as Brazil and Peru, and of Central and Western Africa, increasingly promote oil palm cultivation as a major contributor to poverty alleviation and to food and energy independence.It is estimated that 17% of the new plantations in Malaysia and 63% of those in Indonesia came at the direct expense of biodiversity-rich tropical forests over the period 1990–2010, and up to 30% of this expansion occurred on peat soils, leading to large CO2 emissions.These potential negative effects of oil palm cultivation have given rise to closer scrutiny from consumers.As a consequence, in 2004 the palm oil sector developed its own sustainability certification standard, the Roundtable on Sustainable Palm Oil, and the European Union as well as the United States have also set up specific sustainability criteria on feedstock imports for biofuel production.However, RSPO-certified palm oil continues to be a niche product, holding only about 15% of the market, half of which is marketed as conventional palm oil since demand for certified oil is still too low.In 2014, five major oil palm growers initiated the Sustainable Oil Palm Manifesto, which is preparing the ground for the establishment of a set of clearly defined and globally applicable thresholds for the definition of sustainable palm oil.The broad objective of sustainable development is to "meet the needs of the present without compromising the ability of future generations to meet their own needs".Some palm oil certification schemes, like the RSPO, tackle the three pillars of sustainable development, i.e.
the environmental, social and economic dimensions, while other initiatives, like the EU directive on biofuels, focus on carbon savings and biodiversity protection.The different certification schemes are not aligned on the most appropriate or useful set of indicators, and they take different approaches to developing and using them.However, two indicators are widely used to prevent emissions from the conversion of land with high carbon content or the destruction of biodiversity-rich natural habitats for palm oil production: the High Carbon Stock (HCS) and the High Conservation Value (HCV) indicators.In the context of a continued boom in palm oil demand and the increasing sustainability commitment of the palm oil sector, the objective of this paper is to identify the potential available area for future expansion of palm oil plantations globally and, more especially, how this might be affected by the implementation of some of the environmental sustainability criteria currently discussed by the sector.We first assess oil palm land suitability from a bio-physical perspective, taking into account climate, soil and topography.Subsequently, we remove from the suitable area the land where conversion is currently not possible because it is already under use or protection.Then, we exclude land which is of special value for biodiversity conservation or carbon storage.Finally, we assess the accessibility of the resulting potentially available land for future oil palm plantation expansion, as remoteness might reduce the profitability of palm oil production.Oil palm trees grow in warm and wet conditions.Four climatic factors are crucial for oil palm cultivation: the average annual temperature, the average temperature of the coldest month of the year, the annual precipitation and the number of months which receive less than 100 mm of precipitation.Optimal temperature conditions range between 24 and 28 °C, and the average temperature of the coldest month of the year should not fall below 15 °C.Further, the length of the growing period for oil palm is mainly determined by the length of the period with sufficient moisture supply.Optimal conditions for palm cultivation are 2000–2500 mm rainfall per year with a minimum of 100 mm per month.On well drained soils, i.e.
soils which are classified as other than poorly drained according to the Harmonized World Soil Database annual rainfall up to 4000 mm is well supported, above this threshold diseases become more frequent and 5000 mm is considered the definite upper limit to oil palm cultivation.It is reported to be grown under precipitation conditions as low as 1000 mm per year and up to five months of dry period.We present a review of suitability factors used by other studies in SM C.We do not consider irrigation schemes as a potential management option because for oil palm cultivation these schemes are still in the experimental phase.We use data from the WorldClim database to compute climate suitability at the 30 arc seconds resolution level and data from the HWSD to determine the drainage status of a site.Oil palm is not very demanding in its requirements of the chemical and physical properties of the soil: it grows on a wide range of tropical soils, many of which are not suitable for the production of other crops.Constraining soil factors for oil palm cultivation can be either chemical or physical in nature.Optimal conditions are provided by finely structured soils with high clay content, though fairly good yields can also be achieved on loam and silt-dominated soils.Oil palm is also very sensitive to insufficiencies in water provision which are frequent on sand-dominated soils.We distinguish between those soil features that can be overcome by appropriate agronomic management and those that are unsuitable regardless of management.We make the assumption that appropriate soil management measures are applied in agro-industrial oil palm plantations and therefore non-permanent problematic soil features can be overcome and are not considered in the analysis.For soil information we rely on the HWSD, as it provides globally consistent data and has become the standard soil dataset for global applications in recent years.The database is, however, incomplete concerning significant areas in Africa and Asia and to be conservative we classified these areas as not suitable.However, since these patches are located in arid areas unsuitable for oil palm cultivation, the partial lack of soil data does not affect our assessment.Steep slopes restrict oil palm cultivation in different ways.They increase planting, maintenance and harvesting costs, and shallow soils mean weak anchorage of the plants and surface runoff of fertilizers.Topsoil erosion of exposed sites is also commonly associated with sloping land, which is an exclusion criterion in an assessment of High Conservation Values.Ideal conditions can be found on flat areas with 0–4° slope inclination – but palms can successfully be grown on slopes of up to 16°.The common opinion at present is that slopes above 25 ° should not be planted at all.Furthermore, in tropical regions, elevation is strongly correlated to temperature, with a lapse rate being around −6 °C per 1000 m and elevation is also often associated with slope inclination.We use data from the NASA Shuttle Radar Topography Mission with a 90 m initial raster grid cell size resampled to 1 km using a nearest neighbor technique as this source provides a globally consistent dataset at high resolution and free of charge.Soil and climate are the basic resources for growth of any crop whereas topography is a good proxy for the manageability of a mechanized production system, with the latter being particularly true for the oil palm.We defined an optimal range and minimum and maximum suitability values for oil palm 
growing conditions according to four climatic, three soil and two topography criteria and classified suitable land from 1 – marginally suitable to 5 – perfectly suitable.The approach to combine criteria into one overall suitability presented here is based on Liebig’s fundamental “Law of the Minimum”, which states that “a given factor can exert its effect only in the presence of and in conjunction with other factors”.For instance, a soil may be rich in nutrients but these substances are useless if necessary moisture is lacking to sustain plant growth.Consequently, the overall suitability score reflects the score of that bio-physical variable which is least suitable for oil palm cultivation, e.g. overall suitability is zero if one or more variables are zero.In the following we use the term “suitable land” for all land that is suitable from a purely bio-physical viewpoint based on the criteria described in Table 1.Detailed information of the thresholds considered to classify bio-physical data into suitability bins is provided in the Supplementary material.We distinguish three types of limits to oil palm expansion: land that is prevented from being converted to other uses such as built-up land, land which is already used such as cropland and pasture and non-protected areas which are nevertheless important for biodiversity conservation and carbon storage.The data sets used are available at varying spatial resolutions, in raster or polygon format.To allow for a consistent assessment, we converted the datasets to raster format at the spatial resolution of 30 Arc seconds, corresponding to ca. 1 km using a nearest neighbor technique.We first exclude protected areas from land potentially available for oil palm expansion since the law usually prevents land conversion in these areas.We opted to use PAs of all status classes from the World Database on Protected Areas to identify location and extent of protected areas.PAs of any status were picked in order to adopt a conservative approach and to ensure we did not omit PAs that might be delivering some conservation on the ground despite not being legally recognized as PA by the jurisdiction in place.Generally, information about both location and extent of PAs was available as polygons.In some cases, point data was available from the WDPA indicating the approximate center and the reported area of each PA only.In those cases we calculated a circular shape of the PA corresponding to the reported size of each PA as suggested by Juffe-Bignoli et al. and added these circular polygons as a proxy of the actual extent of PAs to the dataset.We consider that the timescale to convert built area to other uses goes beyond the scope of this study.Consequently, we also exclude urban areas from the land being potentially available for oil palm expansion.We used the crowdsourcing-based hybrid land cover map constructed by See et al. to identify urban areas.Suitable land for oil palm cultivation can be already used for food, animal feed or timber production.Substitution of palm oil plantations to these different uses is usually not forbidden, but this could potentially create some conflicts with other needs including food for local populations.Following a conservative approach, we decided to exclude this land from the available land for oil palm plantations expansion.Existing cropland, pasture and cropland-forest mosaic area was identified based on the See et al. 
global land cover map.Furthermore, we also excluded existing industrial oil palm plantations for Indonesia, the Central African Republic, Equatorial Guinea, Cameroon, Democratic Republic of the Congo, Gabon, Liberia, the Republic of the Congo and Guatemala, for which we had access to spatial data.This approach allowed us to capture ca. 15 Mha of concession area.Spatial data were not available for important palm oil producing countries like Malaysia and Colombia.Finally, forest concessions in the tropics are usually attributed for timber harvesting over periods longer than 25 years.We exclude them from available land for seven countries worldwide where we had access to spatial data: Indonesia, the Central African Republic, Equatorial Guinea, Cameroon, Democratic Republic of the Congo, Gabon and the Republic of Congo.High Conservation Values dominate the discussion around sustainable palm oil, and conducting assessments against HCV standards is obligatory for a number of certification schemes.However, HCV is a concept developed for local and case-by-case application and hence there is no global dataset of HCVs.In an attempt to find substitutes for HCV data, we identified areas where at least four of the six global, terrestrial biodiversity priority areas overlap, following an approach put forward by Kapos et al. to cover HCV 1 and 3.The six priority areas include Conservation International's Hotspots, WWF's Global 200 terrestrial and freshwater ecoregions, Birdlife International's Endemic Bird Areas, WWF/IUCN's Centers of Plant Diversity and Amphibian Diversity Areas.The draft version of the sustainability commitment of the major oil palm growers mentions that "old-growth forests without evidence of recent human disturbance" should not be converted, which is related to HCV 2.For this purpose, we use the Intact Forest Landscape dataset, which maps old-growth forests with a minimum area of 20,000 ha.The sustainability commitment of the palm oil sector sets out very clear guidelines for the definition of what is to be considered land with high carbon stock – including both above ground and below ground carbon – that should be permanently spared from conversion to oil palm plantations.The proposal is to consider as HCS any forest type with an above ground biomass greater than or equal to 100 t/ha and any peat soil with a peat layer thickness exceeding 12.5 cm.To that end, we use the pan-tropical AGB map produced by Baccini et al.
to identify HCS forests, and the histosols soil category from the HWSD as a proxy for tropical peatlands.Finally, we overlay the potentially available land for sustainable oil palm cultivation that we obtain from the combination of all previously mentioned criteria, with the time to access the closest city above 50,000 inhabitants based on current infrastructure network.This allows us to estimate how accessible and therefore economically attractive are the remaining areas identified for sustainable oil palm production and hence provides a first glimpse of the economic dimension of this assignment.The spatial resolution of this dataset is 30 arc seconds.We find that some 1.37 billion hectares of land globally are suitable for oil palm cultivation.Suitable land is concentrated in twelve tropical countries, which together encompass 84% of the global suitable area.Almost half of the land area of Brazil – essentially located in the Amazon – is to some extent suitable for oil palm planting, which corresponds to a total suitable area of 417 Mha, making it the number one country in terms of suitable land.The sheer size of the country determines the huge potential for oil palm expansion, in fact other countries have a higher proportion of suitable land relative to their total.The Supplementary material provides an overview of the bio-physically suitable area for all tropical countries.Suitability is essentially driven by climate.High temperatures over the year along with sufficient and steady rainfall are crucial to oil palm cultivation.Optimal climatic conditions are found in South East Asia and especially in Indonesia and Malaysia, with consistently high temperatures and precipitation throughout the year.However, when moving north to continental South East Asia away from the equator, a marked dry season diminishes climatic suitability for oil palm cultivation in countries such as Thailand, India and Cambodia.In South America, large tracts of the Amazon region in Brazil, Colombia, Peru and Ecuador exhibit good climatic conditions for oil palm growth and so do parts of Central America and the Caribbean.The main limiting factor here is the Andean mountain chain stretching North-South and the climate which − in addition to the equatorial gradient − is too dry to bear oil palms in a good portion of the north of Brazil.In Africa, the biggest area of suitable land is located in the Congo basin, essentially in DRC, but also the gulf of Guinea and West Africa harbor a relatively narrow stretch of suitable land along the coast.However, several months with less than 100 mm and lower annual precipitations than in the other tropical regions partly reduce the suitability for oil palm in the region.Undulating slopes and elevated areas pose further constraints in mountainous areas such as the Andes in South America, the Albertine Rift in Eastern DRC and the New Guinea Highlands on the island of Papua.About 70% of the potentially suitable cultivation area for oil palm according to climatic conditions could be negatively affected by problematic soil growing conditions, the most prominent problematic soil type being weathered and leached soils which are widespread over the whole tropical area, and especially in Africa.Poorly drained soils are common in depression zones of Indonesia, which are often identical to peat areas and other soils with high organic matter.These can also be observed along major rivers in South America.However, most of these constraints could be overcome by applying optimal management, even if 
it will entail some additional production costs.Starting from the total suitable area for oil palm cultivation, we first exclude one by one the land which falls under each individual criterion to determine how each of them impacts the land availability for oil palm expansion.In a second step, we combine all the criteria to determine their joint impact on land availability, since it is important to note that many of the criteria overlap.Of the total of 1370 Mha of suitable land, urban areas reduce available land for oil palm expansion by 5 Mha which is equivalent to 0.38% of total suitable area.Conversely, 30% of the globally suitable area for oil palm production is currently occupied by PAs, reducing the available land for oil palm plantations expansion by 417 Mha worldwide.PA coverage of suitable land for oil palm production ranges from less than 2% in Papua New Guinea to as much as 67% in Venezuela, with the majority of the countries covering 15-20% of their suitable area.About 216 Mha of agricultural land is located on suitable areas.This number comprises cropland and pasture as well as areas covered with cropland-forest mosaic.Oil palm is currently grown on a total area of 18 Mha according to available spatial information, among which 20% is already classified as agricultural land.This means that about 14 Mha of current oil palm concessions have to be added to the agricultural area to account for the area already under agricultural use.For the countries where data was available – basically the Congo basin countries and Indonesia – logging concessions could further reduce oil palm plantations expansion by almost 70 Mha with Indonesia holding the largest area with a total of 24.9 Mha of suitable land being under forest concession.Since the overlaps between the above criteria are quite limited, we calculate that 723 Mha of suitable area for oil palm expansion is already taken by other uses, reducing the land availability for future expansion by half compared to the biophysically suitable area.Highly biodiverse areas cover 125 Mha of suitable land for oil palm cultivation and are relatively concentrated in a handful of countries with Indonesia, Peru, Brazil and Venezuela making up for almost 45% of all highly biodiverse areas in suitable areas for oil palm cultivation.We also note that highly biodiverse areas would almost completely prevent oil palm plantation expansion in some countries, such as Madagascar and Liberia.A similar concentration on a few countries is true for intact forests, where Brazil, DRC and Peru account for two thirds of the global suitable area, which amounts to a total of 507 Mha.Forest storing more than 100 tons AGB per ha is the most constraining criterion in terms of land availability for oil palm expansion, covering about 1 billion ha i.e. 
leaving 370 Mha suitable for potential expansion worldwide.The suitable area for oil palm is strongly correlated with this criterion as 83% of carbon-rich forests are located in the twelve countries that also have the largest suitable area.This criterion would especially reduce land availability in Brazil, with more than 300 Mha dropped from the suitable area.Peatlands, by contrast, are very much concentrated on South-East Asia with Indonesia and Malaysia harboring almost all the world’s known peatlands.The combination of all above mentioned criteria and suitable land for oil palm cultivation yields an estimate of available land for oil palm plantations of 233.82 Mha worldwide, only 17% of what we have estimated as the total suitable area.Brazil, with 43.4 Mha, has by far the largest area of available land for oil palm expansion followed by the DRC, Colombia and Indonesia.That being said, the application of sustainability criteria will restrict oil palm cultivation in some countries more than in others.There are countries which could develop as much as 49%, 53% and even 69% of their suitable land while adhering to the full set of sustainability criteria.On the other end of the spectrum countries such as Peru, Guyana, Suriname and French Guyana could only develop a marginal share of less than 4% of the countries’ suitable area for sustainable palm oil production.The extensive and biomass-rich forest cover is by far the single most constraining factor in these countries.Whereas overall potential availability of suitable land is ca. 17% globally, only 5% of the ‘very suitable’ areas remain.In absolute terms, this corresponds to 19.3 Mha of very suitable land which could be available for sustainable oil palm cultivation in the future, a number which would still allow doubling the current extent of 18.1 Mha of oil palm worldwide.Once overlap of criteria is taken into account, the combination of existing agricultural land and 100 t/ha above ground biomass cover would be enough to cover 88% of the total excluded area globally based on the combination of the eight criteria considered in this study.However, this could not be the case locally, where other criteria, like biodiversity hotspots could significantly reduce area for oil palm expansion beyond carbon and agricultural land.Analysis of accessibility of potentially available lands yields the results presented in Fig. 4.Just less than 1/5th of the area is in reach in less than 2 h from the closest city and 50% are accessible in less than 5 h. 
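The overlay logic described above (scoring each pixel by its least suitable criterion and then removing pixels claimed by any of the eight exclusion layers) can be sketched in a few lines of array arithmetic. The following Python sketch is illustrative only: the array names, toy grid, exclusion shares and the fixed cell area are assumptions rather than the authors' data or code, and a real implementation would work on the 30 arc-second rasters with latitude-dependent cell areas.

import numpy as np

def overall_suitability(criterion_scores):
    # Law of the Minimum: a pixel's overall score is that of its least
    # suitable criterion, so a single zero makes the pixel unsuitable.
    return np.minimum.reduce(criterion_scores)

def available_land(suitability, exclusion_masks):
    # exclusion_masks: boolean arrays (True = excluded), e.g. protected areas,
    # urban land, cropland/pasture, concessions, biodiversity priority areas,
    # intact forest landscapes, AGB >= 100 t/ha and peatland.
    excluded = np.logical_or.reduce(exclusion_masks)
    available = suitability.copy()
    available[excluded] = 0
    return available

# Toy 30 arc-second style grids; real inputs would be read from the rasters.
rng = np.random.default_rng(0)
shape = (1200, 1200)
climate = rng.integers(0, 6, shape)   # the four climatic criteria, collapsed here
soil = rng.integers(0, 6, shape)      # the three soil criteria, collapsed here
topo = rng.integers(0, 6, shape)      # the two topography criteria, collapsed here
suit = overall_suitability([climate, soil, topo])

masks = [rng.random(shape) < p for p in
         (0.30, 0.004, 0.16, 0.05, 0.09, 0.10, 0.50, 0.02)]  # assumed shares
avail = available_land(suit, masks)

CELL_HA = 85.0  # placeholder cell area (ha); it varies with latitude in reality
for klass in range(1, 6):
    print(f"class {klass}: suitable {(suit == klass).sum() * CELL_HA / 1e6:.1f} Mha, "
          f"available {(avail == klass).sum() * CELL_HA / 1e6:.1f} Mha")

Run on the real layers, a class-by-class tabulation of this kind is what the 1370 Mha suitable versus 234 Mha available figures summarise, and the available pixels are then cross-tabulated with travel time as in Fig. 4.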
At the other end of the accessibility range, 20% of all available areas are located at 10 h or more from cities. Variation among suitability classes is minor, yet land in the highest suitability class tends to be somewhat more remote than land in other classes. We have generated a new global bio-physical oil palm suitability map which differentiates between five suitability classes. This dataset could be one useful layer of information to guide future oil palm expansion according to different objectives. Our results indicate that ten countries encompass 75% of the global suitable area. Countries in South East Asia, the current center of palm oil production, have the highest share of suitable land relative to the size of the countries, while countries in Latin America and Central and Western Africa have the largest tracts of potentially suitable land. Suitability is essentially driven by climate, and in particular by high temperatures with sufficient and steady rainfall over the year. The thresholds used to classify the underlying continuous and categorical data into suitability classes were chosen on the basis of a detailed literature review. However, this could remain a potential source of debate. The suitability map produced for this study is comparable with previous studies, yet regionally strong differences exist between the products, which mainly relate to differences in the way water availability, in particular during dry seasons, is taken into account. We think that this study better captures the impact of seasonality by using the number of dry months over the year and the lowest temperature in the coldest month, rather than the lowest mean monthly relative humidity of the driest month of the year, which is used in the GAEZ study, where dry spells are not reflected explicitly. This study also tends to consider a wider area to be suitable than the WRI study because we use a lower minimum annual rainfall: 1000 mm instead of 1400 mm. Since there is empirical evidence of oil palm being cultivated under less favorable climatic conditions, we are confident in using a lower minimum threshold value. We present a comparison of both the GAEZ product and our new suitability map in the Supplementary material. One limiting factor to the reliability of the suitability map is the quality of input data. As we assessed suitability on a global scale, the data are often the result of an interpolation process from in situ measurements. Climatic information is collected in a network of climate stations around the globe; however, in tropical areas this network is particularly thin and the quality of the final product is thus diminished. Both availability and quality of soil data also vary greatly among regions. The authors of the Harmonized World Soil Database acknowledge that soil data for West Africa and South Asia are less reliable. The suitability map has been realized under optimal management, i.e.
assuming that most of the soil constraints are overcome by better practices including for instance ploughing, soil water management techniques, mulching or fertilization.Consequently, suitability area on problematic soils is higher than without management but the total suitable area is not affected by assumptions made about the management.In fact the suitable area could be expanded if irrigation could be used to overcome water deficit.However, there is currently almost no agro-industrial plantation which uses irrigation so we decided to not use this management option.Yet, an alternative suitability map could be built in the future to allow for irrigation.Ultimately, all these management options should not only be considered through increase in the potential yield per hectare but also through higher production costs.This would allow exploring which palm oil price level would be necessary to adopt certain management options.One of the limits of future oil palm expansion is the competition for land with other uses.We have shown that by removing the area under current use or protection, we reduce by half the total suitable area potentially available for future oil palm expansion.In reality, an important share of plantation has been developed on agricultural land in the past.But since the global population is expected to continue growing until at least 2050, we likely underestimate the area which will be allocated to other uses and overestimate the available area for future oil palm cultivation.We envisage to investigate this issue of increasing competition for land in the future between oil palm and other commodities by using an economic model with a detailed representation of land-based activities and market interactions such as GLOBIOM.Our results also highlight the need to reinforce control in existing protected areas as 30% of current protected areas are located in areas suitable for oil palm.Major palm oil producing companies and countries are more and more committed to reduce their environmental impacts.From 1.37 billion hectares of land being suitable for oil palm cultivation, only 17% remains when land currently allocated and environmental sustainability criteria are taken into account, including 19 million hectares of highly suitable area.High carbon stocks criteria alone reduces by 73% the suitable area for oil palm expansion and encompasses 88% of the land excluded by the sum of all other environmental criteria.This suggests that this criterion could be prioritized in future studies if data on other criteria is not available.However, if several global datasets on aboveground biomass are available, there is a high uncertainty associated with the biomass information which is derived from satellite images.The remaining suitable, sustainable and potentially available land that we estimate in this study is still large if we compare with the current 17 million hectares under oil palm cultivation globally.A study commissioned by the Indonesian government finds 18 Mha of available land for oil palm expansion which is similar to our results.For the Brazilian Amazon, Ramalho Filho et al. identified 31.2 Mha which is about 12 Mha less than our study and for the Republic of the Congo Feintrenie et al. found available land of 1.28 Mha as opposed to 6.3 in our study.This can partly be explained by the explicit focus of Ramalho Filho et al. 
on previously deforested sites and the fact that buffer zones around villages, rivers, and protected areas are also excluded in Feintrenie’s assessment.It should also be noted that the biodiversity sustainability criteria used in this study are likely less rigorous than a detailed HCV assessment.In our assessment we explicitly cover HCV 1-3 by considering global terrestrial biodiversity priority areas and intact forest landscapes.But we lack criteria on ecosystem services, social and cultural well-being of local communities or indigenous peoples which can only be identified through engagement at the local level.However, our results also show that the potentially available land for oil palm expansion using a limited number of environmental sustainability criteria becomes quite scarce in some countries, especially for highly suitable area.There is almost no area left for the development of oil palm plantations in Liberia and Madagascar.Moreover, diverting oil palm production to lower suitable areas will also lead to lower economic profitability which could be partly offset by higher plantations area.A careful cost-benefit analysis must be done to ensure that new oil palm plantations meet the three dimensions of sustainable development.Finally, the oil palm business model has very much focused on expansion as the means to satisfy increasing demand and yields have stagnated over the last decade in Malaysia and Indonesia.However, novel breeding technologies are expected to allow for attaining higher yields and longer productive rotation periods, which might contribute to a reduction in future expansion of plantations.Further research should therefore also address the possible role of agronomic intensification and yield increases.Focusing on the current centers of oil palm production, Indonesia with available land in the order of 18.2 Mha and currently planted area of 10 Mha might face looming land scarcity for sustainable oil palm production and Malaysia with 2.1 Mha of available land and 4.6 Mha of currently planted area has already exceeded its sustainable area.On the other hand, our findings also support the feasibility of a number of countries’ future oil palm expansion plans, although they should be considered as an upper boundary to sustainable oil palm expansion as fine-scale economic and social criteria must also be taken into account.
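As a small illustration of the headroom comparison made above, the quoted country figures can be put side by side. The numbers below are simply the values cited in the text (in Mha) and the comparison is indicative only.

# Indicative comparison of available land for sustainable expansion with the
# currently planted area, using the figures quoted in the text (Mha).
countries = {
    "Indonesia": {"available": 18.2, "planted": 10.0},
    "Malaysia": {"available": 2.1, "planted": 4.6},
}
for name, v in countries.items():
    note = ("sustainable area already exceeded" if v["planted"] > v["available"]
            else "room for further sustainable expansion")
    print(f'{name}: available {v["available"]} Mha vs planted {v["planted"]} Mha ({note})')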
Palm oil production has boomed over the last decade, resulting in an expansion of the global oil palm planting area from 10 to 17 million hectares between 2000 and 2012. Previous studies showed that a significant share of this expansion has come at the expense of tropical forests, notably in Indonesia and Malaysia, the current production centers. Governments of developing and emerging countries in all tropical regions increasingly promote oil palm cultivation as a major contributor to poverty alleviation, as well as to food and energy independence. However, under pressure from several non-governmental environmental organizations and from consumers, the main palm oil traders have committed to sourcing sustainable palm oil. Against this backdrop, we assess the area of suitable land and the limits to future oil palm expansion when several constraints are considered. We find that suitability is mainly determined by climatic conditions, resulting in 1.37 billion hectares of suitable land for oil palm cultivation, concentrated in twelve tropical countries. However, we estimate that half of the biophysically suitable area is already allocated to other uses, including protected areas, which cover 30% of the oil palm suitable area. Our results also highlight that the non-conversion of high carbon stock forest (>100 t AGB/ha) would be the most constraining factor for future oil palm expansion, as it would exclude two-thirds of the global oil palm suitable area. Combining eight criteria which might restrict future land availability for oil palm expansion, we find that 234 million hectares, or 17% of the worldwide suitable area, are left. This might suggest that the limits to oil palm expansion are far from being reached, but one needs to take into account that some of this area is currently hard to access (only 18% of the remaining area lies within 2 h of transport to the closest city) and that growing demand for other agricultural commodities, which might also compete for this land, has not yet been taken into account.
254
Prion-like seeding and nucleation of intracellular amyloid-β
Protein aggregation is a pathological hallmark of many neurodegenerative disorders such as amyloid-β plaques and tau tangles in Alzheimer's disease and α-synuclein containing Lewy bodies in Parkinson's disease.However, how these proteins aggregate and spread throughout the brain remain poorly understood.A hypothesis that has been gaining traction the last decade is that these disease-linked proteins have prion-like properties.Prions are potentially infectious proteins that are capable of misfolding and aggregating, inducing homologous proteins to misfold and, crucially, can spread and induce misfolding throughout the brain and even between organisms.There is evidence that α-synuclein pathology might spread from host to graft in PD patients who received embryonic stem cell grafts and between cells in culture.Moreover, treatment with fibrillar α-synuclein can seed intracellular inclusions in α-synuclein expressing cells and intracerebral injection of pathological α-synuclein into α-synuclein expressing mice accelerated formation of Lewy bodies and neurites.Tau has also been shown to spread between cells in culture.Intracellular tau inclusions can be formed after addition of fibrillar tau to tau-fragment expressing HEK-293 cells, and injecting these cells into the brains of transgenic tau mice induced tau pathology.For Aβ, studies have shown that intracerebral injections of AD brain material in familial AD transgenic mice accelerate amyloid pathology and that it is specifically Aβ that causes this as immuno-depletion of Aβ abolishes seeding activity.Remarkably, as little as a femtogram of PBS soluble AD brain derived Aβ can seed pathology in a FAD mouse.In contrast, much larger amounts of synthetic Aβ either fail to seed plaques or require nitration or 72 h of agitation of the synthetic Aβ to augment pathology.It appears that only particular form of Aβ that is present in AD but not normal brains are capable of seeding pathology, albeit in remarkably low quantities.On the basis of these findings it has been argued that Aβ is a prion-like protein.Understanding where “prion-like” Aβ can form and its structure would then be important for understanding the pathogenesis of AD.While there is much in vivo work on prion-like Aβ, it has not been shown that one can induce inclusions of Aβ in cultured cells as has been shown for tau and α-synuclein.One reason is practical; Aβ is a low molecular weight metabolite cleaved from within the larger amyloid precursor protein and it is therefore less feasible to construct a cell line expressing physiologically generated, fluorescently labeled Aβ.This makes it difficult to study Aβ inclusions and transfer in cells.There is also a theoretical reason; as Aβ plaques are extracellular, intracellular Aβ has been viewed as less relevant.However, intraneuronal Aβ42 accumulation is seen before plaques in the brain areas first affected by AD and appears to be among the earliest changes in AD and is associated with synaptic pathology.Accumulation of intraneuronal Aβ coincides with cognitive symptoms and occurs before plaques in the 3xTg AD mouse.Recently it was also reported that FAD mutations in presenilin 2 specifically increase the intracellular pool of Aβ42.Furthermore, if conversion of monomeric to aggregated/prion-like Aβ is a stochastic process, one would expect the biological conversion to happen where concentrations are high as they are in subcellular compartments; the acidic environment of the late endosome/lysosome also favors Aβ aggregation.Thus, if Aβ has 
prion-like properties, one might expect intracellular prion-like conversion as is the case for prion protein and other prion-like proteins.In this study we induce human APP expressing N2a neuroblastoma cell lines to stably form intracellular inclusions of Aβ by treating them with AD transgenic mouse brain extracts.We characterize these cell lines biochemically and with infrared absorbance spectroscopy and conclude that the inclusions are oligomers.We show that lysates of the inclusion bearing cells, a purely cellular source of Aβ, can be used to induce naive APP-expressing cell lines to also form inclusions.In summary, we provide a cellular model of seeded nucleation of Aβ, where the inclusion forming Aβ can be propagated both vertically and horizontally.These data are consistent with prion-like conversion of intracellular Aβ.The forebrains of 21-month-old APP/PS185Dbo/Mmjax), Tg19959 harboring the Swedish and London mutations in APP or wild type mice were collected and immediately frozen on dry ice and stored at −80 °C.The forebrains were homogenized in 10% weight/volume sterile PBS, sonicated 3 times for 5 s at 80% amplitude with a Branson SLPe model 4C15 sonifier and centrifuged at 3000g for 5 min.The resulting supernatant was sonicated 3 times for 20 s each at 80% amplitude as described in Langer et al., 2011.The supernatant was then aliquoted and kept at −80 °C until use.N2a cells were grown in media containing 47% high glucose DMEM, 47% Optimem, 5% FBS and 1% penicillin/streptomycin at 37 °C in a humid 5% CO2 incubator.Single cell cloning was done via serial dilution in 96 well plates.Cells were washed twice on ice with ice-cold PBS and collected with a cell scraper.Cells were then pelleted at 10,600g for 2 min at 4 °C, snap-frozen in liquid nitrogen and stored at −80 °C.N2a cells expressing human APP with the Swedish mutation were passaged and the following day treated with brain supernatant at a concentration of 0.5% from either aged APP/PS1 or WT mice in Optimem with 1% penicillin/streptomycin and 0.5% FBS; low serum media was used to inhibit cell growth.The supernatant was kept on the cells for 4 days at a time and then passaged and treated again or single cell cloned to isolate cells with aggregates.Cells were grown on glass coverslips.Cells were washed 3 times on ice with ice cold PBS, fixed with cold 4% PFA in PBS for 15 min at room temperature and then washed 3 times with PBS at RT.Blocking, to reduce unspecific antibody binding, was done with 2% NGS, 1% BSA and 0.1% saponin in PBS for 1 h. 
Cells were stained with primary Aβ/C99 N-terminus-specific antibody 82E1 at 1:200 and fibrillar oligomer-specific Aβ antibody OC 1:1000 in PBS 2% NGS overnight at 4 °C.Cells were then washed with PBS-T for 5 min, 15 min and 15 min.Then incubated with secondary antibodies at 1:500; AF488 anti-mouse and Cy3 anti-rabbit for 1 h at RT in the dark.Cells were again washed with PBS-T 3 times for 5 min each and during the second wash 0.1% DAPI was added.Coverslips were then mounted on glass slides with slow fade gold anti-fade reagent dried in the dark overnight and then sealed with Covergrip Coverslips Sealant.Images were obtained using a Leica TCS SP8 confocal microscope equipped with Diode 405/405 nm and Argon lasers with an HP PL APO 63x/NA1.2 water immersion objective.Autoquant was used for image deconvolution.Two-dimensional images obtained by confocal microscopy were reconstructed using Imaris.N2a cell pellets were triturated and incubated on ice for 30 min in a buffer solution of 20 mM Tris, 50 mM NaCl, 1% triton X pH adjusted to 7.4 with HCl and with 1:100 protease inhibitor cocktail.The cells were then centrifuged at 10,600g at 4 °C for 20 min, mixed with NativePAGE sample buffer 4× and loaded onto 3–12 or 4–16% Bis Tris protein gels.Native-Mark unstained protein standards were used as molecular weight markers with the addition of a 14 kDa marker.Before protein transfer, the gels were washed with running buffer with 0.1% added SDS for 10 min.Transfers for all protein electrophoresis gels were done onto polyvinylidene difluoride membranes.Membranes were then boiled in PBS for 5 min and blocked in 5% skim milk in PBS-T for 30 min at RT.Primary antibody incubation was done at 4 °C overnight in 5% skim milk PBS-T; for BN-PAGE, antibody 82E1 was used at 1:700.The membranes were then washed 4 times for 20 min each in PBS-T and incubated with secondary HRP antibody 1:2000 for 1 h in 5% skim milk in PBS-T at RT.The membranes were then washed 3 times for 15 min each in PBS-T and developed with Clarity western ECL blotting substrate.Cell Samples were prepared as for BN-PAGE.1 μl of sample was put on a nitrocellulose membrane and allowed to dry and then washed with PBS-T for 15 min 3 times and blocked for 30 min in 5% skim milk PBS-T.Primary antibody incubation was done at 4 °C overnight in 5% skim milk PBS-T and A11 or OC was used at a concentration of 1:5000.After primary incubation the procedure is identical to that of BN-PAGE.The cells were either lysed as for BN PAGE or as 20% wt/vol sonicated PBS extracts.Cell samples were mixed with Novex Tricine SDS Sample Buffer to a final SDS concentration of 4 or 2%.Cell samples were incubated with the sample buffer for 10 min at RT and then loaded onto 10–20% Tricine Protein Gels.SeeBlue Plus2 Pre-stained Protein Standard was used as a protein standard.Samples were incubated for 1 h at 37 °C with 50 μg/ml proteinase K solution after which protease inhibitor cocktail 1:100 was added and the samples were then immediately analyzed.Sample cells were lysed in 6% SDS and 1% β-mercaptoethanol in PBS, sonicated twice for 20 s at 20% amplitude, heated to 95 °C for 5 min and centrifuged at 10,600g for 10 min.Supernatants were mixed with SDS Sample Buffer with 0.8% β-mercaptoethanol, heated to 95 °C for 5 min, briefly centrifuged and then loaded onto 10–20% Tricine Protein Gels with SeeBlue Plus2 Pre-stained Protein Standard used as a protein standard ladder.Densitometric quantifications were done with imageJ.Spectra were recorded on a Hyperion 3000 IR microscope 
coupled to a Tensor 27, which was used as the IR light source with 15× IR objective and MCT detector.The measuring range was 900–4000 cm−1, the spectra collection was done in transmission mode at 4 cm−1 resolution from 250 co-added scans.Background spectra were collected from a clean area of the same CaF2 window.All measurements were made at RT.Samples were prepared by spreading 1 μl of cell pellet on the CaF2 1 mm window surface and rapidly dried under nitrogen flow.For reproducibility, FTIR spectra were taken from different areas of the cell pellet deposited on CaF2.Analysis of FTIR spectra was performed using OPUS software.After atmospheric compensation, spectra exhibiting strong Mie scattering were eliminated.For all spectra a linear baseline correction was applied from 1400 cm−1 to 2000 cm−1.Derivation of the spectra to the second order was used to increase the number of discriminative features to eliminate the baseline contribution.Derivation of the spectra was achieved using a Savitsky−Golay algorithm with a six-point filter and a polynomial order of three.In FTIR spectroscopy, β-sheet structures can be distinguished based on the analysis of the amide I α-helix and unordered structures can each be assigned to bands at 1656 cm−1 and to 1640 cm−1, respectively.The level of β-aggregation of proteins in the cells was studied by calculating the peak intensity ratio between 1620 and 1640 cm−1 corresponding to β-sheet structures and the maximum corresponding mainly to α-helical content at 1656 cm−1.An increase in the 1620–1640 cm−1 component was considered a signature of β-sheet structures.Since β-sheet structures have two typical components in the amide I region: the major component has an average wavenumber located at about 1630 cm−1, and a minor component, at 1695 cm−1; the 1695/1630 intensity ratio was used to study the percentage of antiparallel arrangements of the β-strands in the β-sheets.Anti-β-Actin, Clone AC-74, 82E1, 6E10, A11 and OC.One-way analysis of variance was used for all p-value calculations followed by Bonferroni correction for multiple comparisons; *, **, *** and **** are p ≤ 0.05, 0.01, 0.001 and 0.0001 respectively.Shapiro-Wilk normality test was used to assess distribution and no significant departure from normality was found for any data sets.The individual data points are shown in the bar-plots.All statistical calculations were done with Graphpad Prism 7.To test whether intracellular seeded nucleation and prion-like conversion of Aβ can occur in cells, we aimed to seed stable Aβ inclusions in an APP expressing cell line.To this end, we treated N2a cells expressing human APP carrying the Swedish mutation with the PBS dispersible fraction of homogenized brains from 21-month old APP/PS1 mutant transgenic or wild type mice.APPswe cells were incubated with brain extract for 4 days and then passaged and treated with brain extract two more times for a total of 12 days of treatment.Analogously to seeded nucleation of misfolded tau in cells, we expected only a minority of cells to undergo prion-like conversion of Aβ; to isolate these we performed single cell cloning.Single cell clones were then labeled with antibodies 82E1 against the N-terminus of Aβ/C99 and OC against oligomers/fibrils.7 out of 22 single cell clones that had earlier been treated with APP/PS1 transgenic mouse brain extract showed puncta of antibody 82E1 co-localizing with antibody OC, while none of the 18 WT brain treated clones had such labeling.Western blot indicates that the levels of cellular C99 are 
very low compared to Aβ in the treated cells supporting that antibody 82E1 is mainly detecting Aβ.Thus, the co-localization of antibodies 82E1 and OC indicate the presence of oligomeric and/or fibrillar Aβ in the AD transgenic brain extract treated clones.Next, we repeated the treatment with APP/PS1 brain extract on naive APPswe cells but now treated the cells only 2 times, for a total of 8 days of treatment.This only yielded 1 out of 20 single cell clones with 82E1 and OC positive inclusions, indicating that fewer cells develop inclusions with shorter duration of treatment with APP/PS1 brain extract.We additionally tested the seeding capacity of aged brain extract from a different transgenic mouse, 31-month old Tg19959, and obtained 3 of 19 clones with inclusions.To show vertical stability of the inclusions, we labeled them again after >10 passages and 6 months of storage at −80 °C.Remarkably, the FAD brain treated cells retained their inclusions.We therefore conclude that we seeded stable oligomeric and/or fibrillar inclusions of Aβ in an APP-expressing cell line by treatment with AD transgenic mouse brain, which is consistent with prion-like conversion of intracellular Aβ.These inclusion-bearing cells that have been treated with transgenic brain lysate will be referred to as “prion-like clones” and those treated with WT brain lysate as “WT-clones”.To determine whether the intracellular Aβ of the prion-like cells is capable of seeded nucleation, we treated naive APPswe cells with prion-like cells or WT-clone cells.We used the same protocol as with homogenized brain tissue but replaced brain extracts with the PBS dispersible fraction of cells and treated the naive cells for a longer time period, 7 treatments, for a total of 28 days.We hypothesized that the seeding activity of the prion-like cells would be lower than that of brain, hence the longer treatment time.This protocol yielded 8 out of 31 single cell clones with inclusions from cells treated with prion-like clone 1, 6 of 18 from treatment with prion-like clone 2 and 8 of 19 for prion-like clone 3.In contrast, none of 22 single cell clones treated with WT-clone cells had significant inclusion formation.Further analysis of the inclusion-bearing single cell clones by confocal microscopy showed antibody OC and 82E1 positive puncta that were similar to those observed in the first prion-like clones obtained after treatment with brain extracts.Since the daughter clones of prion-like clone 3 had the strongest labeling, we used this cell clone for further analysis.Thus, prion-like cell extracts, analogous to the APP/PS1 brain extracts, can seed Aβ aggregates in an APP expressing cell line and appear to do so with comparable efficiency, consistent with horizontal stability of the Aβ inclusions.Antibody OC and 82E1 positive inclusions indicated the presence of oligomeric and/or fibrillar Aβ in the prion-like clones.To show this with another method and to estimate their size and thereby discriminate between oligomers and fibrils, we performed Blue Native Polyacrylamide Gel Electrophoresis.BN-PAGE allows analysis of protein complexes in their folded and potentially aggregated state, where mobility in the electrophoresis gel depends both on the size and charge of protein complexes.Interestingly, BN-PAGE revealed aggregates in the 250–670 kDa range in all prion-like clones studied, particularly in prion-like clone 3; prion-like clone 2 also had robust labeling while it was a bit weaker in clones 1 and 4.In contrast, WT-clone cells barely showed 
any labeling on BN-PAGE.BN-PAGE provided further evidence for the vertical stability of oligomer formation, as it appeared similar after numerous passages and storage at −80 °C.The horizontal stability of oligomer formation was also evident as both prion-like clone 3 and its daughter clone prion-like 3a harbor oligomers of similar size.To further study the Aβ oligomers we performed dot blots with the conformation dependent antibodies A11 and OC.This revealed increased levels of OC in the prion-like clones and unchanged levels of A11.The size of the Aβ complexes on BN-PAGE, together with the OC labeling both on dot blot and immunofluorescence, suggests that the intracellular Aβ-inclusions are fibrillar oligomers.Amyloid and prions are more stable than other protein complexes and therefore only partially broken up with SDS treatment.It has also been suggested that SDS-stable low-n oligomers of Aβ are the building blocks of amyloid deposits, as well as early mediators of neuronal dysfunction.We therefore hypothesized that the Aβ of prion-like clones would have greater SDS resistance and more low n-oligomers compared to that of WT-clones.Semi-denaturing detergent agarose gel electrophoresis is used to break up aggregates of amyloid and prion protein into SDS stable components.However, since the pores in agarose gels are too large for low-n oligomers of Aβ, we modified the SDD-AGE protocol to use polyacrylamide rather than agarose and refer to this as SDD-PAGE.In contrast to SDS-PAGE, in SDD-PAGE cells are lysed without SDS or β-mercaptoethanol, boiling is avoided and sample buffer with lower SDS concentration is used.SDD-PAGE on WT-clone cells lysed with sonication in PBS yielded only one non-monomeric band at around 50 kDa, while in the prion-like clones several bands, as well as a high molecular weight smear, were evident, indicating greater stability of their Aβ aggregates.Moreover, SDD-PAGE of the prion-like clones lysed with Triton-X showed higher levels of Aβ, including the presence of oligomers of Aβ up to 50 kDa, as well as a HMW smear, that were not present in WT-clone cells; as with BN-PAGE, this was particularly apparent for prion-like clone 3.A cardinal feature of prion disease is resistance to proteinase K.Pertaining to AD, seeding Aβ from mouse brain extracts was reported to be partly PK-resistant, however, ultracentrifuged brain extracts, containing only soluble Aβ, was PK-sensitive.We investigated the PK-sensitivity of the prion-like clones with native PAGE and dot blot and found the Aβ to be PK-sensitive.The Aβ of the prion-like clones is soluble and the PK-sensitivity is thus consistent with prior results reported in brain tissue.Studies have shown that the intracellular accumulation of Aβ affects APP processing and degradation.Thus, we wanted to study the levels of APP and its metabolites in our clones.Since the Aβ in the prion-like clones had greater resistance to SDS degradation, we expected the levels of Aβ to be higher in them due to potentially decreased degradation.Indeed, SDS-PAGE showed higher levels of both intra- and extra-cellular Aβ in the prion-like clones.The Aβ levels were highest in the prion-like clone 3, as would be expected from the BN-PAGE and SDD-PAGE data.Increased Aβ could be caused by more APP, increased amyloidogenic β-cleavage of APP or decreased degradation.C99 was also significantly increased in the prion-like clones.It has been reported that addition, and subsequent intracellular accumulation and aggregation of Aβ42, led to increased β-cleavage 
of APP into Aβ, supporting that increased β-cleavage is causing the increase of C99 and Aβ in the prion-like clones; though, increased β-cleavage and decreased degradation are not mutually exclusive.Full-length APP was also significantly increased in the prion-like clones, and again, increased APP due to accumulation of Aβ has been reported.More APP should also raise the levels of Aβ.Levels of secreted soluble APPα in media were however unchanged, which is also consistent with previous reports.The ratio of sAPPα:APP is therefore decreased in the prion-like clones compared to WT-clone cells, suggesting lower non-amyloidogenic α-cleavage of APP in prion-like clones.To further study the structures of the induced Aβ oligomers detected by BN-PAGE, dot blot and immunofluorescence, we used non-destructive Fourier transform infrared micro-spectroscopy imaging.FTIR has mostly been used on pure protein preparations but can also be used for cells and tissue, though some caution is warranted when comparing cell data with those from pure protein preparations.1 μl of cell pellet was spread onto CaF2 sample support surfaces and dried under nitrogen flow.FTIR measurements were done on several different locations of the dried cell pellet.We detected a significant increase in unordered structures in all prion-like clones.Importantly, the amount of unordered structures remains elevated after several passages.Oligomers of Aβ have been suggested to have more unordered structures than fibrillar Aβ.Surprisingly, the level of total β-sheet content remained at a similar level between WT-clones and prion-like clones, while total β-sheet content in WT-clone cells was significantly increased compared to N2a cells expressing human non-mutated APP.This suggests that the total content of fibrillar structures is not higher in prion-like clones than in WT-clones.Interestingly, antiparallel β-sheet content was significantly increased in all prion-like clones.Antiparallel structures have been linked to A11 positive oligomers of Aβ, though OC-reactivity was not investigated in that study; it has also been shown that both A11 and OC-positive oligomers exhibit more antiparallel structures than fibrils.In summary our FTIR results show a clear difference in the secondary protein structure of the prion-like clones, which is consistent with an increase of oligomeric but not fibrillar Aβ.A major question in the field is which forms of Aβ have seeding capacity.Here we provide data suggesting that intracellular fibrillar oligomers of Aβ can be a seeding unit.We demonstrate that seeded nucleation of Aβ can be induced intracellularly in an APP producing cell line by treatment with mouse FAD brain extract.Native PAGE, labeling with conformation dependent antibodies and infrared spectroscopy support that the inclusions were oligomeric, and in the terminology introduced by Kayed et al., “fibrillar oligomers”.Furthermore, SDS treatment revealed SDS-resistant low n-oligomers, indicating aggregates reminiscent of amyloid and prion aggregates.Moreover, cells with induced Aβ aggregates could seed Aβ aggregation in naive APP-expressing cells, thus demonstrating seeded nucleation with a purely intracellular source of Aβ.Remarkably, native PAGE and infrared spectroscopy indicated that the Aβ oligomers from the parent clone were similar to those from the daughter clone.These data are consistent with formation of fibrillar oligomeric strains of intracellular prion-like Aβ that can be transmitted both vertically and horizontally.Numerous studies 
underscore an early role of intraneuronal Aβ in AD pathogenesis.According to the prion-like hypothesis this stochastic formation of prion-like Aβ would be one of the earliest steps in AD.Though most Aβ is normally secreted, the probability of stochastic conversion of physiological Aβ to prion-like/seeding Aβ should depend on concentration and the highest concentrations of soluble Aβ are found in intracellular vesicles.Furthermore, it has been reported, and we also observe, that intracellular accumulation and aggregation of added synthetic Aβ42 increased cleavage of APP into Aβ and increases levels of APP, potentially creating a positive feedback loop of increased intracellular Aβ aggregation and production.Yang et al. did not investigate whether this intracellular Aβ aggregation persisted over replicative generations.It is possible that they achieved intracellular prion-like conversion with synthetic Aβ, albeit with the extremely high concentration of 25 μM Aβ42, whereas our treatment of cells with brain extract contains approximately 20 nM Aβ and with prion-like cell extract about 2 nM."Considering the results of Yang et al., the fact that in vivo seeding of Aβ can be achieved with synthetic Aβ and that α-synuclein and tau seeding can be done with synthetic fibrils, it's likely that in vitro prion-like intracellular seeding can be accomplished with some specific aggregation states and concentrations of synthetic Aβ.Ultracentrifugation of AD brain was reported to remove >99.95% of Aβ, while only reducing seeding capacity by 70%.This suggests that the most potent seeding Aβ species are relatively small soluble oligomers rather than larger insoluble fibrils.It was also noted that this potently seeding soluble Aβ is destroyed by PK-digestion; the Aβ of our prion-like clones is also PK-sensitive.However, seeding Aβ appears not to be present in CSF, as intracerebral injection of CSF, a cell-free physiological source of soluble Aβ, from aged AD patients failed to seed aggregation.A 2016 study showed that some of the seeding Aβ from a homogenized mouse brain with plaques was associated with mitochondrial membranes and was non-fibrillar; this is consistent with intracellular seeding Aβ in the brain, though after homogenization of plaque-ridden brain an extra or intra-cellular origin of a given pool of Aβ is difficult to ascertain.Theoretically it also seems plausible that oligomers can be more biologically active as seeds due to their higher surface area and higher molarity at equivalent weights compared to fibrils.Indeed, it has been observed that the prions in Creutzfeldt-Jakob disease with the greatest prion-like conversion potency are smaller protease sensitive oligomers.In humans it has been noted that the soluble pool of Aβ in AD brain correlates with cognitive decline.It has furthermore been observed that it is specifically the fibrillar oligomer load that correlates with cognitive decline and that they are absent in CSF.Thus, it is noteworthy that the intracellular prion-like Aβ we report also seems to consist of fibrillar oligomers.However, while immunofluorescence and dot blot show more OC labeling in prion-like cells, the FTIR data is more ambiguous.The increased anti-parallel structures we observe are associated with Aβ oligomers, but more so with A11 than OC; though those studies were based on pure protein preparations, not cells.We establish, for the first time, that intracellular inclusions of Aβ can be seeded and stably maintained in an APP expressing cell line.These inclusions can 
propagate both vertically and horizontally. The seeding inclusions consist of Aβ oligomers, most likely fibrillar oligomers, that contain SDS-resistant low-n oligomers of Aβ. There is an extensive literature indicating an early and important role of intraneuronal Aβ and of oligomeric Aβ in the pathogenesis of AD, and our results support a novel connection between intracellular Aβ, soluble fibrillar oligomers of Aβ and prion-like Aβ.
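The band-ratio read-outs described in the Methods (the 1620–1640 cm−1 β-sheet component relative to the band at about 1656 cm−1, and the 1695/1630 cm−1 ratio as an antiparallel index) lend themselves to a short numerical sketch. The code below is an illustrative outline only, not the OPUS workflow used in the study: the spectra are assumed to be plain numpy arrays covering the amide I region, and a seven-point Savitzky-Golay window is used because scipy requires an odd window length, whereas the study reports a six-point filter of polynomial order three.

import numpy as np
from scipy.signal import savgol_filter

def amide_i_ratios(wavenumbers, absorbance):
    # Second-derivative spectrum; absorbance bands appear as minima here.
    d2 = savgol_filter(absorbance, window_length=7, polyorder=3, deriv=2)

    def band(lo, hi):
        sel = (wavenumbers >= lo) & (wavenumbers <= hi)
        return np.abs(d2[sel].min())   # magnitude of the strongest minimum

    beta = band(1620, 1640)       # beta-sheet component
    alpha = band(1650, 1660)      # alpha-helix/unordered region around 1656 cm-1
    beta_hi = band(1690, 1700)    # minor antiparallel beta component (~1695 cm-1)
    beta_lo = band(1625, 1635)    # major beta component (~1630 cm-1)
    return {"beta_to_alpha": beta / alpha,
            "antiparallel_1695_to_1630": beta_hi / beta_lo}

# Example call on a hypothetical spectrum restricted to the amide I region:
# ratios = amide_i_ratios(wavenumber_array, absorbance_array)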
Alzheimer's disease (AD) brain tissue can act as a seed to accelerate aggregation of amyloid-β (Aβ) into plaques in AD transgenic mice. Aβ seeds have been hypothesized to accelerate plaque formation in a prion-like manner of templated seeding and intercellular propagation. However, the structure(s) and location(s) of the Aβ seeds remain unknown. Moreover, in contrast to tau and α-synuclein, an in vitro system with prion-like Aβ has not been reported. Here we treat human APP expressing N2a cells with AD transgenic mouse brain extracts to induce inclusions of Aβ in a subset of cells. We isolate cells with induced Aβ inclusions and using immunocytochemistry, western blot and infrared spectroscopy show that these cells produce oligomeric Aβ over multiple replicative generations. Further, we demonstrate that cell lysates of clones with induced oligomeric Aβ can induce aggregation in previously untreated N2a APP cells. These data strengthen the case that Aβ acts as a prion-like protein, demonstrate that Aβ seeds can be intracellular oligomers and for the first time provide a cellular model of nucleated seeding of Aβ.
255
Post‐treatment of yeast processing effluent from a bioreactor using aluminium chlorohydrate polydadmac as a coagulant
The wastewater from food industry is treated by physical, chemical and biological methods .Industrial effluents are a major source of environmental toxicity.The polluted effluents have a negative impact on aquatic ecosystem and soil microflora.Increasing water demands for both industrial and public uses make the industrial wastewater recovery a necessary alternative ."Baker's yeast industry uses large amounts of water and produces high strength wastewater .The effluent discharged from this industry has high biological oxygen demand, high COD concentrations and a characteristic dark brown colour .The wastewater is extremely difficult to treat using biological methods."Conventional biological wastewater treatment processes can remove only 6–7% of melanoidins present in Baker's yeast effluent .Therefore, the dark brown colour of such wastewater remains an issue for its disposal.Various technologies have been used to treat industrial effluents.These include biological process, physicochemical treatment and oxidation processes .The UASB reactor is a proven technology for the biological treatment of wastewater .The UASB is a high rate reactor capable of removal of organic matter in wastewater .The reactor is capable of treating high strength industrial wastewater, in addition to treating municipal wastewater .However, to meet the recommended discharge limits, the anaerobic treatment process alone cannot produce high quality effluent.A combination of anaerobic treatment and post‐treatment processes can provide quality discharge effluents.Coagulation and flocculation neutralises the electrical charges of particles in the wastewater .The gelatinous mass agglomerate into sludge particles that are large enough to settle out .In various studies the coagulation/flocculation process has been demonstrated to be cost effective, easy to operate, efficient and serves as an energy-saving treatment alternative .Coagulation/flocculation methods have been successfully applied in the reduction of COD, colour, total suspended solids and turbidity .Researchers have shown that physicochemical processes like coagulation/flocculation can reduce pollution potential and generate clean water for reuse .Studies show extensive use of polymers in water and wastewater treatment .There are two types of organic polymers applied in water and wastewater treatment; natural and synthetic polymers .Molecular mass and charge density are the most important characteristics for synthetic polymers .Polydiallyldimethylammonium chloride is a homopolymer of diallyldimethylammonium chloride .Polydadmac has strong cationic group radicals and activated-adsorbent group radicals that can destabilise and flocculate the suspended solids in wastewater.Presently, no study has reported on the combination of semi‐continuous UASB treatment and post-treatment using aluminium chlorohydrate polydadmac applied to wastewater from bakers’ yeast processing.Coagulation/flocculation process was used in this study for reduction of COD, colour and turbidity of yeast processing effluent treated using an UASB at bench-scale."Yeast processing wastewater samples were collected from a baker's yeast manufacturing plant in Zimbabwe.A laboratory scale UASB reactor was used in the anaerobic treatment of wastewater at 40 °C.The effluent from the lab-scale bioreactor was then used for coagulation/flocculation tests.A laboratory-scale UASB reactor designed to have an operational capacity of 1 l was used in this study.The reactor was jacketed to allow adjustment of 
temperature during anaerobic digestion. Anoxic conditions were created by sealing all openings, including the lid of the reactor. A Masterflex L/S peristaltic pump was used for feeding the reactor. The wastewater was applied intermittently. The supernatant was collected from the top of the reactor and analysed to determine the treatment efficiency. The experimental regime was completed at 40 °C using the jacketed reactor. Temperature was maintained by recirculating water from a water bath through the jacket of the bioreactor. All coagulation/flocculation experiments were conducted as outlined in Fig. 1, in 6 × 500 ml jars filled with 300 ml of anaerobically treated wastewater samples. The pH of the wastewater was adjusted while mixing using 0.1 M HCl or 0.1 M NaOH. The initial pH of the anaerobically treated effluent was adjusted to pH 2, 4, 6, 8 and 10. No pH adjustment was made to the control. An optimised aliquot of 1 ml of aluminium chlorohydrate polydadmac coagulant was added to each jar. This was followed by rapid mixing in each jar at 120 rpm for 1 min and 10 min of slow mixing for flocculation. The rapid mix aids coagulation, whilst slower mixing promotes floc formation by enhancing the particle collisions that lead to larger flocs. The treated wastewater was allowed to settle for 24 h. Samples were drawn for analysis of final pH, COD, residual turbidity and colour in each jar. The thermogravimetric behaviour of the resultant sludge was monitored on a thermogravimetric analyser. The TGA traces of the sludge samples were obtained under a nitrogen purge at a flow rate of 60 ml min−1. The sludge sample was heated in a platinum HT pan up to 900 °C at a heating rate of 20 °C min−1. During the study, the COD reduction achieved by the UASB reactor at 40 °C was 32.67%. This COD reduction was low compared to other studies on the treatment of food processing effluents, where treatment efficiencies above 90% were recorded. The results of this study, carried out with lab-scale equipment, demonstrated that the anaerobic sludge bed reactor is suitable for anaerobic treatment of wastewater from baker's yeast production, but that post-treatment is necessary. The highest COD reduction recorded in Fig. 2 is 63.63% at pH 6. However, it is also important to note that the control, with no pH adjustment, recorded a similar COD reduction of 59.52%. The lowest COD reduction, 38.71%, was recorded at an initial pH of 2. There was a significant difference between the pH treatments of the wastewater. When using inorganic coagulants, pH is an important factor in determining the treatment efficiency.
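The percentages reported in this section are standard removal efficiencies, 100 × (C_in − C_out)/C_in. A minimal sketch of that calculation follows; the influent and effluent concentrations are placeholders chosen only to illustrate the arithmetic, not measured values from the study.

def removal_efficiency(influent, effluent):
    # Percent reduction of a pollutant across the treatment step.
    return 100.0 * (influent - effluent) / influent

# Hypothetical COD values (mg/l) before and after coagulation/flocculation.
jars = {"pH 2": (2500, 1532), "pH 6": (2500, 909), "control": (2500, 1012)}
for label, (c_in, c_out) in jars.items():
    print(f"{label}: COD reduction = {removal_efficiency(c_in, c_out):.2f}%")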
Fig. 2 illustrates COD reduction, with a gradual increase in the removal of organic load as pH increases to 6. Beyond the optimum of 63.63% COD reduction, the treatment efficiency of the physicochemical process decreases. A related study showed that reduced doses of coagulants can achieve a maximum COD treatment efficiency of 65% using alum. A similar study recorded a COD reduction of 70.8% using polyaluminium chloride and polyferric sulphate as coagulants. In this study the main mechanism of coagulation was charge neutralisation. Under acidic conditions, the metal ions present in the form of Al3+ coordinate with the anionic organic molecules in the effluent, forming sludge mainly characterised by insoluble charge-neutral products that then settle out. At high pH, more sludge sediments, since the organics adsorb onto preformed flocs of metal hydroxides. Organics with different functional groups are removed from the effluent at different pH values. The highest treatment efficiencies occur where the combined effect of neutralisation and bridging adsorption is at its maximum. However, in this study the effluent could not be treated to the permissible COD limit of 60 mg/l prescribed by the local regulatory agency in a single post-treatment step. pH had an effect on colour reduction. The highest value attained for colour reduction was 68.25%, as shown in Fig. 3. This was attained after adjusting the initial pH to 6. The lowest reduction of colour was recorded at pH 10, where the coagulation–flocculation process only managed to remove 17.16% of the colour from the biologically treated effluent. ANOVA showed that there was a significant difference in decolourisation at the various pH values. pH is amongst the crucial parameters that influence the removal of particulate matter in wastewater. At pH values above 6, colour reduction falls sharply, to as low as 17.61%. The coagulation efficiency of the coagulant falls in the alkaline range. However, as pH increases, the colour of the wastewater intensifies; at very high pH, the effluent turns from reddish brown to black. Moreover, if the pH is high, the extent of depolymerisation of the coagulant declines and its coagulation function drops. Studies have shown that the brown colour remains, or even darkens, in biologically treated effluent due to repolymerisation of pigments. The initial pH adjustment had a significant effect on turbidity removal. High rates of reduction were recorded at pH 4, pH 6 and in the control, with reductions of 88.67%, 88.51% and 91.33%, respectively. As shown in Fig. 4, the lowest turbidity reduction was recorded when the wastewater had been adjusted to pH 10. Turbidity can be used to measure the performance of individual treatment processes as well as the performance of an overall water treatment system. The efficacy of the coagulation process is determined by the residual turbidity of the treated sample. The pH not only affected the surface charge of the coagulant, but also affected the stabilisation of the suspension. The high rates of reduction recorded at pH 4 and 6 in this study indicate efficient treatment. However, the control, without any initial pH adjustment, also recorded high treatment efficiencies in turbidity reduction. This may be attributed to the fact that the unadjusted effluent had a pH in a similar range, around 6. In a related study, alum and polyaluminium chloride removed more than 95% of the turbidity.
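The significance test referred to above is a one-way ANOVA across the pH treatments. A sketch of that comparison using scipy is given below; the per-jar replicate values are hypothetical, since only treatment-level percentages are reported here.

from scipy import stats

# Hypothetical replicate colour-reduction values (%) for each pH treatment.
colour_removal = {
    "pH 2": [40.1, 42.3, 41.0],
    "pH 4": [60.5, 62.0, 61.2],
    "pH 6": [67.9, 68.4, 68.5],
    "pH 8": [35.2, 33.8, 34.9],
    "pH 10": [17.0, 17.3, 17.2],
}
f_stat, p_value = stats.f_oneway(*colour_removal.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.3g}")
# A p-value below 0.05 would support the reported significant effect of pH.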
5, large amounts of settleable solids were obtained, with the highest amount of 112 cm3 l−1 recorded at an initial pH of 6.However, there was a notable difference in the amount of flocs for the other adjustments made in the study.There was a positive correlation between settleable solids and turbidity reduction.Correlation analysis between settleable solids and colour removal also showed a positive correlation.In general, the sludge characteristics and quantity vary depending on the operational conditions and the type of coagulant used.The largest quantity of sludge was observed when the initial pH of the effluent was 6.The same conditions also resulted in the highest reductions in colour and COD.The sludge produced reflects the organic load and suspended solids in the effluent.The handling, transport, and eventual disposal of the large volumes of sludge may pose logistical and financial challenges.There was a notable positive correlation between colour reduction and turbidity reduction.The relationship between turbidity reduction and COD reduction was also examined.There was a strong positive correlation between the two variables, with high levels of turbidity reduction associated with high COD treatment efficiencies.TGA analysis in Fig. 6 shows dehydration and volatilisation.The TGA trace shows a steady reduction in sample mass from ambient temperature to about 540 °C, with a loss of about 45% of the sample weight.This can be explained by the loss of moisture and light volatiles.The second stage shows decomposition with a weight loss in the range of 20%.Beyond 640 °C there is a gradual loss of weight, amounting to about a further 10% between 640 °C and 840 °C.The thermal process may result in residual aluminium hydroxide, a water-insoluble precipitate.An aluminium recovery process can be considered as an alternative option for managing aluminium in the sludge so that it can be safely utilised in agriculture.A number of chemical processes have been proposed to recover Al from sludge.The patented AquaCritox technology can be used to recover the pure aluminium, which can be reconstituted to generate new coagulant for reuse in the water treatment process.Anaerobically treated wastewater can be subjected to post-treatment using coagulation and flocculation to reduce COD, colour and turbidity.The post treatment of anaerobically treated yeast processing effluent resulted in reductions of COD, colour and turbidity.Therefore, polydamac is an effective coagulant in the post treatment of yeast processing effluent from a UASB reactor.
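The correlation analyses reported above (settleable solids versus turbidity reduction, colour versus turbidity, and turbidity versus COD) can be reproduced with a few lines of code. The per-jar values below are hypothetical placeholders, loosely consistent with the percentages quoted in the text, since the jar-by-jar data are not reported.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-jar results (control, pH 2, 4, 6, 8, 10) used only to
# illustrate the correlation analyses described in the text.
settleable_solids = np.array([95, 40, 80, 112, 70, 35], dtype=float)   # cm3 l-1
turbidity_removal = np.array([91.3, 55.0, 88.7, 88.5, 70.0, 30.0])     # %
colour_removal    = np.array([60.0, 25.0, 55.0, 68.3, 40.0, 17.2])     # %
cod_removal       = np.array([59.5, 38.7, 55.0, 63.6, 50.0, 42.0])     # %

pairs = [
    ("settleable solids vs turbidity", settleable_solids, turbidity_removal),
    ("colour vs turbidity",            colour_removal,    turbidity_removal),
    ("turbidity vs COD",               turbidity_removal, cod_removal),
]
for name, x, y in pairs:
    r, p = pearsonr(x, y)
    print(f"{name}: r = {r:.2f} (p = {p:.3f})")
```

Reporting the Pearson coefficient together with its p-value makes it possible to state both the strength and the significance of each relationship.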
A laboratory scale coagulation/flocculation process was used for the reduction of colour, turbidity and Chemical Oxygen Demand (COD) in biologically treated yeast processing effluent. The coagulation/flocculation was carried out to assess the efficacy of post-treatment of anaerobically treated effluent from an Upflow Anaerobic Sludge Blanket (UASB) reactor. The combination of semi-continuous UASB biological reactor treatment followed by a post-treatment process using aluminium chlorohydrate polyadamac as a coagulant was investigated. Jar tests were conducted in 6 × 500 ml jars filled with 300 ml of anaerobically treated wastewater. Initial pH of the anaerobically treated effluent was adjusted to pH 2; 4; 6; 8 and 10. No pH adjustment was made to the control. COD, turbidity, colour and settleable solids were recorded after coagulation/flocculation. The sludge was dewatered for further analysis using thermal treatment. Thermogravimetric analysis (TGA) of the sludge was also done to ascertain the characteristics of the flocs. The highest treatment efficiencies for COD reduction and colour removal were recorded at pH 6 with 63.63% and 68.25%, respectively. A 91.33% reduction in turbidity was observed in this study. The sludge loses moisture and other volatile organics in TGA analysis. Post treatment of anaerobically treated bakers’ yeast effluent reduces the pollution potential of the wastewater. However, the process of coagulation/flocculation generates a lot of sludge.
Reduction of global interference of scalp-hemodynamics in functional near-infrared spectroscopy using short distance probes
Functional near-infrared spectroscopy is a noninvasive functional neuroimaging technique that can measure concentration changes in oxygenated and deoxygenated hemoglobin in the cerebral cortex.It has advantages of portability, fewer physical constraints on the participant, and simplicity of use.Therefore, although measurements are limited to the cortical surface, it has been adopted widely in clinical practices and daily life situations.However, undesirable artifacts such as head motion and scalp-hemodynamics often contaminate fNIRS signals, and obscure task-related cerebral-hemodynamics.In particular, scalp-hemodynamics, which is systemic changes of blood flows in the scalp layer, cannot be prevented experimentally because they are affected by systemic physiological changes resulting from activation of the autonomic nervous system or by changes in blood pressure accompanied by actions.Indeed, both scalp- and cerebral-hemodynamics increase in a task-related manner.This is especially true for ∆ Oxy-Hb, which is more widely used as an indicator of cerebral activity than ∆ Deoxy-Hb because of its higher signal-to-noise ratio.For example, a majority of task-related changes in ∆ Oxy-Hb during a verbal fluency task was reported to originate from the scalp rather than the cortex.Furthermore, Minati et al. reported that rapid arm-raising movement during visual stimulus presentation generated transient increases in systemic blood pressure, and that ∆ Oxy-Hb in the visual cortex was coupled with this change in blood pressure, rather than with visual stimulation.Given that scalp- and cerebral-hemodynamics in ∆ Oxy-Hb have similar temporal profiles, removing the scalp-hemodynamic artifacts by conventional temporal filtering or block averaging is difficult.Assuming that changes in scalp-hemodynamics are more global than changes in cerebral-hemodynamics, several analytical techniques have been proposed that estimate scalp-hemodynamic artifacts from spatially uniform components of ∆ Oxy-Hb that are measured by a standard source–detector distance of 30 mm.Using principal component analysis, Zhang et al. proposed an eigenvector-based spatial filter from data obtained during rest periods, which assumes that the effects of systemic hemodynamics is dominant in baseline data.This method has been further extended by applying Gaussian spatial filtering.Furthermore, using independent component analysis, Kohno et al. 
extracted the most spatially uniform component of ∆ Oxy-Hb and showed that it was highly correlated with scalp blood flow that was simultaneously measured by laser-Doppler tissue blood-flow.By removing these spatially uniform ∆ Oxy-Hb components, both methods identified task-related cerebral-hemodynamics in more spatially localized regions, suggesting that global scalp-hemodynamics is a major source of artifacts that decrease the signal-to-noise ratio in fNIRS measurements.Because the ∆ Oxy-Hb recorded by Long-channels is a summation of scalp- and cerebral-hemodynamics, these techniques can lead to over-estimation of scalp-hemodynamic artifacts and underestimation of cerebral activity if the two spatially overlap or are highly correlated with each other.Therefore, independent measurement of scalp and cerebral hemodynamics-related ∆ Oxy-Hb is preferable.Moreover, few studies have experimentally supported the assumed homogeneity of scalp-hemodynamics.Recent studies have proposed removal of local scalp-hemodynamic artifacts using direct measurements from source–detector distances that are shorter than the standard Long-channels.For example, Yamada et al. added a Short-channel detector with a 20-mm distance to each of four Long-channel probe pairs during a finger-tapping task, and subtracted the Short-channel signal from the corresponding Long-channel signal.They confirmed that the activation area that remained after artifact subtraction was comparable with that measured by functional magnetic resonance imaging.Although this is a powerful and accurate technique, numerous probes are necessary to cover broad cortical areas because the same number of Long- and Short-channels is required.Dense and broad fNIRS probe arrangements are expensive, heavy, and time consuming, and therefore not practical or feasible, especially for clinical applications.To take advantage of its simplicity of use and the ability to measure activity from broad cortical regions, a simple method that can remove fNIRS artifacts from broad measurement areas is required.To meet this requirement, here we first tested whether it is possible to estimate scalp-hemodynamic artifacts with a reduced number of Short-channels.Above-mentioned previous methods using Short-channels used multiple Long- and Short-channel pairs because scalp-hemodynamics was considered to vary at different locations on the head.However, if scalp-hemodynamics is globally uniform as has been assumed in previous studies, we can reduce the number of Short-channels necessary for estimating global scalp artifacts.Because the distribution of scalp-hemodynamics over broad measurement areas remains unclear, we first measured scalp-hemodynamics during a finger-tapping task using 18 Short-channels placed on bilateral motor-related areas, including primary sensorimotor, premotor, and supplementary motor areas and during a verbal fluency task using eight Short-channels on the prefrontal cortex.We assessed scalp-hemodynamic homogeneity and determined the number of Short-channels necessary for estimating artifacts.Next, we tested a new method for estimating cerebral activity that combines minimal Short-channel measurements with a general linear model, a combination that has not yet been effectively employed toward broad fNIRS measurements.Results from Experiment 1A provided the minimal number of Short-channels needed to estimate global scalp-hemodynamic artifacts independently from cerebral-hemodynamics in motor-related areas.The estimated global scalp-hemodynamic model was 
then incorporated into the GLM design-matrix together with the functional cerebral-hemodynamic models.Because simultaneous estimation of both scalp- and cerebral-hemodynamics has been demonstrated to improve performance, our GLM with the scalp-hemodynamic model is expected to estimate cerebral activity more accurately by reducing the interference from scalp-hemodynamics.Although removal of scalp-hemodynamics is thought to be unnecessary or achievable through conventional means when scalp-hemodynamic artifacts and cerebral activity are independent, conventional methods cannot remove it when scalp-hemodynamics is highly correlated with cerebral-hemodynamics.Our study aimed to provide a practical approach that addresses scalp-hemodynamic artifacts by effectively combining already existing techniques, rather than developing a new and costly technique.We hypothesized that estimating global scalp artifacts using a few Short-channels would avoid the over-estimation that occurs when using only Long-channels, and would be effective even when the scalp-hemodynamic artifact is correlated with cerebral-hemodynamics.We tested this hypothesis using data from fNIRS and fMRI experiments.The proposed artifact reduction method consists of three steps: preprocessing, estimation of the global scalp-hemodynamic artifact, and removal of scalp-hemodynamics using GLM analysis.Specifically, in the second step, the common temporal pattern of scalp-hemodynamics was extracted from a small number of Short-channels using PCA.In the third step, the GLM with the scalp-hemodynamic and cerebral-hemodynamic models in response to the task was applied to the Long-channels, yielding an estimation of cerebral activity free of global scalp-hemodynamic artifacts.According to standard practice, systemic physiological signals were filtered out from the fNIRS data if they occurred at frequencies higher or lower than once per task cycle and therefore did not correlate with the task cycle.The ∆ Oxy-Hb signal from each channel was detrended using a discrete cosine transform algorithm with a cut-off period of 90 s, which was twice the task cycle, and smoothed using a moving average with a temporal window of 3 s.Based on the results from Experiment 1A, the majority of scalp-hemodynamic artifacts were globally uniform.We therefore focused only on this global component and regarded it as the first principal component extracted from the Short-channel signals using PCA.We placed four Short-channels on the bilateral frontal and parietal cortices at a probe distance of 15 mm because this number was enough to precisely estimate the global component, according to the results from Experiment 1A.To prevent extracted PCs from being biased to a specific channel signal because of motion artifacts, signal-to-noise ratios, or different path lengths, the ∆ Oxy-Hb signals were normalized for each channel before applying PCA so that the mean and standard deviation were 0 and 1, respectively.We applied a GLM to compute the relative contributions of scalp- and cerebral-hemodynamics to each channel.We added the extracted hemodynamics model to the conventionally used GLM design matrix for fMRI and fNIRS analyses, Y = Xβ + ε.Here, Y is the measured ∆ Oxy-Hb signal, X is the design matrix, β is an unknown weight parameter vector and ε is an error term that is independent and normally distributed with a mean of zero and variance σ2, i.e., ε ~ N(0, σ2).
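As a concrete illustration of the three steps just described, the following minimal sketch preprocesses the signals, extracts the global scalp component as the 1st principal component of four Short-channels, and fits the GLM to one Long-channel. It is not the authors' implementation: the sampling rate, the two-gamma HRF, the task timing and the synthetic data are assumptions made only for the example, and the derivative regressors are approximated by temporal differences of the convolved task model.

```python
import numpy as np
from scipy.fftpack import dct, idct
from scipy.stats import gamma, zscore

# Minimal sketch of the three-step procedure (illustrative only).
# Signals are arranged as channels x time; fs, HRF and task timing are assumed.

def preprocess(y, fs, cutoff_period=90.0, smooth_win=3.0):
    """DCT detrending (cut-off period 90 s) followed by a 3-s moving average."""
    n = len(y)
    c = dct(y, norm='ortho')
    k_cut = int(np.ceil(2.0 * n / (fs * cutoff_period)))  # DCT terms slower than the cut-off
    c[:k_cut] = 0.0
    detrended = idct(c, norm='ortho')
    w = max(1, int(smooth_win * fs))
    return np.convolve(detrended, np.ones(w) / w, mode='same')

def global_scalp_component(short_channels):
    """1st principal component (time course) of the z-scored Short-channel signals."""
    Z = zscore(short_channels, axis=1)
    _, _, vt = np.linalg.svd(Z, full_matrices=False)
    return vt[0]

def design_matrix(task_boxcar, scalp_pc, fs):
    """Cerebral model (task convolved with an assumed HRF), rough temporal and
    dispersion derivatives, a constant, and the global scalp-hemodynamic model."""
    t = np.arange(0, 30, 1.0 / fs)
    hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0         # assumed two-gamma HRF
    cerebral = np.convolve(task_boxcar, hrf)[:len(task_boxcar)]
    d_temporal = np.gradient(cerebral)                      # crude temporal derivative
    d_dispersion = np.gradient(d_temporal)                  # crude dispersion stand-in
    constant = np.ones_like(cerebral)
    return np.column_stack([cerebral, d_temporal, d_dispersion, constant, scalp_pc])

def fit_glm(X, y):
    """Ordinary least-squares estimate of the weights and the residual."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta, y - X @ beta

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fs = 10.0
    t = np.arange(0, 270, 1.0 / fs)                         # six 45-s cycles (assumed)
    task = ((t % 45 >= 15) & (t % 45 < 30)).astype(float)
    short = np.array([preprocess(ch, fs) for ch in rng.standard_normal((4, t.size))])
    y_long = preprocess(rng.standard_normal(t.size), fs)
    X = design_matrix(task, global_scalp_component(short), fs)
    beta, _ = fit_glm(X, y_long)
    print("estimated weights (cerebral, derivatives, constant, scalp):", np.round(beta, 3))
```

Fitting the scalp regressor jointly with the task regressors, rather than subtracting it beforehand, lets the model apportion variance that is shared between them instead of attributing all of it to the scalp.
In the proposed model, design matrix X is constructed with five components: the cerebral-hemodynamic model to be predicted, its temporal and dispersion derivatives, a constant, and the global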
scalp-hemodynamic model.Here, the cerebral-hemodynamic model is calculated by convoluting the task function and the hemodynamic response function.The temporal and dispersion derivatives can model small differences in the latency and duration of the peak responses, respectively.In the motor task, fNIRS measurements were performed using a multichannel continuous-wave optical imaging system with wavelengths at 780, 805, and 830 nm.Because we assume that the proposed method will be useful in clinical practices such as rehabilitation, we used 32 probes to cover the motor-related areas of both hemispheres.Measurements were performed during repetitive sets of motor tasks following a simple block design: six rest-task-rest blocks, with a 100 ms or 130 ms sampling period.The start of the task period was indicated by a single clicking sound and the end by double clicks.Participants sat in a comfortable reclining armchair with both hands resting naturally on their knees, and were presented with a fixation point approximately 1 m in front of their faces.They were asked not to move their bodies during the rest period, and to repetitively tap their right index finger or grasp a ball with either their left or right hand as fast as possible during the task period.After fNIRS measurement, each fNIRS probe position was recorded with a stylus marker.We assumed that data obtained from the left- and right-hand tasks were independent, even when measured from the same participant, as in the method adopted in our previous study.Thus, we analyzed each as a separate sample.Although both ∆ Oxy- and ∆ Deoxy-Hb were calculated according to the modified Beer-Lambert Law, we only analyzed ∆ Oxy-Hb because detecting ∆ Deoxy-Hb with the wavelength pairs of our fNIRS system is difficult, and produces low signal-to-noise ratios.First, we verified that the major component of the scalp-hemodynamics was consistent over a broad range of cortical areas, as has been assumed in previous studies, and that it could be extracted from a few Short-channels.Thirteen right-handed healthy volunteers participated in this experiment.Participants gave written informed consent for the experimental procedures, which were approved by the ATR Human Subject Review Committee and institutional review board of Nagaoka University of Technology.To measure the scalp-hemodynamics in motor-related areas of both hemispheres during a hand movement task, 18 Short-channels were arranged so that the center of the probes corresponded to the Cz position of the International 10–20 system.The experimental protocol is given above in the fNIRS data acquisition during the motor task section.Three participants performed a tapping task with their right index finger, and the other 10 participants performed a grasping task twice, once with each hand.Two samples were excluded from analysis because motion artifacts were found to have contaminated the raw fNIRS signals.Thus, 21 samples were analyzed.After preprocessing, correlation coefficients of ∆ Oxy-Hb on each Short-channel pair were calculated in each sample.Using PCA, the 1st PC was extracted from the 18 channels as the major component of the scalp-hemodynamics, and its contribution ratio was calculated for each sample.The contribution ratio is a measure of how much the PCs explain the variance, which is also referred to as the variance accounted for.Here, An denotes arbitrary channel combinations of n channels out of Nch Short-channels, XiShort indicates the Short-channel signal obtained from channel i, PC1 is the 
1st PC extracted from the channel combination An, and Corr is the correlation coefficient between the extracted 1st PC and Short-channel signal XiShort.In each sample, the CI was computed for all the possible combinations out of the 18 Short-channels and its median value for each n channel combination was obtained.Then, the average of median CI for each n was calculated across all the samples.To investigate the homogeneity of scalp-hemodynamics during a non-motor task, we conducted an experiment with a verbal fluency task and measured scalp-hemodynamics in the prefrontal cortex using Short-channels.We adopted the protocol used in Takahashi et al.The fNIRS probes for measurements of eight Short-channels were placed on the forehead.Ten participants from Experiment 1A participated in this additional experiment, but four were excluded from the analysis because the data were contaminated by motion artifacts.Preprocessing was the same as in Experiment 1A, except that detrending was set to a cut-off period of 320 s because the task cycle was 160 s.We investigated the performance of the ShortPCA GLM using measurements from Long- and Short-channels.Sixteen right-handed volunteers, 15 healthy participants and one right-handed male stroke patient, participated in this experiment.The patient had an infarction in the right corona radiata and showed mild left hemiparesis.All participants gave written informed consent, and the experiment was approved by the ATR Human Subject Review Committee, institutional review board of Nagaoka University of Technology, and the ethical committee of Tokyo Bay Rehabilitation Hospital.fNIRS signals were recorded by the probe arrangement described above.Two healthy participants performed a tapping task with their right index fingers, and the others performed a grasping task twice, once with each hand, as described in the fNIRS data acquisition during the motor task section.Patient data obtained for the unaffected right hand were excluded from analysis because of motion artifacts that contaminated the raw fNIRS signals.Two healthy participants who performed the grasping task were also excluded because they were not relaxed during the fMRI task and unusual bilateral fMRI activation was observed.Thus, 25 samples were analyzed.First, we calculated a variance inflation factor to test multicollinearity in the GLM because VIF higher than 10 usually causes significant problems for estimating the parameters in GLM.Next, the performance of our new method was compared with those of three conventional methods: RAW, MS-ICA, and RestEV.The RAW method directly applied the Standard GLM to the preprocessed fNIRS signals.The MS-ICA and RestEV methods first estimated global artifacts using only Long-channels that covered wide cortical areas, and then removed the estimated artifacts from the fNIRS signals.Cerebral-hemodynamics was then estimated by applying the Standard GLM.To evaluate results of the GLM analysis, we first separated the fNIRS samples based on the degree of correlation between cerebral- and scalp-hemodynamics.Samples in which the global scalp-hemodynamic model did not significantly correlate with the cerebral-hemodynamic model were assigned to the cerebral-scalp uncorrelated group.Those in which the global scalp-hemodynamic model significantly correlated with the cerebral-hemodynamic model were assigned to the cerebral-scalp correlated group.The significance of the correlation was assessed by a permutation test.We computed the null distribution of the correlation as follows: 
for each sample, we first generated 100 sets of a “task-onset randomized” cerebral-hemodynamic model, which was calculated by convolving the hemodynamic response function and the boxcar task function that randomly changed task-onset.We then calculated the correlation coefficients between the global scalp-hemodynamic model of each sample and the randomized cerebral-hemodynamic model.Finally, we obtained the correlation threshold as the 95th percentile of the null distribution for the correlation coefficients of 2500 samples.To evaluate the goodness-of-fit of the GLM for each group, an adjusted coefficient of determination was calculated and averaged over all Long-channels for each of RAW, MS-ICA, RestEV, and ShortPCA methods.For comparison, a two-tailed paired t-test was applied to assess differences in the averaged adjusted R2 between ShortPCA and each of the other three methods, respectively.To confirm whether cerebral-hemodynamics estimated from the fNIRS signals accurately reflected cerebral activity, we performed an fMRI experiment with a similar task and compared fNIRS-estimated cerebral activity with that estimated by fMRI.Kohno et al. removed a component that had the highest coefficient of spatial uniformity among the independent components separated by an algorithm proposed by Molegedey and Schuster, and considered it the global scalp-hemodynamics component.Their artifact removal algorithm was implemented into the analytical software in our fNIRS system, and we applied it to the ∆ Oxy-Hb measured from the 43 Long-channels.After removing the artifact, the signal was processed using the same preprocessing procedures as our new method, the Standard GLM was applied, and the results were compared with those obtained from the ShortPCA GLM.Zhang et al. proposed an eigenvector-based spatial filtering method using the rest period.This method removes the first r spatial eigenvectors calculated from baseline data by PCA.Here, the spatial filtering was applied to the preprocessed ∆ Oxy-Hb from the Long-channels, and then the Standard GLM was applied.The first rest period was used to determine the eigenvector-based spatial filter.Zhang et al. 
determined the number of components r to be removed based on the spatial distribution of the eigenvector, although they did not give any clear criterion.We report the results of r = 1, as we confirmed that spatial filtering with a different r value showed similar results.To confirm whether cerebral-hemodynamics estimated from fNIRS signals accurately reflected cerebral activity, we performed an fMRI experiment with a similar task and compared cerebral activity estimated by ∆ Oxy-Hb signals with that estimated by fMRI blood-oxygenation level dependent signals.This is reasonable given that ∆ Oxy-Hb signals are temporally and spatially similar to BOLD.We evaluated the estimation accuracy for each method using estimation error and signal detection measures, assuming that fMRI correctly detects cerebral activity.Thus, effectiveness of the proposed method was examined by comparing the estimation accuracy of the ShortPCA with the three conventional methods.All participants of the fNIRS experiment also participated in the fMRI experiment, and performed the same motor task.T1-weighted structural images and functional T2*-weighted echo-planar images were recorded using a 1.5 or 3.0 T MRI scanner.Details for the fMRI experiment and parameters are described in Supplementary material 1.Functional images were analyzed using SPM8 software to obtain parametric cerebral activation maps related to the task for each sample."First, images were preprocessed.Then, the Standard GLM was applied to the smoothed functional images to calculate the voxel t-values for the difference between the task and rest periods.To compare fNIRS and fMRI activation maps for each sample, the fMRI t-value for the cerebral-hemodynamic model corresponding to each fNIRS channel location was calculated based on a previous report.To normalize the difference in the degrees of freedom between fNIRS and fMRI GLMs, each t-value was divided by a t-value of the corresponding degree of freedom at a significant level with Bonferroni correction that was divided by the number of Long-channels.On the assumption that the fMRI correctly reflects cerebral activity, we compared the t-values given by fNIRS with those given by fMRI.Fig. 2A shows the ∆ Oxy-Hb for all 18 Short-channels and the 1st PC for a representative sample of right-hand movement.The temporal changes in ∆ Oxy-Hb were similar in each of the 18 Short-channels.The other samples showed a similar tendency.We calculated average correlation coefficients between all Short-channel pairs for each sample, and found that the mean of all 21 samples was 0.838.The major component of the scalp-hemodynamics was extracted by applying PCA to these data.The VAF of the 1st PC among all the samples was 0.850, and the mean value of the average correlation coefficients between the 1st PC and all Short-channel signals was 0.920.These results indicate that the 1st PC can explain the majority of the component contained in ∆ Oxy-Hb obtained from the 18 Short-channel signals.To investigate the characteristics of task-related changes in global scalp-hemodynamics, the block-averaged 1st PC was ensemble averaged over all the samples.Compared with the cerebral-hemodynamic model, the 1st PC increased quickly after the onset of the task, reached a peak approximately 5 s earlier than the cerebral-hemodynamic model, and then decreased gradually after the peak.Fig. 
2C shows the correlation index (CI).The average of the median CI across samples increased exponentially as the number of Short-channels used in the PCA increased.Importantly, the correlation index calculated for all 18 channels was similar to those for only 2, 3, and 4 channels.The differences were 0.031, 0.020, and 0.014, respectively.We also confirmed that a similar trend was produced using the ensemble average of ∆ Oxy-Hb over the selected Short-channels, instead of using their 1st PC.Consistent with the results from Experiment 1A, very similar temporal patterns in ∆ Oxy-Hb were observed for all measured Short-channels.The average correlation coefficient across all channel pairs was 0.838 and the contribution ratio of the first principal component was 0.853.The mean value of the average correlation coefficients between the 1st PC and all Short-channels was 0.916.These values were similar to those obtained in Experiment 1A.The correlation index also showed a tendency similar to that observed in Experiment 1A.The differences from the correlation index calculated for all eight channels were also relatively small for indexes calculated for two, three, and four channels.These results suggest that scalp-hemodynamics in the prefrontal cortex during the verbal fluency task were as homogeneous as those in the motor-related areas during the hand-movement tasks.We evaluated the performance of the ShortPCA GLM using experimental data, and compared the results with those obtained from three other conventional methods.Fig. 3 shows the temporal waveform for the global scalp-hemodynamic model and estimation results using ShortPCA for a representative sample.The global scalp-hemodynamic model extracted from the four Short-channels showed a task-related temporal characteristic such that hemodynamics increased during the task period and decreased during the rest period.The mean contribution ratio of the global scalp-hemodynamics component was 0.853.Among all samples, the maximum VIF we observed was 5.96, indicating that multicollinearity was not an issue for these samples.Fig. 3B shows an example of the estimated cerebral- and global scalp-hemodynamics from Ch 21 and Ch 27, placed on the left and right primary sensorimotor cortices, respectively.The majority of ∆ Oxy-Hb in Ch 21 was explained solely by global scalp-hemodynamics.Conversely, Ch 27 contained both cerebral- and global scalp-hemodynamic components.We next compared the results between the cerebral-scalp uncorrelated and correlated groups.Using a correlation threshold of 0.314, seven samples were assigned to the uncorrelated group while the remaining 18 samples were assigned to the correlated group.To compare the fitting of each GLM, an adjusted R2 was calculated for each of the Long-channels and averaged over the 43 Long-channels for each sample.The adjusted R2 for the proposed ShortPCA GLM was significantly higher than that for the other methods, both in the uncorrelated group and in the correlated group.These results suggest that the ShortPCA GLM is the most appropriate for fitting the ∆ Oxy-Hb during movements in a block design.
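For reference, the two diagnostics used in this comparison, the variance inflation factor and the adjusted R2, can be computed from the design matrix and the residuals as in the short sketch below. The formulas are the standard ones; the code is an illustration rather than the analysis code that produced the reported values.

```python
import numpy as np

def variance_inflation_factors(X):
    """VIF of each regressor in X (pass the design matrix without its constant
    column): VIF_j = 1 / (1 - R2_j), where R2_j comes from regressing column j
    on the remaining columns plus an intercept."""
    vifs = []
    for j in range(X.shape[1]):
        others = np.column_stack([np.delete(X, j, axis=1), np.ones(len(X))])
        beta, *_ = np.linalg.lstsq(others, X[:, j], rcond=None)
        resid = X[:, j] - others @ beta
        r2 = 1.0 - resid.var() / X[:, j].var()
        vifs.append(1.0 / (1.0 - r2))
    return np.array(vifs)

def adjusted_r2(X, y):
    """Adjusted coefficient of determination of the OLS fit of y on X,
    where X contains all k regressors including the constant column."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    r2 = 1.0 - resid.var() / y.var()
    return 1.0 - (1.0 - r2) * (n - 1) / (n - k)
```

VIF values below 10, such as the maximum of 5.96 reported above, indicate that the scalp regressor and the task regressors are not collinear enough to destabilise the estimated weights.
Fig.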
5 shows the cerebral activity t-maps estimated by fNIRS and fMRI for group-level analysis of a right-hand sample assigned to the cerebral-scalp correlated group, and for a patient's affected left-hand sample, also assigned to the cerebral-scalp correlated group.The t-values of the estimated cerebral-hemodynamics at the 43 Long-channels were mapped on the brain surface created by the structural MR image using Fusion software based on the fNIRS probe position recorded by a stylus marker.For RAW, we found significant activity in all Long-channels.However, fMRI results demonstrated that brain activity was localized to the contralateral sensorimotor area, suggesting that false positive activity was estimated because of task-related changes in scalp-hemodynamics.Thus, without accurate removal of scalp-hemodynamics, cerebral activity can be over-estimated.Results applying scalp-hemodynamics removal showed improvements in the estimation of cerebral activity.Importantly, ShortPCA seemed to produce an activity map more similar to the fMRI result at the single sample level.In the representative samples assigned to the cerebral-scalp correlated group, when scalp-hemodynamics were estimated using MS-ICA or RestEV, and removed during preprocessing, the estimated cerebral activity was distributed in some unrelated areas.Additionally, a large negative value was estimated in some channels.In contrast, ShortPCA accurately estimated the expected cerebral activity, localizing it in the contralateral primary sensorimotor cortex.When we compared the fMRI t-maps with those from each fNIRS method, the ShortPCA spatial map showed similar activation patterns to those obtained by fMRI.To quantify these results for each sample level, we evaluated the estimation accuracy for the cerebral-scalp uncorrelated and correlated groups.For the uncorrelated group, ShortPCA, estimation error did not significantly differ from the other methods.The signal detection measures were also similar.Conversely, estimation error for the correlated group, was significantly lower for ShortPCA than for the other methods.When we looked at the signal detection measures for the correlated group, RAW had the highest sensitivity, but the lowest specificity.The other three methods drastically improved the specificity compared with RAW.In particular, ShortPCA had the highest sensitivity and highest specificity among the three methods, and the highest G-mean among all four.In this study, we propose a new method for removing task-related scalp-hemodynamic artifacts that cannot be filtered out by conventional fNIRS analysis.Our method extracts the global scalp-hemodynamic artifact from four Short-channels, and then uses a GLM to simultaneously remove this artifact and estimate cerebral activity.Our method proved to be successful using fNIRS and fMRI experimental data.Among several alternative models, ShortPCA accurately fitted ∆ Oxy-Hb data obtained from fNIRS, and produced an estimated cerebral activity pattern that was the most similar to that observed by fMRI.Our method is an effective combination of two previously developed techniques that have been independently studied.Studies for extracting global components have used only Long-channels, and no study has tried to incorporate Short-channel data toward this goal.Similarly, studies that directly measured scalp-hemodynamics using Short-channels focused only on the accurate estimation of local artifacts, and did not attempt to estimate global components.By taking advantage of these two techniques, 
we were able to overcome their individual drawbacks.Scalp-hemodynamics have been considered to be inhomogeneous in several studies using multiple Short-channels.Recently, Gagnon et al. showed that the initial baseline correlation of fNIRS time courses between Short-channels decreased with the increase in relative distance between the two channels, suggesting localization of scalp-hemodynamics during rest period.Furthermore, Kirilina et al. used fNIRS and fMRI and reported that scalp-hemodynamics were localized in the scalp veins, indicating a locally regulated physiological process in the scalp.Other reports advocate homogeneity of scalp-hemodynamics, which is physiologically supported by the reflection of systemic physiological changes in heart rate, respiration, and arterial blood pressures in fNIRS signals.Our results regarding scalp-hemodynamics revealed that ∆ Oxy-Hb measured by the Short-channels was uniform over the measured areas, and the majority of this temporal pattern was explained by the 1st PC.Similar spatially homogeneous characteristics of scalp-hemodynamics were observed by an additional experiment with Short-channel measurements of prefrontal cortex during a verbal fluency task.To our knowledge, this is the first experimental result that quantitatively supports homogeneity of scalp-hemodynamics over broad measurement areas.Although this result is inconsistent with the literature showing it to be inhomogeneous, this discrepancy is likely because of differences in experimental settings such as the measurement hardware rather than differences in the methods used to evaluate homogeneity.Indeed, we computed the initial baseline correlation between two Short-channels in the same way as Gagnon et al. for all the samples, and verified that the correlation remained high even though the relative distance between the two channels increased.To clarify any limitations of our method, further studies should investigate the range of experimental conditions in which scalp-hemodynamics show homogeneity.Based on the experimental evidence that the majority of scalp-hemodynamic artifacts are globally uniform, we can use fewer Short-channels to extract the scalp-hemodynamic artifact than past approaches have done.Despite the uniformity of the scalp-hemodynamics, we consider that more than one channel is required for the following reasons.First, each Short-channel signal may be contaminated with local noise which has similar frequency characteristics to the task cycle and therefore cannot be filtered out.Second, some channels occasionally fail to work properly because of poor contact between the probe and the scalp.Thus, separate location of Short-channels is generally better for applying our method to avoid local noises including motion artifacts.The more Short-channels we used for extracting the global scalp-hemodynamics, the higher the correlation indexes were.Nevertheless, they were very high even with a few Short-channels.When the number of available probes is limited, arranging many Short-channels to improve the artifact estimation reduces the number of Long-channel probes and prevents coverage of broad cortical areas.Therefore, the minimum number of Short-channels is preferable.In Experiment 2, we assigned four Short-channels with the arrangement shown in Fig. 
1C so that the motor-related areas are covered with Long-channels using 16 probe pairs.To examine the efficacy of this specific arrangement in estimating global scalp-hemodynamics, we compared the correlation index of this specific four Short-channel combination with that of all 18 Short-channels using the data from Experiment 1A.The difference in correlation indexes averaged across 21 samples was 0.017, suggesting that this arrangement was adequate for estimating cerebral-hemodynamics around motor-related areas in our experimental setting.Location and combination of Short-channels should not be crucial because the difference between the highest and the lowest CI among all possible four Short-channel combination pairs was very small.However, the optimal number and location of Short-channels may vary depending on experimental conditions and the homogeneity of scalp-hemodynamics, and care should be taken to evaluate them accordingly.We compared the proposed ShortPCA method with three conventional methods by looking at the adjusted R2 values, t-value estimation errors, and the ability to detect cerebral activation.We found that ShortPCA performed best among all the methods when the scalp-hemodynamics significantly correlated with the cerebral-hemodynamics.Scalp-hemodynamics often increases in a task-related manner, and over-estimation of the artifact and under-estimation of cerebral activity is an issue if only Long-channels are used to reduce the task-related artifact.Consistent with this idea, estimation accuracy degraded for conventional methods when applied to the cerebral-scalp correlated group.In the RAW method, large false-positive cerebral activity was observed in many channels as shown in Fig. 5, resulting in very low specificity.Although MS-ICA and RestEV were able to remove the scalp-hemodynamic artifact and improve specificity, they had lower adjusted R2 in the GLM for both cerebral-scalp uncorrelated and correlated groups.These poorer fittings likely resulted from only using Long-channels, which led to misestimating the global scalp-hemodynamic components from fNIRS signals.As demonstrated by a simulation, global scalp-hemodynamic models extracted from only the Long-channels were contaminated by cerebral-hemodynamics and caused the global scalp-hemodynamic model contribution to the GLM to be over-estimated.To support this, we found that G-mean was lower, owing to the lower sensitivity, when we performed GLM analysis using only Long-channels.This tendency was evident for the cerebral-scalp correlated groups.Thus, our results indicate that by using four Short-channels, robust estimation of cerebral activity can be achieved regardless of whether the influence of scalp-hemodynamics is significant.To evaluate the accuracy of the estimated cerebral-hemodynamics, fMRI BOLD signals obtained during the same motor task were used as a standard for correct cerebral-hemodynamics.Because we did not measure fNIRS and fMRI simultaneously, the cerebral activity measured in the two experiments was not identical.We cannot deny the possibility that the small differences that we observed between the two measurements were derived from differences in behavior, physical responses, or mental states.Additionally, although we compared t-values derived from ∆ Oxy-Hb to those from fMRI-BOLD signals, which fNIRS signal correlate best with the BOLD signal remains an open question.Several studies indicate that during motor tasks, ∆ Oxy-Hb in frontal or ipsilateral areas is higher when measured by fNIRS than by 
fMRI.Thus, differences in detectability of cerebral activity could account for this aspect of our results.However, considering the robust and reproducible hemodynamic responses evoked by simple motor tasks, and the spatial resolution of fNIRS that is not high enough to detect slight differences in protocols, we believe that our results are valid.In fact, fMRI and ShortPCA both showed expected contralateral cerebral activity around motor regions.In contrast to ∆ Oxy-Hb, our method did not improve estimation using ∆ Deoxy-Hb data in Experiment 2.This could be because the scalp-hemodynamics value is reflected less in ∆ Deoxy-Hb and its estimation is difficult owing to the physiological origin of each hemoglobin signal.Conversely, it could also be considered that cerebral- and scalp-hemodynamics were both poorly detected using ∆ Deoxy-Hb because of its low signal-to-noise ratio in our system.If this is the case, and good ∆ Deoxy-Hb data is available, our method will be applicable and improvement is expected.Because reduction of scalp-hemodynamics was improved by combining ∆ Deoxy-Hb with ∆ Oxy-Hb, the performance of our method may be further improved by adding ∆ Deoxy-Hb information.The source–detector distance is another methodological concern.We used a source–detector distance of 15 mm to measure the scalp-hemodynamic artifact because a simulation studies by Okada et al. reported that the spatial sensitivity profile was confined to the surface layer when the source–detector distance was below 15 mm, and because a 15-mm distance is easier to implement in our current hardware.Although recent studies suggest that signals measured by 15-mm distant Short-channels may contain a small amount of cerebral-hemodynamics, scalp-hemodynamics is likely to be the predominant type of hemodynamics represented in Short-channel data.In a simulation, for example, Yamada et al. 
reported that the absorption changes in 15-mm distant channels for gray matter layers were less than 20% of those when the distance was 30 mm.In agreement with these reports, we confirmed by an additional analysis that there were more scalp components in Short-channel data than in Long-channel data in our experimental samples.We evaluated the contribution of scalp- and cerebral- hemodynamics for experimental data in Experiments 1A and 2 directly, by VAF.The results revealed that averaged VAFs of the estimated cerebral- and scalp-hemodynamics across samples in the Short-channels were 0.049 and 0.787, respectively, whereas those in the Long-channels were 0.313 and 0.550.Thus, scalp-hemodynamics values were approximately 16 times larger than cerebral-hemodynamics in the Short-channels, but only 1.7 times larger in the Long-channels.Furthermore, the contribution ratio of the 1st PC in Experiment 1A was nearly 0.85 for all samples.These observations strongly support our hypothesis that scalp-hemodynamics is dominant in Short-channels and is distributed globally.Therefore, we expect that multiple 15-mm distant Short-channels are practical enough to accurately extract global scalp-hemodynamics.As our method is applicable to any source–detector distance, future studies can test whether estimation accuracy increases when it is applied to channel distances shorter than 15 mm as has been used in the previous studies, provided it can be implemented in the fNIRS measurement hardware.Another consideration is that because it was designed to remove the global and homogenous artifact, the current method cannot remove local artifacts derived from experimental and physiological factors.In the motion artifacts, two possible situations may degrade the performance of the proposed method.One is the case in which local artifacts occur in an area in which only Long-channels are placed, and the other is the case in which local artifacts occur in an area in which both Long- and Short-channels are placed.In the first case, local artifacts would only be included in the Long-channels.These local artifacts would not affect the estimation of global scalp-hemodynamics, and would therefore not degrade the performance of other channels.While the global scalp-hemodynamic model would not be able to remove them from those specific Long-channels, they could be removed as a residual when applying the GLM, provided they do not correlate with the task.In the second case, global scalp-hemodynamics cannot be extracted correctly because of interference by local artifacts included in some of the Short-channels.We simulated both cases with three temporal patterns of motion-like artifacts, which are often observed in fNIRS signals.In the second case, the contribution ratio of the 1st PC to the Short-channels was markedly lower and estimation error was higher for global scalp-hemodynamics.Therefore, the contribution ratio of the 1st PC could be a good indicator of whether proceeding to the GLM using the estimated global scalp-hemodynamic model is advisable."In fact, when we analyzed data from the patients' right hand movements that had been excluded from analyses because several Long- and Short-channels were contaminated with Step-type local artifacts, we found a much smaller contribution ratio of the 1st PC to the Short-channels than was observed in the other data.For these contaminated data, ShortPCA was not more effective than the other methods.Such local artifacts should be prevented experimentally or removed by preprocessing using 
other artifact removal methods.Even after excluding local motion noise, slight differences among channels remain in scalp-hemodynamics owing to its heterogeneous nature.These local components might be eliminated when we allocate the Long- and Short-channels in pairs.However, Experiment 1–1 showed that more than 85% of the signals measured by the Short-channels contained the global component, with only a small contribution by the local component.Therefore, at least in our experimental setting, using only the 1st PC is effective in estimating cerebral activity.Note that no significant improvement was observed even when we incorporated additional principal components into the GLM.As discussed above, large local fluctuations in scalp-hemodynamics are detectable by a smaller contribution of the 1st PC.In such cases, the proposed method should not be applied.By combining four Short-channels with a GLM, our new method improves the estimation accuracy of cerebral activity while keeping the measurement area broad and without reducing the benefits of fNIRS.Hence, this method should be useful, especially in the clinical setting.Consider fNIRS measurement during rehabilitation after stroke.Unexpected cortical regions, such as those ipsilateral to the moving hand, may be activated during rehabilitation, and detection of these regions requires as broad a measurement area as possible.Furthermore, stroke patients tend to exert maximum effort to move their affected limb, which often causes an increase in scalp-hemodynamics.Using current methods, this results in false positive cerebral activity.Additionally, stress should be minimized as much as possible in clinical practice.Measurement during rehabilitation therefore needs to be from broad cortical areas, with few physical constraints, and with a capacity to remove scalp-hemodynamic artifacts.Previously proposed methods that densely arrange the Short- and Long-channel probes and estimate the local cerebral activity correctly do not fulfill these requirements, as probe setting is time-consuming and the measurement area is limited.Our method meets all the requirements and can be easily implemented in conventional fNIRS systems because the algorithms are simple and the number of Short-channels required is small.Indeed, we confirmed that cerebral activity can be measured without causing a stroke patient significant stress.In addition, we confirmed that our method can also be applied to a verbal fluency task that is commonly used in the clinical setting.Thus, our method is very practical and is expected to be suitable for clinical fNIRS measurement.
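As a supplementary illustration of the evaluation procedure, the permutation test that was used to split samples into cerebral-scalp correlated and uncorrelated groups can be sketched as follows. The HRF, sampling rate and task duration in the sketch are assumptions, and the code is illustrative rather than the analysis code used in the study.

```python
import numpy as np
from scipy.stats import gamma

def canonical_hrf(fs, duration=30.0):
    """Assumed two-gamma HRF, used only for this illustration."""
    t = np.arange(0, duration, 1.0 / fs)
    return gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0

def randomized_cerebral_model(n_samples, fs, task_len_s, rng):
    """Boxcar task function with a random onset, convolved with the HRF."""
    width = int(task_len_s * fs)
    boxcar = np.zeros(n_samples)
    onset = rng.integers(0, n_samples - width)
    boxcar[onset:onset + width] = 1.0
    return np.convolve(boxcar, canonical_hrf(fs))[:n_samples]

def correlation_threshold(scalp_models, fs, task_len_s=30.0, n_perm=100, q=95, seed=0):
    """q-th percentile of the null correlations pooled over all samples."""
    rng = np.random.default_rng(seed)
    null = []
    for scalp in scalp_models:        # one global scalp-hemodynamic model per sample
        for _ in range(n_perm):
            cerebral = randomized_cerebral_model(len(scalp), fs, task_len_s, rng)
            null.append(np.corrcoef(scalp, cerebral)[0, 1])
    return np.percentile(null, q)
```

A sample whose measured scalp-cerebral correlation exceeds this threshold (0.314 for the data set analysed above) is assigned to the correlated group, and otherwise to the uncorrelated group.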
Functional near-infrared spectroscopy (fNIRS) is used to measure cerebral activity because it is simple and portable. However, scalp-hemodynamics often contaminates fNIRS signals, leading to detection of cortical activity in regions that are actually inactive. Methods for removing these artifacts using standard source–detector distance channels (Long-channel) tend to over-estimate the artifacts, while methods using additional short source–detector distance channels (Short-channel) require numerous probes to cover broad cortical areas, which leads to a high cost and prolonged experimental time. Here, we propose a new method that effectively combines the existing techniques, preserving the accuracy of estimating cerebral activity and avoiding the disadvantages inherent when applying the techniques individually. Our new method accomplishes this by estimating a global scalp-hemodynamic component from a small number of Short-channels, and removing its influence from the Long-channels using a general linear model (GLM). To demonstrate the feasibility of this method, we collected fNIRS and functional magnetic resonance imaging (fMRI) measurements during a motor task. First, we measured changes in oxygenated hemoglobin concentration (∆ Oxy-Hb) from 18 Short-channels placed over motor-related areas, and confirmed that the majority of scalp-hemodynamics was globally consistent and could be estimated from as few as four Short-channels using principal component analysis. We then measured ∆ Oxy-Hb from 4 Short- and 43 Long-channels. The GLM identified cerebral activity comparable to that measured separately by fMRI, even when scalp-hemodynamics exhibited substantial task-related modulation. These results suggest that combining measurements from four Short-channels with a GLM provides robust estimation of cerebral activity at a low cost.
Expression of Neuropeptide FF Defines a Population of Excitatory Interneurons in the Superficial Dorsal Horn of the Mouse Spinal Cord that Respond to Noxious and Pruritic Stimuli
The superficial dorsal horn of the spinal cord receives excitatory synaptic input from primary sensory neurons that detect noxious, thermal and pruritic stimuli, and this information is conveyed to the brain via projection neurons belonging to the anterolateral tract.Although the projection cells are concentrated in lamina I, they only account for ~ 1% of the neurons in the superficial dorsal horn.The remaining nerve cells are defined as interneurons, and these have axons that remain within the spinal cord, where they contribute to local synaptic circuits.Around 75% of the interneurons in laminae I-II are excitatory cells that use glutamate as their principal fast transmitter.Behavioural assessment of mice in which excitatory interneurons in laminae I-II have been lost indicate that these cells are essential for the normal expression of pain and itch.However, the excitatory interneurons are heterogeneous in terms of their morphological, electrophysiological and neurochemical properties, and this has made it difficult to assign them to distinct functional populations.We have identified 5 largely non-overlapping neurochemical populations among the excitatory interneurons in laminae I-II of the mouse spinal cord.Cells belonging to 3 of these populations, which are defined by expression of neurotensin, neurokinin B and cholecystokinin, are concentrated in the inner part of lamina II, and extend into lamina III.These cells frequently co-express the γ isoform of protein kinase C.The other two populations consist of: cells that express enhanced green fluorescent protein under control of the promoter for gastrin-releasing peptide in a BAC transgenic mouse line, and cells that express the Tac1 gene, which codes for substance P.The GRP-eGFP and substance P cells are located somewhat more dorsally than the other three populations, in the mid-part of lamina II.We have estimated that between them, these 5 populations account for around two-thirds of the excitatory interneurons in the superficial dorsal horn.Our findings are generally consistent with the results of a recent transcriptomic study, which identified 15 clusters among dorsal horn excitatory neurons.These included cells enriched with mRNAs for CCK, neurotensin, Tac2 and Tac1.Another cluster identified by Häring et al. 
consisted of cells with mRNA for neuropeptide FF.Previous studies had identified NPFF-expressing cells in the superficial dorsal horn of rat spinal cord by using immunocytochemistry with anti-NPFF antibodies.Both of these studies revealed a dense plexus of NPFF-immunoreactive axons in lamina I and the outer part of lamina II, which extended into the lateral spinal nucleus, together with scattered fibres in other regions including the intermediolateral cell column and the area around the central canal.Kivipelto and Panula also administered colchicine, which resulted in NPFF staining in cell bodies, and these were located throughout laminae I and II.The aim of the present study was to identify and characterise NPFF-expressing cells in the mouse, by using a new antibody directed against the precursor protein pro-NPFF.In particular, our goal was to confirm that these were all excitatory interneurons and determine what proportion they accounted for, and to test the hypothesis that they formed a population that was distinct from those that we had previously identified.We also assessed their responses to different noxious and pruritic stimuli by testing for phosphorylation of extracellular signal-regulated kinases.All experiments were approved by the Ethical Review Process Applications Panel of the University of Glasgow, and were performed in accordance with the European Community directive 86/609/EC and the UK Animals Act 1986.We used three genetically modified mouse lines during the course of this study.One was the BAC transgenic Tg in which enhanced green fluorescent protein is expressed under control of the GRP promoter.We have recently shown that virtually all eGFP-positive cells in this line possess GRP mRNA, although the mRNA is found in many cells that lack eGFP.We also used a line in which Cre recombinase is inserted into the Grpr locus, and this was crossed with the Ai9 reporter line, in which Cre-mediated excision of a STOP cassette drives expression of tdTomato.Both GRP-EGFP and GRPRCreERT2; Ai9 mice were used for studies that assessed phosphorylated extracellular signal-regulated kinases following noxious or pruritic stimuli.The use of GRPRCreERT2;Ai9 mice for some of these experiments allowed us also to assess responses of GRPR-expressing neurons, and this will be reported in a separate study.Four adult wild-type C57BL/6 mice of either sex and 3 adult GRP-EGFP mice of either sex were deeply anaesthetised and perfused through the left cardiac ventricle with fixative containing 4% freshly depolymerised formaldehyde in phosphate buffer.Lumbar spinal cord segments were removed and post-fixed for 2 h, before being cut into transverse sections 60 μm thick with a vibrating blade microtome.These sections were used for stereological analysis of the proportion of neurons that were pro-NPFF-immunoreactive, and also to look for the presence of pro-NPFF in GRP-eGFP, somatostatin-immunoreactive and Pax2-immunoreactive cells.In order to determine whether any of the pro-NPFF-immunoreactive cells were projection neurons, we used tissue from 3 male wild-type C57BL/6 mice that had received injections of cholera toxin B subunit targeted on the left lateral parabrachial area as part of previously published study.In all cases, the CTb injection filled the LPb on the left side.Transverse sections from the L2 segments of these mice, which had been fixed as described above, were used for this part of the study.To look for evidence that pro-NPFF cells responded to noxious or pruritic stimuli, we performed 
immunostaining for pERK on tissue from GRPRCreERT2;Ai9 or GRP-EGFP mice.Twelve GRPRCreERT2;Ai9 female mice were used to investigate responses to pinch or intradermally injected pruritogens, and in all cases, this was carried out under urethane anaesthesia.For 3 of the mice, five skin folds on the left calf were pinched for 5 s each, and after 5 min, the mice were perfused with fixative as described above.The remaining mice received intradermal injections of histamine, chloroquine or vehicle into the left calf, which had been shaved the day before.The success of the intradermal injections was verified by the presence of a small bleb in the skin.We have previously shown that intradermal injections of vehicle result in pERK labelling if mice are allowed to survive for 5 mins after the stimulus, probably due to the noxious mechanical stimulus resulting from i.d. injection.However, if the mice survive 30 mins, pERK is seen in pruritogen-injected, but not vehicle-injected animals, and this presumably reflects prolonged activation by the pruritogens.We therefore waited until 30 mins after the injections before intracardiac perfusion with fixative, which was carried out as described above.Tissue from 6 urethane-anaesthetised GRP-EGFP mice that had had the left hindlimb immersed in water at 52 °C for 15 s or had received a subcutaneous injection of capsaicin was also used.In these cases, the tissue was obtained from experiments that had formed part of a previously published study, and injection of the vehicle for capsaicin had been shown to result in little or no pERK labelling.Capsaicin had initially been prepared at 1% by dissolving it in a mixture of 7% Tween 80 and 20% ethanol in saline.It was then diluted to 0.25% before injection.Fluorescent in situ hybridisation was performed on lumbar spinal cord sections from 3 C57BL/6 mice, tissue from which had been used in a previous study.Multiple-labelling immunofluorescence reactions were performed as described previously on 60 μm thick transverse sections of spinal cord.The sources and concentrations of antibodies used are listed in Table 1.Sections were incubated for 3 days at 4 °C in primary antibodies diluted in PBS that contained 0.3 M NaCl, 0.3% Triton X-100 and 5% normal donkey serum, and then overnight in appropriate species-specific secondary antibodies that were raised in donkey and conjugated to Alexa 488, Alexa 647, Rhodamine Red or biotin.All secondary antibodies were used at 1:500, apart from those conjugated to Rhodamine Red, which were diluted to 1:100.Biotinylated secondary antibodies were detected with Pacific Blue conjugated to avidin.Following the immunocytochemical reaction, sections were mounted in anti-fade medium and stored at − 20 °C.Sections from 3 wild-type mice were reacted with the following combinations of primary antibodies: pro-NPFF and NeuN; pro-NPFF, somatostatin and NeuN.Those reacted with the first combination were subsequently stained with the nuclear stain 4′6-diamidino-2-phenylindole.Sections from 3 GRP-EGFP mice were reacted with the following combination: pro-NPFF, eGFP, Pax2 and NeuN.Sections from mice that had received injection of CTb into the LPb were reacted with antibodies against pro-NPFF, CTb and NeuN.Sections from mice that had undergone the various types of noxious or pruritic stimulation were reacted with antibodies against pro-NPFF, pERK and NeuN.Sections were scanned with a Zeiss LSM 710 confocal microscope with Argon multi-line, 405 nm diode, 561 nm solid state and 633 nm HeNe lasers.Confocal 
image stacks were obtained through a 40 × oil immersion lens with the confocal aperture set to 1 Airy unit, and unless otherwise stated, the entire mediolateral width of laminae I-II was scanned to generate z-series of at least 20 μm, with a z-separation of 1 μm.Confocal scans were analysed with Neurolucida for Confocal software.The lamina II-III border was identified from the distribution of NeuN immunoreactivity, based on the relatively low neuronal packing density in lamina IIi.The lamina I-II border was assumed to be 20 μm from the dorsal edge of the dorsal horn.To determine the proportion of neurons in laminae I-II that are pro-NPFF-immunoreactive, we used a modification of the optical disector method on 2 sections each from 3 wild-type mice reacted with the first combination of antibodies.The reference and look-up sections were set 10 μm apart, and initially only the NeuN and DAPI channels were viewed.All intervening optical sections were examined, and neuronal nuclei were selected if their bottom surface lay between the reference and look-up sections.These were plotted onto an outline drawing of the dorsal horn.The pro-NPFF channel was then switched on and the presence or absence of staining was determined for each of the selected neurons.To estimate the extent of co-localisation of NPFF and somatostatin, we scanned 2 sections that had been reacted with the 2nd antibody combination from each of 3 wild-type mice.The pro-NPFF and NeuN channels were viewed, and all pro-NPFF cells throughout the full thickness of the section were identified.The somatostatin channel was then switched on, and the presence or absence of somatostatin in each selected cell was noted.We searched for overlap between pro-NPFF and eGFP or Pax2 in two sections each from three GRP-EGFP mice.Again, all pro-NPFF-immunoreactive cells throughout the depth of the section were initially identified, and the presence or absence of eGFP and Pax2 was then determined.To test whether any of the pro-NPFF cells were projection neurons, we scanned and analysed between 4 and 7 sections from each of 3 mice that had received CTb injections into the LPb.We identified all lamina I CTb + cells that were visible within each section and checked for the presence of pro-NPFF-immunoreactivity.Analysis of ERK phosphorylation in pro-NPFF cells was performed as described previously.Sections that contained numerous pERK + cells were initially selected and scanned with the confocal microscope through the 40 × oil-immersion lens to generate z-stacks through the full thickness of the section so as to include the region of dorsal horn that contained pERK cells.The outline of the dorsal horn, together with the lamina II/III border, was plotted with Neurolucida, and the mediolateral extent of the region that contained a high density of pERK cells was delineated by drawing two parallel lines that were orthogonal to the laminar boundaries.The channels corresponding to NeuN and pro-NPFF were initially viewed, and all pro-NPFF + cells within this region were plotted onto the drawing.The pERK channel was then viewed, and the presence or absence of staining in each of the selected pro-NPFF cells was noted.Multiple-labelling fluorescent in situ hybridisation was performed with RNAscope probes and RNAscope fluorescent multiplex reagent kit 320,850.Fresh frozen lumbar spinal cord segments from 3 wild-type mice were embedded in OCT mounting medium and cut into 12 μm thick transverse sections with a cryostat.These were mounted non-sequentially onto SuperFrost 
Plus slides and air dried."Reactions were carried out according to the manufacturer's recommended protocol.The probes used in this study, and the proteins/peptides that they correspond to, are listed in Table 2.Sections from 3 mice were incubated in the following probe combinations: Npff, Grp, Tac1; Npff, Cck, Tac2; Npff, Nts.Probes were revealed with Alexa 488, Atto 550 and Alexa 647.Sections were mounted with Prolong-Glass anti-fade medium with NucBlue.Positive and negative control probes were also tested on other sections.Sections were scanned with a Zeiss LSM 710 confocal microscope as above.Since the sections reacted with each probe combination were obtained from a 1 in 4 series, there was at least 36 μm separation between the scanned sections.Confocal image stacks were obtained through the 40 × oil immersion lens with the confocal aperture set to 1 Airy unit, and the entire mediolateral width of laminae I-II was scanned to generate a z-series of the full thickness of the section, with a z-separation of 2 μm.Confocal scans of 5 sections per animal were analysed with Neurolucida for Confocal software.Initially, only the channel corresponding to Npff mRNA was examined and all Npff positive NucBlue nuclei were identified.Then channels corresponding to other probes were viewed and any co-localisation noted.Cells were defined as positive for a particular mRNA if greater than 4 transcripts were present in the nucleus or immediate perinuclear area.The sources and dilutions of primary antibodies used in the study are listed in Table 1.The pro-NPFF antibody was raised against a fusion protein consisting of glutathione S-transferase and amino acids 22–114 of the mouse pro-NPFF protein.Staining was completely abolished by pre-incubating the antibody at its normal working concentration with the antigen at 4.4 μg/ml.The mouse monoclonal antibody NeuN reacts with a protein in cell nuclei extracted from mouse brain, which has subsequently been identified as the splicing factor Fox-3.This antibody apparently labels all neurons but no glial cells in the rat spinal dorsal horn.The eGFP antibody was raised against recombinant full-length eGFP, and its distribution matched that of native eGFP fluorescence.The Pax2 antibody was raised against amino acids 268–332 of the human protein, and it has been shown that this labels essentially all GABAergic neurons in adult rat dorsal horn."The somatostatin antiserum is reported to show 100% cross-reactivity with somatostatin-28 and somatostatin-25, but none with substance P, neuropeptide Y, or vasoactive intestinal peptide, and we have shown that staining with this antibody is abolished by pre-incubation with 10 μg/ml somatostatin.The CTb antibody was raised against the purified protein, and specificity is demonstrated by the lack of staining in regions that did not contain injected or transported tracer.The pERK antibody detects p44 and p42 MAP kinase when these are phosphorylated either individually or dually at Thr202 and Tyr204 of Erk1 or Thr185 and Tyr187 of Erk2."This antibody does not cross-react with the corresponding phosphorylated residues of JNK/SAPK or of p38 MAP kinase, or with non-phosphorylated Erk1/2.Specificity is demonstrated by the lack of staining in non-stimulated areas.Immunoreactivity for pro-NPFF was highly concentrated in the superficial dorsal horn and the LSN, with a distribution very similar to that reported previously in the rat with antibodies against NPFF.At high magnification, most immunoreactive profiles resembled axon terminals, 
but there were also labelled cell bodies, in which the immunoreactivity was present in the perikaryal cytoplasm.These pro-NPFF-immunoreactive cell bodies were present throughout laminae I-II, but were most numerous in the dorsal half of this region.They were not seen in the LSN.In the quantitative analysis with the disector technique, we identified a mean of 392 NeuN-positive cells in laminae I-II per mouse, and found that 4.74% of these were pro-NPFF-immunoreactive.To test the prediction that pro-NPFF cells were excitatory, we looked for the presence of Pax2.This was carried out on tissue from the GRP-EGFP mouse, which also allowed us to determine whether there was any co-expression of pro-NPFF and eGFP.We identified a mean of 55.6 pro-NPFF cells in this tissue, and found that none of these were either Pax2- or eGFP-immunoreactive.Somatostatin is expressed by many excitatory interneurons in the superficial dorsal horn, and we therefore looked for co-localisation of pro-NPFF- and somatostatin-immunoreactivity.We identified 62.7 pro-NPFF-immunoreactive cells in sections from 3 wild-type mice, and found that 85.3% of these were also immunoreactive for somatostatin.There was also extensive co-localisation of pro-NPFF and somatostatin in axonal boutons.The distribution of cells that contained Npff mRNA was the same as that of cells with pro-NPFF immunoreactivity.They were largely restricted to the superficial dorsal horn, and were most numerous in lamina I and the outer part of lamina II.In sections reacted with probes against Npff, Tac1 and Grp mRNAs, we identified 58.7 Npff mRNA + cells in tissue from each of 3 mice.We found very limited overlap with Tac1, since only 4.6% of these cells were also Tac1 mRNA +.However, there was extensive overlap with the mRNA for Grp.Grp mRNA was found in 38% of Npff mRNA + cells, and this represented 6.3% of the Grp mRNA + cells in laminae I-II.In the sections reacted for Npff, Cck and Tac2 mRNAs, we identified 57 Npff mRNA + cells from the 3 mice.None of these were positive for Tac2 mRNA, and only one was positive for Cck mRNA.In sections reacted for Npff and Nts mRNAs, we found 60.7 Npff mRNA + cells in the 3 mice.Only two of these cells were Nts mRNA +.We identified a total of 111 CTb-labelled neurons in lamina I in the 3 animals that had received injections into the LPb.None of the CTb-labelled cells were pro-NPFF-immunoreactive.The distribution of pERK-immunoreactivity in mice that had received noxious or pruritic stimuli was very similar to that described previously.In each case, pERK-positive cells were only seen on the side ipsilateral to the stimulus, in the somatotopically appropriate region of the dorsal horn, and they were most numerous in the superficial laminae.Few, if any, pERK + cells were seen in mice that had received intradermal injection of vehicle.For each of the stimuli examined, we found that some pro-NPFF-immunoreactive cells were pERK-positive, although in many cases these cells showed relatively weak pERK immunoreactivity.For the noxious heat stimulus, and for both pruritogens, the proportion of pro-NPFF cells with pERK was around 30%, while for pinch and capsaicin injection the proportions were higher.Our main findings are that the pro-NPFF antibody labels a population of excitatory interneurons in laminae I-II, that these cells account for nearly 5% of all neurons in this region, that they are distinct from cells belonging to other neurochemical populations that have recently been defined, and that many of them respond to noxious 
and/or pruritic stimuli.The laminar pattern of staining with the pro-NPFF antibody closely matched that described previously for NPFF antibodies in rat spinal cord, while the immunoreactive cell bodies showed a similar distribution to that seen with in situ hybridisation using NPFF probes.In addition, the lack of expression in Pax2-positive cells is consistent with the restriction of NPFF to excitatory neurons.Taken together with the finding that pre-absorption with the antigen blocked immunostaining, these observations suggest that this new antibody was indeed detecting NPFF-expressing neurons.Further confirmation of its specificity could be obtained in the future by testing the antibody on tissue from NPFF knock-out mice.Earlier immunocytochemical and in situ hybridisation studies have shown a relatively high density of NPFF-expressing cells and processes in the superficial dorsal horn of the rodent spinal cord, with a distribution that closely matched that seen with the pro-NPFF antibody and the Npff mRNA probe in the present study.Häring et al. showed that these cells, which mainly represented their Glut9 cluster, co-expressed the mRNA for Slc17a6, indicating that these were excitatory neurons.This accords with our finding that pro-NPFF-expressing cells were invariably negative for Pax2-immunoreactivity, which is present in all inhibitory neurons in the dorsal horn.None of the pro-NPFF-immunoreactive cells were retrogradely labelled with CTb that had been injected into the lateral parabrachial area, and since tracer injections into this region are thought to label virtually all projection neurons in lamina I, it is likely that all of the NPFF cells are excitatory interneurons.We have previously reported that 76% of neurons in the mouse superficial dorsal horn are excitatory, and since we found pro-NPFF-immunoreactivity in 4.7% of lamina I-II neurons, we estimate that the NPFF-expressing cells account for around 6% of all excitatory neurons in this region.Our finding that the NPFF population showed virtually no overlap with those defined by expression of Cck, Nts or Tac2, and only minimal overlap with the Tac1 population is consistent with the findings of Haring et al., since these cells would correspond to those belonging to Glut1–3, Glut4, Glut5–7 and Glut10–11.We have also examined sections that had been immunostained for pro-NPFF together with various combinations of antibodies against neurotensin, preprotachykinin B and pro-cholecystokinin, and found no overlap of any of these with pro-NPFF.With regard to GRP, there was an apparent discrepancy, since we found no overlap between pro-NPFF-immunoreactivity and eGFP expression in the GRP-EGFP mouse, but nearly 40% of the Npff mRNA + cells also contained Grp mRNA.By using multiple-label fluorescent in situ hybridisation we have found that although all Gfp mRNA + cells in this mouse line contain Grp mRNA, these only account for around 25% of the Grp mRNA + cells.Interestingly, Häring et al. 
reported that Grp mRNA was widely distributed among several excitatory interneuron clusters, whereas we have found that the GRP-eGFP-positive cells in lamina II form a relatively homogeneous population that shows very little overlap with neurons that express CCK, neurotensin, substance P or NKB.In addition, these cells show a unique somatotopic distribution, as they are far less frequent in regions that are innervated from glabrous skin.This suggests that the GRP-eGFP cells represent a discrete functional population, even though Grp message is far more widely expressed.We have previously estimated that the other neurochemical populations that we have identified account for around two-thirds of the excitatory neurons in laminae I-II.With our finding that the NPFF population accounts for ~ 6% of excitatory neurons, this brings the total that can be assigned to one of these populations to ~ 75% of these cells.Neuropeptide FF was initially isolated from bovine brainstem and characterised by Yang et al.The gene encoding the precursor protein pro-NPFF was subsequently identified and sequenced, and shown to code for both NPFF and an extended peptide known as neuropeptide AF.Initial studies demonstrated that intracerebroventricular injection of NPFF suppressed morphine analgesia.However, Gouarderes et al. reported that intrathecal NPFF caused a prolonged increase in both tail-flick latency and paw pressure threshold in rats, corresponding to an analgesic effect.In addition, NPFF enhanced the analgesic action of intrathecal morphine.A subsequent study suggested that the anti-nociceptive action of spinal NPFF involved both μ and δ opioid receptors, since it was reduced by co-administration of specific antagonists acting at both of these classes of opioid receptors, while sub-effective doses of NPFF analogues enhanced the effect of both μ and δ opioid agonists administered intrathecally.Two G protein-coupled receptors for NPFF have been identified, and named NPFF-R1 and NPFF-R2.These both couple to G proteins of the Gi family.NPFF-R2 is highly expressed in the spinal dorsal horn, as shown by both in situ hybridisation and RT-PCR, and the mRNA is also present in dorsal root ganglia.This suggests that NPFF released from excitatory interneurons acts on NPFF-R2 expressed by both primary afferents and dorsal horn neurons.Primary afferents with mRNA for NPFF-R2 include those that express TRPM8 or MrgA3 as well as peptidergic nociceptors.The anti-nociceptive action of NPFF is thought to involve opening of voltage-dependent potassium channels in primary afferent neurons.However, little is apparently known about the types of dorsal horn neuron that express the receptor.Binding sites for NPFF have also been identified in human spinal cord, with the highest levels in the superficial dorsal horn.This suggests that NPFF may also modulate nociceptive transmission in humans.There is also evidence that expression of both NPFF and the NPFF-R2 are up-regulated in inflammatory, but not neuropathic, pain states, and it has been suggested that this may contribute to the enhanced analgesic efficacy of morphine in inflammatory pain.Häring et al. 
used expression of the immediate early gene Arc to assess activation of their neuronal populations in response to noxious heat and cold stimuli.They reported that cells belonging to the Glut9 cluster could upregulate Arc following both types of stimulus, with ~ 10% of these cells showing increased Arc mRNA after noxious heat.Our findings extend these observations, by showing that many NPFF cells were pERK-positive, not only following noxious heat, but also after other noxious and pruritic stimuli.We have previously reported that among all neurons in laminae I-II, between 20 and 37% show pERK-immunoreactivity with the different stimuli that were used in this study.Comparison with the results for the NPFF cells, suggests that for most of the stimuli the NPFF cells were at least as likely to show pERK as other neurons in this region, while in the case of pinch and capsaicin they appear to be more likely to be activated.Since each experiment involved only a single type of stimulus, we cannot determine whether there was convergence of different types of nociceptive, or of nociceptive and pruritoceptive inputs onto individual cells.The proportions that we identified as being pERK-positive are therefore likely to have underestimated the fraction of cells that respond to one or more of these stimuli.Since the NPFF cells are glutamatergic interneurons, their main action is presumably through glutamatergic synapses with other dorsal horn cells.Part of the NPFF axonal plexus lies in lamina I, where it may target ALT projection cells.In addition, the NPFF axons that enter the LSN have been shown to form contacts with spinothalamic neurons in this region in the rat, although synaptic connections were not identified in that study.However, many of the NPFF axons remain in lamina II, where dendrites of projection neurons are relatively infrequent.It is therefore likely that they engage in complex synaptic circuits that transmit nociceptive and pruritoceptive information.Cells in laminae IIi-III that express PKCγ are thought to form part of a polysynaptic pathway that can convey low-threshold mechanoreceptive inputs to nociceptive projection neurons in lamina I under conditions of disinhibition, and thus contribute to tactile allodynia.However, much less is known about the roles of excitatory interneurons in circuits that underlie acute mechanical or thermal pain, or those that are responsible for hyperalgesia in either inflammatory or neuropathic pain states.Interestingly, we found that the great majority of the NPFF cells were somatostatin-immunoreactive, and are therefore likely to have been affected in previous studies that manipulated the function of somatostatin-expressing dorsal horn neurons.Since these studies implicated somatostatin cells in acute mechanical pain, as well as tactile allodynia in neuropathic and inflammatory pain states, our finding suggests that NPFF cells may contribute to these forms of pain.The somatostatin released by NPFF neurons that were activated by intradermal injection of pruritogens may be involved in itch, since somatostatin released from dorsal horn interneurons is thought to cause itch by a disinhibitory mechanism involving inhibitory interneurons that express the somatostatin 2a receptor.Assessing the roles of NPFF cells in spinal pain and itch mechanisms will require a method for selectively targeting them, presumably involving a genetically altered mouse line in which NPFF cells express a recombinase.These results show that NPFF is expressed by a distinct 
population that accounts for around 6% of the excitatory interneurons in laminae I and II, and that these cells are frequently activated by noxious or pruritic stimuli.
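To make the quantification explicit, the sketch below shows how proportions of this kind can be computed from disector-style counts; it is purely illustrative and is not taken from the study. The per-mouse counts are hypothetical placeholders, chosen only to be consistent with the values quoted in the text (a mean of 392 NeuN-positive cells per mouse, of which 4.74% were pro-NPFF-immunoreactive, and the previous estimate that ~76% of laminae I-II neurons are excitatory).

```python
# Illustrative sketch (not from the paper): deriving the headline proportions
# from disector-style cell counts. The per-mouse counts below are hypothetical,
# chosen only to be roughly consistent with the values quoted in the text.

def proportion(positive, total):
    """Proportion of double-labelled cells among the cells counted."""
    return positive / total

# Hypothetical disector counts for three mice: NeuN+ neurons sampled in
# laminae I-II, and how many of those were also pro-NPFF-immunoreactive.
neun_counts = [392, 380, 404]      # NeuN+ cells per mouse (illustrative)
pro_npff_counts = [19, 18, 19]     # of which pro-NPFF+ (illustrative)

per_mouse = [proportion(p, n) for p, n in zip(pro_npff_counts, neun_counts)]
mean_pct_of_all_neurons = 100 * sum(per_mouse) / len(per_mouse)
print(f"pro-NPFF+ cells: {mean_pct_of_all_neurons:.2f}% of laminae I-II neurons")

# Converting "% of all neurons" into "% of excitatory neurons", using the
# previously reported estimate that ~76% of laminae I-II neurons are excitatory.
excitatory_fraction = 0.76
pct_of_excitatory = mean_pct_of_all_neurons / excitatory_fraction
print(f"which corresponds to about {pct_of_excitatory:.1f}% of excitatory neurons")  # ~6%
```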
The great majority of neurons in the superficial dorsal horn of the spinal cord are excitatory interneurons, and these are required for the normal perception of pain and itch. We have previously identified five largely non-overlapping populations among these cells, based on the expression of four different neuropeptides (cholecystokinin, neurotensin, neurokinin B and substance P) and of green fluorescent protein driven by the promoter for gastrin-releasing peptide (GRP) in a transgenic mouse line. Another peptide (neuropeptide FF, NPFF) has been identified among the excitatory neurons, and here we have used an antibody against the NPFF precursor (pro-NPFF) and a probe that recognises Npff mRNA to identify and characterise these cells. We show that they are all excitatory interneurons, and are separate from the five populations listed above, accounting for ~ 6% of the excitatory neurons in laminae I-II. By examining phosphorylation of extracellular signal-regulated kinases, we show that the NPFF cells can respond to different types of noxious and pruritic stimulus. Ablation of somatostatin-expressing dorsal horn neurons has been shown to result in a dramatic reduction in mechanical pain sensitivity, while somatostatin released from these neurons is thought to contribute to itch. Since the great majority of the NPFF cells co-expressed somatostatin, these cells may play a role in the perception of pain and itch.
Pathway analysis of complex diseases for GWAS, extending to consider rare variants, multi-omics and interactions
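As a concrete preview of the two-step workflow discussed in the sections that follow, the sketch below collapses SNP-level p-values to gene-level statistics and then applies a competitive (enrichment-style) pathway test. It is a toy illustration only: the gene assignments, p-values, thresholds and pathway definition are invented, and the code is not drawn from any of the packages named in the text.

```python
# Illustrative two-step pathway analysis on toy data.
# Step 1 collapses SNP-level p-values to gene-level statistics; step 2 runs a
# competitive (over-representation) test of a pathway against the background.
from scipy.stats import hypergeom

# --- Step 1: gene-based statistics from SNP p-values -------------------------
# SNPs are assumed to have already been assigned to genes (e.g. within the gene
# body or a flanking window); the mapping and p-values here are invented.
snp_pvals = {
    "GENE1": [0.20, 3e-6, 0.04],
    "GENE2": [0.51, 0.33],
    "GENE3": [8e-5, 0.07, 0.61, 0.02],
    "GENE4": [0.90],
    "GENE5": [0.001, 0.03],
}
# One simple (and common, though not unique) choice: take the best SNP per gene.
gene_pvals = {g: min(pvals) for g, pvals in snp_pvals.items()}

# --- Step 2: competitive pathway test ----------------------------------------
# Genes passing a gene-level threshold are compared against a hypothetical
# pathway definition with a hypergeometric (over-representation) test.
significant = {g for g, p in gene_pvals.items() if p < 0.01}
pathway = {"GENE1", "GENE3", "GENE4"}      # hypothetical gene set
background = set(gene_pvals)               # all genes tested genome-wide

k = len(significant & pathway)             # significant genes inside the pathway
M = len(background)                        # genes in the background
n = len(pathway & background)              # pathway genes present in the background
N = len(significant)                       # significant genes overall

# P(X >= k) if N genes were drawn at random from the background.
enrichment_p = hypergeom.sf(k - 1, M, n, N)
print(f"{k}/{n} pathway genes significant; enrichment p = {enrichment_p:.3f}")
```

A self-contained test would instead ask whether the pathway's own genes carry more association signal than expected under a null model (for example, by permuting phenotype labels), without reference to genes outside the set.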
The growth in knowledge of our genome and new development in genomic technologies have enabled the identification of risk factors of complex diseases using genome-wide association studies.“Complex” diseases are so called because they are caused by multiple genetic and environmental risk factors.In complex diseases, the causative genetic factors usually have small effect sizes.In GWAS, a huge number of genetic variants are tested simultaneously.To account for multiple testing, the p-value threshold of a single-variant test for declaring genome-wide significance was suggested to be 5 × 10− 8 .Despite opinions to relax this threshold to the order of 10− 7 , the threshold is still very stringent.Given these factors, it is a challenging task to perform GWAS powerful enough to map disease genes for complex diseases successfully.To increase the power of a GWAS, one method is to take missing heritability into account.Missing heritability refers to the inability for the disease-susceptible variants found from GWAS to explain the complete genetic component contributing to the increased risk of a phenotype .One reason is that the genetic variants of complex diseases, each only having a small effect, cannot all be detected by single-variant statistical analyses.To address this issue, it is better to consider collectively effects of interesting variants together in a meaningful way in order to increase statistical power and reduce the burden of multiple testing .Pathway analysis complements single-variant analysis in two ways.First, by combining weaker but related single-variant signals, the resulting statistics could be improved if these variants are collectively related to the phenotype .It is particularly useful for pilot studies with small sample sizes to allow investigators to prioritise variants for follow-up analysis.Second, pathway-based studies can allow the discovery of novel sets of genetic variants with related functions, which helps explain the observed data.In both cases, we hope to increase the power of the hypothesis-free GWAS by providing functional annotations, and combine effects of variants within appropriate functional units.Pathway analysis combines signals of multiple variants.However, what is the biological meaning of such analysis?,There are two main goals in biomedical research: understanding of molecular mechanisms underlying a phenotype or disease on the one hand, and discovery and design of drugs for disease treatment on the other hand .To achieve these aims, effects on the body caused by inherited genetic background and external changes have to be considered collectively.In the past, experiments were analysed in a reductionist manner, for which only a single level of data was considered at a time because of the lack of tools for analysis .Take GWAS as an example, a set of variants can be obtained by extracting variants passing a pre-defined p-value threshold in association tests.However, the functions and biological meaning of this set of variants or genes cannot be inferred by p-values alone.The retrieval of such information requires yet another layer of evaluation separate from the association study .Pathway analysis can serve as a proxy in filling the gap here to infer the relationship among the observed set of selected genes represented by significant variants, and the strengths of the relationship.As a result, the association findings could be interpreted more easily.There are other reviews focusing on pathway analysis of GWAS .This review is broadly divided into two 
parts.The first part discusses technical aspects that researchers may find useful before carrying out pathway analysis for GWAS data.It aims to describe how to carry out pathway analysis for common variants in GWAS, and discuss the aspects that researchers may consider if they wish to carry out the analyses.The second part discusses possible steps that enable prediction of phenotypes more accurately by using extra -omics data.We deliberate on how pathway analysis is extended to integrating rare variants, other “-omics” data, and gene-environmental interactions.We hope this review article will enable researchers of GWAS to get started with pathway analysis right away.Meanwhile, they will also appreciate the possibility and value of expanding the analysis paradigm to other data types.Ultimately, this would help us understand the aetiology of diseases better, and could possibly shed light on more effective therapeutic measures.Readers should note that, throughout the text, pathway analysis is referred to as having almost the same meaning as network analysis unless otherwise specified, which both mean a broader sense of multi-SNP analysis based on certain information."However, we would like to draw readers' attention to the fact that, in a narrower sense, pathway and network analyses are not the same based on the relationship of the genes included for analysis.There are three basic steps in pathway analysis of GWAS data.First, users need to choose and determine the gene set definitions of the pathways to be used for pathway analysis.Second, input variants are mapped onto the genes they belong to for preparing the calculation of gene and/or pathway-based statistics.Finally, pathway statistics are calculated, either by a one-step approach, which only reports the pathway-based statistics; or by a two-step approach, which calculates pathway-based statistics using intermediate gene-based statistics.Various aspects that will affect the choice of analysis software tools will be discussed below.Table 1 lists software packages for pathway analysis.Pathway analysis software packages accept various input data formats, including p-values of single-marker association tests , keywords/gene list , or raw genotype data .If covariates are to be considered in pathway analysis, it is better to control for it at an early stage of generating individual variant-level statistics.If raw data are available, obtaining covariate-adjusted statistics from raw data is straightforward.Genetic analysis software packages, such as PLINK and SNPTEST , are custom-made to generate covariate-adjusted test statistics for single-marker association analysis from genotype data.The covariate-adjusted p-values can then be used in downstream pathway analysis.However, it should be noted that covariates usually cannot be incorporated into pathway analysis algorithms directly.Therefore, if covariate adjustment is crucial to analysis, it is advised that adjustment of covariates is first carried out in single-marker analysis.Pathway analysis is then carried out using methods that allow p-values as the input data.Based on the difference in the hypothesis being tested for generating pathway p-values, pathway analysis methods can be divided into either self-contained or competitive .For the self-contained approach, we test the hypothesis that the observed pathway is associated with a phenotype by comparing against a null genetic background.For the competitive approach, we test the hypothesis that the statistics of genes within a pathway is 
significantly different from that not within the pathway.To reflect the difference, the competitive approach is named as “enrichment” methods while self-contained approach is named as “association” methods .What data are available limits the choice of appropriate analysis approach and hence analysis software tools and methodology.To carry out analysis using the competitive approach, data of genes not within the pathway of interest must also be available.In contrast, the self-contained approach does not require such data.Therefore, the competitive approach is not applicable to candidate-gene data, while the self-contained approach is applicable to both genome-wide and candidate-gene data .In some studies, the competitive approach is used in the discovery stage of GWAS, and then followed by the self-contained approach for replication.A recent evaluation of the statistical properties of gene-set enrichment methods suggests that competitive approach have an advantage over self-contained approach in that self-contained approach fails to take into consideration information from other biological pathways .Defining gene sets is an essential step in pathway analysis.Table 2 lists some common databases for annotating pathways.The gene set information of these data sources can be classified into functional pathways, networks, gene ontology, and associated gene sets.Some software packages allow multiple sources of gene set definitions for more comprehensive analysis.For example, i-GSEA4GWAS allows users to choose among online datasets including gene ontology, Kyoto Encyclopaedia of Genes and Genomes and BioCarta pathways to define their gene sets for data analysis.Other software packages such as Ingenuity Pathway Analysis use their own curated databases to define gene sets.Some programs require users to input gene set definitions, and statistics are calculated for the input gene sets.Examples of such programs include adaptive rank truncated product method and PLINK set association.There are both pros and cons for choosing software packages using available gene set definitions and those using user-defined definitions.The most obvious advantage of using a curated database is that users do not need to create their own gene sets.In addition, the gene sets involved are also created based on known functional knowledge, through which researchers can interpret their results easier.However, using defined pathways may deprive the users of the flexibility in defining gene sets.If researchers wish to test a customised set of genes based on their own hypotheses, then they must choose software that allows user-input gene set definitions.The web server i-GSEA4GWAS, for example, allows definition of gene sets from either curated databases or user-input gene set.Users should therefore choose appropriate software according to their hypothesis.One method to categorise pathway analysis software is according to whether it is “one-step” or “two-step” .In a two-step design, p-values of individual variants of a gene are first considered to give a gene-based p-value or score.Pathway analysis is then performed using the gene-based statistics.The one-step approach, however, does not produce gene-based statistics, and pathway-based statistics are calculated from input variants directly .There is no best answer for choosing one method over the other because there is no consensus yet on the best approach to combining single-variant statistics .However, it is advisable to use all variants and analyse the data with pathways as units 
at an early stage so that most information can be obtained from pathway analyses .Pathway size imposes a significant impact on analysis results.Large pathways include more genes, and therefore may have a larger number of significant genes by sheer chance.On the other hand, small pathways may also lead to false positive results by including a few isolated significant variants .To balance out the effects of both, the number of genes per pathway has been suggested to be 100 – 400 genes .Besides the pathway size, the composition of pathways may also affect results.Genes having large effects on a phenotype and genes involved in a number of pathways may render over-representation of pathways consisting of such genes, and therefore create a misinterpretation that other genes within the pathways also contribute to the phenotype.It is advised that if such genes exist in the test gene set, results should be compared using data without these genes to investigate whether there is a need to drop these genes as a quality control measure before pathway analysis.For example, the human leukocyte antigen gene is a known genetic risk factor for both psoriasis and multiple sclerosis.In order to reduce the influence of HLA, a psoriasis study followed up only pathways that were significant before and after including HLA .Similarly, a GWAS study of multiple sclerosis directly excluded HLA from pathway analysis to avoid complexity in interpreting results .To produce gene- and pathway-based statistics, variants must first be assigned to their relevant genes.A simple approach is to relate only single-nucleotide polymorphisms within genes to their relevant genes.Nevertheless, this is not satisfactory because a large number of variants located outside gene exons will be excluded.One method to relieve this is to assign genetic variants in gene-flanking regions to their relevant genes.There is no exact answer to the covering region of interest.Despite the suggestions that most regulatory elements exist within 20-kb regions flanking a gene , values from 5 kb to over 100 kb have been used in different studies.In addition to the technical aspects of the software, user friendliness, flexibility and expandability could improve ease of use.For example, programs such as IPA and MetaCore provide built-in options for network visualisation.Other software packages such as Cytoscape may provide a platform which allows installation of “apps”, i.e. 
plug-ins which can perform various tasks.Analysis and visualisation are therefore possible in one single platform with the possibility of adding new algorithms for analysis by installing new apps.Table 3 lists some diseases for which pathway analysis has been applied to examine their genetic data.The corresponding software packages are also indicated.In the past few years, the advancement in next-generation sequencing technologies and multi-omics technologies has made the entire analysis paradigm walk “the extra mile”.From the input-variant perspective, sequencing technologies enable the detection and therefore analysis of rare variants.Multi-omics technologies provide data beyond the genetic level, thus allowing integrative analysis using other -omics data.In this section, we discuss some aspects of pathway analysis involving rare variants and other “-omics” platforms.This allows readers to compare and contrast such analyses with analysis of GWAS data, and appreciate how genetic data can be analysed together with other -omics data.The introduction of NGS has made deep sequencing of a large number of individual samples possible at a much lower cost.This has led to the discovery of numerous low-frequency variants ranging from 1% to 5%) and rare variants.For ease of discussion in this article, we shall refer to all these variants collectively as “rare variants”.RV analysis has the potential to reveal novel variants predisposing to or causing diseases.Annotations of RVs are also more complete because functional units with clearer suggested roles are usually selected in targeted and exome sequencing studies.Together with the lowering sequencing cost, these factors have driven rapid growth in the number of RV analyses in the past decade .In GWAS, single-variant analysis is the simplest and typical analysis method.For rare-variant analysis, single-variant analysis is also possible if the samples size is large enough to produce genome-wide significant results.However, even for a disease variant with large effect, it will require more than 100,000 samples for its detection with 80% power if its MAF is low.Together with the multiple-testing penalty required to correct for the huge number of rare variants, obtaining adequate sample size for a powerful single-variant analysis is extremely challenging .Therefore, region-based analysis methods for RVs have been developed to increase power and reduce the multiple-testing penalty .There are specific methods for grouping rare variants together for analysis.Such methods have been reviewed previously .One question is interesting in our current context.Given that pathway analysis methods for common variants and region-based analysis methods for RVs are both for aggregating single-variant information, are these methods applicable to pathway analyses for both rare and common variants?,To address this, the performances of pathway-based association methods, originally for GWAS, were compared to that of region-based association methods for RVs in a simulated dataset .When common and rare variants were jointly analysed, direct application of pathway analysis software was not satisfactory.It was suggested that rare variants should be given higher weighting for better analysis performance .Later, a direct comparison between GWAS pathway analysis software and rare-variant region-based methods was carried out .In this study, a modified version of GSEA-SNP using weighted Kolmogorov–Smirnov statistics for gene-set enrichment score was chosen to represent GWAS pathway 
analysis software for comparison.Meanwhile, four RV region-based association methods were tested, namely weighted-sum test , simple-sum test , collapsing test in combined multivariate and collapsing method , and sequence kernel association test .Input variants included 40,918 coding variants from 822 individuals under 1000 Genomes Project , after excluding indels and including biallelic variants within annotated pathways in KEGG only .The effects of variants were simulated to depend on two factors: increasing effect with decreasing minor allele frequency, and whether it was one variant of genes from a randomly selected “causative” KEGG pathway.Four scenarios were simulated, which represented combinations of two effect-size models and two different numbers of input causative pathways.Pathway analysis of 1000 simulated datasets was carried out using 11 methods, which included variations using the five methods mentioned above."Power and type I error rate were evaluated to estimate the performance of the methods.Overall, no single method performed particularly better .Type I error was found to be inflated in most of the pathway analysis methods."However, using all SNPs' p-values for gene-based statistics and then combined with WKS was powerful with moderate type I error in all simulation scenarios. "Moreover, pathway-based methods had higher power than region-based methods, where their power was sensitive to whether or not the effect size of the data matched the methods' assumptions. "If the model of effect sizes fitted the software's assumption, region-based software was powerful; otherwise, there could be lack of power.This is consistent with the descriptions in Lee et al. that the power of RV analysis depends on the assumed underlying effects model."Furthermore, using variant-level information for pathway analysis was more powerful than collapsing variants' information into gene units first .This indicates that that one-step analysis may be more powerful than two-step pathway analysis.In brief, while analysing RVs using pathway analysis software is technically feasible, the performance depends on the consistency between assumed and actual model of variants.Recently, there are software tools developed particularly for pathway analysis of both common and rare variants.The aSPUPath test is a self-contained pathway analysis test modified from adaptive sum of powered score , which was originally developed for RV analysis.It can, by incorporating suitable weighting, cater for both common and rare variants.Parameters can be adjusted to modify the assumed direction and proportion of associated variants.This can help increase power by fitting a statistical model closer to the actual situation by which variants confer their effects.Another software tool uses smoothed functional principal component analysis .In this test, genetic variants under consideration are formulated to be represented by a functional principal component score.The difference of the average scores between cases and controls are tested.Smoothed functional principal component analysis has been shown to have better power and better-controlled type I error rates than other common region-based RV analysis software."One reason for the better power is the software's ability to capture all variants' information in constructing the principal component score .Unlike GWAS, which usually cover both genes and inter-genic regions, sequencing studies currently focus more on functional regions of the genome or targeted regions of particular 
interest.This is mainly because the cost of NGS is still high and because more deleterious mutations may be present in these regions .Many RV analyses require a weight to indicate the relative importance of each variant during analysis, which can help increase analysis power .Assignment of weights based on functions for targeted and exome sequencing is easier because functional annotations are more likely known before experiment.However, it remains a question as to how the weights are determined for non-coding regions since functions may not be known explicitly, and therefore only minor allele frequency may be used as the most readily available information for determination.Population stratification, as in analysis of common variants, may adversely affect results.This can be captured and corrected by traditional methods used in GWAS .Moreover, meta-analysis methods have also been developed for combined analysis of multiple RV studies .As more sequencing studies are carried out, it will be worth investigating if they are applicable to analysis of both common and rare variants for better capture of genetic architecture for analysis.Unlike pathway analysis for common variants, for rare variants, the concept of “pathway analysis” and “multiple-variant” analysis is not clearly distinguished now.Further investigation is still needed whether multiple-variant analyses within genes can be directly applied to pathway context.One interesting question is whether pathway analysis of RVs conveys the same biological meaning as that for common variants.“Partially correct” is a short answer to this question.The argument against the statement is that traditionally RVs are believed to have relatively larger effect size.Therefore, once an associated RV is identified, it is likely that the identified locus is already causative .From this perspective, pathway analysis is not necessary for RVs.However, this is not entirely the whole picture for common diseases because some RVs only exert medium or small effects .To investigate this, Kryukov et al. 
tried to estimate the proportion of mildly deleterious missense mutations as well as their fractions among human RVs in various variation datasets .It was found that over half of all de novo missense mutations are mildly deleterious.Moreover, the majority of amino acid substitutions with observed frequency < 1% are also mildly deleterious.The combined findings show that low-frequency missense mutations are deleterious.In addition, it has been estimated that the majority of rare missense polymorphisms in humans have small selection coefficients.This suggests that the purifying selection acting on them is relatively mild.Therefore, these rare mutations can accumulate in the population, resulting in a highly heterogeneous spectrum of individual alleles with very low frequencies .A better model to study genetics of such phenotypes would be to consider the cumulative frequencies of all RVs in interested genes and compare them between cases and controls.Previously, the high cost of sequencing and the difficulty in selecting deleterious missense RVs had limited the use of the method .With the advancement in technologies, the cost of sequencing keeps going down.Meanwhile, pathway analysis methodologies may help differentiating deleterious RVs from neutral ones.For example, using pathway analysis approach, structural differences were identified in multiple genes involved in signalling networks controlling neurodevelopment , and this approach identified rare structural variants in neurodevelopmental pathways to be associated with schizophrenia.This example demonstrated that rare variants involving multiple genes could be discovered using pathway analysis approach.One goal of genetic studies is to predict the outcome of a disease or phenotype.However, when genetic information is passed from DNA to RNA and then to protein through the central dogma of molecular biology, variable factors may interfere with the intermediate steps and therefore affect the final outcome.These factors could be “intrinsic”, i.e. regulatory events that happen inside an organism without external stimuli, such as post-transcriptional and post-translational modifications or gene-gene interactions.The factors may also be “extrinsic”, where environmental factors and external stimuli play important roles.In this section, how information other than DNA genotype data may be integrated with genetic pathway analysis will be briefly discussed.Because of the complexity in biological systems, integrating information of multiple “-omics” platforms can provide extra insight into how genetic information is conveyed to the formation of phenotypes .Although the idea of pathway analysis methods for GWAS originated from analysis of expression data, traditionally data of different “-omics” platforms were analysed separately.Recently, because of the availability of high-throughput expression and proteomic data, data integration has gained much attention.Integration of genetic and other data can be divided into “multi-stage” and “meta-dimensional” approaches .For multi-stage analyses, two different types of data are considered at each stage.A linear pipeline that uses results from a previous analysis step carries out integration of data.On the other hand, meta-dimensional analyses try to combine all data types and predict phenotype outcome using the combined data in one step .One good example of integration between genetics, gene expression and phenotype outcome is obesity.Emilsson et al. 
tried to explain this using a two-step approach.First, they analysed over 23,000 transcripts in blood and adipose tissue in 470 individuals to look for expression traits, i.e. gene transcripts with good correlation with clinical phenotypes for obesity.Then, linkage analysis was carried out using 1732 microsatellite markers near to genes corresponding to the transcripts from the same individuals to estimate “heritability” of the expression traits.It was found that expression traits with high heritability in blood and adipose tissues were highly reproducible between the two tissues.For the expression traits that were within the top 25th percentile for heritability in blood, 70% of them had a significant cis-acting expression quantitative trait locus in both adipose tissue and blood.This showed that expression of genes had a high genetic component.This study is important because it linked gene expression with clinical phenotypes, where such evidence was previously given by studies of cell lines only.Later, Zhong et al. tried to achieve integration for type 2 diabetes using pathway analysis approach .They first obtained SNPs associated with gene expression in 707 liver, 916 omental adipose and 870 subcutaneous adipose tissues.A total of 20,563 eSNPs were identified in 9,964 genes.Association of these eSNPs with type 2 diabetes phenotype was then assessed using a GWAS of over 3,400 individuals, after imputing eSNPs present in expression analysis but not in the GWAS.Pathway analysis using modified GSEA was then used to identify significant pathways with representing eSNPs.Nine pathways were finally identified, which were successfully validated using an independent cohort .This pipeline of analysis has further shown that integration of genetic and expression data is possible with the use of pathway analysis.Similar approaches have been adopted for other phenotypes, including basal cell carcinoma , allergic rhinitis , coronary artery disease and blood pressure .This idea was extended by Gusev et al. for transcriptome-wide association study .In short, both genetic and gene expression data were available from a small set of individuals.In a larger set of individuals with GWAS data only, expression data were obtained by imputation, and association between imputed expression data and phenotype was then carried out.The main advantage for this approach is that expression data is hard to obtain for all samples under study.This study will allow expression-phenotype association analysis with expression data being generated using an indirect approach.Using this approach, 69 loci significantly associated with obesity-related phenotypes were found .Recently, Locke et al. 
carried out a large-scale GWAS of body mass index using nearly 334,000 individuals.In this study, 97 significant loci were successfully identified.Different sources of evidence were used to identify significant SNPs associated with BMI.These sources included genes having or close to significant SNPs, results from pathway analysis software DEPICT and MAGENTA , cis-eQTL and literature search to identify overlapping SNPs.They have successfully found overlapping pathways, including those related to central nervous system, obesity, insulin secretion and/or adipogenesis.Gene coexpression networks and gene regulatory networks are related and yet conceptually different types of networks.Both networks consist of edges that connect genes with certain “relationships”.In GCN, this relationship refers to the coexpression pattern observed between two genes."An edge can be established when the correlation of the genes' expression exceeds a defined threshold.This simple definition does not imply any causal relationship.In other words, GCNs are undirected.On the other hand, GRNs describe the explicit causal relationships of developmental processes .GRNs explain how genomic sequences can regulate the expression of a set of genes, which in turn gives rise to the collective developmental pattern and state of differentiation.GCN is a versatile and powerful method.For example, it was used to investigate the conservation of gene expression patterns among different organisms."In Stuart et al.'s study, GCN was used to study the expression patterns of humans, flies, worms, and yeast .First, 6307 “metagenes” were defined using gene sets with similar protein sequences across the different species.The aim was to find out pairs of metagenes with coexpression."To achieve this, the coexpression of each pair of genes between two organisms was represented by Pearson's correlation.The correlations of all genes were ranked.A probabilistic method was then used to determine how likely to see the combination of ranks across all organisms by chance.Connected by 22,163 edges, 3416 metagenes were obtained using a p-value cutoff at 0.05.Five metagenes with previously unknown functions were selected for investigation of their biological functions using information from their GCNs.These metagenes showed conserved coexpression with the genes involved in cell proliferation and cell cycle.Biological experiments confirmed the functions of the metagenes in cell proliferation and cell cycle.This example shows that GCN constructed across multiple species can be used to infer functions of genes with previously unknown functions in addition to coexpression patterns.Depending on the context of transcripts used for building the networks, GCN can also be extended to study the functions of non-coding transcripts."In Yao et al.'s study, GCN was used to study enhancers expressed in the brain and their gene targets .Enhancers are non-coding DNA sequences that can carry out regulatory functions.Active enhancers have signature chromatin marks.Their transcription results in non-coding enhancer RNAs.In this study, 908 enhancer regions were first identified using RNA-seq of cell and tissue samples.Of these, 673 were intronic/intergenic.By comparing RNA-seq results from adult human frontal, temporal and occipital cortices, and cerebellum, 131 brain-expressed enhancers were identified and 103 of these, defined as robust BEEs, were found to overlap with enhancer-specific histone marks H3K4me1 or H3K27ac.In order to locate the targets of rBEEs, a GCN was 
constructed between rBEEs and gene expression data of the brain.The authors drew several conclusions from the GCN.First, out of all 19 coexpression interaction modules found, 12 showed brain region-specific or developmental stage-specific expression.Most obvious variation in spatiotemporal gene expression occurred in the transition from fetal to postnatal brain.Moreover, the largest GCN node contained genes more highly expressed in fetal brain than in all regions of adult brain.Other GCN nodes were specific to brain regions.This indicated the importance of brain enhancers in regulating the stage of brain development.Second, in the GCN modules, there was higher topological overlap consisting of rBEE-closest gene pairs.This indicated that rBEEs were more likely to coexpress, and therefore regulate nearby genes.Third, among all the top genes coexpressed with each of the rBEEs and also located in cis with the corresponding rBEE, there were genes related to neuronal differentiation and autism spectrum disorders."This indicated rBEE's targets identified by GCN had functional relevance to brain cell development and brain-related clinical phenotypes.One potential use of GCN of RNA-seq data in non-coding RNA is the annotation of long intervening non-coding RNAs.Recently, a protocol was introduced to identify lincRNAs and to characterise their functions using a GCN .The GCN integrates the expression of protein-coding and lincRNA genes.In short, lincRNAs were first identified using coding-noncoding index, a tool that catalogues coding and non-coding sequence features of different species.Functions of the lincRNAs were then predicted using ncFAN.ncFAN first tries to construct a GCN between lincRNA and protein-coding genes.Then, according to the functional terms annotated for the coding genes connected in a certain hub, the function of the hub can be predicted.This example suggests another possible application of GCN in predicting the functions of non-coding sequences.One noteworthy point is that previous expression profiles were captured mainly by microarrays.Because of the advancement in NGS, expression profiling using RNA-seq has become more popular.The debate of whether using RNA-seq or DNA microarray is beyond the scope of this paper although both technologies have their strengths and weaknesses .GCNs are undirected networks that only show coexpression pattern.However, if we have both protein and expression data, it is possible to construct directed gene regulatory networks, which is able to explain more about the causal relationship between genes.This idea was used for identifying GCNs and GRNs in maize .The study included three types of datasets for 23 tissue samples spanning across the vegetative and reproductive stages of maize.The three types of data included messenger RNA sequencing, electrospray ionization tandem mass spectrometry data of unmodified protein, and that of phosphorylated protein.Weighted gene coexpression network analysis , an R package for construction of GCN, was used to discover similarly expressed genes.In total, 36 genes with similar expression patterns in at least 4 tissues were discovered.The phosphorylation patterns of genes were similar to their mRNA profile.Their phosphorylation also occurred in tissues known to be related to developmental phenotype.These suggested that the phosphorylation of these proteins was important in determining their functions.GRN, a directed regulatory network, was then used to further explore expression pattern of genes together with 
protein data.GRNs were constructed by observing the expression correlation between mRNA, protein, and phosphoprotein expression profile of transcription factors.It was found, using data from two previously validated TFs, that GRNs constructed using protein data predicted target genes better.When this method was extended to all TFs, it was found that different data sources resulted in disparate GRN predictions.Using combinations of the data sources to build GRNs were found to have better predicting power than single-input GRNs.This study provides an example of extending analysis of genomics to proteomics data, as well as how this enables the direction of gene regulation to be discovered.In the earlier BMI example, the evidence shows that BMI may be related to the control of appetite because DEPICT also identified brain tissues to be a related tissue enriched in the dataset.This example has shown that integrating different sources in a meta-dimensional manner can deduce possible pathways related to a phenotype.DEPICT was built on the data from a cancer expression study that tried to investigate the relationship between copy number and expression level in cancer cells .In the software, there are 14,461 “reconstituted” gene sets that capture gene sets with similar functions and expression patterns.The set was curated based on the expression pattern of 77,840 samples and the functional annotation of constituent genes.In addition, tissue/cell type enrichment was carried out using another set of 37,427 microarrays of human tissues/cells to determine if genes are highly expressed in any of the tissue and cell type annotations of the human subjects.Using this information, DEPICT can help identify genes in associated loci using input SNPs, “reconstituted” gene sets enriched in genes of associated loci, and tissue/cell types implicated by the associated loci .Another integrated analysis software is Gene Set Association Analysis .The software tries to carry out gene set association analysis using both GWAS and expression data.GSAA is based on multi-layer association tests of gene expression and genetic association data.First, a single SNP score is produced using one of the five methods provided by the software."Then, a SNP set is defined as the SNPs within a gene and the gene's flanking region.SNP set association score of the region is given by the maximum single SNP score among all SNPs in the region.Second, a gene expression score is calculated using the difference in the means of expression between the phenotypic classes divided by standard deviation."Third, a gene association score is given by combining the SNP set association score and gene expression score using either Z-score sum, Fisher's method or rank sum method provided by the software.Finally, Kolmogorov-Smirnov test is used to determine which gene sets are associated with the phenotype most."It was found that, in Crohn's disease data, GSAA was able to report more significant pathways than GSEA, which only uses gene expression data .iGWAS is a method that uses mediation to model effects of SNPs and gene expression on disease phenotype.In brief, the authors previously tried to model total genetic effects, i.e. 
the total effects of SNPs and gene expression, on a phenotype. iGWAS extended the method by adopting counterfactuals to separate total genetic effects on a phenotype into two components: one mediated through gene expression, and one not mediated through gene expression. iGWAS can test the association of both components using an omnibus test. With asthma as an example, iGWAS was able to confirm previously reported associated genes. Another example is weighted gene coexpression network analysis (WGCNA). WGCNA is widely used for the construction of GCNs. It consists of a collection of R scripts for the different stages of building networks, including network construction, module detection, and calculation of topological properties. Compared with Bayesian networks, WGCNA requires less time and fewer samples for training. Moreover, compared with GSEA, no a priori information is needed as input. The multiple-testing burden can also be alleviated because WGCNA only considers a subset of edges for network construction. While originally designed for expression analysis, it has the potential to be used for other data types too. GWAS results can be extended by making use of information about metabolism, known as metabolomics GWAS, to infer the consequences of genetic variants at the metabolite level. Software tools, such as iPEAP, are available for applications ranging from "traditional" GWAS to state-of-the-art NGS experiments, which suggests a promising prospect for analysis in this area. Although integrating data from multi-omics platforms can provide insight into the relationship between genes and phenotypes, there are issues to be resolved. Firstly, while it is relatively easy to obtain genotype data, expression and metabolomic data are more difficult and costly to obtain. Moreover, while obtaining both genotypes and gene expression profiles for the same individuals would be best for building disease prediction models, it is very hard to achieve a large sample size. Some software tools were developed with their own genotype and expression data so that users can carry out analysis without generating their own expression data. However, if the disease or tissue to be analysed is not in the default database of such software, producing expression data experimentally is still an inevitable step. Furthermore, the relationships between different levels of data may not be linear, which may render simple regression models inapplicable or more difficult to use. Further development of statistical tools may help capture such non-linear relationships. Overall, an increased sample size is the most direct approach to improving the power of a study. However, validation using an independent sample set, or cross-validation using sub-groups of the dataset in hand, may also help improve predictive ability with limited numbers of samples and resources. While bioinformatics analysis using information from the central dogma allows us to understand our biology, there are yet other levels of information that allow us to further understand the molecular biology of the body. Here we briefly go through a few examples of glycomics and metabolomics analysis. DNA sequences determine the sequence of a protein; however, the structure and function of the protein can be modified by a very complex process of glycosylation involving the regulation of many genes. There are a number of modifications that can affect the structure of glycoproteins, and each can be represented by one level of information. These levels include glycogenomics,
glycoproteomics and glycomics. Integration of these various sources of data is still at an early stage because several hurdles in data analysis remain. Despite this, there are a few pioneering studies in glycomics. For example, Brennan et al. analysed mass spectral and gene expression data together. In that study, they compared the glycosylation patterns of androgen-dependent and androgen-independent lymph node carcinoma of the prostate cells. They utilised several layers of mathematical rules to generate networks to infer the abundance of glycan structures. An increase in H type II and Lewis Y glycan structures in the androgen-independent cells, and a corresponding elevated activity of a fucosyltransferase, could be found; this could not be detected by single-stage analysis. This example showed that a systems biology approach combining expression and mass spectrometry data can be used to make novel findings. Another possible multi-omics application is metabolomics analysis. Metabolomics is the study of the chemical traces of the cell during certain cellular activities. Expression-based analysis allows the observation of the molecules present in cells; by incorporating genetics, the variations that cause such dynamics can hopefully be predicted. This is achieved by GWAS with metabolic traits, or mGWAS. Similar to genetic studies, detection of metabolic traits can be divided into targeted and non-targeted methods: targeted methods are mainly based on mass spectrometry (MS), while non-targeted methods use both MS and nuclear magnetic resonance. The first mGWAS was a study of metabolite profiles in serum. The study included a GWAS analysis of 363 metabolites in 284 males. Four significant variants coding for enzymes were identified, where the corresponding phenotype matched the pathway in which the enzymes were involved. For lipidomics, Hicks et al. carried out a GWAS of sphingolipid traits. Lipids were quantified using electrospray ionization tandem mass spectrometry. Thirty-two variants passed the genome-wide significance threshold; the strongest signal spanned 7 genes that function in ceramide biosynthesis and trafficking. Another example is a GWAS to identify genetic risk factors for polyunsaturated fatty acids. Variants of the FADS cluster showed the strongest association with plasma fatty acid concentration, and the second strongest locus, in ELOVL2, was associated with longer chain n-3 fatty acids. Besides genetics, environmental factors and gene-gene interactions also play a crucial role in the aetiology of a phenotype. In fact, the manifestation of a disease or phenotype can be viewed as the interplay between genomics, epigenomics and environmental factors. Studying genetic data together with environmental variables will help us understand how the body responds to changes in external conditions. Other papers focus on study design and analysis methods; here, we focus on giving a brief idea of how gene-environment (G-E) interaction is considered in GWAS in the context of pathway analysis. Readers may wish to refer to other references for interaction in candidate gene studies as well as experimental designs. Previously, G-E interaction analysis mainly focused on candidate gene regions with suggested functions. One example is the interaction study between Y402H and lifestyle factors in age-related macular degeneration, in which individuals with the CC genotype of Y402H who also had a higher BMI or smoked were at the greatest risk. With the growing number of GWAS, analysis of G-E interactions has also evolved from a hypothesis-driven candidate approach to the genome-wide scale. Studies that carry out G-E interaction analysis on a genome-wide scale, i.e. 
genome-wide interaction studies, can be viewed as an extension of GWAS.GEWIS has been carried out for several diseases.For example, a GEWIS of asthma investigated the interaction between genetic variants and two environmental factors, in utero and early childhood tobacco exposures, in 2654 cases and 3073 controls .In this study, a logistic regression model containing independent variables representing genetic effects, tobacco exposures, and an interaction term was used for analysis.Variants in EPB41L3 and PACRG were found to be the most significant after considering interaction with in utero and early childhood exposures, respectively.In a GEWIS of myopia , a joint meta-analysis of interaction between genetics and education level in refractive error was performed.It was found that three variants in AREG, GABRR1 and PDE10A have strong evidence of interaction with education among Asian cohorts.Studies of G-E and G-G interactions both involve a large number of multiple-testing penalties due to the huge number of tests for all possible interacting pairs.Some software tools can help us select more probable interacting gene pairs.Gene-based and pathway-based interaction analyses can improve the power of GEWIS by combining signals within functional units.This can be done in a multi-stage approach, where GEWIS is carried out in the first stage, followed by gene-based and pathway-based analyses.One example is a GEWIS of lung cancer investigating the disease susceptibility in relation with asbestos exposure .This study included over 300,000 SNPs with over 1100 cases and controls.Three level of analyses, namely single-variant level, gene level and pathway level, were carried out.In single-variant level of analysis and gene-level analysis, no significant results were found.However, in pathway-level analysis using i-GSEA , Fas signaling and antigen processing pathways, which are related to apoptosis and immune function regulation respectively, were found to be significant .This study illustrates a relatively simple approach of how GEWIS can be combined with pathway analysis methods for the discovery of novel disease pathways.Indeed, taking cancer as an example, genome-wide G-E interaction study is only at its start-up stage, and many G-E interaction studies still adopted a candidate-gene approach .The main challenge in studying G-E interaction is data collection.As in the case for multi-omics analysis, there are very few collected datasets large and wide enough for comprehensive interaction analysis .This situation is slowly improving with the progress of Environmental Genome Project and Toxicogenome Project ."How an individual develops an illness can be considered as how he/she responds to the environmentally induced stress, given the person's genetic background.Therefore, the Environmental Genome Project aims to look for genes that are related to environmentally associated diseases, and then carry out functional studies to validate the results in vivo.Moreover, one disease can occur simultaneously with another, a phenomenon known as comorbidity .By analysing environmental interactions together, how an organism is “unwired” could be more clearly understood .We hope that the information can be utilised to advise authorities to improve health policies.For example, if certain lifestyles are related to higher occurrence of certain diseases, preventive policies can be made to advise the public to prevent such activities in order to promote public health – a paradigm shift towards precision medicine .One 
trend for pathway analysis is its application to other “-omics” data.For example, copy number variation data, an example of structural variations, can be used to carry out combined analysis with expression data.In a meta-analysis of cancer transcriptomes, CNV was compared with expression data to infer trans-acting gene sets .The correlation data were then used to look for enriched pathways to further explain the possible functional consequences of the results .Another possible data type for analysis is epigenetics data , which include the methylation patterns of DNA.One software tool for pathway analysis of epigenetic data is LRPath .It can report enriched biological concepts in input methylation data and compare methylation profiles from multiple experiments.With the expanding amount of data, the demand for software tools for multiple data types will also increase.In addition, the expression of genes within the body is dynamic.One possible way to capture such information is to take expression measurements and analysis at different time points.For example, Stanberry et al. tried to study the gene expression patterns of a person during two episodes of viral infections .The study used the approach of integrative personal omics profile , which tried to connect dynamic, longitudinal multiple -omics data with disease status.In the study, clusters of genes having similar temporal expression patterns during viral infections could be identified.This suggested that integrating different -omics data might help model the dynamics of biological systems .Moreover, traditional genomic analysis lacks cell- or tissue-specific data .Recent technological improvements in whole-genome amplification and NGS have made single-cell sequencing possible .This would enable us to understand genes that have effects on cell state, and possibly predict cell fate .This is particularly important for cancer studies because intra-tumour heterogeneity exists among cells of the same individual .Only with single-cell assays can variations in genomes among cells be detected.This has proven to be successful in breast cancer .With pathway analysis, not only the genetic variations among cells could be known, but also clues to the functional background of how this happens and its consequences could be discovered.Further improvement of accuracy of pathway analysis requires more comprehensive and replicable functional annotations, experimental data and phenotype data.To obtain replicable functional annotations, it has been suggested that functional data be reported in a standard format with minimal information, as suggested by Biological Dynamics Markup Language, a format for reporting dynamic data .Optimistically, the number of replicable pathway studies will be increased by improving the quality in reporting results of experiments.However, it should be noted that even with pathway analysis, functional analysis is still needed to confirm the actual genes and variants that exert the most important effects.Finally, we briefly summarise the issues in pathway analysis and suggested solutions in Table 5, and some issues are deliberated in detail in Box 2.This serves to inspire the readers for future development in this area.This concise review discusses different factors to be considered in carrying out pathway analysis for GWAS to analyse complex diseases, as well as how pathway analysis could be extended to rare variants, and the possibility of including other “-omics” data and taking interaction into consideration.Along with the 
advancement in -omics technologies come large amounts of data generated from multiple platforms. One strength of pathway analysis is its ability to integrate information from different sources, as well as to reduce the dimensionality of the analysis into meaningful units so that the power of the analysis can be improved. We foresee that pathway analysis of complex disease will become "multi-dimensional", with "-omics" and environmental factors considered simultaneously in order to model disease mechanisms more accurately. By learning both the "intrinsic" genomic factors and the external environmental factors causing disease, better health strategies for personalised healthcare, and precision medicine based on a person's genetic background and environmental exposures, could be devised for the prevention and treatment of disease. The Transparency document associated with this article can be found in the online version.
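The coexpression-threshold idea introduced earlier, an undirected edge whenever the correlation of two genes' expression exceeds a cutoff, can be illustrated in a few lines of R. This is only a minimal sketch on simulated data: the expression matrix, gene names and the 0.8 cutoff are hypothetical, and real analyses (for example with WGCNA) typically use soft thresholding and module detection rather than a hard cutoff.

```r
## Minimal sketch: build an unsigned gene coexpression network (GCN) by
## thresholding pairwise Pearson correlations.
## 'expr' is a hypothetical matrix of normalised expression values
## (rows = samples, columns = genes); the 0.8 cutoff is arbitrary.
set.seed(1)
expr <- matrix(rnorm(100 * 20), nrow = 100,
               dimnames = list(NULL, paste0("gene", 1:20)))

# Pairwise Pearson correlation between all genes
cor_mat <- cor(expr, method = "pearson")

# Undirected edge wherever |correlation| exceeds the chosen threshold
adj <- abs(cor_mat) > 0.8
diag(adj) <- FALSE

# List the coexpressed gene pairs (the edges of the GCN)
edges <- which(adj & upper.tri(adj), arr.ind = TRUE)
data.frame(gene1 = colnames(expr)[edges[, 1]],
           gene2 = colnames(expr)[edges[, 2]],
           r     = cor_mat[edges])

# Packages such as WGCNA replace this hard cutoff with a soft-thresholding
# power (adjacency = |cor|^beta) before detecting coexpression modules.
```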
Background Genome-wide association studies (GWAS) is a major method for studying the genetics of complex diseases. Finding all sequence variants to explain fully the aetiology of a disease is difficult because of their small effect sizes. To better explain disease mechanisms, pathway analysis is used to consolidate the effects of multiple variants, and hence increase the power of the study. While pathway analysis has previously been performed within GWAS only, it can now be extended to examining rare variants, other “-omics” and interaction data. Scope of review 1. Factors to consider in the choice of software for GWAS pathway analysis. 2. Examples of how pathway analysis is used to analyse rare variants, other “-omics” and interaction data. Major conclusions To choose appropriate software tools, factors for consideration include covariate compatibility, null hypothesis, one- or two-step analysis required, curation method of gene sets, size of pathways, and size of flanking regions to define gene boundaries. For rare variants, analysis performance depends on consistency between assumed and actual effect distribution of variants. Integration of other “-omics” data and interaction can better explain gene functions. General significance Pathway analysis methods will be more readily used for integration of multiple sources of data, and enable more accurate prediction of phenotypes.
259
Improved inference of time-varying reproduction numbers during infectious disease outbreaks
Infectious disease epidemics are a recurring threat worldwide.A key challenge during outbreaks is designing appropriate control interventions, and mathematical models are increasingly used to guide this decision-making.Recent examples of the real-time use of models during outbreaks can be drawn from diseases of humans and in the Democratic Republic of the Congo in 2018-19), animals) and plants).For control measures to be optimised, the values of the parameters governing pathogen spread must be estimated from surveillance data, and temporal changes in these values must be tracked.The time-dependent reproduction number, Rt, is an important parameter for assessing whether current control efforts are effective or whether additional interventions are required.The value of Rt represents the expected number of secondary cases arising from a primary case infected at time t.This value changes throughout an outbreak.If the value of Rt is and remains below one, the outbreak will die out.However, while Rt is larger than one, a sustained outbreak is likely.The aim of control interventions is typically to reduce the reproduction number below one.Different formal definitions of Rt have been proposed, and a number of methods are available to estimate reproduction numbers in real-time during epidemics.Fraser distinguishes between the case reproduction number and the instantaneous reproduction number.The case reproduction number represents the average number of secondary cases arising from a primary case infected at time t; this parameter therefore reflects transmissibility after time t.In contrast, the instantaneous reproduction number represents the average number of secondary cases that would arise from a primary case infected at time t if conditions remained the same after time t.The latter therefore characterises the “instantaneous” transmissibility at time t, and is more straightforward to estimate in real-time than the case reproduction number because it does not require assumptions about future transmissibility.Wallinga and Teunis developed an approach to estimate the case reproduction number.They applied their method to data from the 2003 SARS epidemic, showing that the effective reproduction number decreased after control measures were implemented, with similar trends in different affected countries.Their approach involves considering all possible transmission trees consistent with the observed epidemic data, and generates an estimated value of the case reproduction number at each timestep with observed cases.This method has been applied to estimate reproduction numbers during epidemics of diseases including Ebola virus disease, Middle-East Respiratory Syndrome and porcine reproductive and respiratory syndrome.It has also been extended to permit inference in different settings including in populations consisting of multiple host types, as well as to allow estimates to be informed by other types of data.Because of the importance of tracking temporal changes in epidemiological parameters, software implementing the framework of Wallinga and Teunis was developed to allow such analyses to be performed.Other methods to estimate reproduction numbers at the start of an epidemic are also reviewed in Obadia et al. and implemented in the same R software package R0.Recognising that estimates of the instantaneous reproduction number may provide a superior real-time picture of an outbreak as it is unfurling, Cori et al. 
subsequently developed a method and software for estimating the instantaneous reproduction number using branching processes.This method has been used to analyse a number of recent outbreaks.As with the approach of Wallinga and Teunis, it relies on two inputs: a disease incidence time series and an estimate of the distribution of serial intervals.Although the approach of Cori et al. has been used frequently, its applicability may have been limited in some contexts because of two important drawbacks.First, an estimate of the serial interval distribution may not be available early in an outbreak, or may be associated with significant uncertainty.This is particularly the case for outbreaks of emerging infections, for which the natural history is not known or is poorly characterised.Second, this approach assumes implicitly that all incident cases after the first time-point arise from local transmission, i.e. it does not account for the possibility that cases are imported from other locations or derive from alternative host species.However, epidemiological investigations throughout outbreaks often provide valuable data that can inform the serial interval distribution and the sources of infection of cases.Here we extend the statistical framework of Cori et al. for estimating the time-dependent reproduction number.Rather than relying on previous estimates of the serial interval, our method integrates data on known pairs of index and secondary cases from which the serial interval is directly estimated, with corresponding uncertainty in the serial interval fully accounted for.Our method also allows incorporation of available information on imported cases.We use data from the outbreaks of H1N1 influenza in the USA in 2009 and Ebola virus disease in West Africa from 2013 to 2016 to show how directly including the latest serial interval observations can improve the precision and accuracy of estimates of the time-dependent reproduction number during an outbreak.We use data on MERS cases in Saudi Arabia from 2014 to 2015 to illustrate the importance of accounting for imported cases appropriately when quantifying transmissibility.Our approach is implemented in a new version of the R package EpiEstim, as well as an online interactive user-friendly interface for users that are not familiar with R statistical software.We propose a two-step procedure to estimate the time-dependent reproduction number from data informing the serial interval and from data on the incidence of cases over time.The first step uses data on known pairs of index and secondary cases to estimate the serial interval distribution; the second step estimates the time-varying reproduction number jointly from incidence data and from the posterior distribution of the serial interval obtained in the first step.The distribution of serial intervals can be estimated during an ongoing outbreak using interval-censored line list data – namely lower and upper bounds on timings of symptom onset in index and secondary cases.Serial interval data of this form are often collected during outbreaks, particularly in household studies from which chains of transmission can be reconstructed.Historical Ebola outbreaks provide a number of examples of this.For example, in the Ebola virus disease outbreak in the Democratic Republic of the Congo in 1995, such data were obtained from sources including hospital records and interviews with members of households with cases of Ebola.Similarly, during the outbreak in Uganda in 2000, timings of symptoms were recorded 
throughout chains of transmission using contact tracing.Uncertainty in the reported dates, as well as lack of knowledge of the precise timings of symptom appearance even if exact dates are known, leads to interval-censored data.Following Reich et al., we perform Bayesian parametric estimation of the serial interval distribution from such data using data augmentation Markov chain Monte Carlo.In most of the analyses presented here, we use a gamma distributed serial interval distribution offset by one day, although other distributions are also implemented in our R package and in principle any parametric distribution could be used.We apply our method to analyse disease incidence time series and serial interval data from a number of past outbreaks, described in this section and made available, when possible, in Tables S1-S4.These are also included in our R package EpiEstim 2.2 and in the accompanying EpiEstim App online application.From 18th April to 1st May 2009, an outbreak of H1N1 influenza occurred that infected more than 800 students and employees in a New York high school.The disease incidence data were shown in Fig. 1 of Lessler et al. and are reproduced in our Table S1.Interval-censored serial interval observations were also collected from 16 pairs of cases during this outbreak, as reported in Table 2 of the Supplementary Appendix of Lessler et al., and are reproduced in Table S2 of our supplementary material.Disease incidence data were available describing the numbers of individuals experiencing onset of acute respiratory illness in a school in Pennsylvania in April and May 2009.These data were included with the first version of EpiEstim, and are also reproduced here in Table S3.We used these data in combination with serial interval data from the 2009 H1N1 influenza pandemic in USA.Specifically, serial interval data were collected from pairs of cases between 17th April and 8th May 2009, and were reported in Table 1 of Morgan et al.We converted the dates of infection of index/secondary cases into intervals to account for uncertainty in the precise timings of infection on the days concerned: for example, for an index case on 18th April and a secondary infection on 25th April, the length of the serial interval was between 6–8 days.We performed analyses including cases from early in the outbreak, as well as using data from the whole outbreak.We also analysed data from the West African 2013–2016 Ebola outbreak.We considered the daily incidence of confirmed and probable cases in Liberia between 28th May and 31st July 2014, computed from the World Health Organization line-list data as described by the International Ebola Response Team and shown in Fig. 
4a.In this time interval, 418 symptomatic confirmed and probable cases were reported.There were 16 confirmed and probable cases reported before this time, but these occurred sporadically and hence we conducted our analysis using data from 28th May 2014 onwards.Line-list serial interval data were available from the World Health Organization.Infected individuals were asked who their potential infectors might have been.Up to 31st July 2014, nine such cases were available for which information on exposure to a confirmed, probable or suspected case could be retrieved in this way.Data from 295 further pairs of cases up until 4th December 2014 were also available and used in our analyses.A dataset consisting of the daily numbers of laboratory confirmed human cases of MERS in Saudi Arabia between 11th August 2014 and the 18th December 2015 was extracted from the EMPRES-I system from FAO.The dataset indicates which cases were in humans who have regular contacts with animals, particularly camels.Since the dromedary camel is considered as a reservoir species of the MERS-coronavirus, we interpreted reported regular contact with animals as an indication of infection from the reservoir.This allowed us to distinguish between cases arising from human-human transmission, for example transmission in households or hospitals, and human cases derived directly from the animal reservoir.For the serial interval, we assumed an offset gamma distribution with mean 6.8 days and standard deviation 4.1 days, as estimated by Cauchemez et al.We first applied our method to estimate the time-dependent reproduction number Rt throughout an outbreak of H1N1 influenza in a New York School, for which both incidence and serial interval data were available.We fitted a gamma distributed serial interval offset by one day.Results are shown in Fig. 2.The median reproduction number estimate for the first seven days of the outbreak was 3.3 – with 95% credible interval given by – and the mean estimate for this period was 3.4.These estimates are consistent with a previous estimate of the reproduction number over this time period of 3.3 from a study by Lessler et al.Those authors used a similar approach to quantify the serial interval distribution to the method used here, but estimated the reproduction number based on the initial exponential growth rate of the outbreak.The method for estimating the time-dependent reproduction number, Rt, by Cori et al. 
previously relied on a pre-existing estimate of the serial interval distribution.In practical applications of that method, typically single serial interval distributions, estimated from previous outbreaks or based on early data from the ongoing outbreak, have been used to estimate Rt throughout an epidemic.In our approach, we propose to integrate the estimation of the serial interval distribution within the estimation of Rt.This allows Rt to be estimated directly not only from the most up-to-date incidence data, but also from up-to-date serial interval data.As more serial interval data become available during an outbreak, the uncertainty surrounding the serial interval distribution estimates, and in turn the reproduction number estimates, typically reduces.To illustrate this principle, we estimated the changes in the reproduction number for an outbreak of H1N1 influenza in a school in Pennsylvania in 2009.We used serial interval data collected in a household study undertaken early in the 2009 influenza pandemic in San Antonio, Texas.We estimated the reproduction number using two subsets of these serial interval data: first, only the data that were available early in the study, and; second, all the data from the study.Results are shown in Fig. 3.The mean Rt estimates using only the early serial interval data were mostly greater than those using all serial interval data.Moreover, using only the early serial interval data led to larger uncertainty in the serial interval distribution estimates, and in turn in the Rt estimates.In particular, the upper bound of the 95% credible interval obtained using the early serial interval data was much higher than when all serial interval data were used.If control strategies were designed based on a pessimistic scenario corresponding to this upper bound, the Rt estimates based on the early serial interval data could have led to designing unnecessarily intense interventions.Of course, intense interventions when based on all available data are justifiable, but it is important for interventions to continue to be re-evaluated as new data become available during an outbreak.We also analysed data from the West African 2013–2016 Ebola outbreak.The incidence data are shown in Fig. 4a.We computed three estimates of the time-dependent reproduction number using three different assumptions on the serial interval: using a single distribution for the serial interval; using the full posterior distribution of serial intervals estimated from the nine pairs of cases observed up to 31st July 2014; and using the full posterior distribution of serial intervals estimated from all 304 pairs of cases observed up to 4th December 2014.In all analyses of the Ebola data, we used an offset gamma serial interval distribution.For, the serial interval distribution was constructed to match the mean and standard deviation of the observed nine pairs of early cases.Using a single distribution for the serial interval rather than the full posterior distribution of serial intervals led to similar central estimates but a large underestimation of the uncertainty in the reproduction number.Furthermore, using the early serial interval data led to underestimating the mean reproduction number by as much as 26% compared to using all the serial interval data that were available.We used incidence data of MERS cases in Saudi Arabia from 2014 to 2015 – shown in Fig. 
5a – to estimate the reproduction number throughout that outbreak.Data were available describing some cases as being likely importations from the animal reservoir.We assumed all other cases were due to local human-human transmission.We compared estimates of the reproduction number obtained when using and not using this information.As expected, disregarding the information on the imported cases and instead assuming that those cases arose from local human-human transmission led to overestimation of the reproduction number.The blue shaded time-windows in Fig. 5b highlight times at which the mean reproduction number estimated assuming only local transmission is greater than one when in fact the reproduction number estimated using information on imported cases is below one.Fig. 5c shows that the relative error in the mean Rt estimates when ignoring imported cases varies over time but is sometimes very large, with relative errors of over 50% in October 2014, and January, May and July 2015.Quantifying disease transmissibility during outbreaks is crucial for designing effective control measures and assessing their effectiveness once implemented.This assessment forms a critical part of real-time situational awareness.Indeed, in circumstances in which the incidence of cases is still increasing, but the time-dependent reproduction number is dropping, there might be a very different outlook compared to if the incidence of cases and the reproduction number are both increasing.Assessment of the reproduction number can also be used for planning future interventions.We have developed a framework for estimating time-dependent reproduction numbers in real-time during outbreaks.Our approach builds on a well-established method and addresses two important limitations of the approach as proposed in that study.The first important feature of our framework is that data on pairs of infector/infected individuals can be included in the estimation procedure, so that the serial interval distribution and the time-dependent reproduction number can be estimated jointly from the latest available data.This leads to more precise estimates of transmissibility, as well as accurate quantification of the uncertainty associated with these estimates.Second, our method allows datasets that distinguish between locally transmitted and imported cases to be analysed appropriately.We describe these limitations of the previous method in more detail below.We have shown that these key features lead to improved inference of pathogen transmissibility, with illustrations using datasets from epidemics of H1N1 influenza, Ebola virus disease and MERS.We have also implemented our modelling framework in an online tool, allowing it to be used easily in outbreak response settings by stakeholders.Various methods exist for estimating the values of reproduction numbers, particularly the basic reproduction number, from epidemic data for an in-depth review).The most commonly used approach for estimating time-dependent reproduction numbers, other than the approach of Cori et al., is that of Wallinga and Teunis.As described in the introduction, one caveat of the Wallinga and Teunis method is that it estimates the case reproduction number, which is not a measure of instantaneous transmissibility.If a policy-maker wishes to understand the impacts of control interventions in real-time, then an estimate of the case reproduction number is less useful than an estimate of the instantaneous reproduction number because the case reproduction number does not change 
immediately after interventions are altered; instead, it changes more smoothly and in a delayed manner.In contrast, the instantaneous reproduction number changes straight away and is therefore a useful quantity for understanding the impacts of control strategies in real-time.Furthermore, estimation of the case reproduction number at any time usually requires incidence data from later times, although we note that extensions to the Wallinga and Teunis approach have been developed to relax this assumption of the original method.Some approaches have been proposed to estimate the serial interval and reproduction numbers jointly from time series data on the numbers of new cases, but it has been shown that it may not be possible to estimate both these quantities precisely from those data alone in the early stages of an outbreak.Our approach instead extends the framework of Cori et al., and relies on observations of transmission pairs in addition to the time series data to estimate the serial interval and the time-varying reproduction number in a two-step estimation process.As described above, the first limitation of the approach of Cori et al. is that it makes use of pre-existing estimates of the serial interval distribution as an input.This potentially leads to delays between studies inferring the serial interval and subsequent analyses estimating transmissibility, or means that estimates of transmissibility are based on estimates of the serial interval from earlier outbreaks.Here, we used data from the 2013–2016 Ebola outbreak in Liberia to show that failing to account for full uncertainty in the serial interval distribution may lead to underestimating the uncertainty surrounding reproduction number estimates.Moreover, ignoring recent data on the serial interval can dramatically impact estimates of the reproduction number and the uncertainty associated with those estimates.This is of practical importance – as an example, a number of studies conducted during and after the 2013-16 West African Ebola outbreak used the same single serial interval estimate obtained near the beginning of the outbreak.Our results suggest that using the latest available data on pairs of index and secondary cases, and fully accounting for the corresponding uncertainty in the serial interval estimates, may lead to very different, but more robust estimates of the reproduction number.It is worth noting that the pairs of index/secondary cases included in the estimation should be as representative as possible; in particular, if too recent index cases are considered, some of their secondary cases may not have been observed yet, leading to artificial underestimation of the serial interval.Although some approaches for estimating reproduction numbers allow imported cases to be accounted for, the second limitation of the approach of Cori et al. is that it assumes that all cases in an outbreak occur from local transmission, which can be erroneous.For some diseases – e.g. 
MERS and yellow fever – transmission from alternative hosts can be common.Continued importation of cases into a local population from other geographical locations can also occur.For example, a number of cases of H1N1 influenza in New Zealand in 2009 were known imports from other locations.Failing to properly account for such non-locally transmitted cases can lead to overestimating the reproduction number, as we illustrated in our application to data on MERS in Saudi Arabia from 2014 to 2015.Epidemiological studies often collect data on exposure routes for each case, and this information on the local or non-local source of incident cases should be included, when available, in estimates of pathogen transmissibility.Of course, such information might not be available directly from epidemiological data.In this case, one option might be to use statistical methods along with genetic and epidemiological data to differentiate between local and imported cases.One of the aims of epidemic control is to reduce the reproduction number below one.Failing to account for full uncertainty in the serial interval, not including recently available serial interval data, and failing to differentiate between local and imported cases might lead to incorrect assessment of the effectiveness of current control measures.Throughout most of this article, we discussed disease control in the context of whether or not the mean estimate of the reproduction number was less than or greater than one.However, policy-makers may prefer to choose more risk-averse policies.When the goal of interventions has been to minimise a function describing the cost of an outbreak, the idea of intervening to ensure that percentile estimates of that cost are minimised has been proposed.A similar idea here might be directing control strategies towards ensuring that a specific percentile estimate of the reproduction number falls below one.In this context, inadequate quantification of the uncertainty surrounding reproduction number estimates may be as important as biases in the central estimates.As well as in response to interventions, the reproduction number may change over time due to other factors.Seasonal variations in the parameters governing disease spread play a significant role in transmission of a number of pathogens.For example, transmission of vector-borne pathogens varies due to factors including seasonal temperature variation, and outbreaks of childhood diseases such as measles are affected by school term dates.These, and indeed any factor resulting in changes in pathogen transmissibility), will be reflected in time-dependent reproduction number estimates generated using our approach, so these estimates need to be interpreted carefully when assessing the effectiveness of interventions.As with most previous methods, we propose estimation of the reproduction number based on the incidence of symptomatic cases and the serial interval distribution, rather than the incidence of infections and the distribution of the generation time).In some circumstances, the serial interval distribution might not match the generation time distribution.As an extreme example, the generation time can only take positive values, however for diseases for which infectiousness occurs before the onset of symptoms, negative values of the serial interval might be possible.Since the onset of symptoms occurs after the time of infection, considering the incidence of symptomatic cases instead of the incidence of infection also leads to delays in estimates of the 
reproduction number.This is unavoidable in most cases as surveillance systems typically do not record the timings of new infections.However, for analyses carried out retrospectively, if the distribution of the incubation period is known, then it is possible to eliminate this time lag by back-calculating the likely infection times from the times at which symptoms were recorded.The instantaneous reproduction number can then be inferred from these back-calculated data.We note that this might contribute uncertainty in reproduction number estimates if there is significant variability in the time between infection and detection of symptoms between individuals, as is the case for Ebola.An important feature of our method, like previous ones, is that, if the proportion of cases that go unreported remains constant throughout an outbreak, estimates of the reproduction number are unaffected by underreporting.However, reporting can vary over time within an outbreak.An interesting future extension of our approach might be accounting for uncertainty in the precise numbers of incident cases at each timestep.If information is available to quantify changes in reporting over time, this would permit correction to allow for temporal variation in underreporting, which might otherwise be interpreted as variation in the reproduction number.Underreporting has hindered estimation of disease burden for a number of diseases including dengue, yellow fever and Ebola.Other additions to our work might involve allowing for reporting delays.In conclusion, we have extended the commonly used approach of Cori et al. for estimating the time-dependent reproduction number to include important new features.We hope that our improved modelling framework is sufficiently flexible that it will be used by epidemiologists and policy-makers in a wide range of future outbreak response scenarios.This should be facilitated by our R package and our online interactive user-friendly interface.All authors except JES, ZNK, EM and TJ contributed to extending and linking the CoarseDataTools and EpiEstim R packages during the Hackout 3 meeting; JES, RNT and AC developed the software application; RNT and AC wrote the manuscript; All authors revised the manuscript; All authors discussed the research and approved the final version of the manuscript.The Hackout 3 meeting at the Institute for Data Science was funded by the NIHR Modelling Methodology Health Protection Research Unit and the MRC Centre for Outbreak Analysis and Modelling.Additional funding for JES was obtained through the RECON project at the NIHR Modelling Methodology Health Protection Research Unit.RNT thanks Christ Church for funding his research via a Junior Research Fellowship.SC acknowledges financial support from the AXA Research Fund, the Investissement d’Avenir program, the Laboratoire d’Excellence Integrative Biology of Emerging Infectious Diseases program, the Models of Infectious Disease Agent Study of the National Institute of General Medical Sciences and the INCEPTION project.AC acknowledges joint centre funding from the UK Medical Research Council and Department for International Development, as well as funding from the United States Agency for International Development.The results of this work do not necessarily reflect the views of USAID or any other funding body.
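As a concrete illustration of the workflow described above, the sketch below shows how the EpiEstim 2.x interface can be called with an incidence table that separates local from imported cases. The incidence values are invented for illustration, and a parametric serial interval is used here (mean 6.8 days, standard deviation 4.1 days, the MERS values quoted in the text); for the joint serial-interval estimation described in this paper, method = "si_from_data" with interval-censored infector-infectee data would be used instead.

```r
## Minimal sketch (assuming the EpiEstim 2.x interface): estimate the
## instantaneous reproduction number R_t from daily case counts,
## distinguishing local transmission from imported cases.
library(EpiEstim)

# Hypothetical daily incidence; EpiEstim expects local incidence of 0 on day 1
incid <- data.frame(
  local    = c(0, 1, 3, 5, 8, 12, 15, 14, 10, 9, 7, 5, 4, 3, 2),
  imported = c(2, 1, 1, 0, 1,  0,  1,  0,  0, 1, 0, 0, 0, 0, 0)
)

# Gamma serial interval with mean 6.8 days and sd 4.1 days, estimated over
# the default weekly sliding windows
res <- estimate_R(
  incid,
  method = "parametric_si",
  config = make_config(list(mean_si = 6.8, std_si = 4.1))
)

head(res$R)   # posterior mean and credible intervals of R_t per time window
plot(res)     # incidence, estimated R_t and the serial interval distribution

# To estimate the serial interval jointly from interval-censored
# infector-infectee pairs, use method = "si_from_data" and supply si_data.
```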
Accurate estimation of the parameters characterising infectious disease transmission is vital for optimising control interventions during epidemics. A valuable metric for assessing the current threat posed by an outbreak is the time-dependent reproduction number, i.e. the expected number of secondary cases caused by each infected individual. This quantity can be estimated using data on the numbers of observed new cases at successive times during an epidemic and the distribution of the serial interval (the time between symptomatic cases in a transmission chain). Some methods for estimating the reproduction number rely on pre-existing estimates of the serial interval distribution and assume that the entire outbreak is driven by local transmission. Here we show that accurate inference of current transmissibility, and the uncertainty associated with this estimate, requires: (i) up-to-date observations of the serial interval to be included, and; (ii) cases arising from local transmission to be distinguished from those imported from elsewhere. We demonstrate how pathogen transmissibility can be inferred appropriately using datasets from outbreaks of H1N1 influenza, Ebola virus disease and Middle-East Respiratory Syndrome. We present a tool for estimating the reproduction number in real-time during infectious disease outbreaks accurately, which is available as an R software package (EpiEstim 2.2). It is also accessible as an interactive, user-friendly online interface (EpiEstim App), permitting its use by non-specialists. Our tool is easy to apply for assessing the transmission potential, and hence informing control, during future outbreaks of a wide range of invading pathogens.
260
Efficacy and safety of a new intradermal PCV2 vaccine in pigs
Porcine Circovirus type 2 is the causative agent of the “Post-weaning Multisystemic Wasting Syndrome”, but is also involved in a number of other disease syndromes which have been collectively named Porcine Circovirus Diseases .The most pronounced PCVDs are Porcine Respiratory Disease Complex, Porcine Dermatitis and Nephropathy Syndrome, enteritis, reproductive failure, granulomatous enteritis, congenital tremors and exudative epidermitis.Subclinical PCV2 infections are characterized by poor growth performance .Although intramuscular vaccines against PCV2 are routinely used in the pig industry, no intradermal vaccines have been available until now.Intradermal vaccination has the advantage of targeting antigen presenting cells in the epidermis in close proximity to skin-draining lymph nodes .Combined with needle free administration, ID vaccination is also more animal friendly and prevents accidental transmission of pathogens caused by reusing needles as well as broken needles in the muscle.The objective of the present studies was to evaluate the safety and efficacy under laboratory and field conditions of a new intradermal vaccine that is based on Porcilis® PCV and is named Porcilis® PCV ID.In addition, concurrent use with Porcilis® M Hyo ID ONCE was also investigated.A vaccine containing inactivated, baculovirus-expressed ORF2 antigen of PCV2 and a vaccine containing Mycoplasma hyopneumoniae cells, both adjuvanted with an oil-in-water emulsion, Xsolve® were tested.The vaccines were administered with the IDAL® injector either alone or concurrently as a single 0.2 ml dose to 3 week old piglets according to the manufacturer’s instructions.All laboratory studies and sample testing were conducted in compliance with GLP, while all field studies were conducted in compliance with GCP.All laboratory studies were performed after the approval of the Ethical Committee for Animal Experiments of MSD Animal Health.Two groups of healthy SPF pigs were either vaccinated with Porcilis® PCV ID at 19–21 days of age or injected with phosphate buffered saline.The piglets were monitored daily for abnormal systemic and local reactions until 42 days after vaccination.Body weight was recorded on the day of vaccination, and at 21 and 42 days post vaccination.Rectal temperature was recorded one day before vaccination, just before vaccination, 4 h after vaccination and daily for four days.Both at 14 and 28 days post-vaccination, 5 animals from the vaccinated group were sacrificed for examination of the injection site.The remaining animals were subjected to the above described procedure at 42 days post-vaccination.A GCP field safety study was performed according to a controlled, randomized and blinded design in three commercial pig farms in The Netherlands.In each farm, approximately 90 healthy 18–24 days old piglets were allocated randomly to one of three groups.The pigs in group 1 were vaccinated with Porcilis® PCV ID, the pigs in group 2 with Porcilis® PCV ID and Porcilis® M Hyo ID ONCE concurrently, the piglets in group 3 remained untreated.The piglets were observed for immediate reactions during or immediately after vaccination and general health one day before vaccination, at vaccination, 1 and 4 h after vaccination and daily for 28 days.The injection site was examined by palpating for local reactions at 1 and 4 h after vaccination and daily for 28 days.Rectal temperature was measured one day before vaccination, just before vaccination, 4 h after vaccination and daily for 4 days.All study piglets were weighed 
individually at admission and on day 21.The onset of immunity and duration of immunity for Porcilis® PCV ID alone or Porcilis® M Hyo ID ONCE alone, and for concurrent use of Porcilis® PCV ID and Porcilis® M Hyo ID ONCE were evaluated in experimental PCV2 or M. hyopneumoniae challenge studies; the experimental design of these studies is summarized in Table 1.In each experiment, 3 week old pigs, maternally-derived antibody positive for PCV2 and free of M. hyopneumoniae, were randomly divided into groups at the time of vaccination.Blood samples were taken just before vaccination, between vaccination and challenge, at the time of challenge and 1, 2 and 3 weeks after challenge.Fecal swabs were collected at the time of challenge and 1, 2 and 3 weeks after challenge.At 5 or 26 weeks of age, pigs were challenged intranasally with a recent Dutch PCV2 field isolate.Three weeks after PCV2 challenge, all pigs were necropsied and the inguinal lymph nodes, tonsil and lung were collected.Blood samples, fecal swabs and tissue samples were tested for quantification of the PCV2 viral load by qPCR.In addition the blood samples were also tested for the presence of PCV2 and M hyo antibodies.M. hyopneumoniae challenge was performed at 6 or 25 weeks of age intratracheally on two consecutive days with 10 ml of a culture of a Danish field isolate containing ±107 CCU/ml.Three weeks after challenge, the pigs were necropsied to evaluate lung lesions which were scored according to Goodwin & Whittlestone as previously described ; the maximum score is 55.During the studies, pigs were observed daily for general clinical abnormalities.Two combined GCP field efficacy and safety studies were performed according to a controlled, randomized and blinded design in two Hungarian pig herds with both an M. hyopneumoniae and PCV2 infection.Healthy three week old suckling piglets were allocated randomly, within litters, to treatment groups of approximately 600 or 330 piglets each.The pigs in group one were vaccinated intradermally with Porcilis® PCV ID, the piglets in group two were vaccinated intradermally with Porcilis® PCV ID and Porcilis® M Hyo ID ONCE concurrently, the piglets in group three were vaccinated intradermally with Porcilis® M Hyo ID ONCE and the piglets in group four remained untreated.The primary efficacy parameters were mortality, M. hyopneumoniae-like lung lesions at slaughter, PCV2 viraemia and the average daily weight gain during finishing.Also, serological response following vaccination or field infection was measured.The pigs were weighed individually at time of vaccination, transfer to the finishing unit and just prior to slaughter.Medication was recorded and pigs that died during the studies were examined post-mortem to establish the cause of death.Forty to 60 piglets per treatment group were selected for blood sampling approximately every 3 weeks.The normal routine at the farm of study A included sending the animals to several different slaughterhouses.This routine did not allow for scoring M. hyopneumoniae-like lung lesions, which was therefore not included in the study design.For study B all M. hyopneumoniae-like lung lesions were scored at the same slaughterhouse for all piglets.Although safety was not the primary objective of these studies, animals were observed at time of vaccination and, as a group, at 4 h after and 1, 4, 7, 14, 21 and 28 days after vaccination.For M. 
hyopneumoniae, a commercial ELISA was used according to the manufacturer’s instructions.Results were expressed as negative, positive or inconclusive.For PCV2, an in-house ELISA was performed as previously described .Quantification of the PCV2 viral load in serum, lymphoid organs, lung and fecal swabs were performed by qPCR as previously described for study 2 .In the lab studies the area under the curve of the qPCR data for the serum samples and fecal swabs collected after PCV2 challenge were calculated by the linear trapezoidal rule and analyzed by either the Wilcoxon Rank Sum test or the Kruskal Wallis test.Lung lesion scores in challenge experiments and the qPCR data of inguinal lymph nodes, lungs and tonsils were also analyzed by either the Wilcoxon Rank Sum test or the Kruskal Wallis test.In the field studies, the AUC data were ranked before analysis using mixed model ANOVA with vaccination group as fixed effect and production batch as random effect.Mortality was compared between treatment groups using the Cochran Mantel Haenzel method with production batch as the classification variable.For the pairwise comparisons the Genmod procedure was used.Lung lesion scores in the field study were compared between the groups with a mixed model ANOVA.The average daily weight gain was compared between the groups via a mixed model ANOVA.Vaccination group and gender with appropriate interactions were included as fixed effects and sow and production batch as random effects.The body weight at admission was included in the model as a covariate.The results of the safety study using Porcilis® PCV ID alone, are summarized in Table 2.A total of 95% of the animals developed injection site reactions with an individual maximum and mean size of 2.8 cm and 1.1 cm, respectively.These local reactions were biphasic with peaks on day 1 and days 13/14 following vaccination.Local reactions were never scored as painful.At 14 days post-vaccination, the subcutaneous adipose tissue appeared thickened in 3 of the 5 animals necropsied and appeared reddish-brownish colored in 5 of 5 animals.After 28 days post-vaccination, no injection site reactions were observed macroscopically at dissection.At 4 h after vaccination, the rectal temperature of vaccinated animals was comparable with the control animals.No systemic reactions were detected.Results of the safety study, using Porcilis® PCV ID alone, or concurrently with Porcilis® M Hyo ID ONCE are summarized in Table 3.Reactions during or immediately after vaccination were not observed.A deviation in general health was occasionally observed in all groups with no differences in the frequency between the vaccinated groups and the control group.Up to 93–95% of the animals, in both the single and concurrent group, developed local reactions.The large majority of animals with local reactions were ⩽1 cm and the largest reaction observed, <2% of the reactions measured, were 3.0 cm.Local reactions disappeared around day 21–28 for the majority of the animals in both groups.All had disappeared 50 days after vaccination.Local reactions were never scored as painful.The mean temperature profiles and the ADWG were not significantly different between treatment groups.In the field efficacy and safety study A no local reactions were observed.The maximum incidence of local reactions caused by Porcilis® PCV ID was observed 14 days pv in study B: 2% in the PCV and 8% in the PM group.The maximum incidence of local reactions caused by Porcilis® M Hyo ID ONCE was observed 21 days pv in study B: 11% in 
the PM group and 8% in the M group.The maximum size of the local reactions in the PCV group were 3 cm, in the PM group 4 cm and 6 cm, and in the M group 4 cm.No clinical abnormalities that could be related to treatment were observed in the periods between vaccination and challenge.However, in the PCV2 OOI study one vaccinated pig in the PM group was found lame during the study and as a result the animal was euthanized for animal welfare reasons.Although clinical signs were not observed following PCV2 challenge, qPCR data of the various samples confirmed infection.Mean viral loads in lymphoid tissues and lung were in general between 3 and 4 log10 lower in the vaccinated pigs, and the differences between the groups were statistically significant.Compared to the control animals, the viral load of the vaccinated animals was significantly reduced by 70–100% in serum and 81–100% in fecal swabs.Following vaccination, a clear antibody response against PCV2 was observed, as shown by an increase in antibody titer in the PCV2 OOI study followed by a slower decline resulting in average titers of around 4.0 log2, compared to the control group, in the PCV2 DOI study.In contrast, the control pigs in the DOI study had declining maternal antibody titers and remained serologically negative until the time of challenge.Following challenge, vaccinates developed an anamnestic response and the animals in the control group started to seroconvert.The antibody titers between vaccinated and control groups were significantly different from the time of vaccination onwards, until the time of necropsy.In the M. hyopneumoniae challenge experiments, almost all the control animals seroconverted to the challenge infection.At necropsy, 3 weeks post challenge, the median M. hyopneumoniae-induced lung lesions in the vaccinated groups were significantly reduced by 92–100% and 77% compared to the controls.The PCV2 serological profiles of the pigs in both field studies are indicative of a PCV2 infection in both studies, which was confirmed by the detection of PCV2 at low amounts in the control animals at 10 wpv and 4 wpv, respectively.Compared to the control animals the viral load in serum of the vaccinated animals was significantly reduced by >97% and >88%.Vaccination with Porcilis® PCV ID induced more than 44 g higher ADWG during finishing and more than 25 g higher ADWG during the entire study period than in the control animals.Concurrent vaccination with Porcilis® M Hyo ID ONCE induced over 51 g and over 29 g higher ADWG during finishing and during the entire study period respectively.Vaccination with Porcilis® M Hyo ID ONCE did not result in an increase in ADWG.In study A, where mortality was a primary parameter, mortality was significantly reduced by 5% in the vaccinated groups, compared to the controls.In study B, where M. 
hyopneumoniae-like lung lesions at slaughter were a primary parameter, the lesions were significantly reduced in the PM and M vaccinated group, compared to the controls and the PCV vaccinated group.Porcilis® PCV ID was developed specifically for intradermal administration using a needle free and intradermal injector such as the IDAL.The needle free administration improves animal health and food safety as there is no risk of needle breakage or transmission of disease by re-use of needles.The IDAL allows for a dose volume of 0.2 ml, which is one tenth of the volume of Porcilis® PCV, the intramuscular vaccine on which Porcilis® PCV ID is based.Compared to intramuscular injection, vaccination into the dermis has the advantage of the presence of dendritic cells at the site of administration and the close proximity of skin-draining lymph nodes, resulting in a direct response to the antigen in the vaccine.Therefore, the adjuvant and/or antigen concentration of an intradermal vaccine can be lower to achieve a comparable or even more effective efficacy .In the case of Porcilis® PCV ID the adjuvant is only approximately 25% of that of Porcilis® PCV and still induces both a good immune response and no severe local reactions.The presented results support that the new intradermal PCV2 vaccine can safely be administered to 3 week old piglets.The local reactions seen in the present studies, although common, were small, transient and never painful.Local reactions were observed commonly when palpating individual animals, but rarely, probably due to their small size, when only observing animals from a distance.The experimental challenge studies indicate that the onset of immunity against PCV2 infection occurs as early as 2 weeks post-vaccination and lasts for at least 23 weeks.Following vaccination in the PCV2 DOI study, a decline in PCV2 antibody titer was measured until 17 wpv after which the titers remained level at 4.0 log2 until the end of the study.Although this is a relatively low mean titer, the results after challenge show that the animals were still protected against PCV2, which could also be suggestive of the induction of cellular immunity by intradermal vaccination .Accordingly, a single vaccination of animals at 3 weeks of age may protect fattening pigs against PCV2 infections during the production life cycle.In addition the challenge studies indicate that Porcilis® PCV ID can be given concurrently with Porcilis® M Hyo ID ONCE, as demonstrated by comparable results between single and concurrent vaccination.Concurrent vaccination is both user and animal friendly as the animals only need to be handled once to protect against two major swine pathogens.The results obtained during the challenge experiments were confirmed in the field efficacy trials in the presence of PCV2 and M. hyopneumoniae infections: strong reductions in PCV2 viral load and M. hyopneumoniae-like lung lesions were measured, resulting in reduced mortality and weight loss.This is in line with what has been observed with intramuscular PCV2 and/or M. hyopneumoniae vaccines .In the field efficacy trial B, Porcilis® M Hyo ID ONCE improved the ADWG with 12 g/day compared to the controls.This result did not reach statistical significance and is contrary to previous results .A possible explanation could be the late onset of the M. 
hyopneumoniae infection, as evidenced by the absence of a serum response in the control group until 20 wpv. Under the conditions of this farm, the highest reduction of weight loss was measured when vaccinating concurrently with Porcilis® PCV ID and Porcilis® M Hyo ID ONCE, supporting the importance of addressing both PCV2 and M. hyopneumoniae infections when they are present simultaneously on a farm. In conclusion, the study results support that a one-dose PCV2 vaccine administered intradermally with a needle free injector is safe and provides protection until at least 23 weeks post vaccination, which is a typical slaughter age.
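To illustrate the viral-load comparison outlined in the statistical analysis above (area under the curve of the log10 qPCR data by the linear trapezoidal rule, followed by a Wilcoxon Rank Sum test for two groups), a minimal sketch is given below. All sampling days, group sizes and viral-load values are hypothetical placeholders and are not data from these studies; only the general procedure mirrors the methods described.

```python
# Minimal sketch (hypothetical data, not the studies' results): AUC of log10
# PCV2 loads per pig by the linear trapezoidal rule, then a two-group
# Wilcoxon Rank Sum comparison of vaccinated vs. control AUCs.
from scipy.stats import ranksums

def trapezoidal_auc(values, times):
    """Area under the curve by the linear trapezoidal rule."""
    auc = 0.0
    for i in range(1, len(times)):
        auc += 0.5 * (values[i] + values[i - 1]) * (times[i] - times[i - 1])
    return auc

days = [0, 7, 14, 21]  # hypothetical sampling days post-challenge

# Hypothetical log10 viral loads in serum, one list per pig
vaccinated = [[2.1, 2.8, 2.3, 1.9], [1.8, 2.5, 2.0, 1.7], [2.0, 2.6, 2.2, 1.8]]
controls   = [[2.2, 5.9, 6.4, 5.8], [2.0, 5.5, 6.1, 5.6], [2.3, 6.2, 6.6, 6.0]]

auc_vac = [trapezoidal_auc(pig, days) for pig in vaccinated]
auc_ctl = [trapezoidal_auc(pig, days) for pig in controls]

# Wilcoxon Rank Sum test for two groups (a Kruskal-Wallis test would be used for >2 groups)
stat, p_value = ranksums(auc_vac, auc_ctl)
print(f"AUC vaccinated: {auc_vac}")
print(f"AUC control:    {auc_ctl}")
print(f"Wilcoxon rank-sum p-value: {p_value:.3f}")
```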
The safety and efficacy of a new intradermal one-dose vaccine containing Porcine Circovirus type 2 (PCV2) antigen - Porcilis® PCV ID - were evaluated in laboratory studies and under field conditions. In addition, concurrent use with an intradermal Mycoplasma hyopneumoniae vaccine - Porcilis® M Hyo ID ONCE - was evaluated. Vaccination with Porcilis® PCV ID resulted in small, transient local reactions in a high percentage of the vaccinated animals, with no temperature increase. In both the onset of immunity and duration of immunity challenge studies with PCV2 or M. hyopneumoniae, significant reductions of the PCV2 load in lymphoid tissue, lungs, serum and fecal swabs, and of M. hyopneumoniae-induced lung lesions, were observed. In two field trials on two different farms where both PCV2 and M. hyopneumoniae were present, vaccination of 3 week old piglets with Porcilis® PCV ID and/or Porcilis® M Hyo ID ONCE resulted in a significant reduction of PCV2 viraemia, mortality and lung lesion scores at slaughter. In addition, a significant positive effect on average daily weight gain (between 44 and 59 g/day) in the finishing phase was observed. The results support that this new intradermal vaccine is safe and efficacious against PCV2 and may be used concurrently with Porcilis® M Hyo ID ONCE.
Identifying and overcoming barriers to onsite non-potable water reuse in California from local stakeholder perspectives
In response to frequent droughts, mandatory water reductions, and the increasing demand on water and wastewater systems in California, onsite water reuse has gained attention as a way to meet changing needs and reduce potable water demand.Working in combination with centralized water and wastewater systems, onsite non-potable water systems have been shown to reduce overall potable water consumption and contribute to a sustainable water supply.Onsite graywater reuse, for example, in a scenario that also included a centralized blackwater system, has been shown to reduce both potable water and electricity consumption by up to 49% for single-family applications.Under certain circumstances, decentralized systems can be more cost-effective and energy efficient than their centralized counterparts largely due to reduced transport distances and infrastructure requirements, and their ability to separate sources and treat to ‘fit-for-purpose’ levels as opposed to generic high level treatment.Additionally, ONWS can be desirable as a means of increasing water security as water sources become increasingly strained and their uses regulated,In spite of the possible benefits of ONWS, uptake has been slow and many states lack clear regulations for these types of systems.Some potential reasons for this lag in growth is that local authorities may not have the necessary knowledge, are unwilling to regulate these systems, or they may be lacking the needed resources to do so.Additionally, there has been hesitancy on the part of some water and wastewater utilities to embrace alternate water sources due to public health concerns, potential loss of revenue, and reduction of wastewater and its ability to carry solids.Other cited impediments include inconsistent graywater definitions, water quality requirements, and storage and irrigation restrictions.Also referred to as decentralized water reuse, onsite reuse is defined as the “collection, treatment, and reuse of wastewater at or near the point of generation”.Onsite reuse systems utilize alternate water sources, such as graywater, rainwater, stormwater, and blackwater, for non-potable applications such as cooling, toilet-flushing, industrial processing, irrigation, and others, thus all such systems can be referred to with the common name of onsite non-potable water systems.The term onsite reuse refers to the main function of onsite non-potable water systems.For the purposes of this research, ONWS will be used in reference to commercial/industrial and non-blackwater systems since blackwater systems are regulated under longstanding state regulated Title 22 regulations and residential systems typically have lower water saving potential than commercial or industrial installations.To date, most research on the challenges facing water reuse has been focused on centralized recycled water systems and public opinion.This study takes a different approach, exploring the challenges specific to onsite systems from local stakeholder perspectives including regulating entities, system designers, consultants, and engineers.This research is necessary and timely as onsite non-potable water reuse is an emerging field for which guidance and regulation continues to be developed.This study was undertaken to discern and understand challenges to widespread adoption of commercial onsite alternate water source reuse, and, more crucially, to uncover efforts to address these obstacles and inform future steps to meaningfully reduce these challenges.In order to accomplish these objectives, 
researchers convened a technical advisory committee and collected survey data, which resulted in: 1) a ranked list of the challenges facing specific groups and regions in California, 2) a description of knowledge dissemination methods that have been employed and how relevant individuals receive knowledge and resources, and 3) targeted, actionable solutions to address the challenges.Historically, water reuse in California has been predominately performed on the centralized level, but not without its share of difficulties and adversity.California, recognizing the probable benefit of using water recycling as a means to augment demands for freshwater, created the Recycled Water Task Force in 2001 to examine the potential of centralized water recycling as well as the obstacles.The task force identified that issues with public health, cost, public acceptance, and institutional barriers needed to be overcome in order to fully utilize recycled water for a full host of applications.In some instances these barriers, such as the absence of public acceptance, for example, were significant enough that they prevented projects from moving forward.Since that time, progress has been made to address the list of impediments to centralized water recycling generated by the task force.A recent survey of centralized water reuse managers cited many positive drivers for their reuse program with only 26% citing negative perceptions for recycled water.In California, onsite non-potable water systems are regulated almost entirely by the California Plumbing Code, chapters 15, 16, and 16a, which are interpreted and enforced at the local level.Currently, the CPC allows for the use of alternate water sources for both indoor and outdoor applications, but leaves water quality, treatment, monitoring and reporting requirements, as well as specific end uses, for the local regulatory authority to decide.However, if the alternate water source includes blackwater, detailed recycling requirements are found in Title 22, Chapter 3 of the California Code of Regulations, which is regulated by regional water quality control boards using uniform statewide criteria.Most water reuse is and has been performed on the centralized municipal level in part due to rigorous Title 22 requirements, such as daily coliform and continuous turbidity monitoring.The addition of non-potable reuse to the plumbing code over the past decade allowed other alternate water sources, such as graywater, to be regulated differently than blackwater and instead adhere to monitoring and reporting requirements as determined by the local authority having jurisdiction.To address the challenges facing ONWS, national organizations such as The Water Research Foundation and the US Water Alliance, which convened a National Blue Ribbon Commission for Onsite Non-Potable Water Systems, have created multiple documents targeted at regulators, utilities, and onsite design professionals and consultants.The William J. 
Worthen Foundation has also issued its own resource specifically for onsite water systems and designers.Table 1 lists several recently released documents targeting onsite water regulation and implementation.These resources, along with the recent passage of California Senate Bill 966, which aims to standardize water quality and monitoring requirements for onsite non-potable water reuse systems throughout the state, all aspire to increase standardization and expand awareness about ONWS.Two regions in particular in California have made progress toward advancing onsite reuse.For example, San Francisco, a region where the city and the county share the same jurisdictional area, has created a dedicated Non-Potable Water program developed to provide permitting guidance to adopters of onsite systems and to require onsite water systems for new construction of buildings over 250,000 square feet as per local ordinance since 2015.As a result of this ordinance, San Francisco now has several buildings with non-potable water systems using rainwater, bay water, graywater, condensate, stormwater, and foundation drainage to achieve potable water reductions between 8% and 65%.Likewise, Los Angeles County has also developed its own Guidelines for Alternate Water Sources, although there is a lack of coordination between the county and its cities since they operate independently.Outside of these regions, however, there are few locations in California with clear direction, leaving regulators and system designers and consultants alike unsure how to proceed.In order to assess the real challenges and barriers currently faced by onsite water systems in California, this study utilized a technical advisory committee and an electronic exploratory survey.Together these methods formed the basis for this study, illuminated the current state of beliefs and knowledge held by onsite non-potable reuse stakeholders, and guided the recommendations of appropriate solutions.A technical advisory committee, composed of professionals from different perspectives who are all actively involved with onsite water reuse in California, was created to offer insight and guidance for this project.Nine members participated in the meetings, six from the regulatory perspective and three engineers.The TAC met a total of three times.The first meeting was focused on establishing common challenges onsite reuse systems face and reviewing survey questions.The second meeting was designed to discuss solutions and changes that could be made to reduce or eliminate the listed impediments to ONWS.The final meeting was used to develop an action plan and review the overall findings of the survey and the research project as a whole.TAC input was critical to creating the final list of top ten challenges, reviewing the survey, and developing the recommended solutions that came out of the survey results and are discussed in this paper.Generally, exploratory survey methods are used for new topics and when a survey population is difficult to identify.Given the newness of this field and that many local regulatory programs have not been established, it was difficult to structure the survey so as to represent all current and prospective regulators and system design professionals.As such, this survey and its results were meant to provide new insight into the topic without necessarily enabling statistical inferences about the sector as a whole.An opt-in internet based survey was selected given its ease of distribution and low cost.The survey was composed of 12 questions 
sent out by email to individuals in California using Qualtrics software between August 21 and September 18, 2018.Survey questions were created based on TAC discussions and reviewed prior to piloting and full scale distribution.The survey provided standardized definitions at the beginning and asked respondents about their work affiliations and locations.Questions were related to their personal beliefs about ONWS, their knowledge of onsite reuse, their familiarity with existing resources, and their perception of challenges preventing the growth of ONWS.The goals of the survey were to identify common challenges and barriers impeding onsite water reuse, assess existing efforts to address these challenges, and determine who is affected by these challenges and why these challenges exist.The information gathered through the survey was used, along with further research, to develop targeted solutions to the challenges.The results of the survey were filtered according to stakeholder group and location affiliation and resource familiarity so that different groups could be compared using a chi-square analysis to find significant differences between groups.Survey recipients were selected to represent the different stakeholder perspectives of onsite non-potable water reuse, including the regulatory side defined as city, county, and state regulators, and the system side, defined as system designers, consultants, and engineers.Email addresses for the regulators were found using publicly available websites for all cities and counties for which such contacts could be found.Contacts for system-side entities were found by contacting publicly listed companies and individuals that work with onsite non-potable water systems for major cities in California.Chain-sampling was allowed: survey recipients were encouraged to pass the survey along to the most appropriate respondent within their agency as well as anyone else in their organization who might deal with onsite water reuse.This method is commonly employed for unknown populations and can add a degree of bias to the sample results.The inherent bias produced by this method means those who were forwarded the survey were likely more knowledgeable and involved in onsite water reuse than if they had been selected randomly.As such, the actual knowledge and resource familiarity in this sector may be less than reported by this study.Ten common challenges preventing the growth of onsite non-potable water reuse were identified from discussions with the TAC.Following TAC review, researchers decided that while not an exhaustive list of all challenges faced by ONWS, the following are the most significant and universal issues in terms of preventing growth:As outlined in Senate Bill 966, in order to permit an ONWS, a local non-potable water regulatory program must be created and a local ordinance passed.If there is no established regulatory program or the local regulators do not know how to appropriately permit and regulate onsite systems, interested parties in the area are blocked from moving forward with onsite systems.Onsite systems can be expensive and are not always a cost-effective option for facilities.In general, cost is a function of other factors.Retrofitting versus installing ONWS in new construction, for example, has a significant impact on cost.Likewise, water rates, technology, and the market also play a role in determining the relative expense of systems.A general lack of knowledge of alternate water sources is often associated with a lack of knowledge about 
reuse potential as well as appropriate applications.If individuals do not know about ONWS and related options, they are unlikely to install a system or consent to regulate it.Water reuse has often been met with mixed public perception and concerns about health risks, particularly for indoor as opposed to outdoor use.If the public has adverse associations with alternate water sources, either due to health concerns or inconvenient requirements and operation, then public jurisdictions and prospective ONWS owners are less likely to embrace onsite water reuse practices.Given the lack of clarity with regards to authorities having jurisdiction per plumbing code specifications, regulators outside of building departments are commonly overlooked or not consulted for new ONWS permits.This lack of coordination between regulators in localities where the building department alone issues permits often precludes the review and oversight environmental health agencies would provide if they were involved in the process.When roles are unclear, it can be difficult to discover the appropriate AHJ and navigate an ill-defined process.Some utilities have not embraced decentralized treatment for a variety of reasons, including concerns with reduced wastewater flows.In some areas, utilities may act as a strong lobbying force against decentralized reuse, impacting statewide legislation or their local jurisdiction.Creation of a local program to regulate ONWS requires not just knowledge about regulations, but the creation of water quality standards, program rules, inspection forms, monitoring and reporting practices, enforcement criteria, and more.Development of these internal documents and procedures requires knowledge building, person-hours, and funding, which can be in limited supply.Establishing appropriate risk-based standards for every source and potential non-potable application can be an onerous task for local authorities, but blanket requirements, such as NSF 350, can either neglect risk or make systems unnecessarily protective and prohibitively costly to operate.Monitoring and reporting for ONWS are important in maintaining safe-systems and demonstrating system benefits at both individual and aggregated regional levels, all of which are critical for positive public opinion.If not selected appropriately by the local regulator, however, monitoring requirements can become a significant operating burden that prevents the implementation of a system.In most regions, there is little support or direction provided for the design of onsite non-potable water reuse system.The plumbing code is itself mostly limited to restrictions and for many specifications, such as water quality, it leaves the determination up to the local AHJ.System designers may need to identify the correct local authority, if one exists, and assertively seek answers about requirements.The survey was sent to approximately 550 recipients: about 200 opened and began the survey and 114 completed and submitted the survey.The survey results presented in the following sections are based on these responses.Of the total respondents, 52% identified as regulators at varying levels; 40% identified themselves on the system side, often within multiple categories; and 8% reported themselves as ‘other’, including responses from professionals representing water utilities, non-profits, the public education realm, and academia.Spatially, 67% of California counties were represented in this survey, however, a large number of respondents were associated with Sonoma, Santa 
Clara, San Francisco, or Los Angeles county.The 10 challenges presented in Section 3.1 were included in the survey, but in some cases were broken apart, for instance, cost was split into the cost of permitting and the cost of the system itself.The survey asked respondents to categorize the challenges as significantly impacting, slightly impacting, or not at all impacting the growth of ONWS in California.A ‘no response’ was also available.Fig. 1 depicts the full results for all respondents.The two most significant challenges reported were the cost of the systems themselves and the absence of a knowledgeable and supportive local regulatory program, with over 80% of respondents claiming either significant or slight impacts to growth.While found to be less impactful, a large percentage of total respondents also believed that poor access to training and resources for regulators and limited public education are having a negative impact on the growth of onsite reuse.For the purposes of determining if these challenges were perceived similarly by regulatory and the system-side respondents, the results from Fig. 1 were filtered according to affiliation to determine how they compare.Fig. 2 lists the challenges in the same order as Fig. 1, but separates the percentages for the two groups and includes the p-values based on a chi-squared analysis comparing the responses from the two groups.As evidenced by a statistically significant p-value of less than 0.05, five negative impact beliefs were linked to whether or not a respondent identified as regulatory or system side: the absence of a local regulatory program, confusing permitting process, lack of resources for designers, permitting costs, and negative perceptions about water reuse.Results demonstrated that each group was more likely to believe that challenges outside of their scope are having a negative impact.For example, stakeholders from the system side were more likely to believe that permitting concerns, such as a confusing permitting process and permitting costs, are the reasons for slow uptake of onsite reuse.Conversely, regulators were more likely to blame the lack of resources for designers and negative public perceptions.To determine the underlying causes of these challenges, with the ultimate goal of recommending effective solutions, the online survey also asked respondents about their beliefs, knowledge, and resource familiarity.Responses were explored to better understand the sources of the challenges listed in Section 3.1.Results are presented in the following sections.The greatest disparity between regulatory and system-side respondents was found in their negative beliefs as shown is Fig. 3, numbers 1–7.Regulators were found to hold more negative perceptions than the system side, especially for the beliefs that ONWSs are a health risk, difficult to manage, and not aesthetically pleasing.In spite of these differences, the fraction of all respondents that held negative beliefs about ONWS was low relative to positive beliefs, at less than 40% for all cases.This indicates that while there may be some resistance to ONWS from the regulatory side due to management concerns, for example, most negative beliefs were not commonly held among those responding.With regard to two of the largest potential benefits of ONWS, numbers 8–9 in Fig. 
3, both regulatory and system-side respondents believed that indoor alternate water source reuse reduces potable water demand, but over twice as many respondents from the system side as compared to the regulatory side believed it can reduce overall energy consumption.This shows that while both sides believed there are benefits to be had by installing onsite systems, the system side held this belief more strongly than the regulatory side.The final three beliefs, numbers 10–12 listed in Fig. 3, were held by a percentage of both regulatory and system-side professionals, but are not necessarily true statements.Number 10, the belief that alternate water sources can be used for potable applications is not true in California given current regulations, however, nearly 24% of system-side respondents believed this to be true.The truth for numbers 11 and 12 regarding alternate water sources and applications depends on what local area jurisdictions decide to permit.While almost 30% of regulators believed only shower and laundry water can be reused indoors, rainwater, foundation drainage, and water from non-kitchen sinks are all acceptable alternate water sources per the CPC.Similarly, while over 45% of both regulators and system side respondents reported that toilet flushing and cooling are the only allowed indoor applications, other indoor applications such as industrial processing and washing are allowed.Both these results suggest that at least some fraction of those dealing with alternate water sources are either located in areas that restrict their use or they do not understand the full range of possibilities.These survey results imply generally that those who regulate or implement systems, may adhere to inaccurate beliefs.Such misconceptions might result in overprotective requirements, limited source options, and restricted applications, all of which would contribute to the challenges listed in Section 3.1 and reduce beneficial outcomes.While the resources listed in Table 1 are intended to address challenges facing ONWS, their impact depends on whether they reach the appropriate audience.Findings from the survey indicate that, in many instances, these resources are either not reaching their targeted audience or are not being read.Fig. 4a shows reported familiarity with the resources listed in Table 1.In all cases, greater than 50% of respondents had never heard of the resources, and, in all cases, less than 20% had read any.Breaking this down further by respondent affiliation, Fig. 4b shows that of those who had read or skimmed the resources, most affiliated themselves with the system side.Documents such as the “Step-By-Step Guide” and “Guidebook for Implementing Regulations”, which are meant specifically to help regulators create local programs, were being skimmed or read about twice as often on the system side as the regulatory side.Regulators may not be receiving these resources due to dissemination methods.Currently, the resources listed in Table 1 are posted online and discussed at conferences and webinars.Looking at Fig. 
4c, only about 40% of regulators get information about ONWS from conferences and a sizable fraction, 30.5%, do not receive any information at all.When trying to access certain audiences, it is critical to understand how they receive and digest information.Many of the challenges listed in Section 3.1 might arise from a lack of knowledge and understanding of alternate water sources and ONWS.As a means to determine if this is the case, the survey asked respondents to report their level of familiarity with ONWS.Seventy percent of system-side respondents believed themselves to be very knowledgeable about onsite non-potable reuse as opposed to only 38% from the regulatory side.This knowledge disparity might explain why more regulators than system-side respondents held negative beliefs as discussed in Section 3.4.1 and why regulators on the whole found the listed challenges, as shown in Section 3.3.1, to be more impactful than the system side believed them to be.This gap in knowledge on the regulatory side is understandable given that, for many local regulators, their primary job description encompasses many roles, whereas the system side’s primary focus is on water system design, development, and operation.In order to overcome this knowledge gap, resources have been created that specifically target regulators, but, as seen in Fig. 4, in many instances these resources are not reaching or being read by their targeted audience.To determine the impact that the resources have when they are read, Fig. 5 shows the percentage of respondents that had read at least one of the resources from Table 1 as compared to those that had not read any of the resources — in terms of their valuation of ONWS, their self-reported knowledge, and their inaccurate beliefs.Stakeholders that had read at least one resource were more likely to believe that onsite reuse is extremely important for California’s future.Additionally, those who had read the resources considered themselves more knowledgeable than those who had not, with a larger percentage ranking their knowledge higher than their counterparts on a 1 out of 10 scale.What is not apparent from these results is cause and effect: whether those who considered themselves very knowledgeable were more likely to read these resources, or those who had read these resources were more likely to consider themselves knowledgeable.Interestingly, those who read at least one resource were more likely to hold inaccurate beliefs than those who had not read any.In the case of the belief that alternate water sources can be used for potable applications, this may be due to efforts in California to implement direct potable reuse, which though not yet legal, has made progress.The belief that toilet flushing and cooling are the only allowable indoor applications could follow from the fact that these applications are the most frequently mentioned when discussing indoor reuse.Without further analysis definitive causes cannot be known.To further understand the impact of challenges, two areas of California were compared, one with developed non-potable water regulatory programs and the other without.San Francisco and Los Angeles counties were considered together since both have developed non-potable water programs.On the other side, Sonoma and Santa Clara counties were investigated together since both, per phone interviews, have notable demand from entities such as technology companies and wineries, but lack developed non-potable water programs and permit guidance.These four counties represented 
the largest number of respondents with an even spread of respondent types.Notably, there was a difference in self-reported knowledge between the two regions.In the counties with a developed local program, 90% of respondents considered themselves to be very knowledgeable about ONWS as opposed to 61% in the regions without a dedicated local regulatory program.When comparing resource familiarity, 45–60% of respondents from the areas with a program had read or skimmed the resources from Table 1, whereas between 15–42% from the areas without a program had read or skimmed the same resources.Considering the difference in knowledge and resource familiarity, there was also a difference in how each region ranked the challenges with the top five from counties with a local program being Confusing Permitting Process, Absence of a Local Program, Poor Coordination between AHJs, Cost, and Limited Resources to Operate.In the localities without a regulatory program, the top five challenges were ranked somewhat differently as Cost, Absence of a Local Program, Confusing Permitting Process, Negative Public Perceptions, and Poor Knowledge.The top challenges in each region show that even in areas where local programs exist, respondents felt like the absence of a local program and a confusing permitting process were preventing growth throughout the state.In places such as Sonoma and Santa Clara counties — where established non-potable water programs do not exist and there is less familiarity with ONWS and less exposure to resources — challenges such as negative perceptions about onsite water and limited education were deemed more impactful than they were in places with more familiarity and knowledge.This indicates that challenges can be location specific and effective solutions to these challenges must reflect this.Given the findings from the survey and the relationships between the challenges, several solutions were formulated in consultation with the TAC to address the top challenges, acknowledging the current state of knowledge and beliefs.Currently, existing resources for ONWS are housed in various locations, do not include accompanying trainings, and are not specific to California.If an organization, or a branch thereof, could fill the role of a dedicated hub, it would address several of the challenges affecting ONWS growth.For example, some functions could include housing resources, responding to questions, conducting training and certifications, and acting as a clearinghouse.Such activities would help to clarify the permitting process, make resources more accessible, and support the creation of local regulatory programs.Housing all of these activities in the same organization would send a clear and consistent message as well as increase the effectiveness and efficiency of resources and efforts aimed at supporting the growth of ONWS.This entity would, ideally, utilize existing organizations so as to leverage work that has already been accomplished.The success of this institution would depend on a diverse stakeholder group to represent many perspectives.Two specific functions this organization should facilitate are regulator trainings and technology certification.A recurring highly listed challenge for both regulators and the system side alike was the lack of training and resources for regulators.Fig. 4c demonstrates that most regulators are not getting their resources from websites or employers, nor are they reading existing resources, as per Fig. 
4b, indicating that a different approach is needed.Direct in-person trainings could relate the content of existing resources as well as emerging guidance from SB 966 in ways that would be readily accessible to regulators.In order to support this training and overcome the major hurdle of funding, trainings should be designed to qualify for continuing education units such that staff development funds could be utilized.Working within an existing training system would make the expansion of onsite non-potable water knowledge as seamless as possible.Shadow programs are another type of regulator training identified by the TAC as having been previously successful in their sector.Allowing regulators to see how another region or project is operating could provide important knowledge to less experienced regions as well as increase their confidence in reviewing their own systems.Overcoming this challenge with more diverse ONWS training resources would improve regulator knowledge and help support program development, thus helping to overcome another highly ranked challenge, the absence of a local program.Cost was the second highest ranked challenge significantly impacting the expansion of onsite water use and driving down demand.One reason these systems are considered expensive is due to the limited number of technologies competing in the ONWS market.Currently, only a handful of packaged technologies meets NSF 350 or Title 22 water quality standards.Creating certifications that match the risk-based water quality standards to be developed by the California State Water Resources Control Board under SB 966 would give ONWS stakeholders and regulators confidence in technology selection, preventing costly and overprotective requirements.Encouraging greater certification would also expand the options for prospective system owners, driving competition and eventually lowering costs.As an added benefit, increased certification to match risk-based standards would also help address negative public perceptions with ONWS such as health risk.Perhaps the most impactful solution to the listed challenges would be the creation of policies that require ONWS for new construction of a certain size or for certain alternate water sources, especially in regions where no centralized water recycling option exists.Implementation of such policies would not only catalyze the creation of local programs, it would also create demand and open up the ONWS marketplace to new innovation and competition thereby driving down cost.Alternatively, as a more gentle introduction, dual plumbing stub outs in new construction that enable the reuse of non-potable water indoors could be added to the CALGreen checklist as an optional item that could eventually be required.These policies have been implemented in a few localities, however, state level change may be difficult given unique regional and county level challenges across California.As existing challenges are overcome, policy development might become an appropriate next step.While not the highest ranked among impactful challenges, negative public perception and limited knowledge were considered by many to have at least a slight negative impact on the uptake of ONWS as per Fig. 
1.While increased education about ONWS has been shown to do little to improve public attitude, drawing attention to existing successful implementations in local areas can produce more favorable opinions towards the use of alternate water sources.From the survey, the most cited reason for low demand was an unfamiliarity with alternate water sources and the concept of reuse.Highlighting successful systems not only expands awareness of ONWS but can also improve public opinion and drive demand.This could be achieved via tours of existing systems, case study documents, or informational displays in lobbies and waiting areas about the water and energy savings of systems in the area.As advances are made in the ONWS field, future research could be conducted to more fully quantify the impact and opportunities of onsite systems.This study examines beliefs of regulators and the system side, but research assessing the opinions and beliefs of the general public towards alternate water sources and onsite reuse, especially in areas without developed non-potable water programs, could help expand understanding of ONWS challenges.As challenges are removed and more ONWS are installed, data could be collected from monitoring efforts that allow for the quantification of water and energy savings for existing and projected ONWS.This could then improve understanding of the impact these types of systems may have toward meeting water and energy goals, serving to not only support the expansion of ONWS but also provide data that could be relevant to regions outside of California.Additionally, this study focused on onsite reuse in a somewhat general way and did not separate out the individual alternate sources such as rainwater or graywater.ONWS stakeholders may have different views and experiences with different alternate water sources, which could be a valuable topic for future research focusing on specific challenges for different types of reuse systems.While this research was largely focused on California and its specific regulations, similar results might be expected in other regions in the United States and the rest of the world.High ranked challenges such as cost, for example, are likely to be experienced in any location where these types of systems are implemented.In order to expand the benefits of ONWS beyond California, additional research to assess the challenges faced by other regions with different regulations could be enlightening.Consultations with a technical advisory committee and a survey indicate that onsite non-potable water systems face many challenges that are preventing growth and uptake in California.The most significant challenges uncovered were the absence of a local regulatory program, the cost of onsite systems, limited public knowledge and education, and a lack of resources for regulators.These challenges appear to be driving not only low demand for these systems, but also making it difficult for potential system installers and owners to navigate a confusing permitting process.Survey results also showed that while many respondents on the system side, including designers, consultants, and engineers, were very familiar with onsite systems, many regulators had less familiarity.While resources exist to help address these challenges and provide guidance and knowledge to regulators, these resources are either not distributed effectively or the information they contain is not being conveyed.Even when such resources reached an appropriate audience, it appeared they were not being read.If the 
benefits of ONWS are to be realized, these challenges need to be overcome with targeted solutions that reflect the present reality. Given the current challenges and the state of knowledge and beliefs about ONWS, several solutions could be implemented that would reduce the difficulties facing onsite systems, including the creation and delivery of trainings for local regulators, the formation of a dedicated onsite non-potable water system organization, increased certification of onsite water technologies, policy changes, and a focus on existing successful systems.
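To make the per-challenge group comparison in the survey analysis concrete, the short sketch below applies a chi-square test of independence to a hypothetical contingency table of regulatory versus system-side impact ratings ("significant", "slight", "not at all"). The counts are invented for illustration and are not the survey's raw data; only the type of test matches the chi-square analysis described in the methods.

```python
# Illustrative sketch (hypothetical counts): chi-square test of independence
# comparing how regulatory and system-side respondents rated one challenge,
# mirroring the per-challenge chi-squared analysis described in the methods.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: respondent group; columns: counts of "significant", "slight", "not at all"
observed = np.array([[30, 20, 9],    # regulatory side (hypothetical)
                     [35,  8, 3]])   # system side (hypothetical)

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p_value:.3f}")
if p_value < 0.05:
    print("Rating of this challenge differs significantly between the two groups.")
else:
    print("No significant difference between groups for this challenge.")
```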
Onsite (a.k.a. decentralized) water reuse can reduce overall potable water demand and aid in meeting water reduction goals. In spite of clear benefits, onsite non-potable water systems (ONWS), specifically non-blackwater commercial systems, face many challenges that are preventing growth and expansion in California. This study utilized a technical advisory committee and a survey to identify the most significant challenges facing onsite water reuse systems, how these challenges affect ONWS stakeholders, and potential solutions at the state level. These methods indicated that the most prevalent challenges hindering the growth of ONWS appear to be the absence of a local regulatory program, system cost, poor access to training for regulators, and limited public education about alternate water sources. Survey results revealed several possible drivers for these challenges, including that informational and training resources are not adequately disseminated to target groups. The study concluded that the creation of trainings for regulators, the development of an organization dedicated to onsite systems, expanded technology certifications, policy changes, and highlighting existing systems might help overcome the challenges hindering growth and allow for greater expansion of onsite non-potable water systems throughout California.
Characterization of Vibrio cholerae neuraminidase as an immunomodulator for novel formulation of oral allergy immunotherapy
Oral immunotherapy is of special interest as an alternative therapeutic approach for allergy treatment as it can elicit systemic as well as mucosal immune responses.Its main advantage over subcutaneous injections is the painless application resulting in a higher patient compliance .For oral application, drugs have to fulfill several criteria, such as digestion stability and sufficient bioavailability by overcoming the first pass metabolism .One promising strategy is the use of poly acid particles as a carrier system.The particles protect encapsulated proteins against gastrointestinal degradation and are fully biocompatible being eliminated via the Krebs cycle with no polymer-specific immune response .However, the immune system is shifted towards a Th1 milieu depending on particle size .The efficacy of PLGA carriers can be enhanced targeting intestinal microfold cells by particle surface modification, as demonstrated for AAL .M-cells located within the follicle associated epithelium are responsible for uptake and transport of intact particulate structures to the mucosa-associated lymphatic tissue .Since they have a plasma membrane rich in specific carbohydrate residues, such as α-L fucose and monoganglioside , lectins binding to the M-cell glycocalyx may serve as specific targeters on carrier systems for antigen or drug delivery to the immune system .Therefore, they are perfect targets for allergen entry in oral immunotherapy.In previous experiments, functionalization of PLGA particles with AAL increased the transepithelial transport mainly via M-cells and induced a Th1-dominated immune response in peripheral blood mononuclear cells of allergic patients .Moreover, oral application of allergen-loaded, AAL-coated PLGA-MPs to birch pollen-sensitized mice beneficially modulated an established Th2 biased immune response .In this study, we aimed to exploit neuraminidase of Vibrio cholerae as a targeting agent due to its structural similarities with AAL .In this context the structure of Vibrio cholerae NA is of special interest, as the catalytic center is flanked by two identical lectin-like domains .Moreover, it has a high cleaving specificity removing sialic acid residues to unmask its main interaction partner GM1 .As it should, thus, be able to especially bind to M-cells, we aimed to identify whether NA is superior as targeting molecule with special focus on its immunomodulatory capacity, compared to wheat germ agglutinin, as a binding molecule targeting epithelial cells, and to AAL, which is known for an enhanced M-cell specific binding .Safety and suitability of oral application of NA as targeting molecule have been assessed in vitro and in vivo in the present work.The targeting molecule NA from Vibrio cholerae was tested for its gastric stability in simulated gastric fluid as described previously and compared to the lectins AAL and WGA .Digestion was stopped with 1 M NaOH after 60, 120 and 180 min.The molecules were subsequently digested in simulated intestinal fluid using 3.2 mg/ml pancreatin in a ratio of 1:5 w/w.After 5, 10, 15, 30 or 45 min digestion was stopped by non-reducing SDS-PAGE buffer and boiling, according to a modified protocol .Protein integrity was evaluated by SDS-PAGE using Coomassie brilliant blue staining and silverstaining.Binding of NA to intestinal epithelial cells in comparison to WGA and AAL was investigated in vitro by flow cytometry.Colon carcinoma cells Caco-2/Tc7, with a small intestinal phenotype were cultured in Dulbecco modified minimal essential cell culture 
medium supplemented with 10% fetal calf serum, 1% non-essential amino acids, 10 mM HEPES, 10 mM L-glutamine, 1 U/ml penicillin and 1 μg/ml streptomycin in a humidified 5% CO2/95% air atmosphere at 37 °C. Single Caco-2 cells were washed with PBS and incubated with 16, 32, 64 or 128 μg FITC-NA or biotin-AAL and biotin-WGA for 30 min in suspension. For determination of background staining, cells were incubated with or without biotin-labeled IgG, followed by FITC-avidin. Acquisition was performed using a FACS Calibur flow cytometer and data were analyzed with FlowJo 9.3.3 software. To identify and characterize binding partners of NA and the lectins AAL and WGA on Caco-2 cells, inhibition experiments were performed. Caco-2 cells were grown for 14 days until monolayer formation in a 96-well tissue culture microplate, according to previous protocols. Cells were washed and blocked with 2% bovine serum albumin in PBS for 30 min. Subsequently, biotinylated NA, AAL or WGA pre-incubated 1:1 with increasing concentrations of α-L-fucose, N,N′,N″-triacetylchitotriose (TCT), or glycosylated or deglycosylated GM1 from bovine brain were added to Caco-2 cells for 60 min. FITC-avidin was added for 30 min and fluorescence intensity was measured at 485 nm excitation and 530 nm emission. Values of remaining binding in the presence of inhibition molecules were calculated as a percentage of the binding of the non-inhibited targeting substances. Three independent sets of experiments were performed. OVA-loaded PLGA microparticles were prepared by the water-in-oil-in-water (w/o/w) solvent-evaporation technique. An aqueous solution of OVA was emulsified with 400 mg PLGA in ethyl acetate by sonication for 2 min. After adding 8 ml of an aqueous solution of poly(ethylene-alt-maleic anhydride) (PEMA), the emulsion was sonicated to yield the w/o/w emulsion. After pouring the mixture into 100 ml of a 0.25% aqueous solution of PEMA, the particle suspension was stirred at 600 rpm for 1 h at room temperature in order to remove the residual ethyl acetate. The MPs were re-suspended in 20 mM HEPES/NaOH buffer pH 7.0 and washed several times with the same buffer to remove non-encapsulated OVA. Total PLGA content was determined gravimetrically after lyophilization. Particle size distribution was determined using a Malvern Mastersizer 2000 laser particle size analyzer. NA, AAL or WGA were covalently coupled to the surface of PLGA-MPs using a modified carbodiimide method. As the H-type of PLGA with free carboxyl end groups was used as a matrix, amine-containing ligands were bound to the superficially protruding polymer chains via amide bonds. The carboxylates were activated with carbodiimide to yield an active ester intermediate that readily reacts with primary amines, resulting in covalently linked ligand. The PLGA-MP suspension in 20 mM HEPES/NaOH buffer pH 7.0 was activated for 2 h at RT by adding 5 ml of a solution of 1400 mg 1-ethyl-3-(3-dimethylaminopropyl)carbodiimide (EDAC) and 59 mg N-hydroxysuccinimide in the same buffer. In order to remove excess reagents, the MPs were washed three times with 20 mM HEPES/NaOH buffer pH 7.4. MPs were resuspended in 5 ml of the same buffer, 29.4 μM NA, AAL or WGA solution was added and incubated under rotation at RT. Non-reacted binding sites were saturated by incubation with 3 ml glycine solution for 5 h at RT. MPs were washed three times by centrifugation to remove excess reagents, resuspended in 5 ml isotonic 20 mM HEPES/NaOH pH 7.4, and stored at −80 °C. The concentration of NA, AAL and WGA on grafted PLGA-MPs was gravimetrically determined after lyophilization. As a reference, uncoated PLGA-MPs were treated as above but with buffer alone added. The concentration of encapsulated OVA was measured after dissolving the MPs in 5% SDS/0.1 M NaOH for 3 h at RT by MicroBCA protein assay kit following the manufacturer's instructions.
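The allergen loading reported later (μg OVA per mg particles) follows from the two measurements just described: the MicroBCA estimate of OVA released from dissolved MPs and the gravimetric particle mass. The paper does not spell out the arithmetic, so the following Python sketch is only an illustration under the assumption of a linear BCA standard curve; all concentrations, volumes and masses in it are hypothetical placeholders, not values from the study.

```python
# Minimal, assumption-laden sketch of how MP allergen loading and encapsulation
# efficiency can be derived from a MicroBCA read-out and the particle dry mass.
import numpy as np

# Hypothetical BCA standard curve: absorbance (562 nm) of known OVA standards
std_conc = np.array([0, 5, 10, 20, 40, 80, 160])           # ug/ml
std_abs  = np.array([0.05, 0.11, 0.17, 0.30, 0.55, 1.05, 2.02])

slope, intercept = np.polyfit(std_conc, std_abs, 1)         # linear fit A = m*c + b

def conc_from_abs(a562, dilution=1.0):
    """Interpolate OVA concentration (ug/ml) from a sample absorbance."""
    return (a562 - intercept) / slope * dilution

# Hypothetical sample: an MP aliquot dissolved in 5% SDS/0.1 M NaOH
sample_abs     = 0.62     # measured absorbance of the dissolved-MP solution
sample_vol_ml  = 2.0      # volume the aliquot was dissolved in
mp_mass_mg     = 0.5      # dry particle mass of that aliquot (gravimetric)
ova_offered_ug = 150.0    # OVA offered to that aliquot during emulsification

ova_total_ug      = conc_from_abs(sample_abs) * sample_vol_ml
loading_ug_per_mg = ova_total_ug / mp_mass_mg               # ug OVA per mg MP
encapsulation_eff = 100.0 * ova_total_ug / ova_offered_ug   # percent

print(f"loading: {loading_ug_per_mg:.1f} ug OVA/mg MP, "
      f"encapsulation efficiency: {encapsulation_eff:.1f} %")
```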
"Covalent coupling to carboxylate-modified fluospheres was done according to manufacturer's instructions.SGF, SIF and SDS-PAGE experiments were done with MP preparations as described above.OVA was released from MPs by incubation in 5% SDS/0.1 M NaOH for 3 h at RT and protein integrity was evaluated by SDS-PAGE followed by Coomassie brilliant blue staining.The presence of NA, WGA and AAL on the surface of MPs was confirmed by ELISA and Western blot.For both techniques, MPs were dissolved in 5% SDS/0.1 M NaOH for 3 h at RT.The dissolved MPs were coated on a plate overnight or were separated by SDS-PAGE and transferred onto a nitrocellulose membrane.After blocking, particle-surface bound functionalization substances were detected by using NA, WGA- or AAL-specific polyclonal mouse antibodies, followed by anti-mouse IgG horseradish-peroxidase-labeled antibodies.TMB was used as substrate for ELISA, Super Signal West Pico Chemiluminescent Substrate was used for Western blot."To rule out a strong endotoxin contamination, lipopolysaccharide levels of the targeters and the functionalized microparticles were determined using Endosafe–PTS cartridges according to manufacturer's instructions.To test whether NA-coated FS bind to intestinal epithelial cells, Caco-2 cells were grown on uncoated glass chamber slides overnight.Cells were washed with ice-cold PBS and coated FS were added for 60 min at 4 °C.Cells were fixed, permeabilized and unspecific binding was blocked with 1% bovine serum albumin in PBS for 30 min at RT.Cells were stained with an α-tubulin antibody followed by an anti-mouse IgG Alexa Fluor 568 antibody for detection.The nucleus was visualized with DAPI staining.Acquisitions were done using a Zeiss Axioplan 2 fluorescence microscope.For co-culture experiments, Caco-2 cells were seeded on transwell filters, cultured for 21 days until transepithelial resistance reached 300-400 Ω/cm2 and then co-cultured with Raji-B cells) for the last 4 days of incubation period on the basolateral side of the epithelial layer as described .Tight junction integrity was confirmed by TEER measurements and M-cell formation was verified by measurement of alkaline phosphatase activity .After 1 h incubation with phenolred-free RPMI, NA-, AAL- and WGA-coated FS were added to the apical side of the epithelium for 120 min.FS transported through the epithelium against gravitation were collected from the basolateral side, centrifuged and dissolved in 0.1 M NaOH.The fluorescence intensity of transported FS was quantified at 480/530 nm.As control, uptake studies were performed with Caco-2 cells cultured without Raji-B cells.Statistics were calculated from 6 wells per condition.To investigate the effects of particle stimulation on gene expression of intestinal epithelial cells, Caco-2 cells were seeded in 12-well tissue culture plates and cultivated for 21 days.Cells were stimulated in four independent experiments either with the targeters NA, AAL or WGA or with NA-, AAL-, WGA-coated or uncoated OVA-loaded MPs for 1, 2, 3 and 6 h or were left unstimulated as negative control.Cells were harvested with 700 μl TRIZOL reagent and RNA was isolated according to RNeasy Mini Kit protocol without DNase treatment.Final RNA was eluted in 30 μl RNase free water and RNA concentrations as well as purity were measured with nanodrop ND-1000 Spectrophotometer.Reverse transcriptase PCR was performed using the High Capacity cDNA Reverse Transcription Kit without RNase Inhibitor and final cDNA was diluted 1:5 in nuclease free 
water.Quantitative realtime PCR according to SYBR Green Mix protocol in 384 well plates using the following primers: Claudin-1, claudin-4, ZO-1, ZO-2 was performed with 7900HT Fast Real Time PCR machine to investigate the changes of tight junction expression after cell stimulation.To determine the changes of cytokine expression in Caco-2 cells after stimulation quantitative realtime PCR according to TaqMan Universal Master Mix Protocol using the following TaqMan assays: CCL20, RANTES, TSLP, β-actin was performed using β-actin as endogenous control.PCR reactions were performed in technical triplicates for each of the four independent experiments.Significant outliers were calculated and excluded from statistical analysis.Healthy female BALB/cAnNCrl mice were purchased from Charles River Laboratory or from the Institute of Laboratory Animal Science and Genetics and housed under conventional conditions in groups of 5–8 in polycarbonate Makrolon type II cages with filter tops and espen wood bedding enriched with nesting material.Mice were fed with a special egg-free diet with free access to food and water.Experimental procedures were started after a 2-weeks acclimation period in a separate animal experimentation room by treating animals in random order within each group.Sample size calculation was based on previous own data with a power calculation based on two-sided, two-sample t-test.The concept of replacement, refinement and reduction had a fundamental impact on study design of the approved ethical protocol.Primary outcome of the animal studies was to investigate immunomodulatory capacity and safety of oral application of OVA-loaded NA-MPs in comparison to WGA- and AAL-MPs.Animals were treated according to European Union rules of animal care and with approval by the local ethics committee and the Austrian Federal Ministry of Science and Research."Naïve mouse spleen cells, isolated as described previously , were stimulated in a 96-well round-bottom tissue culture-plate either with NA, WGA, AAL or the functionalized, OVA loaded microparticles, OVA alone or the positive control concanavalin A or medium as negative control for 72 h. 
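The Caco-2 expression changes reported in the Results (for example the CCL20 fold increases) derive from the quantitative real-time PCR runs described above, normalized to β-actin as endogenous control and expressed relative to unstimulated cells. The exact quantification model is not stated in the text; assuming the common 2^-ΔΔCt calculation, a minimal sketch with purely hypothetical Ct values looks as follows.

```python
# Hedged sketch of relative quantification by 2^-ddCt (an assumption, not an
# explicitly stated method) for a target such as CCL20, normalized to beta-actin
# and expressed relative to the unstimulated control. Ct values are invented.
import numpy as np

def fold_change(ct_target_treated, ct_actin_treated,
                ct_target_control, ct_actin_control):
    """Return 2^-ddCt for one treated/control pair of mean Ct values."""
    d_ct_treated = ct_target_treated - ct_actin_treated   # normalize to beta-actin
    d_ct_control = ct_target_control - ct_actin_control
    dd_ct = d_ct_treated - d_ct_control                   # relative to control
    return 2.0 ** (-dd_ct)

# Technical triplicates are averaged per sample before the calculation
ct_ccl20_na  = np.mean([24.1, 24.3, 24.2])   # NA-stimulated Caco-2, CCL20
ct_actin_na  = np.mean([16.0, 16.1, 15.9])
ct_ccl20_ctl = np.mean([28.2, 28.0, 28.1])   # unstimulated control, CCL20
ct_actin_ctl = np.mean([16.1, 16.0, 16.2])

print(f"CCL20 fold change vs. control: "
      f"{fold_change(ct_ccl20_na, ct_actin_na, ct_ccl20_ctl, ct_actin_ctl):.1f}")
```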
Supernatants were analyzed for their cytokine levels using IL4, IFN-γ and IL10 Ready-set-go ELISA kits following the manufacturer's instructions. Cell supernatants were diluted 1:2. Cytokine concentrations were calculated according to a standard curve. For analysis of immediate mast cell activation, mice were gavaged once with the respective MP preparations, and sera taken 1 h after gavage were screened for mouse mast cell protease-1 (mMCP-1) using the mouse MCPT-1 Ready-set-go ELISA Kit according to the manufacturer's instructions. Rectal body temperatures were measured 30 min, 1 h, 3 h, 6 h, 8 h, 24 h, 48 h and 72 h after single gavage using a Thermalert TH-5 thermometer. To evaluate the safety of repeatedly orally applied MPs, naïve mice were fed 6 times every other week with Plain-, NA-, AAL- or WGA-MPs containing 200 μg OVA on 3 consecutive days. Serum was collected before and 14 days after the last MP application. Sixteen days after the last oral application cycle, mice were orally challenged with OVA. Rectal temperature as an indicator of anaphylaxis was measured before and 15 and 30 min after oral challenge (OC). To further evaluate the safety of repeatedly orally applied OVA-loaded MPs, mice were immunized intraperitoneally with 2 μg OVA adsorbed to 2% aluminum hydroxide (Al(OH)3) solution at 2-week intervals. Thereafter, mice were orally challenged with PBS or 50 mg OVA to induce local inflammation and to evaluate the allergic immune response. Subsequently, mice were fed 6 times every other week with Plain-, NA-, AAL- or WGA-MPs containing 200 μg OVA on 3 consecutive days. Serum was collected before and during sensitization as well as before, during and 14 days after the last MP application. After the last oral application cycle, mice were orally re-challenged with PBS or OVA. Rectal temperature as an indicator of anaphylaxis was measured before and 15 and 30 min after OC. Mouse mast cell protease-1 levels in serum taken 1 h after OC were detected as described above. Sera taken after the last oral application from the immunization described in paragraph 2.9, or after sensitization and after MP gavages from the immunization described in paragraph 2.10, were diluted 1:100 and concentrations were calculated according to a standard curve. Sera were screened for OVA-specific IgE and IgA by ELISA, as described previously. Sera were diluted 1:200 for IgA and 1:20 for IgE. Rat anti-mouse IgA and IgE were diluted 1:500 and horseradish peroxidase-labeled goat anti-rat IgG 1:1000, respectively. Sera taken before immunizations were defined as background levels and were subtracted from sera post immunizations. Concentrations were calculated according to a standard curve. After sacrifice, intestines were removed and flushed with 2 ml ice-cold PBS containing protease inhibitors. Intestinal lavage fluid was screened for mucosal total IgA. Microtiter plates were coated with rat anti-mouse IgA. After blocking, standard dilution series or mucosal lavage fluid were added. For detection, a biotin-labeled anti-mouse IgA antibody was used, followed by streptavidin-HRP. TMB was used as substrate and the reaction was stopped with hydrochloric acid before measuring optical density at 450–630 nm. OVA-specific IgA was measured in intestinal lavage fluid as described above for serum OVA-specific IgA. After sacrifice, mouse splenocytes were isolated and stimulated as described above either with OVA, the positive control concanavalin A, or medium as negative control for 72 h. Cytokine levels of supernatants were analyzed for IL4, IFN-γ and IL10 using the Ready-set-go ELISA kits as described above. Cytokine concentrations were calculated according to a standard curve. Data of the inhibition experiments were statistically compared with the GraphPad Prism 5 software using two-way ANOVA and the Bonferroni post-test. Fluorescence intensity results of the different FS and real-time PCR experiments, as well as antibody concentrations, comparisons of body temperature and cytokine levels of the in vivo studies, were analyzed by one-way ANOVA and Tukey's multiple comparison test if normally distributed. For non-parametric testing, Dunn's multiple comparison test was used. Co-culture results were statistically compared using two-way ANOVA after logarithmic transformation. A P-value <0.05 was considered statistically significant. To evaluate its suitability for oral application and, thus, its digestion stability, NA was subjected to simulated digestion experiments and compared to the lectins AAL and WGA, which had been used in previous experiments for microparticle functionalization. In SGF experiments all three coupling molecules remained stable for up to 180 min. Protein bands of NA ranged from 20 to 90 kDa under denaturing conditions. AAL revealed its expected monomeric protein band at 36 kDa, and WGA, which appears as a dimer at 36 kDa under neutral pH conditions, formed a monomeric protein band at 18 kDa in SDS-PAGE. In subsequent SIF experiments, the targeters remained stable for up to 45 min. The binding of the targeting molecules to epithelial cells was investigated using Caco-2 cells, which exhibit a small intestinal epithelial phenotype and which have been widely used for investigating the potential of lectins as targeting molecules. FITC-labeled NA bound to Caco-2 cells in a dose-dependent manner as analyzed by flow cytometry. In these experiments, biotinylated WGA and AAL revealed an even higher fluorescence intensity compared with NA, which might be explained by signal enhancement via the FITC-streptavidin-biotin complex in comparison to NA being FITC-labeled alone. The specific binding partners of the targeting molecules were investigated in cellular inhibition experiments. Binding of NA to Caco-2 cells was inhibited in a concentration-dependent manner by the addition of α-L-fucose and by GM1, suggesting also an affinity to glycosylated GM1, but not by TCT. The inhibition of AAL binding by α-L-fucose was lower. TCT, as expected, reduced the binding of WGA substantially. No binding inhibition of AAL or WGA was found with GM1. The immunomodulatory capacity of NA in comparison to WGA and AAL on lymphocytes and epithelial cells was investigated. Spleen cells of naive BALB/c mice were stimulated with the targeters and cytokines were measured. NA and AAL induced a significant up-regulation of the Th1 cytokine IFN-γ compared to WGA and the negative medium control. The Th2 cytokine IL4 was not induced by any of the targeting molecules. Only NA increased the T-regulatory cytokine IL10 in comparison to WGA, AAL and the control. ConA was used as positive control and induced significantly higher cytokine levels compared to all other stimulation conditions. The immunomodulatory capacity of NA compared to AAL and WGA on epithelial cells was assessed in vitro using Caco-2 cells. Significant upregulation was observed only for CCL20 after NA stimulation, with a maximum after 3 h, compared to AAL and WGA. No significant differences between the targeter groups were found for TSLP, RANTES and the tight junction molecules zonula occludens-1 (ZO-1) and ZO-2, as well as for claudin-1 and claudin-4. IL8 was substantially induced after NA stimulation, but levels were not significantly different from AAL and WGA stimulation. NA or the lectins WGA and AAL were coupled to PLGA microparticles, which serve as antigen carrier, loaded with OVA as representative allergen. The mean diameter of OVA-loaded MPs was 3.9 μm and the allergen content, measured before functionalization, was 164.2 ± 9.3 μg OVA/mg particles.
Efficient coupling of NA, WGA and AAL to MPs was confirmed using a bicinchoninic acid assay as well as by ELISA and Western blot. No changes in the diameter after NA coupling were observed. After coupling, we tested the binding of the functionalized particles to epithelial cells in vitro. Functionalization of the particles increased the binding to Caco-2 cells compared to plain particles. Especially NA- and AAL-coated particles showed an enhanced binding to Caco-2 cells. However, NA does not only bind to epithelial cells, but shows a significantly enhanced transepithelial uptake via M-cells in a human M-cell-like in vitro model. An intact epithelial monolayer was confirmed by TEER measurements. When M-cells were present, the transepithelial uptake of NA-particles and AAL-particles was significantly enhanced compared to the monoculture of Caco-2 cells alone, where M-cells are missing. The M-cell binding of NA-particles was even significantly higher compared to particles coated with the well-characterized M-cell-binding lectin AAL, indicating superiority of NA over AAL coupling with regard to M-cell targeting. As encapsulation should protect the antigen from digestion, OVA-loaded Plain-, NA-, AAL- or WGA-MPs were tested for their stability in SGF experiments. SDS-PAGE analysis of samples exposed to SGF revealed that encapsulated OVA remained stable for up to 120 min of SGF digestion, representing the average gastric transit time, and for up to 45 min in simulated intestinal digestion. In contrast, "unprotected" OVA proteins were degraded within 30 min. Coupling of NA to particles reduced endotoxin levels 19-fold compared to free, unbound NA, emphasizing that the observed immunological changes were not due to a high LPS content of the MPs. Endotoxin levels of WGA- and AAL-MPs were below the detection limit. Naive spleen cells were stimulated with the respective MP formulations as described above for the targeting molecules alone. Only NA-MPs induced a significantly higher OVA-specific production of the Th1 cytokine IFN-γ and the T-regulatory cytokine IL10 compared to all other MP formulations and to medium alone. The Th2 cytokine IL4 was not upregulated in the tested groups. The immunomodulatory capacity of the respective MP formulations on Caco-2 cells was evaluated in real-time PCR experiments. CCL20 was highly upregulated upon NA-MP stimulation, which was not observed for the other formulations. AAL-MPs induced a mean CCL20 fold increase of 16 after 2 h of stimulation. No differences between the groups were found for the tight junction molecules ZO-1, ZO-2, claudin-1 and claudin-4 or for TSLP and RANTES. For safety evaluation in vivo, naïve BALB/c mice received a single oral gavage of the respective OVA-loaded Plain-, NA-, AAL- or WGA-MP preparations. Single oral application of OVA-loaded NA-MPs did not induce mast cell activation, as indicated by negative mMCP-1 levels in sera. This was accompanied by a stable rectal temperature, as a marker of anaphylaxis, for up to 72 h after oral MP gavage. For further safety evaluation, naïve BALB/c mice were repeatedly fed with OVA-loaded functionalized MPs. Sera after the last MP gavage were screened for OVA-specific antibody levels. No OVA-specific IgE was induced, as measured titers were significantly lower in the groups fed NA-MPs, WGA-MPs
and Plain-MPs than in naïve animals, the latter representing background IgE levels. Significantly higher levels of OVA-specific IgA were observed in the groups fed NA-MPs and AAL-MPs compared to WGA-MPs, Plain-MPs and naïve animals, while OVA-specific IgG1 and IgG2a levels were not significantly different between the groups. Mice were further orally challenged with OVA after the repeated MP gavages to evaluate the potential of anaphylaxis in these animals. No decline of body temperature as a marker of anaphylaxis was observed in any of the groups. Additionally, no activation of mucosal mast cells, as indicated by mMCP-1 levels, was found after oral OVA challenge. Screening intestinal lavage after oral provocation for antibody levels revealed significantly lower OVA-specific IgA titers in the group fed AAL-MPs compared to naïve animals, while all other groups were only substantially, but not significantly, lower. Total IgA in intestinal lavage was comparable between all groups. No OVA-specific induction of cytokines was observed. In a third in vivo approach, systemically sensitized, OVA-allergic mice were fed 6 times with OVA-loaded functionalized MPs. After the MP gavages, there was a slight decrease of IgE in NA-MP-fed mice and a marginal increase of IgA antibody levels in all allergic mice comparing titers from before to after MP gavages, without reaching statistical significance. Comparable results were observed for IgG1 and IgG2a, with no significant changes from before to after MP gavages. To evaluate the potential of anaphylaxis induction by repeated oral gavages of OVA-loaded MPs in highly allergic animals, core body temperature was measured. The different MPs had no effect on core body temperature from before to after the oral MP feedings. Additionally, we did not observe elevated mediator release upon oral OVA challenge following 6 cycles of MP gavages. While no differences were found for IL4 titers between the different groups, significantly elevated OVA-specific IFN-γ and IL10 levels were measured in splenocyte supernatants in the group receiving 6 cycles of NA-MPs. In this study we investigated the suitability, safety and functionality of NA as a novel PLGA microparticle functionalization substance for epithelial targeting, with future implications for oral immunotherapy of allergy. NA is encoded within a pathogenicity island of the Vibrio cholerae genome. It is able to hydrolyze complex gangliosides on intestinal epithelial cells to yield GM1, resulting in high local
concentrations of GM1 .Thereby, NA enhances its own binding partner on enterocytes and M-cells and might increase the intestinal residence time.NA is part of the mucinase complex, and as such, is an important virulence factor, which enhances the pathogenicity of Vibrio cholerae.By degrading the mucin layer of the gastrointestinal tract, it facilitates bacterial penetration.Thus, NA-coated particulate carriers might have improved access to intestinal cells, and in particular to M-cells, which are protected by only a thin mucus layer .NA was even reported to promote cholera toxin binding to GM1 , which might contribute to the pro-allergenic effect of CT induced by cAMP overproduction .However, due to its functionality NA is not able to stimulate cAMP overproduction like CT, which was confirmed by the absence of pro-allergenic effects in our results.Our in vitro data support the assumption that, similarly to the lectin AAL, NA binds through α-L fucose and additionally through GM1.As α-L fucose and GM1 are both highly expressed on M-cells , NA might represent a promising tool not only to enhance the residence time on intestinal epithelial cells but also to target M-cells directly leading to an enhanced transepithelial uptake of allergen loaded microparticles.Indeed, binding and enhanced transepithelial transport of NA-MPs compared to the epithelial binding of the lectins WGA but also AAL, which has been shown to have M-cell specificity in previous studies , was confirmed using the human Caco-2 cell line co-cultured with Raji B-cells for an M-cell-like model .This emphasizes the superiority of NA-functionalization for PLGA particles over the previously described lectins WGA and AAL.PLGA microparticles protected OVA, as our model food allergen from gastric and intestinal degradation due to encapsulation, ensuring that intact allergens reach the intestinal immune system.In addition, NA was revealed to be extremely stable under gastric and intestinal proteolytic conditions and, thus, represents an ideal targeting molecule for the oral route ."The particulate nature of MPs ensures uptake via the Peyer's patches and enables the release and presentation of intact allergen to immune cells residing underneath the M-cells.However, as NA is part of the pathogenicity island of Vibrio cholerae, a thorough safety evaluation of this novel therapeutic formulation was pivotal.NA per se was able to induce secretion of Th1 as well as immunosuppressive cytokines like IFN-γ and IL10 in vitro, which are beneficial in allergen immunotherapy to counterbalance the Th2-biased immune response present in allergy.This cytokine induction was even more profound with NA compared to the control lectin AAL.NA and NA-MPs were capable of inducing CCL20 expression in Caco-2 cells in our study.In literature, CCL20 and IL8 were associated with a pro-inflammatory response upon incubation of Caco-2 cells with entero-invasive bacteria, such as Salmonella and Shigella .CCL20 triggers the chemotaxis of dendritic cells, which in combination with an induction of Th1 and T-regulatory immunity might be beneficial in cases of an ongoing allergic response.However, we do not anticipate that NA is a strong pro-inflammatory trigger on intestinal epithelial level, as we observed no significant effects on tight junction integrity and molecules such as ZO-1, ZO-2 and claudin-1 and claudin-4.Additionally, CCL20 is regarded as a crucial chemokine for M-cell differentiation and regulates lymphocyte recruitment in a CCL20-CCR6 dependent manner .As this 
chemokine is, thus, involved in MALT formation, it is tempting to speculate that NA-coated MPs induce formation of intestinal structures, which are the main, immunologically active target of this novel treatment strategy.Based on the here presented results, NA was proven to be suitable for oral application due to its digestion stability and additionally might have adjuvant, immunomodulatory properties on epithelial and lymphoid cells.Single and long-term, repeated oral application of NA-functionalized MPs were safe as no adverse effects were observed in naïve and allergic mice with regards to the formation of allergen-specific IgE or the activation of allergy effector cells, such as mast cells.Additionally, an allergen-specific immunomodulatory potential in an ongoing systemic Th2 immune response was observed.In conclusion, we propose NA of Vibrio cholerae as a suitable, highly efficient and safe bioadhesive targeting molecule at mucosal sites.Coating of MPs with NA increases intestinal uptake, protects the antigens against degradation and ensures presentation of intact antigens to immune cells at mucosal induction sites.NA and NA-functionalized MPs are able to induce a profound Th1 and T-regulatory environment, which, in the case of allergy, might counteract an ongoing Th2 response.Additionally, only NA induces CCL20, which recruits DCs and lymphocytes to the subepithelial layer and initiates MALT formation.Thus, stimulation of the Th1 and T-regulatory cells might counterbalance an ongoing Th2 response.All these data support the advantage of NA as functionalization substance for PLGA-MPs with implication as allergy treatment.However, the clinical efficacy of oral immunotherapy with NA-functionalized MPs in situations where a local Th2 driven immune response is present, has to be the focus of future research.The following are the supplementary data related to this article.Preparation of functionalized OVA-loaded PLGA-MPs.PLGA-MPs were loaded with OVA as model allergen using the double emulsion technique.PLGA-MPs were functionalized with the respective targeting substances, indicated as protein-NH2, after activation of carboxyl groups using EDAC.Safety testing in naïve mice.For safety testing naive mice were fed 6 times with OVA-loaded, Plain-, NA-, AAL- or WGA-MPs on 3 consecutive days followed by an OVA challenge.Sera were taken before immunizations and after specific MP application for measurement of antibody titers.On day 86, mice were orally challenged with OVA and rectal temperature was measured before and after 15 and 30 min.Blood taken 1 h after OC was screened for mMCP-1 levels and intestinal lavage for total and OVA-specific IgA and IgE production.Safety testing in allergic mice.For safety testing mice were systemically sensitized 5 times followed by 6 cycles of oral feeding with OVA-loaded, Plain-, NA-, AAL- or WGA-MPs on 3 consecutive days.Sera were taken before immunizations and after every round of immunization as well as after MP application for measurement of antibody titers.On day 212, mice were orally challenged with OVA and rectal temperature was measured before and after 15 and 30 min.Blood taken 1 h after OC was screened for mMCP-1 levels.Seven days later mice were sacrificed for evaluation of the systemic immune response.Supplementary data to this article can be found online at https://doi.org/10.1016/j.clim.2018.03.017.
To improve current mucosal allergen immunotherapy Vibrio cholerae neuraminidase (NA) was evaluated as a novel epithelial targeting molecule for functionalization of allergen-loaded, poly(D,L-lactide-co-glycolide) (PLGA) microparticles (MPs) and compared to the previously described epithelial targeting lectins wheat germ agglutinin (WGA) and Aleuria aurantia lectin (AAL). All targeters revealed binding to Caco-2 cells, but only NA had high binding specificity to α-L fucose and monosialoganglioside-1. An increased transepithelial uptake was found for NA-MPs in a M-cell co-culture model. NA and NA-MPs induced high levels of IFN-γ and IL10 in naive mouse splenocytes and CCL20 expression in Caco-2. Repeated oral gavage of NA-MPs resulted in a modulated, allergen-specific immune response. In conclusion, NA has enhanced M-cell specificity compared to the other targeters. NA functionalized MPs induce a Th1 and T-regulatory driven immune response and avoid allergy effector cell activation. Therefore, it is a promising novel, orally applied formula for allergy therapy.
263
Analysis of microRNA transcription and post-transcriptional processing by Dicer in the context of CHO cell proliferation
Recombinant expression of therapeutic proteins in Chinese hamster ovary cells has a long history, due to the ease of cultivation of CHO cells in suspension and protein-free media, the availability of tools for clone selection and gene amplification and due to various safety aspects.Collaborative effort has recently been put into their characterization in terms of genome, cDNA and non-coding RNA sequencing projects as well as characterization of the CHO proteome and metabolome.These data are essential for understanding and eventually also predicting and adapting CHO cell phenotypes to the requirements of modern bioprocesses.One approach to increase yields from mammalian bioprocesses is to increase the viable cell number by reducing the rate of apoptosis.Therefore, multiple cell engineering strategies were developed to increase apoptosis resistance of CHO cells by overexpression of endogenous or evolved anti-apoptotic proteins of the Bcl-family.Sophisticated transcriptomic, proteomic and metabolomic approaches identified bottlenecks in the energy metabolism of CHO cells that prevent efficient growth and/or protein production.These limitations might be overcome by engineering the expression of single genes, however, the alteration of entire gene networks seems most promising, but at the same time most difficult.In order to meet the challenge of manipulating entire gene networks without burdening the translational machinery of a cell factory, non-coding RNAs, and especially microRNAs constitute a promising alternative.To this date, miRNAs in CHO cells were identified to regulate growth, stress resistance or specific productivity by repressing the expression of hundreds of target genes.In fact, across all cell biological disciplines these small RNAs have been widely recognized as central regulators of cellular phenotype, with potential applications beyond cell engineering as therapeutic targets or diagnostic markers of disease.miRNAs are transcribed mostly from RNA Polymerase II promoters in the genome, or excised from intronic regions of mRNA primary transcripts.These primary miRNA transcripts consist of a stem-loop structure flanked by single-stranded RNA regions and are subject to two sequential maturation steps: in the nucleus the “microprocessor complex” formed by Drosha and Dgcr8 binds pri-miRNAs and cleaves off a ∼50–80 nt long precursor-miRNA structure containing the RNA stem-loop.Export into the cytoplasm occurs via Exportin-5 and results in the association of pre-miRNAs with Dicer, a ∼230 kDa protein of the helicase family consisting of two RNase-III domains as well as RNA binding, helicase and protein interaction domains.Dicer cleavage sets free a ∼22 nt miRNA duplex, from which the guide miRNA is selected and incorporated into a large protein complex called RISC.miRNAs select their targets by imperfect base-pairing to recognition sites present in 3′UTRs or coding regions of messenger RNA.The relative position of miRNA:mRNA interaction and the type of Argonaute protein incorporated in the miRNA-RISC decides whether translational repression or mRNA destabilization and degradation will occur.The imperfect nature of miRNA:target interaction allows single miRNAs to repress the expression of hundreds of different mRNAs, depending on target mRNA availability as well as interaction site accessibility, thus attributing miRNAs an important role in the global regulation of gene expression similar to transcription factors.In addition to the exploration of miRNA function by overexpression, 
knockdown and target validation studies, studies of miRNA biosynthesis and the regulation of this multistep process have been conducted.It is known that the maturation of specific pri-miRNAs by Drosha is dependent on the binding of proteins, for example p53 which induces the biosynthesis of selected growth-suppressive miRNAs.Unlike Drosha activity, which generally requires binding of auxiliary proteins, Dicer is constitutively active which is mirrored in low detectable levels of pre-miRNAs compared to pri-miRNAs or mature miRNAs.Rather, regulation of miRNA biosynthesis at the Dicer step depends on the inhibition of Dicer activity, or on the de-regulation of Dicer expression, which have been observed during organism development, disease progression or even in vitro cultivation.As a consequence, mature miRNA levels are subject to change on a global scale under these conditions, thus broadly affecting gene expression.To our best knowledge, no study has addressed the biological effect of deregulated miRNA biogenesis in CHO cells.Based on miRNA microarray data from five CHO suspension cell lines with slow to high proliferation rates, we observed a global increase in miRNA transcripts along an increase in growth rate.In order to test whether this shift in miRNA transcript levels is assisted or caused by enhanced miRNA transcription or maturation, expression analyses of Dicer, Drosha and Dgcr8 were performed, as well as functional analysis of Dicer by performing loss- and gain-of-function experiments.Suspension and serum-free adapted CHO-DUKXB-11 cells were grown in DMEM:Ham’ F12 supplemented with 4 mM l-glutamine and protein-free additives without growth-factors.All other cell lines were cultivated in CD CHO media supplemented with 8 mM l-glutamine or without and 1:500 anti-clumping agent.Recombinant CHO-DUKXB-11 cells expressing an erythropoietin-Fc fusion protein were grown in suspension in CD CHO media with 0.019 μM methotrexate and without l-glutamine supplementation.No defined growth factors such as Insulin or IGF were used as additives in this study.All cell lines were cultivated in suspension in Erlenmeyer shake flasks in 50 ml volume at 140 rpm in a shaking incubator in a humidified atmosphere conditioned with 7% CO2.CHO-DUKXB-11 host cells were transfected by nucleofection with 10 μg of recombinant human Dicer plasmid containing the open reading frame of human Dicer under a CMV promoter and neomycin resistance gene.Post-transfection, cells were seeded at a concentration of 3.0 × 105 cells/ml in 30 ml media and maintained at 37 °C with humidified air, 7% CO2, and constant shaking at 140 rpm for 24 h.At this point, selection media containing 800 μg/ml G418 was added, and cells were transferred to a 96 well plate at a concentration of 10,000 cells/well.Throughout selection, media was replaced every 3–4 days, and wells with growing cells were expanded to 12-well plates after 4 weeks of selection.At this stage individual wells containing stable growing CHO pools were tested for human Dicer1 incorporation and expression by PCR amplification from genomic DNA and copied DNA using specific primers and Western blot as described below.For targeted knockdown of Dicer expression in CHO cells, two 21 nt long siRNAs were designed based on the NCBI reference sequence NM_001244269.1: siRNA#1 target site: GAGTGGTAGCTCTCATTTGCT; siRNA#2 target site: TAACCTGGAGCGGCTTGAGAT.All siRNAs were custom synthesized at 25 nm scale.For transfection, both siRNAs were pooled at equimolar concentration.As control, a 
non-targeting RNA duplex was designed and custom synthesized.Small RNAs were transfected at 30 nM concentration in three replicates in 6-well plate format.ScreenfectA was used for lipid/RNA complex formation according to the provided protocol.Cells were seeded at 3.5 × 105 cells/ml in 2.5 ml, before complexed siRNAs were added to each well.Cultivation was performed at 37 °C in humidified air with 7% CO2 and constant shaking at 60 rpm.After 72 h cells were harvested for RNA isolation and cell density/viability measurements.Isolation of total RNA was performed using phenol–chloroform extraction from Trizol lysed CHO cell pellets.In brief, CHO suspension cells were lysed in 1 ml TRI reagent and stored at −80 °C or processed immediately.Adherent CHO cell lines were detached from the surface by trypsinization, PBS-washed and lysed in 1 ml TRI reagent.RNA extraction using chloroform and purification were performed as described previously.RNA pellets were resuspended in nuclease-free water and concentrations and purity were analyzed through absorption at 230, 260, and 280 nm using a NanoDrop spectrophotometer.In order to assess total RNA quality and the fraction of small RNAs and microRNAs, total RNA was diluted to a concentration of 100 ng/μl.Total RNA quality was estimated on a Bioanalyzer 2100 instrument using the RNA 6000 Nano Kit.SmallRNA and microRNA concentrations were measured from the same RNA aliquots using the small RNA Series II Kit according to the instructions by the manufacturer.Total RNA in various amounts ranging between 200 ng and 1 μg was used for cDNA synthesis using a M-MuLV RNase H+ reverse transcriptase supplied with the Dynamo Kit.cDNA was diluted in nuclease-free water depending on the initial input of total RNA and directly used for end-point PCR as well as real-time quantitative PCR.PCR analysis of human Dicer expression was performed using a Taq polymerase provided with the Phusion high-fidelity polymerase kit with 35 cycles of denaturation, annealing and extension.For quantitation of mRNA expression, specific qPCR primers that overlap exon–exon junctions or are separated by at least one intron, were designed for beta-Actin, human and Chinese hamster Dicer, Drosha, and Dgcr8 and are provided in Supporting Table S1.Primer specificity was tested by melting curve analysis.Standards for copy number determination were prepared by purification of PCR products and dilution to 108–103 copies/μl and included in each run.Quantitative PCRs were run in quadruplicates on a Rotorgene Q, using SYBR green fluorescent dye and a hot-start polymerase supplied with the SensiMix mastermix with 40 cycles of denaturation, annealing and elongation.SYBR Green fluorescence was acquired at 72 °C and 80 °C, and chosen for detection depending on the base of the melting peak.Cross-species microRNA microarray experiments were run as described previously.In brief, epoxy-coated Nexterion glass slides were spotted using the miRBase version 16.0 locked nucleic acid probe set consisting of 2367 probes against human, mouse and rat miRNAs in 8 replicates.For hybridization, 800 ng total RNA extracts from two biological replicates of each cell line from exponential growth phase were hybridized against a common reference pool RNA from all samples.End-labeling of miRNAs was performed using the Exiqon Power Labeling Kit together with synthetic spike-in controls according to the instructions by the manufacturer.Slides were hybridized over night at 56 °C in a Tecan HS 400 hybridization station, followed by 
automated washing and drying with nitrogen.Immediately after drying, arrays were scanned using the Roche Nimblegen MS200 scanner at 10 μM resolution and auto-gain settings.Feature extraction from high-resolution tiff-images was performed using GenePix software.Background correction, normalization and statistical analysis were performed as previously described, using the LIMMA package under R/Bioconductor.Normexp background correction and Global Loess normalization were performed and log2-fold changes of miRNAs for each sample were calculated against the common reference sample and served as relative expression value for each miRNA.Pearson correlation was performed to test for positive or negative correlation of miRNA expression with specific growth rate.Normalized as well as raw microarray data have been submitted to Gene Expression Omnibus and can freely be loaded and reanalyzed using the accession number GSE52994.In order to quantify mature miRNA transcript levels as well as precursor miRNA levels, the miScript kit was used.Reverse transcription was performed using 300–400 ng of total RNA and “HiFlex” RT Buffer, which allows detection of both microRNA and messengerRNA.Temperature settings were chosen according to the suppliers recommendations.cDNA was diluted 1:4 in nuclease-free water and qPCRs were run in quadruplicates using the miScript SYBR Green Kit on the Rotorgene Q instrument: 95 °C → 15 min, 40 cycles of 94 °C → 15 s, 55 °C → 30 s, 70 °C → 30 s. SYBR Green fluorescence was measured at 70 °C and 80 °C.Commercial primer assays were used for mature miRNA quantification.In-house designed primer assays were used for precursor-miRNA quantification.Protein lysates were prepared by cold lysis of 5 × 106 cells in 1× RIPA buffer for 15 min and centrifugation at 12,000 × g and 4 °C for 10 min.Total protein concentration was measured by BCA assay, and equal amounts of protein were denatured in 1× LDS buffer with 1× reducing agent at 70 °C for 10 min.Samples were separated on 4–15% gradient SDS-PAGE gels, blotted onto PVDF membrane, blocked with 3% dry milk in 1× PBS/0.1% Tween 20 and incubated with mouse anti-beta-Actin IgG or rabbit anti-Dicer IgG at 4 °C over night.Detection was performed with the IR-Dye system on an Odyssey scanner after incubation with anti-mouse or anti-rabbit secondary antibodies for 60 min at room temperature.Western blot images were analyzed with ImageJ software.To investigate the relationship between CHO cell proliferation rate and miRNA transcription in detail, a panel of 5 CHO cell lines that were previously adapted to serum-free growth in suspension were selected and batch cultivations were performed in duplicate in the same chemically defined media without the addition of growth-factors.The cell-specific growth rates that were achieved during exponential growth phase in batch cultivations were found to be lowest in case of DUKXB-11 host cells and a derived recombinant cell line expressing an Epo-Fc fusion protein.Medium μ was achieved by CHO-K1 cell lines cultivated in the presence or absence of l-glutamine as described previously.The highest specific growth rate was achieved by CHO-S cells.Fig. 
1b gives an overview of the average growth rates observed in three individual batch cultivations.Total RNA was isolated during exponential growth phase on day 2 and stationary growth phase on day 5.Analysis of mature miRNA levels was performed only during exponential growth phase using a previously established microarray platform.A total of 270 miRNA probes gave signals that were significantly above the background.For these miRNAs log2-transformed fold changes were calculated against the common reference RNA sample and treated as relative expression values.LFC-values were ranked from low to high and plotted for three cell lines against the cumulative fraction.The results show an increase in miRNA transcription from the slow to fast proliferating CHO cells, which was confirmed by qPCR for selected miRNAs on the level of precursor and mature transcripts.Pearson correlation coefficients of growth rate and mature miRNA expression were calculated, and miRNAs with stringent PCC values greater 0.8 or below −0.8 were regarded as positively or negatively correlated, respectively.This resulted in a total number of 63 growth-correlating miRNAs, of which 46 exhibited a positive correlation.In order to test whether increased post-transcriptional processing of miRNAs by Dicer could mediate this effect, Dicer expression was analyzed by qPCR during exponential growth phase, as well as stationary growth phase.Indeed, we observed enhanced expression in fast proliferating cells during exponential phase.However, on day 5 when proliferation has decreased due to nutrient consumption and accumulation of toxic metabolites, the difference in Dicer expression was attenuated, which is in line with the earlier report of predominant miRNA down-regulation during stationary growth phase.Dicer up-regulation during exponential growth phase was further evaluated by immunoblot analysis, which confirmed the strong correlation of Dicer expression and specific growth rate of 5 CHO cell lines.Analogous correlation analyses for Drosha and Dgcr8 expression did not show any significant correlation.These results demonstrated that specific growth rate of CHO cell lines positively correlates with a large fraction of transcribed miRNAs as well as post-transcriptional processing by Dicer.In order to investigate more closely the effect of Dicer expression on CHO cell phenotype, and especially whether the de-regulation of Dicer directly impacts cell proliferation, we conducted loss- and gain-of-function experiments by siRNA-mediated knockdown and ectopic overexpression of Dicer, respectively.First, we designed two siRNAs directed against two positions in the coding region of Dicer, which were separated by 1850 nucleotides.Two of the characterized cell lines with medium proliferation rates were transfected using a recently optimized RNA transfection strategy for CHO cell lines and analyzed 72 h later.This time-point was chosen for analysis, since miRNA half-life is known to range between 24 h and 48 h for most miRNAs.Knockdown of Dicer to 60% and 50% residual expression on mRNA level was achieved for both cell lines, which resulted in a similar reduction of the levels of 6 selected miRNAs.In terms of growth behavior, a significant reduction of viable cell densities by 20% could be observed, without negatively affecting cell viability.These data suggest that down-regulation of miRNA maturation due to reduced post-transcriptional processing by Dicer limits the proliferation rate of CHO cells.In order to test whether an up-regulation of 
miRNA maturation by overexpression of Dicer can enhance cell proliferation, we transfected recombinant human endoribonuclease Dicer1, which is 94% homologous to Dicer1 of CHO-K1, into DUKXB-11 host cells, as these cells exhibited the slowest proliferation rate of 0.5 d−1.Stable bulk transfected cells were selected for several weeks and screened for human Dicer1 expression by PCR using a primer-pair specific to human Dicer.In order to estimate the overall expression of Dicer in these cells, a primer-pair capable of binding both human and Chinese hamster Dicer was designed, and used for qPCR screening: three recombinant cell lines with 1.4-fold, 2.0-fold, and 5.1-fold increase in Dicer1 expression relative to the host cell line were selected for further characterization.Therefore, three independent batch cultivations were inoculated in shake flasks at a viable cell concentration of 1.5 × 105 cells/ml, and grown until viability dropped below 70% at day 9.For E10 and F4, a moderate increase in maximum growth rate and cumulative cell days was observed compared to untransfected DUKXB-11 cells.This effect also resulted in a 24 h earlier decrease of viability below the 80% threshold.Interestingly, the stable pool with highest overexpression of Dicer showed a decrease in growth performance compared to the host cell line.In order to assess whether Dicer overexpression resulted in an induction of mature miRNA levels, we performed RT-qPCR analysis of 5 miRNAs that were positively or negatively correlated to growth rate in our microarray analysis.A comparison of miRNA levels between cell lines with significant ectopic overexpression of Dicer and endogenous up-regulation relative to DUKXB-11 host cells is shown in Fig. 5: it was found that ectopic overexpression of Dicer only slightly increases the levels of three selected mature miRNA in CHO cells when compared to the up-regulation observed between fast and slow growing cell lines and that miRNAs with negative correlation to growth rate were also upregulated.Together, these data suggest that enhanced expression of Dicer in fast growing CHO cell lines is a response to increased microRNA transcription rather than the underlying cause of miRNA up-regulation.Nevertheless, moderate overexpression of Dicer does enhance growth performance by 15–20%, presumably due to up-regulation of growth-enhancing miRNAs.However, strong overexpression of Dicer negatively impacts growth behavior as it does not differentiate between specific growth promoting and growth inhibiting microRNAs.Therefore Dicer may be regarded as a surrogate marker for specific growth rate in CHO cells, but does not constitute a promising target for engineering the growth of CHO cell lines.This study addresses the importance of miRNA regulation in the context of CHO cell proliferation.It was found that ∼75% of mature miRNA transcripts that correlate with cell-specific growth rate across several distinct CHO cell lines, are up-regulated.A similar observation was made in 2012 when Clarke et al. 
reported 35 positively and only 16 negatively correlated miRNAs when looking at subclones of a single CHO cell line.We therefore raised the question as to how far miRNA processing by Dicer, Drosha and Dgcr8 is relevant for this effect.We found that Dicer mRNA and protein levels – in contrast to Drosha and Dgcr8 levels – positively correlate to cell-specific growth rate during exponential growth phase.However, upon growth arrest during stationary growth phase Dicer is overall downregulated and the difference in Dicer levels between fast and slow growing cell lines is insignificant.Other studies have reported up-regulation of the entire miRNA protein machinery consisting of Argonaute, Dicer and Drosha along tumor progression – and thus faster growth rates – of serous ovarian carcinoma cells.Furthermore, in endothelial cells the removal of serum was shown to increase cellular sensitivity to apoptosis via the down-regulation of Dicer expression.In order to test whether Dicer expression is causally related to growth rate, transient down-regulation of Dicer expression, and in consequence miRNA maturation was performed and indeed significantly decreased the growth rate of CHO cells.To further confirm this relationship, we investigated whether an increase in miRNA maturation by ectopic overexpression of Dicer could improve growth.Therefore, three independent stable pools with Dicer overexpression levels between 1.5 and 5-fold were generated.In batch cultivations these three cell lines show that moderate overexpression of Dicer indeed enhances cell proliferation slightly, while more than 5-fold overexpression negatively affected growth performance.In order to investigate the effect of Dicer overexpression, qPCR analysis of selected miRNAs was performed.We observed that ectopic up-regulation of Dicer moderately increased the levels of miRNAs with positive correlation to growth.However, the degree of up-regulation was well below the induction observed for the same miRNAs between fast and slow growing cell lines.In addition, 5-fold induction of Dicer expression also resulted in significant up-regulation of mature miRNAs with negative correlation to growth.This could explain the inhibitory effect of strong Dicer overexpression on growth, and indicates that Dicer is not an ideal engineering target.Overall it seems that up-regulation of specific miRNAs supports high proliferation rates in CHO cell lines.Simultaneous up-regulation of Dicer seems to be necessary to allow rapid maturation of pre-miRNAs into mature miRNAs, but does itself not mediate growth stimulation.The weaker induction of Drosha and Dgrcr8 could be due to the fact that miRNAs derived from intronic regions can bypass Drosha/Dgcr8 cleavage.Therefore, this work establishes Dicer as a potential surrogate marker for growth rate in CHO cells, but not as a promising target for engineering proliferation.For this purpose, it will be worthwhile to test the biological function of those miRNAs exhibiting strong negative or positive correlation to growth rate, such as miR-7 or miR-17, for which respective data already exists.The authors declare no conflicts of interest.
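The screening step at the heart of this study, correlating per-cell-line miRNA log2 fold changes with cell-specific growth rate and retaining miRNAs with Pearson coefficients above 0.8 or below −0.8, can be summarized in a short sketch. The actual analysis was carried out with LIMMA under R/Bioconductor and used growth rates measured in batch cultivations; the Python code below is therefore only an illustration with invented viable-cell densities and expression values, and it assumes μ is calculated as ln(X2/X1)/(t2 − t1) between two exponential-phase time points, which the text does not state explicitly.

```python
# Illustrative sketch (not the original R/LIMMA pipeline) of the two analysis
# steps described above: (1) cell-specific growth rate from viable cell
# densities during exponential phase, (2) Pearson correlation of per-cell-line
# miRNA log2 fold changes with growth rate, thresholded at |r| > 0.8.
# All densities and expression values below are invented placeholders.
import numpy as np
from scipy.stats import pearsonr

def specific_growth_rate(x1, x2, t1, t2):
    """mu = ln(X2/X1) / (t2 - t1), in d^-1 when t is given in days."""
    return np.log(x2 / x1) / (t2 - t1)

# Hypothetical viable cell densities (cells/ml) on day 1 and day 3
vcd = {
    "DUKXB-11":  (3.0e5, 8.1e5),
    "EpoFc":     (3.2e5, 9.0e5),
    "K1 (+Gln)": (3.1e5, 1.4e6),
    "K1 (-Gln)": (3.0e5, 1.5e6),
    "CHO-S":     (3.3e5, 2.4e6),
}
mu = np.array([specific_growth_rate(x1, x2, 1.0, 3.0) for x1, x2 in vcd.values()])

# log2 fold changes vs. the common reference; rows = miRNA probes, cols = cell lines
rng = np.random.default_rng(0)
lfc = rng.normal(size=(270, len(vcd)))          # placeholder for 270 probes
mirna_ids = [f"probe-{i}" for i in range(lfc.shape[0])]

correlated = []
for name, profile in zip(mirna_ids, lfc):
    r, _p = pearsonr(profile, mu)
    if abs(r) > 0.8:                            # stringent threshold used above
        correlated.append((name, r))

pos = sum(1 for _n, r in correlated if r > 0)
print(f"{len(correlated)} growth-correlating miRNAs, {pos} positively correlated")
```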
CHO cells are the mammalian cell line of choice for recombinant production of therapeutic proteins. However, their low rate of proliferation limits obtainable space-time yields due to inefficient biomass accumulation. We set out to correlate microRNA transcription to cell-specific growth-rate by microarray analysis of 5 CHO suspension cell lines with low to high specific growth rates. Global microRNA expression analysis and Pearson correlation studies showed that mature microRNA transcript levels are predominately up-regulated in a state of fast proliferation (46 positively correlated, 17 negatively correlated). To further validate this observation, the expression of three genes that are central to microRNA biogenesis (Dicer, Drosha and Dgcr8) was analyzed. The expression of Dicer, which mediates the final step in microRNA maturation, was found to be strongly correlated to growth rate. Accordingly, knockdown of Dicer impaired cell growth by reducing growth-correlating microRNA transcripts. Moderate ectopic overexpression of Dicer positively affected cell growth, while strong overexpression impaired growth, presumably due to the concomitant increase of microRNAs that inhibit cell growth. Our data therefore suggest that Dicer dependent microRNAs regulate CHO cell proliferation and that Dicer could serve as a potential surrogate marker for cellular proliferation.
264
Visual search performance in infants associates with later ASD diagnosis
Enhanced perceptual abilities have repeatedly been described in individuals with autism spectrum disorder.For example, better performance has been reported in visual search paradigms, which measure the speed or detection accuracy of “odd-one-out” target elements presented amongst arrays of distractors.The mechanisms underlying the perceptual advantage in ASD are as yet poorly understood.According to one hypothesis, the Weak Central Coherence theory, the superior ability to detect or discriminate visual features is a byproduct of the poor ability to attend to the higher-level, semantic information in visual scenes.This would explain, for example, why people with ASD can as easily find a geometric figure embedded in a meaningful or in a meaningless drawing, while control participants are slower in the former condition, when they prioritize the overall meaning.However, others have shown superior perceptual performance in tasks employing stimuli without semantic content, such as the visual search task.Across many variations of these paradigms, individuals with ASD are both quicker and more successful than controls at detecting abstract targets, as for example a letter X presented amongst Os.These studies suggest that atypicalities might be present at the earliest stages of sensory or perceptual processing in ASD, as is also suggested by recent evidence for better discrimination of line orientations, greater orienting to pixel-level saliency in participants with ASD than in neurotypicals, as well as by evidence for superior pitch discrimination and memory.These sensory or perceptual atypicalities pose a challenge for understanding the etiology of ASD.Whether and how these features relate to core social and communication difficulties remains a contentious issue.Several studies carried out in older children or adults with ASD failed to measure an association between superior perception and social cognition tasks or social skills, while others did find an association, for example between line orientation discrimination and autism quotient scores.More generally, dimensional measures of social and non-social symptoms are poorly correlated, and there is evidence for reduced genetic overlap, indicating that perceptual and social atypicalities might be independent aspects of ASD.However, evidence for the fractionation of the autism phenotype comes mainly from research carried out with older children and adults.An alternative view is that atypical perception and social skills are intrinsically related during early development but may diverge later due to adaptive changes specific to each domain.In support of this hypothesis, we showed that in younger siblings of children with ASD, improved search performance at 9 months associates with ASD symptom severity at 15 months and 2 years of age.At 2 years of age, autism symptoms no longer related to concurrent search performance.This initial work left several key questions unanswered.First, does superior performance in visual search during infancy discriminate those children who go on to receive a clinical diagnosis of ASD from the other high-risk children and control participants?Second, what drives superior search performance; is it due to better discrimination abilities or better attention to the task?Increased arousal was shown to associate with better performance in visual search in toddlers with ASD and might be a common driver of both superior perception and social interaction atypicality.Finally, does superior perception specifically predict ASD
symptoms as opposed to other aspects of early emerging psychopathology?Building theoretical models that link perceptual and social atypicalities in ASD will greatly benefit from evidence that these features are selectively associated.We investigate these issues in a cohort of infants at familial risk for ASD.About 20% of younger siblings develop ASD themselves and another 20% will manifest subthreshold symptoms or developmental delay.Infant sibling research has yielded a variety of infancy markers of later clinical autism, indexing a broad spectrum of putative neural systems, such as attention control, face and gaze processing or motor planning.A specific impairment of the “social brain” circuitry no longer seems the most parsimonious explanation of these findings and domain-general mechanisms have been suggested as developmental pathways to ASD.Showing that superior perception is one of the earliest markers of ASD in high-risk populations would support this emerging view.The current paper builds on a previous publication, with an extended sample of participants, followed up at 3 years of age.We report on the association between visual search in infancy and ASD diagnosis at 3 years of age.In addition, we investigate the impact that target/distractor similarity and attention to the task have on performance.Finally, associations with dimensional measures of ASD, attention-deficit/hyperactivity disorder and anxiety symptoms at age 3 years are also assessed.A cohort of 116 high-risk and 27 low-risk children participated in this longitudinal study.All HR children had at least one older sibling with a community clinical diagnosis of ASD.LR controls were full-term infants recruited from a volunteer database at the Birkbeck Centre for Brain and Cognitive Development.Families attended four visits at 9, 15, 27 and 36 months.Three HR children did not take part in the 36-month visit and they were excluded from the analysis.Two LR children were absent in the 36-month visit but were included in the analysis as they showed typical development at the previous three visits.The final sample included in this analysis consisted of 113 HR and 27 LR children.A battery of clinical research measures was administered to all children at 36 months: the Autism Diagnostic Observation Schedule – Second Edition, a standardised observational assessment, was used to assess current symptoms of ASD.Calibrated Severity Scores for Social Affect, and Restricted and Repetitive Behaviours were computed, which provide standardised autism severity measures that account for differences in module administered, age and verbal ability.The Autism Diagnostic Interview – Revised, a structured parent interview, was completed with parents of all children.Standard algorithm scores were computed for Reciprocal Social Interaction, Communication, and Restricted, Repetitive and Stereotyped Behaviours and Interests.These assessments were conducted, without blindness to risk-group status, by or under the close supervision of clinical researchers with demonstrated research-level reliability.Total scores of the Social Communication Questionnaire were used as additional parent-report measures of ASD symptoms.The parent-reported Child Behavior Checklist was used to measure clinical levels of ADHD and anxiety problems, computed as T-scores for ADHD and anxiety problems from the DSM-oriented scales.This measure has been widely used to measure emerging psychopathologies in young children.We used the early learning composite score of the Mullen
Scales of Early Learning to obtain a standardised measure of mental abilities at every visit.Experienced researchers reviewed information on ASD symptomatology, adaptive functioning, and development for each HR and LR child to ascertain ASD diagnostic outcome according to DSM-5.Of the 113 HR participants included in this paper, 17 met criteria for ASD.A further 32 participants did not meet ASD criteria, but were not considered typically-developing, due either to a) scoring above ADI-R cut-off for ASD and/or scoring above ADOS-2 cut-off for ASD, or b) scoring more than 1.5 SD below the population mean on the Mullen Early Learning Composite or on the Mullen Expressive Language or Receptive Language subscales, or meeting both of points a and b above.These participants therefore comprised an HR subgroup, who did not meet clinical criteria for ASD but presented with other atypicalities.The remaining 64 HR participants were typically-developing.None of the 27 LR children met DSM-5 criteria for ASD and none had a community clinical ASD diagnosis.Descriptive characteristics and clinical measures for each group are presented in Table 1.We created arrays of eight letters, situated on an imaginary circle on a white background.In each array, seven of the stimuli were the letter “x”; the 8th stimulus was either “+”, “v”, “s” or “o”.For the 9- and 15-month visits, 8 different arrays were created for each target type, varying in the position of the target, generating 32 different stimuli in total.To increase variability, letters in an array were either black, blue, red or green.Due to time constraints, only 50% of the stimuli were presented at the 27-month visit, generating 16 trials in total.At all visits this task was the first to be administered after parents and baby were welcomed to the lab, and was followed by a battery of eye-tracking tasks.Infants were seated on their mother’s lap, at approximately 60 centimetres from a Tobii T120 screen.A five-point calibration routine was run.The experiment was started only after at least 4 points were marked as being properly calibrated for each eye.The infant’s behaviour was monitored by a video camera placed above the Tobii monitor.Stimuli were presented with Tobii Studio software.Each of the stimuli was presented once, in a random order, for 1.5 s.
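As an illustration of the stimulus construction just described, the following minimal Python sketch builds one such search array; the screen centre, circle radius and random seeding are assumptions rather than the exact values used in the experiment.

import math
import random

TARGETS = ["+", "v", "s", "o"]
COLOURS = ["black", "blue", "red", "green"]

def make_array(target, radius=300.0, centre=(512.0, 384.0)):
    """Return one trial as a list of (letter, colour, x, y) items."""
    items = []
    target_slot = random.randrange(8)        # target position varies across arrays
    for k in range(8):                       # eight equally spaced positions on the circle
        angle = 2.0 * math.pi * k / 8.0
        x = centre[0] + radius * math.cos(angle)
        y = centre[1] + radius * math.sin(angle)
        letter = target if k == target_slot else "x"
        items.append((letter, random.choice(COLOURS), x, y))
    return items

# 8 arrays per target type -> 32 distinct stimuli, as in the 9- and 15-month protocol
stimuli = [make_array(t) for t in TARGETS for _ in range(8)]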
Before each stimulus the child’s attention was directed to the centre of the screen using a 1 s long audio-video animation.At 9 months, one HR-ATYP participant was excluded due to eye-tracking equipment failure.At 15 months, 8 infants did not contribute data, one because they did not attend the lab visit, two HR-ASD participants were excluded due to eye-tracking equipment failure and for five infants the task was skipped due to fussiness.At 24 months, 5 toddlers did not take part in the visit and for 6 others the task was skipped due to fussiness.We first explored whether outcome groups differed in general attention to task, to ensure that this does not account for any group differences in visual search performance itself.Subsequently, our primary analyses used repeated measures ANOVA to test outcome group differences in first-look hits, calculated as the proportion of trials in which infants made a first saccade towards one of the targets, after fixating at the centre of the screen.Between the 9- and 15-month visits, 54 of the high-risk families took part in a randomised controlled trial of parent-mediated intervention, with an additional six families enrolled in a similar non-RCT intervention.Preliminary analysis accounted for the fact that some of the participants were taking part in these intervention programmes.As there were no significant effects of either recruitment or the intervention itself, we removed these factors from further analysis.A first set of ANOVAs was run without additional covariates and followed up by post-hoc t-tests comparing performance of the HR-ASD groups against all other groups.Covariates were entered in a second round of analyses.When at a particular age a significant effect of group was found, we carried out additional analyses to further understand the mechanisms driving these effects.Descriptive statistics for number of valid trials and first-look hits are presented in Table 2.A trial was considered valid if the participant made a first saccade to the centre of the display, within 100 ms from the beginning of the trial.Number of valid trials was entered in separate univariate ANOVAs for each age group.At 9 months, there was a main effect of the outcome group (F = 3.34, p = 0.021, η2 = 0.07).Post-hoc pairwise t-tests comparing all groups against HR-ASD yielded significant differences with LR, HR-TD and no difference with HR-ATYP.When MSEL, age and sex were entered as covariates, the effect of outcome group remained significant (F = 2.70, p = 0.049, η2 = 0.058), with no significant effects of the covariates.The group difference in the number of valid trials found at 9 months merited additional analysis to clarify their origin.These analyses are detailed in the SOM.In brief, at 9 months groups differed in the amount of looking time to the stimuli (F = 3.356; p = 0.021; η2 = 0.069), with the HR-ASD group showing longer looking time than the LR group.Bivariate correlations indicated that the number of valid trials was significantly associated with the total looking time to the stimuli.This suggests that both these variables reflect differences in sustained visual attention, with the HR-ASD group showing better attention to the task.At 15 months, outcome groups did not significantly differ in the number of valid trials contributed to the analysis (F < 1).There were also no main effects of age, sex or MSEL in the follow-up analysis (F < 1).At 2 years, the same pattern was observed, with no significant effect of outcome group (F < 1) nor of any of the covariates (F < 1).To analyse
how groups differed in the proportion of first look hits, we ran repeated measures ANOVA with target types as the within-subject factor, and outcome group between subjects.This effectively means that a minimum of 4 trials was required for a participant to contribute to the analysis.At 9 months, this analysis yielded a main effect of trial type (F = 15.944, p < 0.001, η2 = 0.115; Fig. S1).All groups performed better in the ‘o’ and ‘s’ target trials than the ‘v’ and ‘+’ target trials.There was also a main effect of group (F = 3.352, p = 0.021, η2 = 0.076; Fig. 1) but no significant interaction between trial type x group (F = 1.274, p > 0.1).Post-hoc t-tests indicated that HR-ASD had a significantly higher proportion of first looks to the target than the LR, HR-TD and HR-ATYP groups.First look hits did not correlate with the quantity of valid trials.However, because a group difference in the quantity of valid trials was observed at this age, this measure was entered together with MSEL, sex and age as covariates in the follow-up ANOVA.The main effect of group remained significant (F = 2.80, p = 0.043, η2 = 0.066) and none of the covariates had a significant impact on target hits (F < 1).At 15 months, a main effect of outcome group was again observed (F = 4.15, p = 0.008, η2 = 0.099; Fig. 1), alongside a main effect of trial type (F = 15.324, p < 0.001; Fig. S1) and no significant interaction between trial type x group (F < 1).At this age also, HR-ASD demonstrated superior performance when compared to HR-ATYP and LR, and marginally when compared to HR-TD.When age, sex and MSEL were added as covariates, the effect of group became marginal (F = 2.212, p = 0.091, η2 = 0.057).MSEL had a significant effect on performance (F = 4.984, p = 0.028, η2 = 0.044), with better performance in those infants with lower MSEL.At 2 years, outcome groups did not differ in performance (F < 1).Performance varied with trial type (F = 46.27, p < 0.001) but there was no trial type x group interaction (F < 1).When adding age, sex and MSEL as covariates, this yielded a main effect of sex (F = 5.46, p = 0.022), with boys performing better than girls; no other significant effects were observed.Additional measures were derived to further investigate the origin of the superior search performance at 9 and 15 months.Briefly, no group differences in biases to orient to a particular side of the screen were found and biases did not relate to target hit performance.Better performance in the HR-ASD group was not due to the other groups being less accurate in aiming for the target.Finally, the amount of time spent on the target, when reached, although longer than the time spent on each distractor visit, did not differ between groups.Social communication symptoms measured using the ADI and SCQ were significantly or marginally correlated with search performance at 9 months and 15 months.Since outcome groups differed also in the severity of co-occurring symptoms, we asked whether performance in the visual search tasks specifically relates to ASD symptoms or more generally to early emerging psychopathology.We found no evidence of significant association between hit performance and traits of ADHD or anxiety based on the results of bivariate correlations reported in Table 3.The patterns of correlations were similar when restricting the results to the HR group only.Number of valid trials at 9 months, however, was significantly associated with ASD, ADHD and anxiety symptoms.We subsequently ran partial correlations controlling for co-occurring ASD symptoms using the parent-rated SCQ, and
the associations between the number of valid trials and ADHD or anxiety symptoms were no longer significant.We also tested the association between parent-rated SCQ and visual search performance at 9 months, controlling for the effects of ADHD and anxiety symptoms separately.The correlation remained significant when controlling for ADHD symptoms, but not when controlling for anxiety symptoms.The first key question we addressed in this paper was whether superior visual search performance during infancy is observed in HR siblings who go on to receive a later ASD diagnosis.At 9 and 15 months but not at 2 years of age, visual search performance differentiated those infants who met clinical criteria for ASD at 3 years of age from the high-risk infants without a diagnosis and from low-risk controls, with superior search performance observed in the HR-ASD group.These findings extend our previous report of an association between 9-month search performance and dimensional measures of ASD symptoms at 2 years of age and establish superior visual search as an antecedent of autism spectrum disorders, i.e. a marker associated with later diagnosis, but which manifests before the onset of clinical diagnostic symptoms.A second key aim was to better characterise the mechanisms underlying the HR-ASD infants’ superiority in the visual search task.Because superiority is demonstrated in the first-look performance, differences in oculomotor control could not explain the findings.This conclusion was backed up by our follow-up analysis of the direction of the first look, which showed that poorer performance in the other outcome groups was not due to them narrowly missing the target because of poor oculomotor control.Previous research had suggested that less strong side biases in ASD may help their visual search but we found that this cannot explain performance in our task.It has also been suggested that superior search results from better discrimination of target and distractor elements, given that ASD participants performed better especially when targets and distractors were very similar to each other.Target type did affect performance but did not moderate group differences in performance in our study.This does not in itself refute the hypothesis of superior discrimination ability.More fine-grained variation of target/distractor differences or direct assessments of discrimination ability will better address this hypothesis in the future.Target detection has also been suggested to vary with arousal levels.Blaser et al.
found that during a visual search task, toddlers with autism showed greater pupil dilation in response to the stimuli, an index of increased arousal; these authors also described an association between larger pupil diameter and superior target hit performance.Since the relationship between arousal/pupil dilation and performance follows an inverted U, with too little or too much arousal associated with poor task performance, the above findings suggest that ASD participants, and not the controls, were in an optimal state of arousal for visual search.Stimulus presentation in our task was too short to measure pupil dilation, but we did observe that HR-ASD infants were more attentive to the task than the other groups, spending more time looking at the visual search stimuli.However, this measure of attention did not relate to search performance per se, suggesting that two partially independent processes may account for the atypical attention and perceptual abilities associated with ASD.Interestingly, some have suggested that arousal merely amplifies pre-existing individual differences in information processing.Thus, it remains an open question whether perception or arousal-based models better explain the ASD advantage in visual search.The developmental change in the HR-ASD advantage during the first 2 years of life is intriguing, especially given that others have reported superior search later in development, including in 2-year-olds with ASD, an age at which we observed no group differences.One important difference between the Kaldy et al. task and ours is in the nature of the target/distractor differences.It was in the conjunction task that the ASD group excelled in their study.Our task is more akin to a singleton search, since the O and S targets were unique in the display of Xs in having curved lines, the + differed in line orientation and the V had no line crossing.As Fig.
1 suggests, all groups except the HR-ASD group improved in performance between 15 and 24 months, “catching-up” with the HR-ASD group.It is thus possible that the development of the visual system eventually masks group differences in simpler tasks and that more difficult searches are needed to reveal ASD superiority later on.The less prominent developmental change in the HR-ASD performance parallels findings of reduced developmental progressions of structural connectivity and suggests decreased plasticity in the HR-ASD group.Finally, although we demonstrate an association between superior visual search at 9 and 15 months and the severity of ASD symptoms at 3 years of age, no association with ADHD or anxiety symptoms was found.Many of the previously identified infant markers of ASD are based on impairments common to multiple neurodevelopmental outcomes and it was suggested that common neurodevelopmental disorders may stem from common genetic etiology.Yet, superior perception had been singled out as a unique feature of ASD.To date, visual search paradigms have seldom been used in ADHD research, except to show poorer search performance in children with this condition.In another study, participants with ASD, but not participants with ADHD showed detail-focused drawing styles.However, detail-focused or analytic processing were found to be associated with negative mood, in individuals with depression or anxiety and a recent study in adults reported an association between increased anxiety and improved letter detection.In contrast to some of these studies, we did not find an association between search performance and either parent-report ADHD or anxiety symptoms, while performance associated with various parental reports of ASD symptom severity.Given that parental reports of behavioural atypicalities tend to be highly correlated across dimensions, the differential association between superior search and ASD symptoms is noteworthy.Although the specificity of this antecedent marker will increase its value in future clinical work, it also raises a significant challenge.While more domain general early markers are being identified, it remains unclear why they impact on the emergence of particular developmental milestones, such as initiation of social interaction or eye-contact, i.e. those ASD traits measured by the ADI/SCQ.While the factors mediating the relationship between early visual attention and perception and later ASD symptoms are yet to be identified, our findings, especially the dynamic changes in perception and its association to ASD symptoms, suggest that answers to these questions are most likely to emerge from research into early development.
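For readers who wish to reproduce the kind of partial-correlation check reported in the Results (search performance versus parent-rated SCQ, controlling for ADHD or anxiety T-scores), a minimal residual-based Python sketch is given below; the column names and input file are hypothetical, and the original analysis may have used different software.

import numpy as np
import pandas as pd
from scipy.stats import pearsonr

def partial_corr(df, x, y, covar):
    """Correlate the residuals of x and y after regressing each on the covariate."""
    sub = df[[x, y, covar]].dropna()
    design = np.column_stack([np.ones(len(sub)), sub[covar].to_numpy()])
    beta_x, *_ = np.linalg.lstsq(design, sub[x].to_numpy(), rcond=None)
    beta_y, *_ = np.linalg.lstsq(design, sub[y].to_numpy(), rcond=None)
    resid_x = sub[x].to_numpy() - design @ beta_x
    resid_y = sub[y].to_numpy() - design @ beta_y
    return pearsonr(resid_x, resid_y)

# df = pd.read_csv("visual_search_outcomes.csv")      # hypothetical data file
# r, p = partial_corr(df, "first_look_hits_9m", "scq_total", covar="cbcl_adhd_t")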
An enhanced ability to detect visual targets amongst distractors, known as visual search (VS), has often been documented in Autism Spectrum Disorders (ASD). Yet, it is unclear when this behaviour emerges in development and if it is specific to ASD. We followed up infants at high and low familial risk for ASD to investigate how early VS abilities link to later ASD diagnosis, the potential underlying mechanisms of this association and the specificity of superior VS to ASD. Clinical diagnosis of ASD as well as dimensional measures of ASD, attention-deficit/hyperactivity disorder (ADHD) and anxiety symptoms were ascertained at 3 years. At 9 and 15 months, but not at age 2 years, high-risk children who later met clinical criteria for ASD (HR-ASD) had better VS performance than those without later diagnosis and low-risk controls. Although HR-ASD children were also more attentive to the task at 9 months, this did not explain search performance. Superior VS specifically predicted ASD symptoms at 3 years but not ADHD or anxiety symptoms. Our results demonstrate that atypical perception and core ASD symptoms of social interaction and communication are closely and selectively associated during early development, and suggest causal links between perceptual and social features of ASD.
265
Common and unique associated factors for medically unexplained chronic widespread pain and chronic fatigue
Chronic widespread pain and chronic fatigue are common and may be disabling; they have complex aetiologies.These functional somatic syndromes share common risk factors, a finding which has been interpreted as suggesting that chronic widespread pain and chronic fatigue are manifestations of a single disorder.An alternative view is that they are separate syndromes which frequently co-occur and this co-occurrence can be attributed to two dimensions, which have separate genetic and environmental components: an affective component and a sensory component.Comorbid anxiety and depression commonly occur in individuals with chronic fatigue and the risk factors for chronic fatigue differ between those with, and those without, concurrent anxiety or depression.It is plausible that the observation of common associated factors across chronic fatigue and chronic widespread pain is explained by co-morbid anxiety and depression.The aim of this study was to test the hypothesis that the factors commonly associated with both chronic widespread pain and chronic fatigue would be explained by the presence of concurrent depression/anxiety.We conducted a cross-sectional population-based study.We mailed 2985 baseline questionnaires to people aged 25–65 years registered at two general practices in North West England, one in an affluent rural area and one in a more deprived inner city area.Potential participants were selected from complete population lists using simple random sampling, assuming that the sampled sub-group was representative of the population from which they were drawn.Of those, 2490 were eligible to participate and were sent a questionnaire that assessed the presence of chronic widespread pain, chronic fatigue and a number of potential associated factors.Written informed consent was sought to examine participants' medical records.The aim of the medical record review was to identify recorded general medical illness that could explain the presence of pain or fatigue and to count the number of consultations over the year prior to questionnaire completion.Non-responders were sent a reminder postcard after two weeks and, if necessary, a further questionnaire after two further weeks.Since our study did not include a medical examination which would enable us to make a specific diagnosis, we refer to the relevant symptoms of pain and fatigue as “symptom groups”.Participants were asked to report the presence of any musculoskeletal pain they had experienced in the past month, whether their pain had persisted for three months or more, and to shade on a four-view blank body manikin the location of their pain.Using these data, participants satisfying the criteria for chronic widespread pain included in the American College of Rheumatology 1990 criteria for fibromyalgia were identified.The fatigue scale contains 11 items that inquire about symptoms of physical and mental fatigue.Individual items are scored 0 or 1, with a total score ranging from 0 to 11.Participants with fatigue scores of 4 or more on the Fatigue Scale and who had reported symptoms for six months or more were classified as having chronic fatigue.For participants who had agreed, medical records were reviewed for 12 months before and after the date of the baseline questionnaire by two raters to see if there was evidence of a recognised medical condition that could explain chronic fatigue or chronic widespread pain.A conservative approach was used; any medical illness that could cause fatigue or widespread pain led to exclusion from the
symptom groups of unexplained fatigue or widespread pain, so only those participants without such a condition were classified as having chronic fatigue or chronic widespread pain.Nearly half of those who had reported fatigue or widespread pain had consulted their GP with the relevant symptom and, of these, one third had undergone investigations that would be helpful in ruling out underlying organic disease.Sociodemographic data included age, sex, marital status, current work status, number of years of formal education and details of any outstanding compensation claims.Respondents were asked to indicate whether they had any of the common medical illnesses on a checklist and to add any not listed.For analysis, participants were classified as having none, one, or two or more general medical illnesses.The Somatic Symptom Inventory asks respondents to rate 13 bodily symptoms on a 5-point scale as to “how much it has bothered you over the past 6 months?”.The total score ranges from 13 to 65 with high scores indicating greater bother.The Childhood Physical and Sexual Abuse questionnaire consists of 8 questions concerning abuse.Respondents were rated as having experienced childhood abuse if, before the age of 16 years, they reported that an older person touched them or they were made to touch someone else in a sexual way, or intercourse was attempted or completed; that they were hit, kicked or beaten often and/or their life was seriously threatened; or that they were often insulted, humiliated or made to feel guilty.The Parental Bonding Instrument includes 7 questions concerning perceived maternal care and 1 item concerning maternal control.The Relationship Scales Questionnaire measures adult attachment style by asking respondents to identify which of four sets of characteristics most closely matches the way they relate to other people.These are: secure, preoccupied, fearful and dismissing.Social Support was assessed with a question determining whether the respondent had a close confidant with whom they could discuss all concerns.The List of Threatening Experiences measures the experience of 12 threatening personal situations or events in the last 6 months.The total score of positive responses represents recent exposure to threatening experiences; we quote the results in 3 groups.We also quote separately the scores for questions regarding illness in the participant and close relatives.The Revised NEO Personality Inventory measures the personality trait of Neuroticism.It has a maximum score of 48 with high scores indicating higher levels of neuroticism.The Hospital Anxiety and Depression Scale is a valid and reliable measure of anxiety and depression in the general population which avoids questions about physical symptoms that might be caused by general medical illness.A score of 11 or more indicates probable disorder for each dimension but a total HADS score of 17+ has been used also to detect probable depressive disorder.The Short Form 12 Questionnaire assesses health status.It is a validated shortened version of the 36-item version and both versions have been used in chronic fatigue and chronic widespread pain.The 12 items yield summary scores for mental and physical components of health status, which are transformed into norm-based scores.A low score represents impairment of health status.For participants who had agreed to a review of their medical records we counted all consultations with the general practitioner or practice nurse for 12 months before and after the baseline questionnaire.The study received ethical approval from the North
Manchester Local Research Ethics Committee.All participants provided written informed consent to participate in the study.Multi-level modelling was used to take into account that chronic widespread pain and chronic fatigue were measured on each individual, and these symptom groups may not be independent of each other.This technique takes into account that the correlation of symptom groups within individuals will be greater than that between individuals.Each symptom group was thus treated as a within-subject factor called ‘type’ with two levels representing the two symptom groups.Other variables measured at the subject level, such as childhood abuse and anxiety and depression, were entered in turn into a series of logistic regression analyses using the Stata command xtlogit, which included age and gender as between-subject covariates, and with symptom groups as the dependent variable.Initially, symptom-specific associations were calculated using a population average model.A term for the interaction between ‘type’ and the associated factor was then added to the model, and a Wald test carried out to investigate whether the strength of association of the associated factor was similar across both symptoms, while taking into account the within-subject correlation of having both symptoms.The Wald test provides p-values to assess the interaction of ‘type’ with the associated factor.Therefore, small p-values would indicate that differential effects are likely, while larger p-values indicate that a common effect is plausible, in which case the common effect estimate was obtained from the model.Common effect odds ratios are presented only when the Wald test for the interaction between type of disorder and the associated factor was not significant.In this case common effect odds ratios were obtained using the Stata command xtlogit with age and gender as covariates, but without the interaction term.Where the interaction was significant ‘no common effect’ has been tabulated, and odds ratios for that associated factor should be interpreted separately for chronic widespread pain and chronic fatigue.These were obtained using the Stata command xtlogit with age and gender as additional covariates.Scored variables (SSI, SF-12 mental and physical scores, neuroticism and HADS scores) have been split into 3 tertile groups in order to assist in the interpretation of their odds ratios.These analyses were repeated with anxiety, depression and number of general illnesses as covariates in addition to age and gender.Participants classified as having chronic widespread pain or chronic fatigue were then further divided into those with and without anxiety and/or depression.The associated factors that were observed to be significantly common in both symptom groups were then compared across the three resulting groups: a) symptom plus anxiety and/or depression, b) symptom without anxiety and/or depression and c) no symptom, using the chi-squared test for dichotomous variables and one-way ANOVA for continuous scores, followed by Bonferroni pairwise comparisons between groups.This was then repeated for 4 factors which did not show a common effect across both symptom groups.Of the 2490 questionnaires mailed, 1999 were returned of which 556 were blank or did not contain usable information.The response rate was similar in the two practices.A total of 1443 participants returned a completed questionnaire and participated in the study.Non-responders were significantly more likely to be male and younger than the remaining eligible
participants.The participation rates at the two practices were similar.We examined 990 medical records of the 992 participants who gave permission for this.Those who refused permission were younger and more likely to be female but did not differ in terms of marital status, years of education, unemployment, prevalence of chronic widespread pain or chronic fatigue by questionnaire or anxiety, depression or somatic symptom scores.Completed follow-up questionnaires were received from 741, of whom 638 also had their medical notes examined, but these data are not used in this paper.After exclusions because of missing data, 159 participants fulfilled criteria for chronic widespread pain and 229 had chronic fatigue.Of the 990 participants with medical record review, the prevalence figures were similar: 11.4% and 15.5% respectively, but 20 cases of chronic widespread pain and 28 cases of chronic fatigue could be attributed to a co-existing general medical illness.The prevalence of unexplained chronic widespread pain was 9.4%, and chronic fatigue 12.6%, and our analyses concerned these participants who fulfilled criteria for the unexplained symptom definitions.Mean SF-12 physical component scores were 42.4 and 43.3 for chronic widespread pain and chronic fatigue, respectively, indicating impaired health status.The majority of the putative associated factors were associated with both chronic fatigue and chronic widespread pain and showed a common effect.The factors associated with a two-fold or greater increase in odds across both symptom groups included: being separated, widowed or divorced, unemployed and seeking work, reported psychological abuse during childhood, reported physical abuse during childhood, loss of mother at age < 16, experience of a recent serious illness or injury, two or more recent threatening experiences, and a high somatic symptom score.Frequent consultations in primary care and a low SF-12 physical component score were common to both symptom groups.A number of factors showed no common effect.Fewer than 12 years of formal education and 2 or more current general medical illnesses were both more strongly associated with chronic widespread pain than with chronic fatigue.Recent serious illness or injury to a close relative was strongly associated with the presence of chronic fatigue but not chronic widespread pain.There was also no common effect of neuroticism, depression, anxiety and SF-12 mental component scores, with the stronger relationship observed for those participants with chronic fatigue.After adjusting for anxiety, depression and number of general medical illnesses, in addition to age and gender, these results remained similar.The proportion of participants with concurrent anxiety and depression was 41.6% of participants with chronic fatigue and 24.7% of those with chronic widespread pain, p = 0.010.The putative associated factors which showed a common effect were more common in participants with chronic fatigue or widespread pain who reported concurrent anxiety and/or depression compared to participants with these symptoms alone.Approximately 5% of participants with chronic widespread pain or chronic fatigue without concurrent anxiety and/or depression reported psychological abuse, which was similar to participants free of chronic widespread pain or chronic fatigue and significantly fewer than participants with these symptoms plus concurrent anxiety and/or depression.A similar pattern was found with the other putative associated factors that had a common effect.The pattern of
association was different for putative associated factors with no common effect.Nearly half of participants with chronic widespread pain had received 12 or fewer years of formal education, whether or not there was concurrent anxiety and/or depression; this compared to a quarter of participants without chronic widespread pain.In chronic fatigue there was no significant difference in duration of education between the 3 groups.Over half of participants with chronic widespread pain and concurrent anxiety and/or depression had 2 or more recognised general medical illnesses; this compared with 32% of those with chronic widespread pain without anxiety and/or depression, and 11% of those without chronic widespread pain.Of participants with chronic fatigue and anxiety and/or depression 31.2% had 2 or more general medical illnesses compared to 14.3% of those with chronic fatigue alone and 13% of participants without chronic fatigue.Recent serious illness or injury in a close relative was reported more frequently by participants with chronic fatigue, whether or not they had concurrent anxiety and/or depression compared to those without the symptom groups.There was no significant difference in chronic widespread pain.Mean neuroticism scores in participants with chronic widespread pain without anxiety or depression were similar to those free of chronic widespread pain; this score was lower than that for participants with chronic widespread pain with concurrent anxiety and/or depression.Participants with chronic fatigue alone, on the other hand, had a mean neuroticism score significantly different from those without this symptom.This is the first study to show that the putative associated factors for chronic fatigue and chronic widespread pain were not associated with each symptom in an identical fashion.The factors which appear to be common to each of these were only associated with them when there was also concurrent anxiety and depression.For example, although fatigue and chronic widespread pain each showed an association with reported childhood psychological abuse, this could be attributed to the presence of anxiety or depression rather than a true correlate of the fatigue or widespread pain.Similar findings were reported in a birth cohort study where adjustment for psychopathology led to childhood physical abuse becoming non-significant as a risk marker of CFS-like illness .Rather similar effects were found in a study of widespread pain: adjustment for PTSD led to the prior experience of witnessing a traumatic event becoming non-significant .We found also that threatening life events were associated with chronic fatigue and widespread pain only in the presence of concurrent anxiety or depression; the association with chronic fatigue has been reported previously in two prospective cohort studies .This pattern of associations also held for previously married status and reported childhood psychological abuse in both chronic fatigue and chronic widespread pain.Our finding that neuroticism scores were raised in participants with chronic fatigue, whether or not there was accompanying anxiety and/or depression is similar to that concerning chronic fatigue in one birth cohort study .Our results extend those of our previous study of common associated factors across these symptom groups because we widened the range of possible associated features and found new features that did not have a common effect - duration of education, current general medical illnesses, having an ill relative and neuroticism.Although 
they were shown to have a common effect, depression and anxiety were much more closely associated with chronic fatigue than the other symptoms groups in our previous study .The association between chronic widespread pain and few years of education and general medical illness appears to be independent of psychiatric disorder.This has been reported previously but ours is the first demonstration of the contrast between chronic widespread pain and chronic fatigue in this respect .Whether the relationship with few years of education is a specific or general effect is not known .Our study has a number of strengths as it used well-recognised case definitions of chronic fatigue and chronic widespread pain in a population-based sample rather than self-described chronic fatigue or attenders at primary care .We excluded cases where the fatigue or pain could be explained by recognised organic disease, which has been done only in some previous population-based studies.On the other hand, we did not use an interviewer-based detailed definition of chronic fatigue preventing us from extrapolating our findings to this smaller group of the more severe chronic fatigue syndrome.This is important as childhood physical abuse was an associated factor for chronic fatigue syndrome/ME in the cohort study which did not find this association in CFS-like illness once psychopathology was adjusted for .This suggests subtle differences according to the symptom group studied and the way associated factors and psychopathology are defined and measured .It is worth noting that chronic fatigue is much more common and relevant to primary care, than chronic fatigue syndrome .We also relied on a self-administered questionnaire to assess childhood abuse and this may not be the most reliable method.Our study was limited as our main analysis was cross-sectional, preventing true assessment of risk factors.Larger prospective studies, however, have found also that neuroticism and depression are predictors of subsequent chronic fatigue .Others found that few years of education and one or more longstanding physical disease predicted later onset of chronic pain .Although our method was quite different, our findings support the suggestion from twin studies that concurrence of functional somatic syndromes can be explained, in part, by two latent traits — one primarily psychiatric and one sensory or pain component .We found no association between reported childhood psychological abuse and chronic widespread pain or fatigue in the absence of anxiety or depression, suggesting that this is not a true associated factor for these symptom groups but only applicable when there is concurrent anxiety and/or depression .This may explain why results concerning sexual abuse as a common associated factor for chronic fatigue syndrome are inconsistent .Since our study was cross-sectional we cannot comment on the temporal relationship between chronic fatigue or widespread pain and anxiety and/or depression but others have found that depression precedes fatigue and vice versa .It is most likely that there are different pathophysiological pathways to chronic fatigue syndrome .Our data suggest that some of the putative associated factors for chronic fatigue and chronic widespread pain are, in fact, associated factors for the concurrent anxiety or depression frequently observed with these symptoms.It is possible however that anxiety or depression may represent one pathway to chronic fatigue, in particular.The implications of our study are twofold.From the 
research perspective, our difficulty in understanding the aetiology of the functional somatic syndromes will remain while the cause of each symptom group or syndrome is sought as a single entity.Instead, our data suggest that the search for causes should look at common aetiological factors across different functional somatic syndromes, notably those associated with psychiatric disorders, simultaneously with the unique associated factors for each syndrome.Another, similar approach is to compare the aetiological pathways of multiple somatic symptoms and multiple syndromes with those of discrete syndromes.From the clinical perspective, it is helpful for clinicians and patients to know that the presence of chronic fatigue or chronic widespread pain does not necessarily imply a history of abuse or psychiatric disorder.Such implications may get in the way of satisfactory consultations and care.On the other hand it should be routine that clinicians explore these issues with all patients who have a functional somatic syndrome, including case-finding for anxiety and depression, and discuss appropriate management options if relevant.Current evidence suggests that separate treatments for somatic symptoms and psychiatric symptoms are helpful.The former often involves specific cognitive behaviour therapy aimed at beliefs related to somatic symptoms and/or some form of exercise; the latter often involves a psychological treatment for anxiety or depression and/or antidepressant therapy as described in NICE guidelines.Authors made substantial contributions in the areas outlined below.In addition all authors discussed the results, drafted and/or revised the article critically and have given final approval of this version to be submitted for publication.The corresponding author takes responsibility for the integrity of the data and the accuracy of the data analysis.Chew-Graham, Creed, Macfarlane, McBeth: Study development, design, data collection, data analysis, manuscript preparation and revision.Davies, Jackson, Littlewood: Data collection and analysis, manuscript preparation and revision.Tomenson B: Data analysis, manuscript preparation and revision.None of the authors have conflicts of interest to report.
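As an illustration of the population-average modelling strategy described in the statistical analysis section, the sketch below fits a logistic model with a factor-by-type interaction using generalized estimating equations in Python; the data layout, variable names and simulated values are assumptions, and the original analyses were carried out with Stata's xtlogit rather than this code.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200
base = pd.DataFrame({
    "id": np.arange(n),
    "factor": rng.integers(0, 2, n),        # e.g. reported childhood abuse (0/1)
    "age": rng.integers(25, 66, n),
    "sex": rng.integers(0, 2, n),
})
# Long format: two rows per participant, one per symptom group ("type").
long = pd.concat([base.assign(type="pain"), base.assign(type="fatigue")],
                 ignore_index=True)
logit = -2.0 + 1.0 * long["factor"] + 0.5 * (long["type"] == "fatigue")
long["symptom"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

model = smf.gee("symptom ~ factor * C(type) + age + sex",
                groups="id", data=long,
                family=sm.families.Binomial(),
                cov_struct=sm.cov_struct.Exchangeable())
fit = model.fit()
# A small p-value on the interaction term argues against a common effect;
# otherwise a common-effect odds ratio can be read from the main "factor" term.
print(fit.pvalues.filter(like="factor:"))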
Objective: Chronic widespread pain and chronic fatigue share common associated factors but these associations may be explained by the presence of concurrent depression and anxiety. Methods: We mailed questionnaires to a randomly selected sample of people in the UK to identify participants with chronic widespread pain (ACR 1990 definition) and those with chronic fatigue. The questionnaire assessed sociodemographic factors, health status, healthcare use, childhood factors, adult attachment, and psychological stress including anxiety and depression. To identify persons with unexplained chronic widespread pain or unexplained chronic fatigue, we examined participants' medical records to exclude medical illness that might cause these symptoms. Results: Of 1443 participants (58.0% response rate), medical records of 990 were examined. 9.4% (N = 93) had unexplained chronic widespread pain and 12.6% (N = 125) had unexplained chronic fatigue. Marital status, childhood psychological abuse, recent threatening experiences and other somatic symptoms were commonly associated with both widespread pain and fatigue. No common effect was found for few years of education and current medical illnesses (more strongly associated with chronic widespread pain) or recent illness in a close relative, neuroticism, depression and anxiety scores (more strongly associated with chronic fatigue). Putative associated factors with a common effect were associated with unexplained chronic widespread pain or unexplained chronic fatigue only when there was concurrent anxiety and/or depression. Discussion: This study suggests that the associated factors for chronic widespread pain and chronic fatigue need to be studied in conjunction with concurrent depression/anxiety. Clinicians should be aware of the importance of concurrent anxiety or depression.
266
Set-up and input dataset files of the Delft3d model for hydrodynamic modelling considering wind, waves, tides and currents through multidomain grids
The dataset gathers the input and set-up files of a study case modelled through the Delft3D model.The data allow the modelling of a multidomain grid with double-way communication at an offshore location off La Guajira, Colombia.These data can also be used as a reference for implementing multidomain grid modelling in other study cases.The input data contain atmospheric information extracted from the NARR-NOAA database, water levels calculated through the GRENOBLE model, bathymetry from the ETOPO database, and surface salinity and temperature of the study area derived from the World Ocean Atlas database.The set-up files contain model parameters that specify the boundary conditions, grid geometry, governing equations to be solved, and the coordinates of the monitoring observation points.The wave data utilized as input information were extracted from the database provided by Oceanicos-UNAL et al. and are related to the research of A.F. Osorio et al.The data are gathered and stored within a compressed folder named Multi_domain_2004_all_forces.zip.The Multi_domain_2004_all_forces folder contains the input and set-up files of the Delft3D model mentioned above; the guajira.ddb file connects the outer and inner grids.The dataset can be downloaded directly from the online version of this data article.The study area of the multidomain modelling is shown in Fig. 1, where the black square, red rhomboid and yellow triangle symbols indicate the temperature-salinity input data, wave input data and numerical monitoring point, respectively.The study area is considered strategic because the highest wind speeds and wind power density potential in Colombia were identified there, according to the results reported in Rueda-Bayona et al.The dataset of this article is in ASCII file format, and is organized and described as follows:The data related to the atmospheric information were processed in MATLAB with the same methodology recommended in the data article of Rueda-Bayona et al.The bathymetry, geometry and monitoring point information were created through the RGFGRID and QUICKIN tools of the Delft3D model.The boundary definition, time-series flow conditions and transport conditions were generated with the graphical user interface of the flow module and verified through EXCEL spreadsheets.Finally, the wave boundary condition data were processed in MATLAB.
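As a brief illustration of how the ASCII files in the compressed folder might be inspected, the following minimal Python sketch reads a Delft3D-style depth file; the file name is hypothetical and the -999 no-data marker is an assumption based on the usual Delft3D convention.

import numpy as np

# Read all numeric tokens, regardless of how many values sit on each line.
with open("guajira_outer.dep") as handle:            # hypothetical file name
    depth = np.array(handle.read().split(), dtype=float)

depth = np.where(depth <= -999.0, np.nan, depth)     # assumed Delft3D no-data marker
print("number of depth values:", depth.size)
print("depth range [m]: %.1f to %.1f" % (np.nanmin(depth), np.nanmax(depth)))
# To recover the 2-D grid, reshape using the grid dimensions given in the matching
# .grd file, e.g. depth.reshape(rows, columns).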
This article contains the set-up and input files of the implementation of the Delft3D model to determine extreme hydrodynamic forces, performed in Rueda-Bayona et al. [1]. The model was configured with a multidomain grid using double-way communication between the hydrodynamic and wave modules. Multidomain grids solve faster than single and nested grids because they require fewer grid points. In addition, the double-way communication between the hydrodynamic and wave modules makes it possible to consider the non-linear interactions of wind, waves, tides and currents. Because there are no modelling examples related to multidomain grids on the open-access official web site of the Delft3D model, these data help to increase the availability of information on this topic. Finally, the files of this article are ready to be run in the Delft3D model to perform the sensitivity test recommended in Rueda-Bayona et al. [1].
267
Enhanced catalytic properties of La-doped CeO2 nanopowders synthesized by hydrolyzing and oxidizing Ce46La5C49 alloys
As a typical rare earth oxide, ceria has been widely explored in ultraviolet applications, polishing materials, gas sensors, abrasives, solid oxide fuel cells, and catalysts, where pollutant emissions from internal combustion engines are effectively reduced.The catalytic properties of CeO2 are mainly related to the following three factors: a large oxygen storage capacity via the redox process Ce4+ ↔ Ce3+; improvement of the thermal stability of supports; and promotion of the water–gas shift reaction.The addition of different metal dopants into the CeO2 lattice leads to the formation of defects in the crystal structure, enhancing the oxygen storage/release capacity and oxygen conductivity.In particular, La3+ incorporation into the ceria lattice creates lattice defects owing to the ionic radius difference between Ce4+ and La3+.Up to now, La-doped CeO2 nanostructures have been synthesized by co-precipitation, the sol-gel method, and hydrothermal processes.However, it is still challenging to produce large quantities of such materials using these techniques, and a complete understanding of the relationship between the structure and properties of the material has thus not been reached.In this paper, we report a mass synthesis of La-doped CeO2 nanopowders by hydrolyzing and oxidizing cerium lanthanum carbide alloys, which represents an environmentally friendly synthesis approach.The catalytic performance of the La3+-doped CeO2 nanopowders has been tested by methane combustion and compared with that of CeO2.The alloys with nominal compositions of Ce46La5C49 and Ce51C49 were prepared in an induction melting furnace.A graphite crucible was used during the melting.Ce, La and C were melted at a high power of 35 kW for a certain period of time to saturate the alloy with carbon.After the carbon was fully dissolved, the molten alloy was cast with a fast cooling rate in order to obtain the Ce46La5C49 and Ce51C49 alloys.These alloys were then crushed into grains, and the resulting powders were immersed in deionized water at 1:10–1:40 mass ratios under agitation at room temperature for 18–30 h until the hydrolysis and oxidation reactions were completed.Subsequently, the CeO2 and La-doped CeO2 nanopowders were obtained by further filtering, washing and drying in a cabinet at 120 °C.Finally, they were calcined at 600 °C–800 °C for 1 h in air.A schematic of the preparation process of the La-doped CeO2 nanopowders is shown in Fig. 1.The catalytic activity testing for methane combustion was carried out in a quartz reactor.The catalyst particles were placed in the reactor.The reactant gases passed through the reactor at a rate of 80 ml/min and a space velocity of 24000 mL/.The reaction gases were analyzed online by a gas chromatograph equipped with a flame ionization detector.The XRD patterns of the La-doped CeO2 and CeO2 nanopowders are shown in Fig. 2a. All the characteristic lines in the XRD patterns are symmetric and match those of the standard fluorite-type cubic phase of CeO2.The wide diffraction peaks indicate that the grains of the samples are very fine.After calcining at 600 °C for 1 h, the XRD patterns of La-doped CeO2 nanopowders and pure CeO2 nanopowders are shown in Fig.
2a.Compared to pure CeO2 nanopowders, it can also be seen that the XRD peaks of La-doped CeO2 samples shift slightly to lower angles and the FWHM of the XRD peaks becomes broader with low intensity."The XRD peak's changes were attributed to the grain decrement with La doping into CeO2 nanopowders.The XRD peaks of La2O3 corresponding to PDF-ICDD 73-2141 are not observed.The Raman spectroscopy of the La-doped CeO2 and CeO2 nanopowders oven dried at 80 °C are shown in Fig. 2b.A strong band near 460 cm−1 observed is due to the F2g Raman active mode of the fluorite structure of CeO2 .The occurrence of the bands near 535 cm−1 and 597 cm−1 was found only in La-doped CeO2.These bands have been attributed to oxygen vacancies and intrinsic or doping defects, which are expected to be beneficial to catalytic performance.The absence of the peak near 405 cm−1 corresponding to La2O3 indicates the La3+ incorporation into the CeO2 lattice.The results are in agreement with the XRD analysis.Fig. 3a shows the XRD patterns of samples calcinated from 600 °C to 800 °C for 1 h.In comparison with pure CeO2 nanopowders, it can also be seen that the XRD peaks of La-doped CeO2 nanopowders shift slightly to lower angles and the FWHM of diffraction peaks becomes broader at the same calcinating temperature of pure CeO2 nanopowders.It means that La-doped is beneficial to forming small size of CeO2.The peaks of La2O3 phase do not appear.This indicates that the La doping increases the thermal stability of CeO2 nanopowders by the host lattice.The average crystallite size of samples was also estimated by the Debye-Scherrer equation upon all the prominent lines of the XRD data.The nanocrystalline size of samples is listed in Table 1.The methane catalytic activity curves of samples after calcining at 600 °C are shown in Fig. 3b and Table 2.The catalytic activity is characterized by T10, T50 and T90, in which the reaction temperature is corresponding to 10%, 50% and 90% methane conversions, respectively.The T50 and T90 of La-doped CeO2 nanopowders are 502 °C and 652 °C, respectively.The T50 and T90 of pure CeO2 nanopowders are 512 °C and 611 °C, respectively.It is very obvious that the T50 and T90 of the La-doped CeO2 nanopowders corresponding to the reaction temperature are lower than those of pure CeO2.This indicates that the La-doped CeO2 nanopowders have good catalytic activity because La ions are incorporated into the CeO2 lattice to form the La-Ce solid solution, which improves the activity of oxide on the nanopowders surface.The catalytic activities of samples prepared by hydrolyzing and oxidizing Ce-La-C or Ce-C alloys are superior to those reported in Ref. , which were synthesized with the aid of glucose and acrylic acid.The excellent catalytic performance of the La-doped CeO2 nanopowders is attributed to the fact that La incorporation into CeO2 refined grain size and increased the thermal stability of CeO2.The TEM image indicates grain size change of the samples during the calcination process.The TEM and HRTEM images of the La-doped CeO2 nanopowders are displayed in Fig. 
4.It is found that the samples oven dried at 80 °C showed some grain agglomerations.It can be seen that the size of La-doped CeO2 nanopowders is about 3–5 nm, and some interplanar distances are determined to be about 0.309 nm, which corresponds to the plane of the CeO2 phase.After calcinating at 600 °C and 800 °C for 1 h, the size of La-doped CeO2 nanopowders is about 5–8 nm and 8–15 nm, respectively.Some interplanar distances are about 0.312 nm and 0.165 nm, which correspond to the and plane of the CeO2 phase."These results show that the size of grains' growth is a little with increasing the calcination temperature and it is a main reason for the La-doped CeO2 nanopowders to possess an excellent catalytic performance.La doping into CeO2 can effectively prevent grain growth and it is beneficial for grain refinement of CeO2 nanopowders.Because of the good thermal stability of CeO2 doped by La ion, small size grains increase from about 5.7 nm to 9.5 nm when increasing the calcination temperature from 600 °C to 800 °C."The samples of La dopants have good catalytic activities of methane combustion because La ions are incorporated into the CeO2 lattice to form Ce-La solid solution, which improves the activities of the ceria nanopowders' surface by increasing oxygen vacancies and defects.
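The crystallite sizes in Table 1 come from the Debye-Scherrer estimate mentioned above. A minimal Python sketch of that calculation is given below; the Cu Kα wavelength and shape factor are standard values, while the peak positions and widths are illustrative placeholders rather than the measured XRD data.

```python
import math

def scherrer_size_nm(two_theta_deg: float, fwhm_deg: float,
                     wavelength_nm: float = 0.15406, k: float = 0.9) -> float:
    """Crystallite size D = K * lambda / (beta * cos(theta)), beta in radians.

    fwhm_deg should already be corrected for instrumental broadening.
    """
    theta = math.radians(two_theta_deg / 2.0)
    beta = math.radians(fwhm_deg)
    return k * wavelength_nm / (beta * math.cos(theta))

# Illustrative peak list (2-theta, FWHM in degrees) -- not data from the article.
peaks = [(28.5, 1.20), (47.5, 1.35), (56.3, 1.45)]
sizes = [scherrer_size_nm(tt, fw) for tt, fw in peaks]
print("Per-peak sizes (nm):", [round(s, 1) for s in sizes])
print("Average crystallite size (nm):", round(sum(sizes) / len(sizes), 1))
```

Broader peaks (larger FWHM) give smaller calculated sizes, which is why the broadened La-doped patterns correspond to finer grains.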
The Ce46La5C49 alloy was first prepared in a 25 kg vacuum induction melting furnace. The La-doped CeO2 nanopowders were then prepared by hydrolysis and oxidation of Ce46La5C49 at room temperature. These nanopowders were calcined at different temperatures in order to improve their catalytic activity. The lanthanum ions partially replace the cerium ions in the CeO2 lattice, forming a cerium–lanthanum solid solution. Compared with pure CeO2, the thermal stability of the La-doped CeO2 was increased by the lanthanum doping. The La-doped CeO2 nanopowders show enhanced CH4 catalytic performance.
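To make the light-off metrics used above (T10, T50, T90) concrete, the sketch below interpolates them from a conversion-versus-temperature curve; the curve itself is invented for illustration and is not the measured data of Fig. 3b.

```python
import numpy as np

# Illustrative methane-conversion curve (temperature in °C, conversion in %).
temperature = np.array([350, 400, 450, 500, 550, 600, 650, 700])
conversion = np.array([2, 8, 25, 48, 70, 85, 93, 98])

def light_off_temperature(target_pct: float) -> float:
    """Interpolate the temperature at which conversion first reaches target_pct.

    np.interp expects monotonically increasing x, so the axes are swapped:
    conversion -> temperature.
    """
    return float(np.interp(target_pct, conversion, temperature))

for pct in (10, 50, 90):
    print(f"T{pct} ≈ {light_off_temperature(pct):.0f} °C")
```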
268
Towards productive landscapes: Trade-offs in tree-cover and income across a matrix of smallholder agricultural land-use systems
Throughout the past century, tropical forests have declined mainly due to land conversion, and continue to be lost at alarming rates.Although recent conservation efforts may have slowed down the speed of deforestation, every year the area of tropical forest decreases by an estimated 12.3 million ha.1,With an estimated two billion extra people expected on the planet in the next 25 years, primarily in tropical areas, forests and their biodiversity face an increasingly uncertain future.Although the underlying causes and the drivers of agents’ forest clearing behaviour are complex, it is widely found that one of the main immediate causes of forest conversion in the tropics is to provide land for subsistence or commercial agriculture.Furthermore, with the scale and impact of agriculture constantly rising, and emerging as a dominant land cover in the tropics, forest biodiversity and ecosystem services will be increasingly affected by the agricultural landscape matrix.Food production and biodiversity conservation are not necessarily mutually exclusive, and there is no simple relationship between the biodiversity and crop yield of an area of farmed land.Rural land use challenges in the tropics also include environmental degradation on fragile agricultural lands, including a decrease in soil fertility experienced by farmers.Evidence from a number of studies indicates declining growth of yields under intensive cropping even on some of the better lands, e.g. the Indo-Gangetic plains.In response, tropical agroforestry systems have been proposed as a mechanism for sustaining both biodiversity and its associated ecosystem services in food production areas, by increasing tree cover, while maintaining food production.The importance of agroforestry systems in generating ecosystem services such as enhanced food production, carbon sequestration, watershed functions and soil protection is being increasingly recognized.Tree components also produce important products, e.g. 
wood, fruits, latex, resins etc., that provide extra income to farmers and help alleviate poverty.The economic return, especially net present value, internal rate of return, benefit-cost ratio, return-to-land and return-to-labor of agroforestry has been found to be much higher than from seasonal agricultural systems in many locations.This is especially so for marginal farmlands where agricultural crop production is no longer biophysically or economically viable, and may become incompatible with the sustainable development concept with its major focus on ‘people-centered’ development.Many ecological and economic studies have been conducted on the effect of land-use change, and management at the landscape scale, on ecosystem services.However, only a few have focused on the simultaneous delivery of different agro-ecosystem services under scenarios of increasing tree planting in smallholder land use systems, and none of these carried out their research in Asia.Thus, this study seeks to fill this gap by assessing the trade-offs between income and tree cover when incorporating trees into food-crop-based agricultural systems in two tropical Asian locations, West Java, Indonesia and eastern Bangladesh.Our analysis compares provisioning ecosystem services provided by agroforestry with seasonal food crop farming, practiced in either swidden or permanent systems.Expansion of these subsistence systems is a major contributing factor to forest loss and environmental degradation in West Java.Similarly, upland slash-and-burn swidden agriculture, which is the dominant economic land use, is a leading cause of deforestation in eastern Bangladesh.Hence, the two locations represent a complementary pair of examples for our analysis targeting the effect of increasing tree cultivation, and thus tree cover, in the dominant2 type of Asian tropical agricultural landscapes.This study will provide new information on the contribution that can be made to the income of seasonal food crop farmers by adopting agroforestry practices, specifically through production of a wider range of food and timber provisioning ecosystem services.It will meet the need for more detailed research resulting in quantitative data from different locations on a range of agroforestry systems compared with alternative farming practices, which is crucial evidence to better inform land use and farming policy and development practice.This research was conducted in Gunung Salak valley, Bogor District, West Java, Indonesia and Khagrachhari district, eastern Bangladesh.The research site in Indonesia lies between 6° 32′ 11.31” S and 6° 40′ 08.94” S latitudes and between 106° 46′ 12.04” E and 106° 47′ 27.42” E longitudes.The climate is equatorial with two distinct seasons,3 i.e. 
relatively dry and rainy.The region is more humid and rainy than most parts of West Java.Given the proximity of large active volcanoes, the area is considered highly seismic leading to highly fertile volcanic soils.Field data were collected from three purposively selected4 sample villages: Kp.Cangkrang, Sukaluyu and Tamansari, which are located in the northern Gunung Salak valley.The latter two villages contain a mixture of households practicing each of the two land use systems that form the major comparison of this study: subsistence seasonal swidden farming and agroforestry.The first village is located in a different part of the watershed, most of its studied households carry out a different farming system and it is included in this study as an outgroup comparison.The total population in this area is approximately 10,200 people spread across 1600 households.Villages have poor infrastructure, and household incomes are mainly based on agricultural and forest products, sold in local and district markets, in addition to wage labor and retailing.The research site in Bangladesh is part of the Chittangong Hill Tracts, the only extensive forested hilly area in Bangladesh, which lies in the eastern part of the country between 21° 11′ 55.27” N and 23° 41′ 32.47” N latitudes and between 91° 51′ 53.64” E and 92° 40′ 31.77” E longitudes.The area has three distinct seasons, i.e. hot and humid summer, cool and rainy monsoon and cool and dry winter.Mean annual rainfall is higher than the Indonesian study site, and soils were also highly fertile.Field data were collected from two purposively selected sample villages,5 Mai Twi Para and Chondro Keron Karbari Para, with a total population of approximately 750, in 135 households.These two villages have poor infrastructure, and household incomes are mainly based on the sale of agricultural and forest products in local and district markets, with wage labor providing additional household income.They both include a mixture of households practicing each of the two land use systems that form the major comparison of this study: subsistence seasonal swidden farming and agroforestry.In both research sites, agriculture is mainly a subsistence practice, conducted by small-scale farmers and deeply rooted in their culture.The main agricultural crops are mainly cultivated in agricultural fields year-round.In all the studied villages, forest products are collected from nearby forests.Farmers practicing swidden prepare new areas of land using the traditional slash-and-burn method to cultivate predominantly the food crops upland rice, maize and vegetables.They rotate crop cultivation between fields to maintain soil fertility by leaving land fallow for 2–4 years.Farmers practicing permanent monoculture agriculture in the Indonesian site grow single seasonal crops.Some farmers have replaced such traditional crops with high-value cash crops, e.g. 
taro, banana and papaya.In both research sites, some farmers have adopted a range of agroforestry systems, where trees are grown together with seasonal and perennial crops.Primary data of the basic socioeconomic and geographical state of the research sites were collected by rapid rural appraisals using village mapping and key informant interviews.Key informant interviews and village mapping sessions were conducted by involving the village head and three farmers, selected purposively based on their knowledge about the village and surrounding areas.Five focus group discussion sessions and field observations were used to identify the types of local cultivation systems and their products.The village heads and local farmer representative groups were present in the FGD sessions.Field observations were carried out in fifty-five farm locations identified during the RRAs and FGDs.Several pictures of local cultivation systems were taken,8 and relevant information was noted with the assistance of expert local informants.9,Semi-structured interviews were conducted to collect information on farm products and their values, land area and allocation, and other basic characteristics of the farm household, i.e. family and labor force size, age and education of the family members, income, expenditure, savings and interest in tree-based farming.In Indonesia 20 permanent monoculture,10 20 swidden and 20 agroforestry farmers were interviewed; and in Bangladesh11 40 swidden and 21 agroforestry farmers were interviewed.Due to the variation in structure and management practices of the farms in each area, purposive sampling was used to identify households that were practicing a well-managed12 form of each of the contrasted farming systems.13,We estimate that in the Indonesian study villages they represent 20%, 40% and 30% of the permanent monoculture, swidden and agroforestry farming populations respectively.In the Bangladesh study villages they represent about 50% and 60% of the swidden and agroforestry farming populations respectively.The questionnaire that guided the interviews was refined and finalized with the help of the expert local informants and during FGD sessions to make sure that the questions elicited the information required.The product value of crops was calculated with the key informant farmers during the interview based on the total production in the most recent season/year.The primary data collected from the research sites were cross-checked with data gathered from local state agriculture and forestry offices, and the ICRAF Southeast Asian Regional office and CIFOR headquarters.Descriptive statistics were used to compare characteristics of the different farmer groups.14,The size of farms and proportion of land used for different categories of land use were compared amongst the farmer groups.To compare two farmer groups, a two-sample un-paired Student’s t-test was calculated, with the assumption of unequal variance, and the Welch approximation to the degrees of freedom was used to determine the p-value.ANOVA was used to test differences amongst three farmer groups, with F-statistics reported as F, where a and b are between and within group degrees of freedom respectively.All analyses were performed in the R environment for statistical computing in a Windows platform.Net present value was calculated to assess the overall economic performance of crop production under mixed tree crops versus the non-agroforestry farming systems on the basis of a 30-year time period and a 10% discount rate as it is an 
appropriate rate to match the banking system local to the research site.15,Sensitivity analysis was also conducted on variation in yields, as the combination of tree species may affect understorey crop production.Means are compared to assess the different factors that may affect the decisions of non-agroforestry farmers to choose to adopt agroforestry tree-based farming, by determining the conditional probability that a farmer will adopt given a set of independent influencing factors, i.e. land area, family size, income, age, education, and credit availability.Our hypothesis is that, with less land available for permanent cultivation, farmers are more inclined to practice seasonal cultivation, e.g. swidden.Farmers with larger family size, lower family income, who are older, and less-educated are also more closely aligned to seasonal cultivation.Available credit helps to enable the adoption of agroforestry.The dependent variable in our case is binary which takes the value ‘1′ if a non-agroforestry farmer wants to practice agroforestry and ‘0′ if otherwise.The definition and expected signs of the explanatory variables and the results are described in Table 7.In both study sites, agroforestry farmers are younger than swidden farmers.In addition, in the Indonesian case, the farmers in the lower watershed village practicing permanent monoculture were of comparable age to the swidden farmers in the two villages higher in the watershed.All the Indonesian farmer groups have roughly the same educational qualifications, whereas in Bangladesh the agroforestry farmers have higher levels of education than the swidden farmers.In both areas all respondents and household heads were male.The average household labor force size is 1.2, 1.4 and 1.5 for agroforestry, swidden and permanent monoculture farmers in Indonesia, and 1.6 for both the agroforestry and swidden farmers in Bangladesh.Agroforestry farmers have higher annual income than swidden farmers in both areas.In Indonesia, the permanent monoculture farmers have higher income than the others.The savings of Indonesian farmers are lower than Bangladeshi farmers.They do not differ much amongst the farming groups in Indonesia, however agroforestry farmers in Bangladesh have double the amount of savings of swidden farmers.Each of the farmer groups as a whole cultivates plots of land under different forms of farming.The total farm size of agroforestry farmers is significantly larger than that of swidden farmers in Bangladesh.In Indonesia, farm size also differs between the groups , with swidden and agroforestry farms in the middle and upper watershed villages being significantly larger than the permanent monoculture farms in the lower watershed village.However, there was no significant difference in farm size between the swidden and agroforestry farmers.The proportion of the total land area of the interviewed agroforestry farmers that they use for agroforestry systems is significantly higher than that the swidden farmers use for swidden systems.The allocation of land to ‘other land uses’ follows a similar pattern for the two groups of farmers.The agroforestry farmers tend to cultivate a single plot of land.In Indonesia, on average the agroforestry farmers allocate 88% of their land to the single largest plot, whereas in Bangladesh it is only 58% of their land.This indicates that the land of the Bangladeshi agroforestry farmers tends to be divided into more plots with a greater diversity of plot sizes.In contrast, for the swidden farmers there is less 
difference between the two countries in the division of their land between plots of different sizes; in both cases the proportion of their land that is allocated to their largest plot varies widely amongst farmers.This is because there is a tendency to spread the farming risk16 across many smaller plots.In contrast, the vast majority of permanent monoculture farmers allocated a very high proportion of their land to their single largest plot.In the Indonesia study site, the agroforestry farmers earn an average income of US$382 per hectare of land that they allocate to agroforestry.This is 1.7 times higher than the income of swidden farmers per hectare of land allocated to swidden.However, the average income of the permanent monoculture farmers located lower in the watershed, who allocated 100% of their mean 0.20 ha of land to this use, was much higher.In contrast, in Bangladesh the swidden farmers had a higher income per area of land used for swidden than the agroforestry farmers had per area of agroforestry land.In Bangladesh the two groups of farmers allocated a similar proportion of their land to their dominant land use, whereas in Indonesia agroforestry farmers allocated 87% of their land to this use, but swidden farmers allocated a lower proportion of their land to swidden.Farmers in our study sites spread their production over a wide diversity of crops.In Indonesia, yam is the most common permanent monoculture crop, being cultivated by 80% of farmers.Among swidden farmers, maize and upland rice are most popular.On agroforestry farms, the most common crops are the annuals cassava and yam, followed by the fruit trees durian and nutmeg and the timber trees teak and white jabon.In Bangladesh, turmeric, rice and banana are the most widely cultivated field crops, mangium the dominant timber tree, and mango, jackfruit and lychee the dominant fruit trees for agroforestry farmers.The surveyed agroforestry farmers in Indonesia do not grow rice in their agroforestry fields, but in separate non-agroforestry fields.The average income and net present value of the main agricultural crops grown in the swidden and permanent monoculture systems is presented in Table 4.Among the crops, yam generated the highest income in Indonesia followed by upland rice, maize and peanut.In Bangladesh farmers earn the highest income from banana followed by turmeric, cucumber, maize and upland rice.To test the difference in overall economic performance of farm production under agroforestry, with a mixture of tree crops, to that of non-agroforestry farming systems, the most popular locally cultivated trees were selected: durian, nutmeg and teak in Indonesia; mango, jackfruit, lychee and mangium in Bangladesh.Risk factors, such as the effect that the tree species combination may have on productivity of the understorey crops are important in assessing economic performance.This effect depends on various factors, e.g. 
intensity of shade and spread of tree canopy, sunlight, rainfall, soil conditions and fertilizer inputs. Therefore, sensitivity analysis was conducted to test the effect on NPV of understorey crop yield reductions varied in 10% intervals from 0% to 60%. With durian as the overstorey tree crop, all of the understorey crops, except yam, are profitable up to yield reductions of 40% compared with other cropping systems in Indonesia. Nutmeg as a tree crop provides a low return and the nutmeg system is not profitable at any level of crop loss. In contrast, teak has high value, so the teak-based agroforestry system remains profitable regardless of the understorey crop yield reduction it may cause. Similarly, in Bangladesh mango- and lychee-based agroforestry systems are profitable regardless of the yield reduction with any selected crops except banana, which is profitable up to 30% loss. The jackfruit-based system is profitable up to 50% loss of most crops, but there is large variability in the mangium system, as rice, maize, sesame, turmeric and cucumber are profitable up to 30%, 20%, 40%, 10% and 10% crop yield reduction respectively. In contrast, banana is never profitable with mangium. From the information gathered during our semi-structured interviews of the non-agroforestry farmer groups, a comparison of means is used to investigate the conditional probability that a farmer may adopt tree-based farming given a set of influential factors. The mean values of the different influential factors, i.e. farmer age, education, land area, family size, income and credit availability, revealed no significant differences between those who have an interest in agroforestry and those who have not, in either country, except that interest in adopting agroforestry was very significantly associated with educational level for swidden farmers in Indonesia. Therefore, with this exception, there is no evidence that these factors have a significant influence on farmer choice of tree-based farming in our study areas, which is corroborated by the qualitative information obtained from FGD sessions that swidden and permanent monoculture are retained because they are deeply rooted in local traditions extending back over many generations. Profitability measured by NPV over a 30-year time period shows that farmers will achieve a positive economic performance by mixing trees and seasonal crops in agroforestry systems compared with seasonal agriculture in both countries. This finding holds across a wide range of percentage reductions in understorey crop production when trees become mature and their canopies close. Teak-based agroforestry systems, followed by durian, showed the best economic performance at the Indonesian site, both considerably outperforming seasonal crop-based farming systems. Agroforestry systems with two fruit tree species, mango and lychee, also showed good economic performance in Bangladesh. In the short term, however, before tree crops have reached maturity, permanent monoculture and swidden farms provide higher income, as seasonal crop farms generate quicker returns than agroforestry farms. Furthermore, when adopting tree crops, farmers have to accept reduced yields of understorey seasonal crops before receiving the increase in income from harvesting these tree crops. Farmers may also face other interacting risks, such as crop failures, fluctuating market demand and prices, pests and diseases, and climate change. Changing successfully to tree-dominated systems will require farmers to develop access to high quality tree
germplasm, tree management expertise, which may be lacking in government extension services, and market channels for tree products, which are generally different from those for annual crops.Nonetheless, a more ecologically diverse farming system yielding a wider range of products is more likely to be buffered against such risks over the 30-year time period assessed in this paper.This change in farming system to agroforestry may, however, have serious subsistence and cultural costs as the cultivation of seasonal crops is primarily for household subsistence consumption and is deeply rooted in their culture.The retention of seasonal crop farming by many farmers, despite the medium-/long-term economic advantage of adopting agroforestry demonstrated by the results of the present study, is likely to be explained by culture- and tradition-linked factors retaining a decisive influence on farmer preferences.This is also indicated by their retention of comparatively small plots of seasonal crops, despite this restricting the efficiency of the productive assets.Farmers are concerned about the loss of understorey crop production in agroforestry systems, however our results provide strong evidence that these will be compensated by the generation of cash income from tree products in the medium-term.Provided that farmers can afford to bear the losses up to the time that their trees have grown to harvestable maturity, they are likely to gain a net benefit by achieving a level of income from tree products that enables them to purchase essential needs including food.However, farmers may lack confidence in this shift in the basis of their livelihoods.Even if it is likely to increase their net income, they may feel more exposed to risks of market failure of their tree crops and regret the loss of cultural identity associated with the cultivation of specific traditional crops.Thus, smallholder farmers’ decision-making about whether to shift their food production system to agroforestry in place of subsistence crop production is based on cultural considerations as well as the trade-off between short-term and a longer-term benefits.Living costs are predicted to increase in both the studied countries, however as food security largely depends on income security, even in remote places, our economic analyses demonstrate that the higher income from tree-based farming has the potential to enhance food security.Incorporation of tree species selected for the local value of their products into food-crop-based subsistence agricultural systems can also enhance household well-being by consumption of a more diverse diet of higher nutritional quality, both from the harvested fruit and from foodstuffs that can be purchased with the income generated.In this sense, farming families may increase their food sovereignty through improved access to healthy and culturally appropriate food.The higher establishment costs of agroforestry systems than traditional agricultural alternatives indicated by the present study can be attributed to their distinction from established routines of seasonal farming.All of the farmers in our study site are poor17, therefore initial capital support could be helpful to facilitate local adoption of agroforestry.Furthermore, the farmers do not have full tenure rights to the land as it is owned by the state.Therefore, swidden farmers tend to establish many small swidden plots across the landscape to spread risk16, and this practice is viewed as a major cause of tropical deforestation.In contrast, 
agroforestry tends to be established in larger plots, reflecting the greater investment by households in this longer-term farming practice.Tenure security is an important factor influencing land use decisions.To adopt agroforestry instead of traditional seasonal cultivation, farmers need to invest substantial amounts of financial and labor resources.Insecure land tenure constrains such investments and has induced farmers to continue their traditional land use practices.To adopt tree-based agroforestry systems, farmers may also need to develop a different set of skills, knowledge and technologies, and the present study did find evidence of a strong positive association between education level and interest in adopting agroforestry within one group of farmers.Others argue that smallholder farmers cannot use improved technology when structural constraints are imposed by institutions.Institutions not only govern the processes by which scientific and technical knowledge is created, but also facilitate the introduction and use of new technology in agricultural production.The equally important role of infrastructure, including transportation facilities and access to market centres, in facilitating land use change has been emphasised by Reardon et al., Turkelboom et al. and Allan as they increase the potential income from new crops and technologies.In Lampung, Indonesia a team of socioeconomic, forestry, horticulture and livestock specialists determined that smallholder agroforestry systems and the productivity of those systems are limited by a lack of technical information, resources and consultation.Experience from across Indonesia shows that farmers’ previous agricultural knowledge, quality of land resources, proximity to markets and level of support received all play important roles in determining the technology adopted and subsequent success.Therefore, the motivation of self-interest − the desire to profit from their investment of time and resources − is invaluable for farmers’ success, once skills, knowledge, and institutional support have been secured.If these institutional and policy requirements can be met, then agroforestry systems have great potential as a ‘land sharing’ option in the marginal farmlands that efficiently combines provision of local food security and environmental services of benefit to a wider population, instead of the ‘land sparing’ separation of agriculture and forests.The economic assessment of tree-based faming in our research shows higher net present value than that of seasonal agricultural systems in both West Java and eastern Bangladesh.Trees also help diversify farm products, which can potentially improve household nutrition and welfare.In both locations, agroforestry is a practice that has already been adopted by some households and this establishes the set of tree species that are popular in each location to incorporate into food-crop-based agricultural systems.This represents diversification of farming based on a combination of locally favoured tree and agricultural crops.Nonetheless, the cultural value of retaining the practice of seasonal agriculture with a narrow set of traditional subsistence food crops still has the potential to inhibit farm diversification through agroforestry.This resistance to changing farming practice is likely to be reinforced by the inability of many households to cope with the short-term loss of food crop productivity during the tree crop establishment phase, before tree products can be harvested resulting in longer-term net 
benefits.Insecurity of land tenure compounds this risk.Therefore, to implement such an initiative on the ground, a strong and long-term institutional framework is needed to provide more secure land tenure, and short-term technical and financial support during the tree establishment phase.The success of this framework will be greatly facilitated by the development and implementation of government policy involving a broad cross-section of local people to incorporate their aspirations, and sensitivity to their cultural values, in the planning and decision-making processes.This will also require provision of technical extension based on expert knowledge of tree planting and management, which is likely to benefit from further research.Participatory research may play a particularly valuable role in the areas of plant breeding to match local needs and the ecological functioning of agroforestry systems.This could result in agricultural sovereignty and self-sufficiency being operationalized spontaneously by the farmers in a smallholder tree-based farming environment that could lead to increases in tree cover in the agricultural landscape.
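As a numerical companion to the NPV analysis described above (a 30-year horizon, a 10% discount rate, and a sensitivity test on understorey yield reduction), the sketch below shows the basic discounting arithmetic; all cash-flow figures are invented placeholders rather than values from the farm surveys.

```python
def npv(cash_flows, rate=0.10):
    """Net present value of yearly cash flows, year 0 first."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

YEARS = 30

def agroforestry_cash_flows(yield_reduction=0.0):
    """Hypothetical per-hectare cash flows (US$): understorey crops from year 1,
    tree products only after year 7; establishment cost in year 0."""
    flows = [-300.0]  # planting / establishment cost
    for year in range(1, YEARS + 1):
        crop_income = 250.0 * (1.0 - yield_reduction)
        tree_income = 400.0 if year > 7 else 0.0
        flows.append(crop_income + tree_income)
    return flows

def seasonal_cash_flows():
    """Hypothetical seasonal (swidden / monoculture) per-hectare cash flows."""
    return [0.0] + [300.0] * YEARS

print(f"Seasonal farming NPV: {npv(seasonal_cash_flows()):.0f}")
for reduction in (0.0, 0.2, 0.4, 0.6):  # sensitivity on understorey yield loss
    value = npv(agroforestry_cash_flows(reduction))
    print(f"Agroforestry NPV at {int(reduction * 100)}% yield reduction: {value:.0f}")
```

The structure mirrors the trade-off discussed in the text: early years favour seasonal cropping, while discounted tree income dominates over the full 30-year horizon unless yield losses become very large.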
One of the main causes of tropical forest loss is conversion to agriculture, which is constantly increasing as a dominant land cover in the tropics. The loss of forests greatly affects biodiversity and ecosystem services. This paper assesses the economic return from increasing tree cover in agricultural landscapes in two tropical locations, West Java, Indonesia and eastern Bangladesh. Agroforestry systems are compared with subsistence seasonal food-crop-based agricultural systems. Data were collected through rapid rural appraisal, field observation, focus groups and semi-structured interviews of farm households. The inclusion of agroforestry tree crops in seasonal agriculture improved the systems’ overall economic performance (net present value), even when it reduced understorey crop production. However, seasonal agriculture has higher income per unit of land area used for crop cultivation compared with the tree establishment and development phase of agroforestry farms. Thus, there is a trade-off between short-term loss of agricultural income and longer-term economic gain from planting trees in farmland. For resource-poor farmers to implement this change, institutional support is needed to improve their knowledge and skills with this unfamiliar form of land management, sufficient capital for the initial investment, and an increase in the security of land tenure.
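The farmer-group comparisons summarised above were made with Welch's unequal-variance t-test and one-way ANOVA in R; a minimal Python equivalent using scipy is sketched below, with invented farm-size values standing in for the survey data.

```python
from scipy import stats

# Invented farm sizes (ha) for illustration only.
agroforestry = [1.2, 0.9, 1.5, 1.1, 1.3, 1.0, 1.4]
swidden      = [0.8, 0.7, 1.0, 0.6, 0.9, 0.8, 0.7]
monoculture  = [0.2, 0.3, 0.2, 0.25, 0.15, 0.2, 0.3]

# Two-sample comparison with the Welch correction (unequal variances assumed).
t_stat, p_two = stats.ttest_ind(agroforestry, swidden, equal_var=False)
print(f"Welch t-test: t = {t_stat:.2f}, p = {p_two:.4f}")

# Three-group comparison, as used for the Indonesian farmer groups.
f_stat, p_anova = stats.f_oneway(agroforestry, swidden, monoculture)
print(f"One-way ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")
```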
269
New phagotrophic euglenoid species (new genus Decastava; Scytomonas saepesedens; Entosiphon oblongum), Hsp90 introns, and putative euglenoid Hsp90 pre-mRNA insertional editing
Euglenozoa are genetically and morphologically the most distinctive protozoan phylum.They are ancestrally aerobic, non-pseudopodial zooflagellates with a microtubule-rich pellicle, a unique complex feeding apparatus, tubular extrusomes, parallel centrioles attached within a deep ciliary pocket by three distinctive microtubular roots to the pellicle and FA, and cilia ancestrally with unique dissimilar latticed paraxonemal rods.Free-living Euglenozoa almost all have two centrioles and abound in virtually all freshwater, soil, and marine habitats; they can be phagotrophic, photosynthetic, osmotrophic, or depend on ectosymbiotic bacteria.Symbiotic in animals are the parasitic, often secondarily uniciliate, Trypanosomatida, and the tadpole-gut-commensal euglenamorphids that multiplied their centrioles to 3–7.Biciliate Euglenozoa also can parasitize animals or plants.Nuclear and mitochondrial genomes of all Euglenozoa have numerous unusual properties.Some of these bizarre oddities, such as mitochondrial genome pan-editing and euglenozoan nuclear mRNA biogenesis by universal nuclear trans-splicing of mini-exons onto immensely long multigenic transcripts, are clearly highly derived not primitive eukaryotic features.In contrast, many unique molecular features, some more like those of archaebacteria than other eukaryotes or apparently primitive in various ways, suggest that Euglenozoa may be the most divergent of all protozoan and eukaryote phyla and of exceptional evolutionary interest.This position of the root of the eukaryotic tree is also supported by some ribosomal multiprotein trees with prokaryote outgroups and by the uniqueness of 19 kinetoplastid kinetochore proteins – but questioned by similar mitochondrial multiprotein trees.Despite their key importance for understanding eukaryote early evolution, relationships among the four major ultrastructurally distinctive euglenozoan groups and therefore the nature of the ancestral euglenozoan remain unclear, as rDNA trees have been repeatedly contradictory.Phylogenetic trees for several proteins indicated that classes Kinetoplastea and Diplonemea are sister groups, contradicting earlier classification into subphyla and early 18S rDNA trees.Unlike the purely heterotrophic kinetoplastids and diplonemids, euglenoids of all nutritional modes are characterized by a pellicle supported by longitudinal or spiral proteinaceous pellicular strips as well as microtubules.Multiprotein trees based on 187–192 genes robustly confirm the Kinetoplastea/Diplonemea clade.But whether this clade is sister to or evolutionarily derived from the cytologically more diverse euglenoids has not been established because neither basal euglenoids nor the enigmatic Postgaardea have been subjected to large-scale sequencing, so are absent from gene-rich trees.The status of the anaerobic class Postgaardea of heterotrophs densely clothed in episymbiotic bacteria was recently partially clarified by rDNA sequencing Calkinsia and a new genus Bihospites: Postgaardea form a clade so deep branching within Euglenozoa that its precise position is uncertain.Calkinsia, classically thought a euglenoid, was transferred to Postgaardea by Cavalier-Smith.This clade was needlessly renamed Symbiontida, but ultrastructural similarity of Calkinsia, Bihospites, and Postgaardi supports all three belonging in Postgaardea.The discovery that Bihospites has relatively complex FA with two curved ‘rods’ of novel structure, led Yubuki et al. 
to suggest that postgaardeans evolved by substantial modifications of euglenoid pellicle and FA.An 18S rDNA tree weakly suggested that euglenoids may be paraphyletic and both postgaardeans and the diplonemid/kinetoplastid clade might be derived from them, but another did not even show the diplonemid/kinetoplastid clade, whose reality was firmly established by 192-protein trees.18S rDNA trees of Yamaguchi et al., Lax and Simpson, and Chan et al. also placed postgaardiids weakly within euglenoids, and Lee and Simpson controversially treat them as euglenoids, though trees of Chan et al. equally weakly put them between diplonemids and kinetoplastids.Those of Chan et al. and Lax and Simpson placed Kinetoplastea as sister to all other Euglenozoa as in early distance trees, which 187-protein trees including both prokinetoplastids and metakinetoplastids decisively refute.Poor resolution of the deep branching order of Euglenozoa on 18S rDNA trees suggests rapid early radiation, but may be exacerbated by insufficient taxon sampling of early phagotrophic lineages and unequal rates of evolution with extra-long branches for Entosiphon and rhabdomonads.To reduce or circumvent these problems we sequenced 18S rRNA and Hsp90 genes from three previously undescribed phagotrophic euglenoids isolated and studied ultrastructurally some years ago by Vickerman so as to produce trees for Euglenozoa with broader taxon sampling.We describe all three as new species and show novel ultrastructural features for two.One is a Scytomonas from manure, unique among euglenoids in feeding whilst attached by its cell posterior to surfaces with actively beating single cilium that draws bacteria to its mouth.Most phagotrophic euglenoids feed whilst actively gliding and generally have two cilia.To test the widespread assumption that Euglenozoa were ancestrally biciliate and establish the ancestral phenotype for euglenoids it is important to know whether Scytomonas, the only genus with a single cilium and centriole, is primitively uniciliate or evolved from biciliate ancestors by losing the second centriole; this will help reconstructing the last common ancestor of all eukaryotes from which Euglenozoa and excavates probably diverged.Scytomonas proves to be phylogenetically sister to Petalomonas as here revised, showing that its single cilium and centriole is a secondary reduction."Thus the last common ancestor of all Euglenozoa had two centrioles – also true for eukaryotes as a whole, whether the eukaryote tree's root is really between Euglenozoa and Excavata, as we argue, or between discicristates and other eukaryotes as another ribosomal protein multigene tree suggests, or between podiates and other eukaryotes as a different multigene set suggests.Our second new euglenoid is a very deep branching relative of recently described Keelungia pulex, differing sufficiently to merit a new genus Decastava; we establish new order Decastavida for them.We discuss the evolutionary significance of the novel ultrastructure of Decastava and Scytomonas saepesedens.The third new euglenoid is Entosiphon oblongum, closely related to Entosiphon sulcatum, but differing in shape and rDNA and Hsp90 sequences.Its characterization clarifies past conflicts in the shape depicted for E. 
sulcatum and suggests that two separate species were often lumped under one name. Previously Entosiphon rDNA exhibited serious long-branch problems that yielded highly conflicting trees, preventing its accurate phylogenetic assignment. Our Hsp90 trees are not seriously affected by long-branch artefacts, suggesting that this protein may be particularly useful for euglenozoan phylogeny and that Entosiphon is probably sister to all other euglenoids. Our discovery of frameshifts in the Hsp90 genes of both Entosiphon and Decastava gives the first evidence for putative nuclear insertional RNA editing in Euglenozoa. Cultures. Scytomonas. A sample from an exposed heap of horse manure, taken about eight weeks after stacking, was steeped in Cerophyll-Prescott medium and after one week surface film material containing an abundant Vahlkampfia species was transferred to cerophyll agar plates. Scytomonas appeared when plates were flooded with fresh Cerophyll-Prescott infusion and was separated from accompanying amoebae by dilution. Further cultivation was with the accompanying mixed bacterial flora in 3–4 mm depths of Cerophyll-Prescott medium in flat 50 ml Falcon flasks, with subculture at fortnightly intervals. Entosiphon oblongum and Decastava edaphica were isolated from Scottish soil. For light microscope observations and DNA extraction by standard methods, all three euglenoids were grown in Volvic™ mineral water in Petri dishes with an added boiled wheat grain to feed endogenous bacteria as food for the euglenoids. Microscopy. Cells were observed by phase contrast and differential interference microscopy and photographed using a ×60 water-immersion dipping objective on a Nikon Eclipse 80i microscope and a Sony HDV 1080i Handycam. They were prepared for scanning and transmission microscopy by fixation in 2% glutaraldehyde and subsequent processing as described by De Jonkheere et al. PCR and sequencing. 18S rDNA was amplified with standard eukaryote-wide primers and sequenced directly. For Hsp90, nested amplification using modified primers was followed by agarose gel electrophoretic purification of bands and cloning into the pSC-A vector using the Strata-Clone PCR Cloning kit before ABI automated sequencing. Phylogeny. We used macgde v. 2.4 for manual alignment and selection of well-aligned regions by eye for analysis. Phylogenetic analysis was by RAxMLHPC-PTHREADS-SSE3 v. 7.3.0 using the GAMMA model with four rate categories and fast bootstraps, and by the evolutionarily more realistic and often more accurate site-heterogeneous CAT-GTR-GAMMA model of PhyloBayes v.
3.3 with two chains for thousands of generations after log likelihood values plateaued, early pre-plateau trees being removed as burnin before summation of all other trees.RAxML used the GTR substitution model for rDNA trees and the LGF substitution model for Hsp90.For Hsp90 all Bayesian trees converged with maxdiff <0.3, mostly 0.1 or less.For 18S rDNA we manually aligned 18S rDNAs of 217 Euglenozoa plus 481 outgroup taxa representing all major eukaryote groups and selected by eye 1577 or 1541 reasonably well aligned nucleotide positions for preliminary phylogenetic analysis, depending on whether the highly divergent Percolozoa were excluded or included.To investigate the effect of taxon sampling and site selection we ran 16 rDNA trees with different taxon and sequence samples and algorithms.The methods, rationale, and results for these numerous 18S rDNA trees are described in detail in a separate paper on euglenozoan phylogeny and higher classification; some of these Bayesian trees converged well and some did not.Simply to show the positions of our new species, a composite figure combining results from the 282-taxon tree and the 287-taxon tree including Entosiphon is shown here as supplementary Fig. S1.Trees were prepared for publication using FigTree v. 1.2.2 and Eazydraw.The flagellate is 8–14 μm long and 5–10 μm maximum width with a single cilium ∼20 μm long.It has three modes of life.It glides along the substratum in characteristic Petalomonas-like fashion with the single cilium held out in front, only its terminal quarter beating.Its anterior end forms a pronounced collar, ventrally flattened, seemingly as a shovel-like scraper of bacteria from the substratum.Alternatively it attaches to the substratum or to bacterial debris by the posterior end whilst the cilium beats continuously and vigorously along its length, drawing bacteria towards the anterior mouth region.Stages in binary fission are observed in this sedentary form.A third phase is non-motile, more phase-dense than the other two, has fewer refractile inclusions and no cilium; such forms become more abundant as the culture ages.On subculture these resting forms readily give rise to ciliated individuals.A feeding cell can ingest quite large bacterial prey up to half its own size, yet has a rigid shape – unlike most macrophagous euglenoids having deformable pellicles.The body is typically pyriform and lacks the noticeable flattening of Petalomonas spp.Scanning and transmission electron microscopy show that the pellicle has five broad strips lacking grooves/troughs found in most euglenoids other than Petalomonas.There are pleat-like thickenings of the pellicle strip joints similar to those of Petalomonas cantuscygni, but unlike that species the surface lacks projecting ridges, being most similar in SEM to the biciliate Petalomonas mediocanellata that was incorrectly assumed to lack strips.Strips are reinforced by underlying mts.Around the cilium the anterior end of the cell forms a pronounced collar, more extended anteriorly on the ventral flattened side during gliding.Bacteria released from the substratum during gliding are trapped by the cilium membrane for transport to the cytostome for ingestion.Only a single cilium can be seen in the trophic non-dividing flagellate; no second centriole was seen.From transmission micrographs it arises from the base of a deep ciliary pocket extending as much as half way down the body.No thick paraxonemal rod running alongside the axoneme of nine mt doublets was seen; but vestiges of such a 
structure are present implying a very slender paraxonemal rod.On the opposite side to the putative rod vestige are densities suggestive of the dense sheets in similar positions in Euglena.The cilium transition zone is long and contains much dense material.The single contractile vacuole empties into the ciliary pocket close to its base.The membrane lining the pocket is reinforced by mts, but we did not serial section to determine their exact arrangement: apically ten linked microtubules may be the dorsal row continuous with the five joint-associated pairs of dense pellicular microtubules; three nearby mts may be the dorsal centriolar root.Near the junction of the cytostome and ciliary pocket are about five reinforced mts at the ciliary pocket membrane and an adjacent mt pair that may be parallel mt loop mts that support a ridge separating the ciliary and cytostomal subregions of the reservoir.Ventrally to this ridge, the ciliary pocket is elongated by an extension with nearby fibrillar densities and a dense arc strengthening the membrane near the cytostome on the opposite side from the putative MTRs.A prominent Golgi apparatus composed of ∼15 cisternae is beside the ciliary pocket."Arising from the ciliary pocket, just inside its opening on the cells left, the cytostome leads into a cytopharynx curving to the cell's right, and reminiscent of that of kinetoplastids in its simplicity.It consists of a membrane-bound channel supported by at least four reinforced supporting mts and an arc of membrane thickening material with periodic dense structure on opposite sides of the cytopharynx, as in the biciliate petalomonad Calycimonas.Two curved microfibrillar arcs are associated with the cytostome; the less obvious one nested within the more robust arc subtends a mixture of dense fibrillar material and microtubules, which we suspect may be MTR and PML looped over from the ciliary pocket.We suspect that the outer microfibrillar arc supports the collar.Elaborate mt-associated FA rods found in non-petalomonad phagotrophic euglenoids are absent.Food vacuoles containing bacterial remains abound in the cytoplasm.A hint of the long ciliary pocket/cytopharynx complex is sometimes evident on differential interference contrast micrographs.The interphase nucleus has typical euglenoid structure with chromatin masses attached to the inner nuclear membrane and a large central nucleolus.A not-so-common feature is the presence of prominent bacterial symbionts in the cavity of the nuclear envelope and associated rough endoplasmic reticulum; one also sees bacteria within cytoplasmic vesicles possibly indirectly connected to the nuclear envelope lumen.Among the most obvious cytoplasmic inclusions are branches of the mitochondrial network with discoid cristae.Hyperabundant “secretory vesicles” are probably of Golgi origin and their external secretion might play a part in establishing anchorage on the substratum.Less easily identified membrane-bound bodies with electron-dense linings and a clear interior may be paramylon bodies or acidocalcisomes.Decastava edaphica resembles some Ploeotia species in the light microscope in its ellipsoidal shape, but unlike in the type P. vitrea longitudinal striations are not obvious."Its anterior asymmetrically undulating cilium is less than one body length and posterior straight gliding cilium projects from the rounded posterior by about a third of the body length in living cells, but less in the SEM of Fig. 7B. 
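Stepping back to the tree-building methods described in the Phylogeny section above (RAxML 7.3.0 under GTR/LG + GAMMA and PhyloBayes 3.3 under CAT-GTR-GAMMA), a hedged sketch of how such runs are typically driven is given below. The alignment file name is a placeholder, and the command-line flags are quoted from memory of those program versions' manuals, so they should be checked against the documentation before use.

```python
import subprocess

ALIGNMENT = "hsp90_aligned.phy"   # placeholder name for the hand-curated alignment

# Maximum-likelihood tree with rapid bootstrapping (RAxML 7.x-style invocation).
# -f a : rapid bootstraps plus best ML tree; -m PROTGAMMALGF : LG + GAMMA with
# empirical frequencies for the Hsp90 protein data (GTRGAMMA would be used for rDNA).
subprocess.run([
    "raxmlHPC-PTHREADS-SSE3", "-T", "4",
    "-f", "a", "-m", "PROTGAMMALGF",
    "-p", "12345", "-x", "12345", "-N", "1000",
    "-s", ALIGNMENT, "-n", "hsp90_ml",
], check=True)

# Two independent PhyloBayes chains under the site-heterogeneous CAT-GTR model.
# The chains keep sampling until stopped, so they are launched in the background
# and terminated once the log-likelihoods have plateaued.
chains = [
    subprocess.Popen(["pb", "-d", ALIGNMENT, "-cat", "-gtr", name])
    for name in ("hsp90_chain1", "hsp90_chain2")
]

# After the chains are stopped, compare them: bpcomp reports maxdiff, which the
# study required to be < 0.3 (mostly <= 0.1) before summing post-burn-in trees.
# subprocess.run(["bpcomp", "-x", "1000", "10", "hsp90_chain1", "hsp90_chain2"], check=True)
```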
Cilia emerge from a nearly transversely oriented cleft-like reservoir that opens on the cell's right, the posterior cilium curving sharply backwards obliquely across the cell's ventral surface.Both cilia have typical euglenoid ultrastructure with a thick laminated posterior paraxonemal rod and more slender anterior one with a tubular lattice.Its ventral non-protrusible feeding apparatus has two dense hollow rods, further apart at the front where they are joined by dense connectors, converging towards the cell posterior; the left rod begins more anteriorly and the right one extends further posteriorly.Though we did not obtain EM sections though the cytostome, from through-focal DIC microscopy we suspect that there are two connectors, a straighter supporting its ventral lip and a curved one on its dorsal side, which could make a nearly continuous dense ring around the mouth.Electron microscopy shows that FA rods are hollow and composed of an extremely dense homogeneous matrix surrounding a lighter lumen; each has two dense lateral flanges, dissimilar in cross section and orientation.The inner rod alongside the ciliary cleft has a very long inner flange that at least at its anterior extends far into the cell interior, nearly reaching the opposite side and probably supporting that entire flank of the cleft.Its opposite flange projects much less and is thinner, sharply tapered, and ends at a slight bulge in the pellicular strip at the mouth of the cleft; this strip is the longest central ventral strip, tapers least anteriorly, and has a short anterior notch at the point of posterior ciliary inflection, which extends backwards just over a fifth of its length.The right rod is on the opposite side of the cytopharynx and its flanges are oriented transversely to the ventral surface of the cell.Its outer flange, associated with the extreme left edge of the next ventral strip, is thinner and more tapered, but longer than in the inner rod; its inner flange is shorter and thicker, somewhat curved with a cusp like projection.At least six dense, relatively straight vanes are associated with the cytopharynx, four arranged as two parallel pairs; one is markedly broader than the others but we did not determine their precise number or arrangement.The cemented feeding comb present on the other side of the ciliary cleft consists of two dense arcs, the inner bearing about eight dense teeth at one end on its concave face.The ciliary cleft has a narrrow basal curved extension associated with several dense patches of undetermined structure.The 10 longitudinal pellicular strips are of approximately equal width; the three making up most of the ventral surface are somewhat wider than the four forming the dorsal surface; these seven upper and lower strips are predominantly flat or slightly concave, only the edge strips of the slightly flattened cell being obviously convex.At joints one strip overlaps the other, the outer part being cusp-like with a pointed apex in cross section.One broad longitudinal band of 8–13 mts underlies the inner part of each strip, occupying nearly half its width at the joint edge.Numerous tubular extrusomes are on either side of the ciliary cleft and alongside the FA.Mitochondria are discicristate.Food vacuoles indicate phagotrophic feeding on bacteria.Cysts have a thick tripartite wall; projections form the outermost dense layer,Entosiphon oblongum differs in shape from the original Entosiphon sulcatum, being oblong not ovoid.It closely resembles the oblong flagellate identified as E. 
sulcatum by Lemmerman, suggesting excessive past lumping.Because of its shape it was initially taken for a Ploeotia, but careful study showed that its siphon is extensible but less obviously than in E. sulcatum.Moreover it has the three rods characteristic of the siphon; only two are present in the FA of Ploeotia vitrea, Decastava, Keelungia, and Serpenomonas.Hsp90 groups Scytomonas with Petalomonas cantuscyngi as a maximally supported petalomonad clade; this branch is longer than any others in Euglenozoa, showing accelerated evolution, but consistently groups with Decastava as a weakly supported stavomonad clade.This proves that Scytomonas is secondarily uniciliate.Petalomonads were never the deepest branching euglenoids.18S rDNA trees robustly place Scytomonas as sister to an environmental sequence from Arctic sand, jointly sister to Petalomonas cantuscygni plus a marine microbial mat sequence or to Petalomonas alone; this Scytomonas/Petalomonas clade is robustly sister to another Arctic marine sand sequence.That joint clade is maximally supported as sister to one comprising Biundula sphagnophila, Notosolenus urceolatus, and four environmental sequences, this large clade being firmly sister to Notosolenus ostium as a petalomonad clade.Branching within Euglenozoa is identical for Hsp90 and 18S rDNA trees except for Entosiphon, whose rDNA sequences are so divergent from other Euglenozoa that they are very hard to align and invariably form the longest branch on the tree and appeared in six conflicting positions with different algorithms and taxon samples.All trees group holophyletic Euglenophyceae and heterotrophic Peranemea as a maximally or near-maximally supported clade, here called Spirocuta because of their spiral, often metabolic, pellicle that contrasts with the invariably rigid longitudinal barrel stave-like pellicle of basal euglenoid lineages.Hsp90 and CAT rDNA trees that include Percolozoa in outgroups show Entosiphon as the most divergent euglenoid, sister to all others; most other trees put Entosiphon as sister to stavomonads or to Keelungia only: one CAT tree put them as sister to the Ploeotia/Spirocuta clade, another as sister to Spirocuta only; one ML tree, clearly artefactually, put them as sister to Postgaardea.No rDNA position for Entosiphon has strong support, but all contradict earlier inclusion of Entosiphon in Peranemida, as does Hsp90 strongly.In most cases, inclusion of Entosiphon or not did not alter tree topology, just support values for deepest euglenoid branches.The Entosiphon oblongum sequence is almost identical to one from an Entosiphon clone we previously isolated from South African soil, but did not regard as E. sulcatum as its FA was not obviously protrusible.These two sequences almost certainly represent the same species and are quite distinct from the two sequences of the CCAP E. ‘sulcatum’ strains, and from a third partial sequence from activated sludge that probably represents a third Entosiphon species.Apart from three differences at the beginning and two at the end of the molecule there are only 16 internal nucleotide differences between E. oblongum and AY425008, several of which appear to be incorrectly called numbers of repeated nucleotides in AY425008, an error to which ABI software is prone.Though there may be a few genuine differences we can now reasonably identify the South African strain as E. oblongum also; as von der Heyden et al. 
did not video it, we would have overlooked its very slight mouthpart protrusion.The Hsp90 sequence of Entosiphon oblongum is clearly different from that of E. sulcatum, confirming that they are separate species.For both Entosiphon species Hsp90 is conservative and forms a short branch on the tree that is the deepest branching within Euglenozoa, and never shows any tendency to group with Peranema, with which it was formerly classified.Except for the position of Entosiphon, nearly all features of the euglenoid part of the tree were the same for both methods with all eight alignments.Notably, Ploeotia cf. vitrea never groups with Stavomonadea, suggesting that Decastava, Keelungia, and Serpenomonas costata are rightly excluded from Ploeotia.Instead P. cf. vitrea is sister to Spirocuta, the robust clade comprising subclades Peranemida, Anisonemia, Teloprocta and Euglenophyceae.This contradicts a previous study placing P. cf. vitrea below the last common ancestor of all other Euglenozoa plus postgaardeans.When Entosiphon is excluded from the analyses, Decastava and Keelungia are significantly, sometimes strongly, supported sisters; this relationship persists if Entosiphon is added except in samples where Entosiphon is sister to Keelungia.Except when Entosiphon thus intrudes, Stavomonadea is a clade with stronger CAT than ML support.In the absence of Entosiphon, Petalomonadida is sister to Decastavida on CAT trees, and Serpenomonas the deepest-branching stavomonad; with ML Decastavida can be sister to Petalomonadida or to Serpenomonas depending on taxon/sequence sample, both alternatives insignificantly supported.The marked separation of Serpenomonas from Decastavida and Ploeotia is consistent with its radically different pellicle consisting of alternating broad and narrow stave-like strips.To reflect its unique pellicle morphology amongst euglenoids and deep phylogenetic separation from all other orders, Serpenomonas is now put in the new order Heterostavida.Entosiphon was never sister to Petalomonadida, unlike some previous rDNA trees.As Entosiphon can appear in at least seven different places on 18S rDNA trees, they are almost useless for placing it relative to other euglenoids without independent evidence to interpret them.As Cavalier-Smith explains, Hsp90 and comparative anatomy agree in strongly showing Entosiphon to be the sister to all other euglenoids.This is consistent with ML being often less accurate and other alignments having too little information for CAT to be accurate, and with our view that it is best to include as many positions and taxa as practicable, use a more broadly representative outgroup than is usual, and a heterogeneous model for 18S rDNA trees for difficult cases.Spirocuta show one significant difference from the previously most comprehensive euglenoid tree: the two Dinema species do not group together; Dinema platysomum alone is sister to Anisonema, but Dinema sulcatum branches one node more deeply as their sister; this paraphyly is usually weakly supported.Our trees are slightly contradictory concerning Dinema/Anisonema, previously an insignificantly supported clade; all 8 of our best ML trees show Anisonemida as a clade as on Fig.
S1, as do five of 8 CAT trees, but it was paraphyletic on three CAT trees as Dinema sulcatum branched even deeper below other anisonemids plus natomonads; thus with CAT but not ML its holophyly is sensitive to taxon sampling.Our trees strongly confirm that two phagotrophs formerly misidentified as Heteronema never group together.Neometanema is invariably sister to osmotrophic Rhabdomonadina.However, Teloprocta scaphurum is either sister to the photosynthetic Euglenophyceae as Lax et al. and Lee and Simpson found, or to Anisonemia or sister to Rhabdomonadina.Six independently sequenced clones of Decastava edaphica Hsp90 genes all had spliceosomal introns with typical splice junction sequences and lengths from 67 to 103 nucleotides.In both D. edaphica and Entosiphon oblongum, Hsp90 genes had multiple frameshifts that would result in truncated proteins with incorrect C-terminal amino acid sequences unless corrected either by insertional RNA editing or by translational frame-shifting.In Decastava six frameshifts could in principle be corrected by single nucleotide insertions after nucleotides 917, 924, 953, 974, 978, 999.As these are highly clustered within 83 nucleotides, a single short guide RNA like that responsible for U-insertional editing in euglenozoan mitochondria could correct them all (see the illustrative sketch below).All six independent clones of Decastava showed the same six frameshifts so they cannot be attributed to sequencing errors.All six sequenced Hsp90 clones of Decastava had differences in nucleotide sequence at 24 positions: 23 single nucleotide substitutions and one three-nucleotide deletion that removes one lysine from a run of four.The seven of these polymorphisms present in more than one clone cannot reasonably be sequencing errors.There is no reason to think that any are sequencing errors; two thirds of them do not change the amino acid sequence and those that do all do so at positions that are evolutionarily variable in euglenoids.We interpret them as natural genetic polymorphisms.Interestingly, 21 are clustered near the 5′ end of the gene and only two near the 3′ end, with only two in the major middle region between nucleotides 632–1350.No more than two alternative nucleotides were present at any one site.However, the multiply represented polymorphisms at positions 9, 27, 298, 303, 308, 530, 1584 occur in different combinations amongst the clones, suggesting intragenic recombination.The two independently cloned Hsp90 sequences of Scytomonas also show two single nucleotide differences near their 3′ end.Though inspection of the sequence traces shows they are unambiguous, both are in a more conserved region and we cannot be sure they are not cloning or PCR errors; therefore we made a composite sequence for the Fig. 9 tree choosing the amino acid at both positions that matches its closest relatives.Entosiphon oblongum sp. n. Cavalier-Smith and Vickerman.Syntypes: illustration Fig. 1J–S; culture CCAP 1220/2; sequences GenBank 18S rDNA KP306754, Hsp90 KP306762.Diagnosis: Unlike E. sulcatum cell oblong not ovoid and siphon extends only about 1 μm not ∼2.5 μm; anterior more truncated than in E. ovatum, the most similar species.Cell length rather variable: 17–28 μm; width 5–8 μm; shorter than E. sulcatum, much longer than the broadly oval E. applanatum, and the still smaller marine E. limenites.Unlike E. sulcatum, polyaulax, limenites, and striatum, which have prominent longitudinal strips, and to a lesser extent E.
ovatum, strip pattern not evident by light microscopy.Anterior cilium projects one cell length, often basally straight and smoothly curved towards siphon side; straight, posterior gliding cilium projects 1/6 to ¼ cell length.Etymology: oblongus L. oblong, rather long.Type locality: soil, Falkland Palace garden, Scotland.Comment 1. This strain's 18S rDNA differs by 115 nucleotides from strain CCAP 1220/1A and 122 nucleotides from CCAP 1220/1B, both identified as Entosiphon sulcatum, and is clearly therefore a different species.However, as no light micrograph is available for the sequenced CCAP strains we do not know if they were correctly identified by their isolator E. A. George, though the two sequences are sufficiently similar for them to be considered one species.We need more Entosiphon sequences supported by documented microscopic identification, as for E. oblongum.Micrographs of the E. sulcatum yielding the Hsp90 sequence are consistent with that strain being the same species as A. sulcatum, but insufficiently clear to show whether it had 10 or 12 strips.Comment 2.Frequent misidentification or overlumping of Entosiphon sulcatum: We do not agree with Ritter von Stein and most subsequent authors that his well-described Entosiphon sulcatum with posterior cilium projecting only about half a cell length is conspecific with Anisonema sulcatum of Dujardin, with posterior cilium projecting one body length and no siphon noted.The two species differ greatly in cell length, being similar only in shape and being furrowed, rigid, and biciliate and probably congeneric.They may differ also in pellicle strip number: Ritter von Stein showed 8 predivision and 4 postdivision strips on one side of the cell, implying that the unduplicated number was 8 in total, whereas Dujardin drew 5 strips on one side.Later light microscopists noted 8, 6–12 or 12 strips in cells identified as E. sulcatum.Huber-Pestalozzi accepted E. sulcatum Stein supposedly with 4–8 strips as a separate species from E. ovatum Stokes, 1885 with supposedly 10–12.Mignot argued that euglenoid species have a constant strip number and E. sulcatum really has 12, as in his own strain, even though that disagreed with both Dujardin and Ritter von Stein; he suggested other numbers were observational errors.Different strains studied ultrastructurally have genuinely different strip numbers.Those of Belhadri and Brugerolle and Triemer both clearly show 10 old grooves/strips and 10 young ones in cells in cytokinesis, so both had 10 unduplicated strips.They could therefore be A. sulcatum of Dujardin, but not E.
sulcatum of Ritter von Stein; yet one cannot be sure as neither paper had light micrographs of living cells.By contrast three other independently isolated strains all had 12 strips, and probably therefore represent different undescribed species from those with 10.Supporting this are systematic qualitative differences in pellicle structure between the 10- and 12-strip strains.All three 12-strip strains have predominantly relatively shallow, broad grooves formed by 12 S-shaped pellicle strips; in one these are fairly equal dimensionally, and the pellicle of this strain resembles that of Lentomonas, which differs mainly by having a flattened ventral face, more than do the other two whose grooves are heterogeneous in appearance with two separated by three strips being much deeper than others, suggesting that these 12-strip strains may represent two distinct species.By contrast the 10-strip strains generally have much deeper, narrower grooves that all or mostly appear in transverse section as near-circular with only a very narrow opening overhung by strongly projecting lips that are absent in all 12-strip strains.The groove lips are asymmetric; in the Triemer strain the lip on one side bears an obvious small subgroove, making it appear very similar to the asymmetric bifurcate strip suture region of Ploeotia vitrea and Serpenomonas costata; that of Belhadri and Brugerolle has a similar but less obvious structure, as does one edge of each strip in the Witold and Mignot strains; even the Solomon et al. strain has weak densities that could be reduced versions of similar strip joints.The circular cross section grooves of the 10-strip Entosiphon therefore superficially resemble the circular cross section grooves of Serpenomonas costata, but as Serpenomonas has asymmetric fork structures on both lips this groove shape similarity is superficial convergence.Curiously, Triemer and Fritz Fig. 1 is an SEM seemingly with ∼12 strips from a culture isolated at the same time and place as the 10-strip one of Triemer, suggesting that two different clones were involved – Triemer and Farmer show a different 12-strip cell.A third E. ‘sulcatum’ strain from the same locality apparently had 12 strips with alternate grooves differing systematically in depth, unlike the rest, and may be yet another species.The most satisfactory solution on present evidence is to accept Entosiphon sulcatum as having 10 grooves and strips and establish new names for 8- or 12-strip species when supporting sequences are available.Hollande showed figures of chubby ‘sulcatum’ cells like those of Dujardin and Ritter von Stein apparently with 10 strips and another more elongate cell similar to E. ovatum cells with apparently 12, so Mignot oversimplified in saying Hollande's observations ‘confirm’ 12; even he unwittingly probably studied two species.E. ‘sulcatum’ drawn by Preisig was only 22 μm but was anteriorly more pointed and less truncated than E. sulcatum of Ritter von Stein and E. oblongum; it is morphologically more similar to E. ovatum or a close relative.Lackey wrote that Entosiphon cuneatiformis ‘bears little resemblance to the other five species of the genus’; it probably should be a new genus, as possibly should E.
planum with a unique ventral groove.Entosiphon applanatum Preisig is very distinct, but not the same as Lentomonas ‘applanatum’ of Farmer and Triemer).Entosiphon wrightianum with two chromatophores and apparently no cilia is not a euglenoid, possibly a green alga.In accord with the view that most flagellates named by Skvortzov are too inadequately described for reidentification we did not compare E. oblongum with the 44 nominal Entosiphon species he named from 1957 to 1970.As the genus may be over a billion years old it could have speciated considerably.New order Decastavida Cavalier-Smith.Diagnosis: Posterior-cilium gliders with 10 longitudinal pellicular strips of approximately equal width, unlike Serpenomonas; strip joints smooth without projecting bifurcate ridges or grooves.Feeding apparatus with two dense hollow rods; dorsal jaw support strongly cemented; inner pharyngeal rod adjoining cytopharynx with prominent lateral dense flanges.Etymol: as for Decastava.New family Decastavidae Cavalier-Smith.Diagnosis: eight or nine unreinforced microtubule pairs loop from dorsal jaw support to cytostome; outer rod with prominent flanges.Type genus:Decastava gen. n. Cavalier-Smith.Diagnosis: 10 strip joints asymmetrically cusp-like in cross section with pointed non-bifurcate apex.Cytostome slit-like, separate from reservoir canal.Pharyngeal rods both with prominent lateral dense flanges, not just on the inner rod as in Keelungia, the one flanking the reservoir beginning more anteriorly and with very long inner flange.Etymol: deca – L. combining form of 10; stave E. from the resemblance of the strips to barrel staves.Type species D. edaphica:Decastava edaphica sp. n. Cavalier-Smith and Vickerman.Syntypes: illustration Fig. 1F–I; culture CCAP 1265/2; sequences GenBank KP306753 KP306756-KP306761.Diagnosis: Rigid ellipsoid biciliate gliding on posterior cilium projecting one third to under half a cell length; anterior cilium ∼10 μm beats spirally with strong kinks.Cells 12–13 μm long, ∼7 μm wide.Anterior dome-like connector of feeding rod ∼1.7–2.3 μm wide, about ¼ to 1/3 body maximal width.Pharyngeal rods with a single microtubule row facing eight vanes.Similar species: in LM like Ploeotia but no visible striations.Etymology: edaphos Gk ground, because from soil.Type locality: soil, Sourhope Scotland.Order Petalomonadida.Family Scytomonadidae:Scytomonas saepesedens sp. n. Cavalier-Smith and Vickerman.Syntype: illustration Fig. 1A–E; sequences GenBank 18S rDNA KP306755, Hsp90 KP306763–KP 306764.Diagnosis: Pyriform sedentary or gliding uniciliates; feed on bacteria via lashing cilium whilst cell attached basally to substratum more often than during gliding; also with dense granular rounded resting stage.Pellicle of 5 strips with smooth slightly dense sutures associated with cortical microtubule bands; no surface ridges or furrows; 8–14 μm long and 5–10 μm maximum width; single cilium ∼20 μm emerges from deep reservoir over half cell length; long transition zone with dense contents.Single contractile vacuole near base of ciliary pocket.Type locality: Horse manure, Scotland.Etymology: saepe L. often sedens L. 
sitting; because more often sedentary than gliding.Comparisons with most similar species: Scytomonas pusilla Ritter von Stein, 1878 was about 14 μm, with 20 μm cilium and essentially the same shape.Copromonas subtilis Dobell from frog gut was about 16 μm; though he studied it extensively in vivo he did not report sedentary feeding behaviour and it fed whilst gliding; unlike our strain cysts were sometimes seen, and its shape was never shown as pinched in concavely at the anterior end as in our strain.Lemmerman synonymized C. subtilis with S. pusilla, which has been widely accepted.However he overlooked that the nucleus of Scytomonas pusilla Ritter von Stein, 1878 was somewhat in the anterior half of the cell and that of C. subtilis in the posterior end.For that reason and because of the non-pinched-in anterior end of C. subtilis we reject the species synonymy, but they are probably congeneric, so we make a new combination Scytomonas subtilis comb.n. Cavalier-Smith.The strain identified as S. pusilla by Mignot was 12 μm × 6 μm; from its shape and tadpole gut habitat it was probably S. subtilis.Neither Mignot nor others who observed S. pusilla from soil reported sedentary feeding behaviour such as predominates in our strain.Therefore, and because our strain is somewhat smaller, especially under culture conditions in Oxford, we made it a new species.This distinction needs testing by sequencing other non-sedentary pusilla-like strains, as does the synonymisation of Copromonas by sequencing gut symbionts.Petalomonas minuta Hollande, 1942, 6–10 × 4–5 μm, with ventral groove, had cilium about the cell length or slightly longer; Petalomonas poosilla was 5 × 2–3 μm, no surface structures visible, cilium ∼1.5× cell length.New genus Biundula Cavalier-Smith.Diagnosis: heterotrophic phagotrophic euglenoids with single emergent anterior cilium.Differ from Petalomonas in dorsal and ventral surfaces both having 2–8 smooth undulations when seen in TS, which in most species show mirror symmetry about their broad axis.Unlike Petalomonas and Scytomonas, pellicle appears continuous without obvious sutures between discrete longitudinal strips.Type species Biundula sphagnophila comb.n. Cavalier-Smith.Basionym Petalomonas sphagnophila Christen.Etymol: bi – L. two; undula L. little wave, referring to undulating dorsal and ventral surfaces.Other new combinations: Biundula sulcata comb.n. Cavalier-Smith Basionym Petalomonas sulcata Stokes 1888; Biundula sinica comb.n. Cavalier-Smith Basionym Petalomonas sinica Skvortzow, 1929; Biundula septemcarinata comb.n.
Cavalier-Smith Basionym Petalomonas septacarinata Shawhan and Jahn, 1947; as that change appears to be in prevailing use and Huber-Pestalozzi attributed it to Shawhan and Jahn we deem it a justified emendation under Article 33.2.3.1.of ICZN; that change complies with Article 60.8 of ICN for algae, fungi and plants, it seems valid under both codes).Comment.The type species Petalomonas abscissa, originally described as Cyclidium abscissa but transferred to the new genus Petalomonas by Ritter von Stein is unstudied by TEM or sequencing."However, Shawhan and Jahn showed it has a flat ventral surface with two sharp dorsal longitudinal keels slightly offset to the cell's right when viewed from above, so the dorsoventrally asymmetric morphotype must be included in Petalomonas sensu stricto.The cells they studied were from freshwater and 22–28 μm, both agreeing with the type from Seine river water."Though they depict a slight posterior indentation not seen by Dujardin, and the cilium is relatively a little shorter, these differences are insufficient to question their specific identity; if we regard Dujardin's figure as a mirror image of theirs the cell shape is otherwise virtually identical and two longitudinal markings that probably represent the dorsal keels are laterally slightly offset in precisely the same direction as in Dujardin and even the left and right margins of the cell are asymmetrically curved in the same way. "That near identity contrasts with the left right symmetry of the figures of Ritter von Stein, also with a symmetric posterior truncation distinctly broader than in Dujardin and relatively less posterior narrowing of the lateral margins; moreover Ritter von Stein's cells are much larger, so we do not consider them the same species.Marine cells misidentified by Larsen and Patterson as P. abscissa are much smaller and an even more different shape – about as broad as long, unlike the type figure of Dujardin or drawing of Shawhan and Jahn, both about 1.9 times longer than broad.Moreover the marine cells had much more prominent posterior indentation and much more asymmetric ridge arrangement: only a single prominent dorsal strongly curved ridge, very asymmetrically arranged on the cells left, and the ventral surface is not flat but has a small left ventral keel below the dorsal ridge plus a central double ridge.This marine species is neither P. abscissa nor the also wrongly lumped separate species of Ritter von Stein, but an undescribed species.New family Teloproctidae Cavalier-Smith.Diagnosis: as for type genus Teloprocta gen. n. Cavalier-Smith.Diagnosis: Elongate cylindrical or spindle-shaped spirocutes with two long cilia, dorsal extended anteriorly, rigid basal region glides on surfaces.In the type species hook-like dorsal cilium with much thicker paraxonemal rod than other euglenoids captures prey, helped by a mucilaginous web, and guides it into the vestibule; defaecates pellets though posterior cytoproct.Dorsal jaw support mostly less robust and cemented than in Peranemia, without cemented anchor to reservoir canal, but its outer dense body is hypertrophied as an ‘accessory rod’.28 extremely thick equal-width pellicular strips.Four microtubule-attached vanes, two attached posteriorly to the rod surfaces and two to one side of deep posterior rod grooves; the other side of each groove has a supplementary vane not edged by a microtubule.Type species Teloprocta scaphurum Cavalier-Smith comb.n. 
Basionym Heteronema scaphurum.Etymol: telos Gk end, completion; proctos Gk anus, because of its terminal cytoproct.Comment: Heteronema is an anisonemid.Dujardin defined Heteronema as having a much thicker posterior cilium held straight backwards during locomotion.Its anterior cilium was thinner and undulatory and the pellicle obviously with spiral strips and contractile – essentially the same as his non-squirming Anisonema except for its contractile pellicle.Though he did not understand that the posterior cilium promoted active gliding, both genera were certainly posterior-gliding spirocutes.Saville Kent properly placed both in the same family Anisonemidae to its own new family, and Diplomastix, which are not euglenoids, but likely a heterogeneous collection of probably unidentifiable sarcomonad Cercozoa).Yet all ‘Heteronema’ species described since Dujardin have been either swimmers with equal thickness cilia that do not glide on either or else gliders on an anterior usually thicker cilium."All are certainly assigned to the wrong genus because of their profound ciliary differences from H. marina, and also because all have cytopharyngeal rods visible in the light microscope unlike either of Dujardin's genera.The type species Heteronema marina was 60 μm and identifiable if refound, contrary to one assertion."Ritter von Stein initiated over a century of confusion by ignoring and omitting Dujardin's species and transferring two entirely different biciliate metabolic spirocute zooflagellates to Heteronema: Astasia acus Ehrenberg, 1838 an elongate spindle-like cell with anterior gliding cilium, probably an undescribed genus of Acroglissida for Teloproctidae); and pyriform cells he called Heteronema globuliferum.He regarded H. globuliferum as the same species as the globular Trachelius globulifer Ehr., which is not credible, and as the seemingly uniciliate pyriform Peranema globulosum Dujardin, 1841 – we doubt that too, as Dujardin should have seen the trailing cilium that in H. globuliferum projected behind the cell by two thirds of its length, if it had been present, even though the cell was only 15–20 μm; P. globulosum is probably a peranemid but unidentifiable to species; H. 
globuliferum is either a peranemid or more likely an acroglissid, but not a Heteronema or anisonemid.We think all non-gliding ‘Heteronema’ belong in Natomonadida for the clade comprising Neometanema plus rhabdomonads), and most anterior ciliary gliders in Acroglissida, though some might be peranemids.Our finding considerable genetic heterogeneity in Hsp90 genes of Decastava is potentially significant in relation to the unknown population structure of euglenoids.Some heterogeneity might simply be because the studied culture may not have been strictly clonal.Probably some is a sign of multiple Hsp90 copies per nucleus through diploidy, polyploidy, or gene duplication.One expects initial divergence of multiple copies to be randomly distributed along the gene; its strong concentration at one end suggests secondary homogenization or concerted evolution, well known for multicopy rDNA where it can be partial rather than complete but seemingly not seen before for Hsp90.Gene conversion, an homologous recombination mechanism more frequent than crossing over, can partially homogenise divergent multicopy genes and probably exists in all organisms.The observed pattern is explicable by asymmetric conversion starting from near the 5′ end of the gene and extending towards the 3′ end but not all the way; if a hotspot for the double-strand breaks that initiate gene conversion existed in intron 3, conversion tracts needed to explain absence of polymorphism in the middle part of the gene would be several hundred nucleotides long – shorter than usual for yeast but longer than usual in mammals.Spliceosomal introns are scarce in Euglenozoa as are group I introns.The five spliceosomal introns in Decastava edaphica are the first for Stavomonadea and the second case for euglenozoan Hsp90 genes.Most have sequence signatures for typical spliceosomal introns, not atypical ones as were two of the three introns in Peranema.None corresponds in position to any of Peranema; those of Decastava are all short as is typical of protists, but not ultrashort like those of Peranema that in that respect closely resemble the tiny introns of Bigelowiella nucleomorphs.Their presence in Stavomonadea fits the view that such introns were probably abundant in early Euglenozoa and all eukaryotes and their virtual absence in kinetoplastids is secondary.Our putative evidence for frameshifts in all sequenced Decastava and Entosiphon Hsp90 genes is probably the first, albeit indirect, evidence for insertional RNA editing in nuclear genes of Euglenozoa.For more direct evidence for editing, Hsp90 mRNA needs to be sequenced to test whether editing not cotranslational frame-shifting is how these euglenoids make Hsp90 genes translatable.Editing by U insertion is rampant in kinetoplastid mitochondrial genes, probably evolving as a rescue mechanism from a potentially harmful class of mutations.Nuclear insertional RNA editing is rare, but U insertion occurs in humans.The putative nuclear editing in euglenoids deduced here from the observed frameshifts seems able to insert any of the four nucleotides; it is yet another instance of neutral genome evolution that probably evolved independently in euglenoids.Insertional editing may have evolved independently in early euglenozoan nuclei and mitochondria, though some editing machinery components might have been shared in early euglenozoan history.Phagotrophic euglenoids are one of the four or five most speciose zooflagellate groups in soil.When reviewing soil flagellates, Foissner noted to his surprise that 
as many as 55 euglenoid species were recorded – 22 photosynthetic, 6 osmotrophic, and 27 phagotrophic – more than for phagotrophic chrysomonads and nearly as many as for kinetoplastids, then seemingly the most speciose phagotrophic soil flagellates.Since then known terrestrial cercomonads have risen from 10 to 61 species (from soil, leaves and dung) and those of glissomonads from 4 to 41, so both cercozoan groups now surpass the currently known diversity of soil euglenoids or kinetoplastids.Thus most soil zooflagellates are Euglenozoa or Cercozoa.Our three new non-marine species increase terrestrial euglenoid phagotrophs to 30, showing that euglenoid terrestrial biodiversity is still poorly known; probably many more remain to be described.Many novel euglenoids have been described from marine habitats, but rarely characterised by electron microscopy or sequencing.These, as well as soil euglenoids, need more study by clonal culturing, essential for thoroughly investigating species boundaries, as established here between Entosiphon sulcatum and oblongum.Scytomonas pusilla is by far the most frequently reported soil euglenoid.With so few characters it could really be numerous genetically distinct strains comprising several or many separate species, just as we found for the similarly morphologically undistinguished and overlumped ‘Heteromita globosa’, really dozens of species and recorded even more frequently.We therefore treated S. saepesedens as a new species because of its unique feeding mode, though it is otherwise distinguishable from S. pusilla only by being generally smaller.In the past it has been common to stretch original size limits for Scytomonas to avoid describing new species, which is undesirable as it leads to drift in meaning of species names, excessively broad species, and lack of precision in reidentification later.Our trees show Scytomonas to be closely related to Petalomonas and nested so shallowly within euglenoids that its ancestors must have had two centrioles like all other Euglenozoa.We conclude that Scytomonas with a single centriole and cilium did not diverge early in euglenoid evolution but is a relatively recent simplification within Petalomonadida that evolved by losing the ventral cilium, centriole, and its roots.Mignot showed that the cilium regresses before mitosis and two equal new ones grow simultaneously during division, implying that the sole cilium is first generation – expected to have one centriolar root, possibly represented by an observed 3-mt band.For such uniciliate eukaryotes with a younger cilium, there is no general rule whether the centriole persists after the ciliary shaft regresses or not; probably it does not in Scytomonas saepesedens, so it has no great evolutionary significance whether a barren centriole or centriole vestige remains or not; in principle complete disassembly is more economic unless retention is beneficial for attaching other structures.Nonetheless, loss of the second centriole and presence of sex in Scytomonas merit retaining its generic distinction from Petalomonas.But neither these nor the modest genetic distance from P. cantuscygni on rDNA and to a lesser extent Hsp90 trees support keeping separate families, so we made younger Petalomonadidae a synonym of Scytomonadidae.Unfortunately we got no TS of its cytostome, so although rods are clearly absent we cannot be sure whether vanes are present as Mignot thought in Scytomonas pusilla, Calycimonas, and Petalomonas, or absent as Triemer and Farmer assumed for P.
cantuscygni and Calycimonas robusta.Uniciliate Biundula and Petalomonas sensu stricto nest independently within a biciliate paraphyletic Notosolenus, so Biundula lost the posterior cilium independently of Scytomonas.Pellicular fold patterns seen in TS in Petalomonas are too varied for all to be in one genus.They comprise two main contrasting groups: a minority whose dorsal and ventral surfaces both have 2–8 smooth undulations, which in most species show mirror symmetry about their broad axis, and a majority that are dorsoventrally asymmetric – typically flat or nearly so and bowed upwards dorsally often with 1–6 very prominent ridges.As the type species P. abscissa Ritter von Stein is ventrally flat and dorsally with two nearly straight prominent ridges we restricted Petalomonas to such dorsoventrally asymmetric species and transferred five species with symmetric dorsal and ventral undulated surfaces to a new genus Biundula.On Figs 9 and S1 P. cantuscygni, ventrally slightly concave and dorsally with six equally prominent ridges, groups closely with Scytomonas, whereas on Fig. S1 Biundula sphagnophila is much further away.This greater genetic distance is also consistent with Biundula's continuous pellicular ultrastructure, not obviously subdivided into strips.By contrast in Scytomonas and P. cantuscygni one can detect very similar strip joints – in TS they appear as slightly dense pimples where adjacent strips abut with two mts associated with one strip edge.Notosolenus urceolatus has eight strips underlain by numerous mts and separated by shallow grooves at the crest of each longitudinal ridge; as in all stavomonads strip edges abut rather than overlap.These flush strip joints distinguish these three genera from all spirocute euglenoids, where adjacent strips overlap.This contrast is also seen in hulls of wooden ships; in nautical terminology Scytomonas, Notosolenus urceolatus, P. cantuscygni, and Biundula sphagnophila are carvel built with flush strips, whereas Spirocuta are clinker built of overlapping strips.Close inspection of Fig. 4A suggests that the Scytomonas joint is not totally symmetric, like a simple butt joint as usual in carvel construction, but is a stronger shiplap joint as used in timber cladding: strip edges at the suture seem flanged, the two mirror-image flanges overlapping exactly as in a shiplap timber joint.Leander et al. asserted that Petalomonas mediocanellata has no strips, implying a continuous pellicle as in fiberglass hulls or monocoque racing cars.However, that conclusion was based solely on SEM, which we show does not reveal the five Scytomonas strips.Even in P. mediocanellata Fig. 1 of Leander et al. hints at three strips on one side of the cell, suggesting that it actually has five, just like Scytomonas.The monocoque idea is directly refuted by earlier TEMs of P. mediocanellata showing denser pellicle strip regions associated with more densely staining mts that look so similar to Scytomonas strip joints that they probably are butted/shiplap joints: two obliquely in their Figs 18 and 19 and three transversely in their Fig.
20."Two nominal Petalomonas have smooth unridged/non-undulating dorsal and ventral surfaces, but a deep lateral groove along the cell's left side.Almost certainly they are neither Petalomonas nor Biundula, but an undescribed genus that cannot be established without ultrastructure and/or sequences.In Notosolenus ostium ventral pellicular strips are visible under DIC.It is evolutionarily unlikely that any euglenoids have a truly continuous monocoque pellicle; direct conversion from carvel to clinker is mechanistically more comprehensible than to monocoque.The two unusual fibrous cytostomal arcs in Scytomonas are the first evidence for a petalomonad cytostomal skeleton.We suggest they are homologues of microfibrillar cores associated with two microtubule bands found in the Diplonema FA that we suggest were present in the diplonemid/euglenoid common ancestor; if that is correct ancestral petalomonads lost the associated cement at the same time as the cemented rods.The more robust outer arc may have some obliquely sectioned associated mts in Fig. 4B and is positionally appropriate for a PMB homologue.The slenderer inner arc seems to embrace a disordered mass of fibrillar material and microtubules, which could include MTR and PML mts, making it positionally equivalent to a component of the feeding comb of Serpenomonas and Keelungia as Cavalier-Smith will explain in detail elsewhere.By making the cytostome smaller and losing cement petalomonads lost the clear distinction between rod apparatus and comb so obvious in other stavomonads.From their position within stavomonads on our trees, they clearly lost both mouthpart and rod cement; its absence cannot be the ancestral state for euglenoids.Scytomonas FA resembles that of Calycimonas in having a dense periodic supporting material arranged in an arc near the cytostome, with no close similarity to mature FA structures of non-petalomonad stavomonads or to mature Entosiphon FA.It is however remarkably like a similar arc alongside the reservoir that forms the core of early developing Entosiphon FA before cement deposition; in both the dense arc is delimited by a ridge on each side containing separate dense structures; a conspicuous row of widely spaced mts extends away from both lateral membrane-linked densities in Entosiphon and from one in Calycimonas.In Calycimonas on the other side of the cytopharynx from the widely spaced mt row is a double ridge behind which are disordered dense fibrillar structures and mts showing characteristics of MTRs, PML and also the ventral root; we suggest that this is equivalent to the double ridge beside the Scytomonas cilium in Fig. 
4B containing putative MTR and PML.The structures lying between the minor fibrous arc and a double membrane projection of Scytomonas are ultrastructurally similar and positioned identically to those lying behind the Calycimonas dense-arc-delimiting ridge that is not associated with the widely spaced mts; we suggest that both include MTRs and PML mts that loop over from those in the other ridge, and that the Scytomonas and Calycimonas FA are fundamentally similar.The major difference is that Scytomonas having lost the posterior cilium has no ventral or intermediate root, unlike biciliate Calycimonas; if correct, this gives further evidence that the MTR is fundamentally distinct from the ventral root even though in Calycimonas they are fairly close.Our inference of MTR/PML looping between ciliary and feeding pockets in the petalomonads Calycimonas and Scytomonas, as well as Entosiphon, is strongly supported in Notosolenus urceolatus where the whole loop can be directly seen in very few sections: tentatively we suggest it has looping MTRs 1–5; whether it also retains one PML pair is unclear.As petalomonads retain MTRs, which carry vanes in other euglenoids it is unsurprising that one Petalomonas also retains four vanes.Calycimonas and Notosolenus are respectively much more weakly and more strongly stained than Scytomonas’ weakly stained fibrous arc, making it harder for opposite reasons to decide if they also have the two fibrous non-membrane linked arcs we found in Scytomonas.However, the curving widely spaced mt row in Calycimonas is associated with microfibrillar material that may be related to that of the major putatively PMB Scytomonas arc.In diplonemids cemented arcs are often associated with similar shaped ER cisternae; possibly therefore the ER arc that subtends the Calycimonas dense membrane-supporting arc and associated putative MTRs is given its shape by a poorly stained slender microfibrillar arc.Dense staining of Notosolenus can easily hide two arcs; Figs 6B,C of Lee and Simpson suggest that at least one such arc may be present, and their Fig. 
6E suggests it may have a membrane-associated dense arc like other petalomonads.Skimpy data for Petalomonas are less informative but do not contradict the idea that petalomonad FA are fundamentally similar and probably retain all mt and microfibrillar components of other euglenoid FAs, except probably the widespread likely homologues of diplonemid EM, and differ from others primarily in loss of cement and therefore support rods.The cytostome lip in Petalomonas, though much slenderer by SEM than in other stavomonads, has a similar C-shaped form that must reflect a basically similar underlying slenderer skeleton.The petalomonad FA is apparently a neotenous or arrested development of the ancestral stavomonad FA, an idea testable by serial sectioning developing Serpenomonas or decastavid FA and comparative serial sectioning across petalomonads.Our trees show that the traditional view of petalomonads as the most primitive of all euglenoids is wrong.Strip number reduction, frequent FA simplification, multiple transitions from clinker to carvel pellicles, and posterior ciliary losses in petalomonads might all be adaptations to cell miniaturisation associated with secondary specialization in bacterivory.As petalomonad differences from other stavomonads are secondary losses and fewer than once thought, there is no longer justification for retaining a separate class for them, so in a following paper Petalomonadida, Decastavida, and Heterostavida are grouped as new class Stavomonadea, a robust 18S rDNA clade.
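The frameshift corrections inferred above for Decastava Hsp90 (single-nucleotide insertions after nucleotides 917, 924, 953, 974, 978 and 999) can be made concrete with a minimal Python sketch; the sequence and the identity of the inserted bases are hypothetical placeholders, since the edited mRNA has not yet been sequenced.

# Minimal sketch, not the authors' code: insert one nucleotide after each
# listed 1-based position, as inferred for the six frameshifts in Decastava
# edaphica Hsp90. 'N' is a placeholder because the identity of the putatively
# edited nucleotides is unknown.

FRAMESHIFT_SITES = [917, 924, 953, 974, 978, 999]  # 1-based positions from the text

def insert_after(seq, positions, base="N"):
    """Return seq with `base` inserted after each listed 1-based position."""
    pieces, prev = [], 0
    for pos in sorted(positions):
        pieces.append(seq[prev:pos])  # up to and including position `pos`
        pieces.append(base)           # single-nucleotide insertion
        prev = pos
    pieces.append(seq[prev:])
    return "".join(pieces)

# Dummy coding sequence (a real Hsp90 gene is ~2 kb); each insertion would
# locally restore the reading frame so downstream codons and the C-terminus
# are translated correctly.
dummy = "A" * 1200
edited = insert_after(dummy, FRAMESHIFT_SITES)
assert len(edited) == len(dummy) + len(FRAMESHIFT_SITES)
# All six sites lie within 83 nucleotides (917-999), so in principle a single
# short guide RNA could template every insertion, as argued above.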
We describe three new phagotrophic euglenoid species by light microscopy and 18S rDNA and Hsp90 sequencing: Scytomonas saepesedens; Decastava edaphica; Entosiphon oblongum. We studied Scytomonas and Decastava ultrastructure. Scytomonas saepesedens feeds when sessile with actively beating cilium, and has five pellicular strips with flush joints and Calycimonas-like microtubule-supported cytopharynx. Decastava, sister to Keelungia forming new clade Decastavida on 18S rDNA trees, has 10 broad strips with cusp-like joints, not bifurcate ridges like Ploeotia and Serpenomonas (phylogenetically and cytologically distinct genera), and Serpenomonas-like feeding apparatus (8–9 unreinforced microtubule pairs loop from dorsal jaw support to cytostome). Hsp90 and 18S rDNA trees group Scytomonas with Petalomonas and show Entosiphon as the earliest euglenoid branch. Basal euglenoids have rigid longitudinal strips; derived clade Spirocuta has spiral often slideable strips. Decastava Hsp90 genes have introns. Decastava/Entosiphon Hsp90 frameshifts imply insertional RNA editing. Petalomonas is too heterogeneous in pellicle structure for one genus; we retain Scytomonas (sometimes lumped with it) and segregate four former Petalomonas as new genus Biundula with pellicle cross section showing 2–8 smooth undulations and typified by Biundula (=Petalomonas) sphagnophila comb. n. Our taxon-rich site-heterogeneous rDNA trees confirm that Heteronema is excessively heterogeneous; therefore we establish new genus Teloprocta for Heteronema scaphurum.
270
The National Solar Radiation Data Base (NSRDB)
Understanding long-term spatial and temporal variability of the solar resource is fundamental for energy policy decisions, the optimal design of solar energy conversion systems, transmission interconnection planning, power systems integration, market operations, and reducing uncertainty in investments .Historical solar resource data for those purposes can be provided by ground-based in situ measurements or satellite remote sensing .Pyranometers and pyrheliometers, which use either thermoelectric or photoelectric detectors, are the most common ground-based radiometers to measure global horizontal irradiance and direct normal irradiance, respectively .The accuracy of measurements by these instruments is highly dependent on instrument design, hardware installation schemes, data acquisition methods, and calibration method and frequency .Measurements by accurately calibrated and well-maintained pyrheliometers and pyranometers can provide reliable long-term solar radiation data at specific locations that are frequently used for cloud and radiation studies and validation of satellite-derived solar radiation .The high cost of operating quality ground stations has resulted in existing surface radiation networks being sparsely distributed and insufficient to meet the needs of the rapidly growing solar energy industry.The other reliable and practical option is to use information from geostationary weather satellites that provides continuous solar radiation estimates covering a wide spectrum of temporal and spatial scales.To retrieve solar radiation from satellite data, solar irradiance models are essential to compute surface GHI and DNI from observations of radiances at the top of the atmosphere.During the last few decades, numerous solar irradiance models have been developed using empirical, semi-empirical or physical models .Empirical models develop regression functions relating long-term GHI measurements at selected local stations to the simultaneous data recorded by satellites’ visible channels which are then used to simulate GHI from global satellite observations.The GHI is combined with empirical relationships developed using modeled or observed solar radiation to retrieve DNI .A well-known solar radiation dataset developed by an empirical model is HelioClim based on the observations of Meteosat geostationary satellites covering Europe, Africa, the Mediterranean Basin, the Atlantic Ocean, and part of the Indian Ocean .Compared to empirical models, semi-empirical models use a hybrid approach to derive solar radiation in which clear-sky background irradiance is solved from simple radiative transfer schemes .A cloud index representing the proportion of radiation reflecting back to the satellite is converted to a clearness index that represents the proportion of incident radiation reaching the surface.The clearness index scales the clear sky radiation to estimate the GHI and then partitioned to estimate the DNI, which is similar to the empirical models.This semi-empirical approach has been widely implemented in global solar radiation datasets, including SolarGIS and SolarAnyWhere .Physical models are conventionally categorized by single-step and two-step models according to the procedures to determine solar radiation .Single-step models directly solve for GHI using satellite observations and radiative transfer theory .Two-step models intend to understand the complete physics affecting the transmission of solar radiation from the TOA to land surface.They retrieve aerosol, cloud and other atmospheric 
properties from various satellite channels or modeling efforts and use the information to precisely simulate GHI by solving the radiative transfer equation ."A typical product of a two-step model is the National Aeronautics and Space Administration's global Surface Radiation Budget in which International Satellite Cloud Climatology Project pixel-level data and Goddard Earth Observing System Version 4 reanalysis products are used to infer atmospheric properties at a 250-km resolution every 3 h.The solar radiation is then derived using the atmospheric properties and a model developed by Pinker and Laszlo .Compared to empirical, semi-empirical and single-step physical models, most two-step physical models require significant computational capability because extensive information from satellite observations and other ancillary inputs need to be processed to estimate solar radiation.The multiple processes in the production chain require sufficient quality inputs to make full use of the advanced models to reduce uncertainties of GHI and DNI.During the years, the rapid development of satellite technologies and modeling capabilities spectral channels and multi-channel geostationary satellites) have effectively increased the reliability and accuracy of the two-step physical models .More recently the expansion of spectral channels with better temporal and spatial resolutions on the third-generation Geostationary Operational Environmental Satellite-16 is expected to lead to remarkable improvements in aerosol and cloud products which the two-step physical models are capable of exploiting."The improvements in reanalysis data, such as NASA's Modern Era Retrospective analysis for Research and Applications, version 2, bring observations and numerical models together in a unified standardized framework, resulting in high-quality ancillary information that significantly enhances the quality of the two-step physical models .In contrast, empirical, semi-empirical and single-step physical models are not expected to reap equivalent benefits from advances in satellite technology and reanalysis datasets because of inherent limitations in the underlying methods.The National Renewable Energy Laboratory has an extensive history of developing solar resource data over the United States using various sources of observations and modeling tools."This paper reviews the evolution of NREL's National Solar Radiation Data Base and the recent efforts on developing the Physical Solar Model and satellite-based solar radiation to enhance the resolution and accuracy of the NSRDB.The remainder of this paper is structured as follows."Section 2 provides a historical review of the NREL's NSRDB.Section 3 introduces the technical details of the PSM and data validation using surface-based solar radiation measurements.Section 4 describes the users and applications of the NSRDB, and the last section concludes and explores future work to further improve the NSRDB.The NSRDB is one of the most accessed public datasets providing a serially complete collection of solar energy and meteorological data, including the three most common measurements of solar radiation: GHI, DNI, and diffuse horizontal irradiance, which have been collected over the United States and a growing list of international locations with high temporal and spatial resolutions to accurately represent the global and regional solar radiation climates."It supports the U.S. 
Department of Energy's SunShot goals of reducing barriers to high-penetration levels of solar energy technologies by providing easy access to high-quality, foundational data that are essential for innovative product development and downstream modeling. "There have been substantial improvements in data collection and modeling technologies throughout the NSRDB's more than 20 years of existence.Therefore, NREL implemented major updates to the original database three times in 2007, 2012, and 2017.The NSRDB versions are briefly reviewed below.The first version of the NSRDB, covering 1961–1991, originated in 1994 to replace the SOLMET/ERSATZ dataset developed by the National Oceanic and Atmospheric Administration and DOE ."This version contains hourly solar irradiance data for locations over 239 ground stations across the United States with a combination of measurements and simulations using NREL's Meteorological-Statistical model . "Cloud observations from NOAA's National Center for Environmental Information Integrated Surface Database were used as inputs to the METSTAT, and measured solar irradiances were directly obtained from the National Weather Service solar radiation network.The first version of the NSRDB was updated in 2007 to cover 1991–2005.The major updates include 10 × 10 km solar irradiances from hourly GOES data and the use of an empirical model developed by the State University of New York at Albany .The satellite-based products covered the contiguous 48 states of the United States from 1998 to 2005 while the solar irradiance data in Alaska were computed by the METSTAT model.This version of the NSRDB also provides measured and modeled solar irradiances as well as other meteorological data from 1454 ground stations during 1991–2005 .In 2012, NREL, in collaboration with Clean Power Research, updated the NSRDB to cover the years from 1991 to 2010.This version of the NSRDB was developed using an improved SUNY model in an hourly interval with a spatial resolution of 10 × 10 km.The data package also includes measurements from 1454 ground stations, including meteorological data from the NCEI ISD stations."The gridded NSRDB was released through NREL's Solar Prospector web portal which was decommissioned at the end of September 2016.The latest version of the NSRDB was released in 2017 containing gridded data from 1998 to 2016 in half-hourly temporal and 4 × 4 km spatial resolutions.This dataset was developed using the GOES data that cover the entire Western Hemisphere from 60° North to 20° South latitude including the contiguous United States and Central America.The average values of the daily GHIs and DNIs from 1998 to 2016 are illustrated in Fig. 
1.This version used the two-step physical model, PSM, which opened the door to the use of next-generation satellite datasets for solar resource assessment and forecasting.Details about the development of this latest NSRDB are introduced below.With the advancement of satellite technology, information available from accurately retrieved atmospheric properties is continuously growing.This, coupled with the fast advancement in computing technology, has resulted in significant improvements in solar radiation simulations.One such approach is the use of two-step physical models where cloud and aerosol properties derived in the first step are fed into a radiative transfer model in the subsequent step.This approach provides the opportunity to directly calculate DNI with better accuracy from improved retrievals of water vapor, aerosol, and cloud properties.NREL employed this technology to produce the latest NSRDB, which contains long-term high-resolution solar radiation from geostationary satellites.Fig. 2 displays a flowchart of the PSM, a two-step physical model to compute solar radiation from satellite data, which was developed through collaboration among NREL, the University of Wisconsin, and NOAA.As shown in the figure, aerosol, water vapor and other meteorological properties are combined with satellite-derived cloud properties and used in the Fast All-sky Radiation Model for Solar applications to compute GHI.For clear scenes, FARMS is also used to compute DNI, whereas the Direct Insolation Simulation Code decomposition model is used for cloudy scenes.NOAA developed an Advanced Very High Resolution Radiometer Pathfinder Atmospheres-Extended system to efficiently retrieve cloud physical and optical properties from the synergetic use of satellite measurements in visible, near-infrared, and infrared channels.The system has been implemented with data from GOES for continuous weather monitoring and forecasting.Cloud products—including cloud height, thermodynamic phase, optical thickness and effective particle size—are retrieved from PATMOS-x and GOES-West and GOES-East satellites at 4 × 4 km over the continental United States every 30 min during daytime.These products are employed by the PSM to produce cloudy-sky solar radiation from 1998 to 2016.The aerosol optical depth used by the PSM is based on monthly MODIS data in combination with the MERRA-2 aerosol dataset.The monthly data are first scaled to a 0.5 × 0.5° resolution using an elevation weighting scheme and evaluated by surface-based Aerosol Robotic Network data.According to the evaluation, North America is divided into two regions.Over the southern and western United States, and northern Mexico, dominated by arid areas, the AOD is given by an optimal linear combination of MERRA-2 and MODIS data when the latter are available.Only MERRA-2 data are employed when MODIS observations are missing due to high surface albedo, cloudiness, large solar zenith angles, etc.Over the eastern and northern United States, Canada, southern Mexico and Central America, which are dominated by vegetation or urban areas, only MERRA-2 data are used because they are found to have similar accuracy to MODIS, which suffers from a large fraction of missing data, especially in the high-latitude areas during winter.The monthly AOD data are then interpolated at 4 × 4 km on the basis of a daily average to match the NSRDB grids and improve the accuracy of surface solar radiation.
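A minimal sketch of the regional AOD blending and daily interpolation just described is given below, in Python (the language of the NSRDB production framework mentioned later in this section); the equal blending weight, the arid-region mask, and the plain linear monthly-to-daily interpolation are illustrative assumptions, since the text specifies neither the coefficients of the optimal linear combination nor the exact mean-conserving scheme.

import numpy as np

# Illustrative sketch only, not NREL's production code. Over arid regions the
# monthly AOD is a blend of MERRA-2 and MODIS where MODIS exists; elsewhere,
# and wherever MODIS is missing, MERRA-2 is used alone, as described above.
def blend_monthly_aod(aod_modis, aod_merra2, arid_mask, w_modis=0.5):
    """Combine monthly AOD fields on a common grid; NaN marks missing MODIS.
    The weight w_modis is an assumption, not the published coefficient."""
    blended = aod_merra2.copy()
    use_blend = arid_mask & ~np.isnan(aod_modis)
    blended[use_blend] = (w_modis * aod_modis[use_blend]
                          + (1.0 - w_modis) * aod_merra2[use_blend])
    return blended

def monthly_to_daily(aod_prev_month, aod_next_month, frac):
    """Plain linear interpolation between adjacent monthly means (frac in 0..1);
    the NSRDB uses a mean-conserving scheme, which this does not reproduce."""
    return (1.0 - frac) * aod_prev_month + frac * aod_next_month

# Example on a tiny 2 x 2 grid (values are arbitrary):
modis = np.array([[0.20, np.nan], [0.30, 0.25]])
merra = np.array([[0.15, 0.18], [0.25, 0.20]])
arid = np.array([[True, True], [False, True]])
print(blend_monthly_aod(modis, merra, arid))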
The MODIS instruments onboard the Terra and Aqua satellites provide high-quality measurements of surface albedo at 30 arc-seconds for each 8-day interval.Maclaurin et al. matched the white-sky albedos from the MODIS MCD43GF product to the NSRDB grids.A point-in-polygon approach was employed to assemble MODIS pixels and compute the effective values of surface albedo within the NSRDB grids.This new product was integrated with National Ice Center's Interactive Multisensor Snow and Ice Mapping System to coordinate the influence of snow and ice on surface albedo.The other atmospheric and land properties used or provided by the NSRDB—e.g. the atmospheric profile, wind direction and speed, snow depth, surface temperature and pressure, etc.—are based on data from NASA's MERRA-2.The PSM applies satellite-derived atmospheric and land surface properties to radiative transfer models to numerically solve for solar radiation through the Earth's atmosphere.When clouds are absent, the extinction of solar radiation is associated with the light scattering by aerosols and air molecules and absorption by the trace gases in the atmosphere such as water vapor, carbon dioxide, ozone, oxygen, and methane.Although high-spectral-resolution radiative transfer models—e.g. the Line-By-Line radiative transfer model—provide rigorous expression of radiation in narrow bands of wavelength, they are less efficient when solving for broadband solar radiation.Thus, many clear-sky radiative transfer models parameterize the extinction of broadband solar radiation from surface-based meteorological data or simulations in numerous spectral bands.Badescu et al. evaluated 54 clear-sky radiative transfer models using surface observations of GHI and DHI from Kipp and Zonen radiometers in Cluj-Napoca and Bucharest-Afumati.Although the best model for all scenarios was not found, REST2 was ranked among the first-tier, along with three other models, in terms of the accuracy for both GHI and DHI.Because of the concise equations and parameters and the consequent efficiency in computing, REST2 is used in the development of the NSRDB.The radiative transfer problem under a cloudy sky is much more complicated because of the combination of absorption and multiple scattering within the cloud.Thus, solving the radiative transfer equation for clouds is the only rigorous approach to compute cloudy-sky radiation.Despite numerous approximations—e.g. the two-stream approach and delta-M truncation scheme—conventional radiative transfer models are still time-consuming in numerically solving the radiative transfer equation.To meet the needs of developing the NSRDB and other solar energy applications, Xie et al.
The radiative transfer problem under a cloudy sky is much more complicated because of the combination of absorption and multiple scattering within the cloud. Thus, solving the radiative transfer equation for clouds is the only rigorous approach to computing cloudy-sky radiation. Despite numerous approximations—e.g., the two-stream approach and the delta-M truncation scheme—conventional radiative transfer models remain time consuming when the radiative transfer equation is solved numerically. To meet the needs of developing the NSRDB and other solar energy applications, Xie et al. proposed FARMS to efficiently simulate all-sky solar radiation at the land surface. Instead of solving the radiative transfer equation, FARMS uses cloud transmittances and reflectances of irradiance pre-computed by the Rapid Radiative Transfer Model (RRTM) with a 16-stream Discrete Ordinates Radiative Transfer model. To further reduce the computing burden, the cloud transmittances and reflectances were parameterized as functions of solar zenith angle, cloud thermodynamic phase, optical thickness, and particle size. The parameterization is coupled with surface albedo and with REST2, which accounts for clear-sky transmittances and reflectances, to compute all-sky downwelling solar irradiances. An evaluation against 16-stream RRTM calculations and observations at DOE's Atmospheric Radiation Measurement (ARM) Southern Great Plains (SGP) site indicates that FARMS is as accurate as the two-stream approach; however, FARMS is approximately 1000 times faster, which has substantially accelerated the computation of the NSRDB. A more detailed description of the algorithm and its performance evaluation can be found in . It is also worth noting that the treatment of clear-sky radiation is consistent between clear-sky and cloudy-sky conditions because FARMS is coupled with REST2. Despite the efficiency provided by FARMS, developing the NSRDB is still a computationally intense process because of its high temporal and spatial resolutions and the large volume of input data, including approximately 50 terabytes of GOES data and 1.5 terabytes of other data. Therefore, the NSRDB data are produced on NREL's flagship high-performance computing system, which is capable of 2.26 PetaFLOPS with a total of 58,752 Intel Xeon processor cores, including 6912 E5–2670 SandyBridge, 24,192 E5–2695v2 IvyBridge, and 27,648 E5–2670v3 Haswell cores. To ensure timely production, allowing for rapid computation and quality checking, the data are stored in Hierarchical Data Format and processed with a highly parallel, vectorized framework based on Python with the fundamental packages NumPy and MPI4Py.
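To make the lookup-and-combine idea behind FARMS concrete, the sketch below interpolates broadband cloud transmittance and reflectance from a small made-up table and couples them to a clear-sky GHI through a simple multiple-reflection series. This is a deliberately crude illustration under stated assumptions, not the FARMS parameterization itself; the table values, the lookup axis, and the single-layer geometry are all placeholders.

```python
import numpy as np

# Illustrative lookup axes for pre-computed cloud properties (values are made up
# for demonstration; a real table would come from offline RRTM runs as above).
COD_AXIS = np.array([1.0, 5.0, 10.0, 20.0, 50.0])       # cloud optical depth
TRANS_TABLE = np.array([0.85, 0.62, 0.45, 0.28, 0.10])  # broadband transmittance
REFL_TABLE = np.array([0.10, 0.30, 0.45, 0.62, 0.82])   # broadband reflectance

def allsky_ghi(ghi_clear, cloud_optical_depth, surface_albedo):
    """Crude all-sky GHI estimate from a clear-sky GHI and a cloud lookup table.

    The cloud layer transmits T and reflects R of the incident flux; repeated
    reflections between the surface (albedo a) and the cloud base form a
    geometric series, giving GHI_allsky ~ GHI_clear * T / (1 - R * a).
    This is a pedagogical simplification, not the FARMS formulation.
    """
    t = np.interp(cloud_optical_depth, COD_AXIS, TRANS_TABLE)
    r = np.interp(cloud_optical_depth, COD_AXIS, REFL_TABLE)
    return ghi_clear * t / (1.0 - r * surface_albedo)

# Example: clear-sky GHI of 900 W/m^2 under a cloud of optical depth 10 over snow
print(allsky_ghi(900.0, 10.0, surface_albedo=0.7))  # larger than 900 * 0.45 alone
```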
In the PSM shown in Fig. 2, the development of a serially complete NSRDB with consistent spatial and temporal mapping requires four additional steps prior to the computation by FARMS: regridding, temporal interpolation, time shifting, and gap filling. The regridding step organizes all input data and reprocesses them onto the NSRDB grids at a resolution of 4 × 4 km. The cloud properties are regridded using a nearest-neighbor approach because the GOES grids are very similar to those of the NSRDB. Different regridding procedures based on physical laws are employed for the MERRA-2 and AOD data, whose spatial resolutions are significantly coarser than the NSRDB. For example, land-surface pressure and temperature at a 0.5° spatial resolution are regridded using elevation scaling based on the hydrostatic equation and a temperature lapse rate of 6 °C/km, respectively. The 0.5°-resolution AOD data are first reduced to sea level using an exponential scale height of 2950 m and then regridded to the NSRDB resolution by applying the same scale height to the pixel elevation. Specific humidity, wind speed, and wind direction, however, are converted to the NSRDB resolution using a nearest-neighbor approach. The temporal-interpolation step assigns the regridded data to the NSRDB intervals every 30 min. Specifically, the wind directions are taken from the nearest values of the hourly MERRA-2 data, and the other properties are interpolated to the NSRDB resolution using a simple linear relationship. With a mean-conserving algorithm, the monthly-mean AOD data are interpolated to daily intervals. Data from GOES-West are given on each integral hour and 30 min past, whereas those from GOES-East are available at 15 and 45 min past the integral hours. To develop the NSRDB with a consistent time stamp matching GOES-West, the time-shifting step projects the 15-min-delayed cloud properties from GOES-East onto the NSRDB time stamps. The gap-filling step supplements the NSRDB because data gaps in cloud properties routinely exist in long-term satellite-based observations. Times with missing cloud properties are filled using the clear-sky GHI and the ratio of GHI to clear-sky GHI available at the nearest previous time point.
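Two of the preprocessing steps above lend themselves to short illustrations: the scale-height adjustment of coarse AOD to pixel elevation, and the persistence of the clear-sky index used for gap filling. The functions below are minimal sketches based on the description in the text, not the NSRDB production code.

```python
import numpy as np

AOD_SCALE_HEIGHT_M = 2950.0  # exponential scale height used for AOD, per the text

def regrid_aod_by_elevation(aod_coarse, elev_coarse_m, elev_target_m):
    """Adjust a coarse-grid AOD value to a target pixel elevation.

    The coarse AOD is first reduced to sea level with an exponential scale
    height of 2950 m and then re-scaled to the elevation of the 4-km target
    pixel.  (Illustrative only; the production regridding also interpolates.)
    """
    aod_sea_level = aod_coarse * np.exp(elev_coarse_m / AOD_SCALE_HEIGHT_M)
    return aod_sea_level * np.exp(-elev_target_m / AOD_SCALE_HEIGHT_M)

def gap_fill_ghi(ghi_clear_now, ghi_prev, ghi_clear_prev):
    """Fill a time step with missing cloud properties.

    The all-sky GHI is reconstructed from the current clear-sky GHI and the
    clear-sky index (GHI / clear-sky GHI) of the nearest previous valid time.
    """
    clear_sky_index = ghi_prev / max(ghi_clear_prev, 1.0)  # guard against division by ~0
    return ghi_clear_now * clear_sky_index

# Example: a 0.5-degree AOD of 0.15 at 300 m mapped to a 4-km pixel at 1600 m
print(regrid_aod_by_elevation(0.15, 300.0, 1600.0))  # ~0.097
```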
A comprehensive evaluation of the NSRDB is essential when discussing "bankable data" for all phases of solar energy conversion projects, from the conceptual phase to routine solar power plant operation. Solar radiation data with known uncertainties help reduce the expense associated with mitigating performance and financing risk for solar energy conversion systems. The performance of the latest NSRDB was recently investigated by Habte et al. using surface observations from NREL's SRRL; the ARM SGP site; and the Surface Radiation Budget Network (SURFRAD) sites at Bondville, Illinois; Desert Rock, Nevada; Fort Peck, Montana; Goodwin Creek, Mississippi; Pennsylvania State University, Pennsylvania; Sioux Falls, South Dakota; and Boulder, Colorado. Various statistics—mean bias error (MBE), mean absolute error (MAE), mean percentage error, root mean square error (RMSE), and percentage RMSE (%RMSE)—were calculated for various locations and time scales. Fig. 3 shows the spatial distribution of the surface sites, which represent diverse geographical and climatic conditions throughout the continental United States. Habte et al. also reported that the %RMSEs of the hourly-averaged GHI and DNI can reach up to 20% and 40%, respectively, when compared to the surface-based measurements. The interannual variability of GHI and DNI was found to be less than 5% for both the NSRDB and the surface observations. The magnitudes of U95 for GHI at the surface sites are illustrated in Fig. 5. A surface-measurement uncertainty (US) of 5% is assumed for all sites because most well-maintained surface-based pyranometers report uncertainties ranging from 3% to 5% . The stability of US and its impact on U95 might require further study, as discussed by . Fig. 5 shows that the magnitude of U95 decreases dramatically as the averaging timescale varies from hourly to annual. Therefore, %RMSE plays a dominant role in the uncertainty of hourly-averaged GHI, whereas the bias relative to surface observations becomes important for monthly- and annually-averaged solar radiation. More details on the validation of the NSRDB can be found in .
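The statistics used in this validation are standard; a minimal implementation is sketched below. The function name and the example values are illustrative only.

```python
import numpy as np

def validation_stats(modeled, observed):
    """Compute basic statistics comparing modeled and observed irradiance.

    Returns MBE, MAE, and RMSE in the data's units (e.g., W/m^2), plus the
    percentage bias and %RMSE normalized by the mean of the observations.
    """
    modeled = np.asarray(modeled, dtype=float)
    observed = np.asarray(observed, dtype=float)
    diff = modeled - observed
    mbe = diff.mean()
    mae = np.abs(diff).mean()
    rmse = np.sqrt((diff ** 2).mean())
    obs_mean = observed.mean()
    return {
        "MBE": mbe,
        "MAE": mae,
        "RMSE": rmse,
        "pct_bias": 100.0 * mbe / obs_mean,
        "pct_RMSE": 100.0 * rmse / obs_mean,
    }

# Example with made-up hourly GHI values (W/m^2)
stats = validation_stats([510, 620, 480, 700], [500, 640, 470, 730])
print({k: round(v, 2) for k, v in stats.items()})
```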
Geographic Information System (GIS) layers are the core mechanism for developers, transmission planners, or conservationists to display geographic datasets in multiple geospatial processing programs. The GIS layers provided with the NSRDB data include annual and multi-year mean GHI and DNI, as well as multi-year mean capacity factors modeled for single-axis-tracking photovoltaic panels and for panels with a fixed tilt angle of 20°. All of the GIS layers are downloadable in several formats, including the commonly used shapefile. The NSRDB Viewer, demonstrated in Fig. 6, is the major data delivery tool. Built on NREL's OpenCarto, a web-based GIS framework, the NSRDB Viewer provides an intuitive, map-based interface for accessing raw time-series data or summarized GIS layers. Detailed instructions for downloading data from the NSRDB Viewer can be found on the website. The NSRDB also provides an Application Programming Interface (API), giving researchers, analysts, and developers an alternative way to efficiently download data using modern scripting languages, e.g., Python, MATLAB, or R. In addition, the API enables web developers to build their own applications without storing the approximately 50 terabytes of NSRDB data. Because of its improved accuracy and availability, the latest NSRDB has become a heavily and increasingly used dataset since its deployment. According to the web-based counter, monthly data visits from the NSRDB Viewer have doubled to more than 10,000 in 12 months, and more than 40% of those visits were from unique users. NSRDB users include universities, local and federal governments, research institutes, public utilities, and numerous energy and high-technology companies across the world. Major uses of the NSRDB can be categorized into energy-related and other applications. The energy-related applications of the NSRDB include, but are not limited to, site and building design, facility integration, transmission and distribution planning, and strategic analysis. The serially complete, spatially continuous solar radiation from the NSRDB naturally meets the demands of developing solar radiation time series and Typical Meteorological Year data for building analysis and for the forecast and comparison of solar system performance. In addition, capacity expansion and integrated assessment models rely on the spatially continuous NSRDB data to quantify the supply and quality of solar power and to assess costs and feasibility at a national scale. Production cost models used in grid integration studies to evaluate and optimize power plant dispatch require the high-temporal-resolution NSRDB data from thousands of locations. Models that are intrinsically coupled with capacity expansion and production cost models—known as Geodesign models—are used to characterize and quantify solar supply. Those models use continuous or climatological data based on the NSRDB to evaluate land-use impacts, barriers, and scenarios of development futures. Solar energy facility developers use PVSyst or the System Advisor Model with long-term NSRDB data to estimate power output and assess project-specific cost and feasibility. A brief summary and description of the abovementioned models is provided in Table 1. The NSRDB has also been used in bioenergy to evaluate algal biomass productivity potential in a variety of climatic zones. In addition to the energy-related applications, the NSRDB has been employed in many other research areas. For example, the American Society of Heating, Refrigerating and Air-Conditioning Engineers uses the NSRDB for climate research. The NSRDB has also been used by the American Cancer Society to conduct cancer research because solar exposure is the primary source of vitamin D, which is associated with survival in multiple cancers. The residence-based ultraviolet radiation data from the NSRDB are used to examine their relationship to cancer outcomes and to help understand geographic disparities in cancer prognosis. The NSRDB is a widely used public solar resource dataset that has been developed and updated over more than 20 years to reflect advances in solar radiation measurement and modeling. The most recent version of the NSRDB uses 30-min satellite products at a 4 × 4 km resolution covering the period 1998–2016. The NREL-developed PSM was the underlying model for this recent update, which used the two-step physical model and took advantage of progressive computing capabilities and high-quality meteorological datasets from NOAA's GOES, NIC's IMS, and NASA's MODIS and MERRA-2 products. The percentage biases in the latest NSRDB are approximately 5% for GHI and approximately 10% for DNI when compared to the long-term solar radiation observed by the ARM, NREL, and SURFRAD stations across the United States. Future updates of the NSRDB are expected annually. Advances in the planned dataset will involve new satellite retrievals and improved AOD data. However, future advancements in the PSM—e.g.
identifying low clouds and fog in coastal areas, improving the discrimination of clouds from snow, representing specular reflection over bright surfaces, and reducing parallax uncertainties, especially at high spatial resolution—are desired to further increase the accuracy of the NSRDB. Further, the Lambert-Bouguer law is almost exclusively used by physics-based radiative transfer models, including FARMS, which assume that DNI consists of an infinitely narrow beam. This assumption differs from surface-based observations by pyrheliometers, where direct solar radiation is defined as the "radiation received from a small solid angle centered on the sun's disc". To reduce this disagreement in principle, we employed an empirical model, DISC, to decompose DNI from GHI in cloudy situations. Further efforts are underway to develop a new DNI model that bridges the gap between model simulation and surface observation. Additionally, the launch of GOES-16 is expected to provide improved cloud products; however, this requires better capabilities to process larger volumes of data. Finally, although the PSM has been applied to the GOES satellites, the methods and models are equally applicable to any other geostationary satellite. Therefore, future work will involve developing global capabilities in collaboration with various national and international partners.
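For reference, the structure of a clearness-index decomposition such as DISC can be sketched as follows. DISC itself maps the clearness index (and air mass) to a direct-beam transmittance through empirical regressions; the placeholder mapping below is not the DISC regression and is included only to show where such a model plugs in.

```python
import numpy as np

SOLAR_CONSTANT = 1361.0  # W/m^2, approximate

def clearness_index(ghi, solar_zenith_deg, doy):
    """Clearness index Kt = GHI / extraterrestrial horizontal irradiance."""
    # Simple eccentricity correction for Earth-Sun distance (approximate).
    e0 = 1.0 + 0.033 * np.cos(2.0 * np.pi * doy / 365.0)
    cos_z = np.maximum(np.cos(np.radians(solar_zenith_deg)), 1e-3)
    eth = SOLAR_CONSTANT * e0 * cos_z
    return np.clip(ghi / eth, 0.0, 1.0)

def dni_from_ghi(ghi, solar_zenith_deg, doy, beam_fraction_model=None):
    """Estimate DNI from GHI via a clearness-index decomposition.

    beam_fraction_model maps Kt (and, in a full model, air mass) to the
    direct-beam transmittance Kn; DISC supplies this mapping through empirical
    regressions.  The default placeholder below is NOT the DISC regression.
    """
    if beam_fraction_model is None:
        beam_fraction_model = lambda kt: np.clip(0.9 * kt ** 2, 0.0, 0.8)  # placeholder
    kt = clearness_index(ghi, solar_zenith_deg, doy)
    e0 = 1.0 + 0.033 * np.cos(2.0 * np.pi * doy / 365.0)
    return SOLAR_CONSTANT * e0 * beam_fraction_model(kt)

# Example: GHI = 600 W/m^2 at a 40-degree solar zenith angle on day of year 172
print(dni_from_ghi(600.0, 40.0, 172))
```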
The National Solar Radiation Data Base (NSRDB), consisting of solar radiation and meteorological data over the United States and regions of the surrounding countries, is a publicly open dataset that has been created and disseminated during the last 23 years. This paper briefly reviews the complete package of surface observations, models, and satellite data used for the latest version of the NSRDB as well as improvements in the measurement and modeling technologies deployed in the NSRDB over the years. The current NSRDB provides solar irradiance at a 4-km horizontal resolution for each 30-min interval from 1998 to 2016 computed by the National Renewable Energy Laboratory's (NREL's) Physical Solar Model (PSM) and products from the National Oceanic and Atmospheric Administration's (NOAA's) Geostationary Operational Environmental Satellite (GOES), the National Ice Center's (NIC's) Interactive Multisensor Snow and Ice Mapping System (IMS), and the National Aeronautics and Space Administration's (NASA's) Moderate Resolution Imaging Spectroradiometer (MODIS) and Modern Era Retrospective analysis for Research and Applications, version 2 (MERRA-2). The NSRDB irradiance data have been validated and shown to agree with surface observations with mean percentage biases within 5% and 10% for global horizontal irradiance (GHI) and direct normal irradiance (DNI), respectively. The data can be freely accessed via https://nsrdb.nrel.gov or through an application programming interface (API). During the last 23 years, the NSRDB has been widely used by an ever-growing group of researchers and industry both directly and through tools such as NREL's System Advisor Model.
271
Reducing energy demand through low carbon innovation: A sociotechnical transitions perspective and thirteen research debates
Improvements in energy efficiency and reductions in energy demand are widely expected to contribute more than half of the reduction in global carbon emissions over the next few decades .To provide a reasonable chance of limiting global temperature increases to below 2 °C, global energy-related carbon emissions must peak by 2020 and fall by more than 70% in the next 35 years.As an illustration, this implies a tripling of the annual rate of energy efficiency improvement, retrofitting the entire building stock, generating 95% of electricity from low-carbon sources by 2050 and shifting almost entirely towards electric cars .The rate and scale of change required is best described as revolutionary: there are few historical precedents and existing policy initiatives have achieved only incremental progress towards those ends .Major reductions in energy demand will require the widespread uptake of technical and social innovations.The paper focuses on demand-side low-carbon innovations, which refer to new technologies, organisational arrangements and modes of behaviour that are expected to improve energy efficiency and/or reduce energy demand.This broad definition encompasses both incremental and radical innovations relevant to all energy using sectors.Fig. 11 provides some relevant examples, broadly classified by their degree of technical or social novelty.To date, most policy efforts have focused upon technically and socially incremental options.While these are important in the short term, they face diminishing returns in the long term, since their potential for further diffusion is limited.Hence, more substantial demand reductions are likely to require more radical innovations that are presently at an earlier stage of emergence and require larger changes to existing sociotechnical systems.The two dominant approaches that have, so far, underpinned most policy efforts have strengths, but also important limitations for understanding both the emergence and diffusion of radical innovations and the associated system transformations .Neoclassical economics considers energy or carbon prices to be the critical variable in reducing energy demand, supported where appropriate by policies to reduce economic barriers to energy efficiency, such as split incentives, asymmetric information, high transaction costs and difficulties in accessing finance .Neoclassical economics also provides a rationale for supporting new, energy efficient technologies at different stages of the ‘innovation chain’, but offers only limited insights into either the process of innovation or the most effective means of policy support.These recommendations have at least three drawbacks.First, for most consumers energy efficiency represents a secondary and largely invisible attribute of goods and services, thereby muting the response to economic incentives.Factors such as comfort, practicality and convenience commonly play a larger role in energy-related decisions, with energy consumption being dominated by habitual behaviour shaped by social norms .Second, carbon pricing is politically unpopular and energy efficiency remains a low political priority, resulting in a policy mix that is frequently weak and ineffective .Third, neoclassical economics assumes rational decision-making by firms and individuals and tends to pay limited attention to the broader, non-economic determinants of decision-making .Insights from behavioural economics and social psychology provide deeper insights into the cognitive, emotional and affective influences on 
relevant choices and routines and suggest ways to ‘nudge’ people and organisations towards more energy efficient choices and routines .But social-psychological research focuses overwhelmingly upon individual consumers and under-appreciates the importance of interactions with other actors, organisational decision-making and economic and social contexts.More fundamentally, both economic and social psychology have an individualist orientation that underrates the significance of the collective and structural factors that shape behaviour, guide innovation and enable and constrain individual choice.Thus, the dominant perspectives on reducing energy demand have a number of limitations and these limitations are reflected in the partial focus and relative ineffectiveness of the current policy mix.Given this, we propose a broader socio-technical perspective that more fully addresses the complexity of the challenges involved as well as integrates relevant insights from various social science disciplines.A socio-technical transitions perspective is more appropriate for two reasons.First, energy services such as heating and mobility are provided through large-scale, capital intensive and long-lived infrastructures that co-evolve with associated technologies, institutions, skills, knowledge and behaviours to create broader ‘sociotechnical systems’ .These systems are termed ‘sociotechnical’ since they involve multiple, interlinked social and technical elements, such as technologies, markets, industries, policies, infrastructures, user practices and societal discourses.Second, a transitions perspective acknowledges specificities of the kinds of change processes involved.Sociotechnical systems have considerable inertia, making it difficult for radically different technologies and behaviours to become established – such as electric mobility or mass transit schemes.Hence, reducing energy demand involves more than improving individual technologies or changing individual behaviours, but instead requires interlinked and potentially far-reaching changes in the systems themselves – or ‘sociotechnical transitions’.These transitions are typically complex, protracted and path dependent and the outcomes are difficult to predict.A socio-technical transitions perspective acknowledges these characteristics, while neo-classical economics and social psychology do not.The socio-technical transitions perspective has received much attention in recent years .In fact, authors have made so many and diverse contributions in recent years that there is a risk of not seeing the forest for the trees.Our key contribution is therefore to inductively identify and describe thirteen key debates within this literature that are relevant for energy demand reduction.Our aim is to construct a research map useful for guiding future research.We have organized our discussion along three research themes: emergence, diffusion and impact.Although this is suggestive of a linear model of innovation, we think the distinction is useful since each theme encompasses very different analytical topics.Emergence and diffusion of radical demand-side low carbon innovations refer to different phases in decades-long transition processes.Impact refers to the ultimate effect of low carbon innovations on energy demand.Acknowledging complexities, we also identify crosscutting debates that span the three themes.The focus throughout is on theoretical and conceptual issues rather than specific empirical topics.Many of the debates are relevant to research on 
‘sociotechnical transitions’ in general as well as to research on energy demand in particular.The paper proceeds as follows.Section 2 briefly introduces the sociotechnical transitions perspective on low carbon innovation and contrasts this with more mainstream approaches to understanding innovation.Section 3 then explores the emergence of low carbon innovations from a sociotechnical perspective and identifies five debates on which further research is required.Section 4 briefly conceptualizes the diffusion of low carbon innovations and identifies three pressing debates.Section 5 addresses the impact of low-carbon innovations on energy demand and identifies three further debates.Section 6 then highlights two cross-cutting debates that span all three themes, while Section 7 concludes.Numerous frameworks identify themselves as being ‘sociotechnical’, with scores more focusing broadly on the interactions between science, technology, and society.One review identified no less than 96 distinct frameworks or theories focusing across the domains of technological change, sociotechnical transformation, sustainability transitions, or the diffusion, acceptance, and use of new technologies .Nonetheless, there are some key distinctions that set our sociotechnical transitions perspective apart from others, which we examine here.The sociotechnical approach differs from more conventional models of innovation, which often equate innovation only with new technology.The simplest ‘linear’ model of innovation assumes that technological development proceeds according to its own, internal logic, largely separated from society, and that once introduced in society, it ‘causes’ social changes.This model envisions science and technology as an assembly line that begins with basic research, follows with development and marketing of a given technology, and ends with the product being purchased by consumers.Fischer characterized this as a “billiard-ball” model, in which technological development rolls in from outside, and impacts elements of society, which in turn impact one another.A more sophisticated model sees innovation as arising from an innovation system, defined as a “network of institutions in the public and private sectors whose activities and interactions initiate, import, modify and diffuse new technologies” .This model highlights the interactions and feedback loops between the different phases of R&D, development, demonstration, market formation and diffusion.Innovation is viewed as a collective activity involving many actors and knowledge feedbacks and is strongly influenced by institutional settings .Within policy and scholarly debate around low-carbon transitions, this challenge has increasingly been framed in terms of ‘pathways’ towards change .The sociotechnical approach takes this further and focuses upon how innovation processes are often about creating new sociotechnical systems through the co-construction of multiple elements .In addition to technological changes, this involves changes in infrastructures, markets, regulations, user practices and so on.The successful development of bike-sharing, for example, is about modal shifts from cars to cycling, shifting from individual ownership to sharing, developing robust bicycles, establishing an infrastructure of docking stations and easy payment facilities, establishing new business models, building political support, ensuring effective maintenance and repair, and disseminating positive discourses about cycling more generally.The ‘co-construction’ of user 
practices and technology .is particularly relevant for our interest in reducing energy demand.On the one hand, technologies are adjusted to fit better with the user environment.On the other hand, the user environment is adjusted to accommodate the new technologies.In this way, technologies, environments and user practices co-evolve, as Fig. 2 depicts.More generally, the sociotechnical approach is concerned with the interactions between various actors in the development and diffusion of innovations.These may include researchers, designers, engineers, firms, consumers, policymakers, urban planners, intermediaries and the media.A sociotechnical analysis pays attention to interpretations, interests, decisions, resource allocations, learning processes, and power struggles among these actors.Innovation is understood as arising from actions and interactions in social contexts, rather than from an intrinsic technical or economic logic.Within the sociology of technology, there are different kinds of socio-technical approaches that share some of the above characteristics, but differ in other ways.The Social Construction of Technology approach , for instance, focuses on the meanings of technologies and how these emerge from competing interpretations in relevant social groups.SCOT consequently downplays the importance of economic considerations such as finance and market competition.SCOT-studies tend to focus on the emergence and stabilisation of artefacts, but pay less attention to diffusion, impact or replacement of existing systems.The Large Technical System approach focuses on a particular kind of technology: large-scale, integrated infrastructures.LTS-scholars address emergence, diffusion and societal transformation, but also pay less attention to replacement of existing systems.Their emphasis on ‘system builders’ also has heroic, voluntarist connotations, often with a supply-side orientation.Actor-Network Theory is a radical approach that adopts a ‘flat ontology’, which means it understands coordination as emerging from circulations and translations between local practices .It thus challenges the traditional social science emphasis on institutions and social structures as coordinating forces.ANT also challenges traditional views on actors by endowing artefacts with agency, because they hold socio-technical networks together.While provocative, ANT’s translational focus makes it impractical for investigating decades-long transition processes as its methodological recipe to ‘follow the actors’ is difficult to put into practice.To understand transitions, we suggest that the Multi-Level Perspective is the most suited socio-technical approach.The MLP combines ideas from SCOT with evolutionary economics.The MLP thus spans foundational social science dichotomies : agency and structure; stability and change; ideational and material dimensions.Substantial reductions in energy demand require transitions towards new or durably reconfigured sociotechnical systems in heating, lighting, motive power and mobility.Promising low carbon innovations are the seeds for such transitions, but many of them are currently small in terms of market share and amount of investment and face uphill struggles against existing sociotechnical systems.2,One implication is that current policy interventions may be insufficient to bring about non-marginal change.A second implication is that low carbon innovations should not be studied in isolation, but in the context of their compatibility with and struggles against existing 
sociotechnical systems.One framework to understand these issues is the Multi-Level Perspective, which we briefly describe to contextualize our later discussion.The MLP distinguishes three analytical levels .The incumbent sociotechnical system refers to the interdependent mix of technologies, industries, supply chains, consumption patterns, policies, and infrastructures.These tangible system elements are reproduced by actors and social groups, whose perceptions and actions are shaped by rules and institutions, such as shared meanings, heuristics, rules of thumb, routines and social norms.These more intangible elements are referred to as the sociotechnical regime.Innovation in existing systems is mostly incremental and path dependent, aimed at elaborating existing capabilities, because of various lock-in effects .These include sunk investments, economies of scale, increasing returns to adoption, favourable regulations, cognitive routines, social norms and behavioural patterns.These reinforcing factors act to create stability in the incumbent system.Niche innovations refer to novelties that deviate on one or more dimensions from existing systems.The novelty may be a new behavioural practice, a new technology, a new business model, or a combination of these.Because radical novelties initially have poor price/performance characteristics, they cannot immediately compete with existing systems.Particular applications, geographical areas, markets or subsidized programs therefore act as ‘incubation rooms’ – called ‘niches’ – which protect novelties against mainstream market selection .In these niches, radical innovations are initially often developed by small networks of dedicated actors, often outsiders or fringe actors .The sociotechnical landscape forms an exogenous environment beyond the direct influence of niche and regime actors, but acting upon them in various ways.This may be through gradual changes, such as changes in cultural preferences, demographics, and macro-political developments, or through short-term shocks such as macro-economic recessions and oil shocks.Transitions come about through processes within and between the three analytical levels that vary over time.In the emergence phase, niche actors engage with radical innovations, but this does not automatically lead to sociotechnical transitions because existing systems are stabilized by multiple lock-in mechanisms.In the diffusion phase, niche innovations build up internal momentum, while changes at the landscape level create pressure on the regime.The subsequent destabilisation of the regime creates windows of opportunity for niche innovations to diffuse.The wider breakthrough of niche innovations leads to broader system transformation, which generates impacts.This brief description indicates that the MLP provides a big picture understanding of transitions.The next three sections draw upon this framework to further assess the processes through which low carbon innovations emerge and diffuse, together with their impacts on energy demand.In each case, we first provide a general conceptualisation of the relevant theme and then highlight several research debates within this theme.These are largely theoretical debates that need to be connected to empirical questions.Research on emergence does not focus on the initial invention of new ideas, but on the early introduction of these ideas and their concrete embodiment into society.Confusingly, the word ‘innovation’ is often used as a synonym for emergence, and to distinguish early 
introduction from ‘diffusion’.The distinction between emergence and diffusion is often fuzzy and gradual, but the former involves much greater emphasis on plurality, experimentation, testing, demonstration and collective learning.The introduction of innovations tends to be difficult because the supportive sociotechnical contexts that allow innovations to thrive – e.g. networks of institutions, formalised and tacit knowledge, social norms and expectations, design standards, financial resources, and so forth – have yet to be established.A common manifestation of the absence of supportive contexts for innovations is the so-called ‘valley of death’ between research or demonstration projects on the one hand and full-blown market commercialisation on the other.Many novelties fail to cross this chasm or take a very long time to do so.As a result, it is difficult to mobilise sufficient financial resources and/or policy support for development and subsequent diffusion.According to the sociotechnical transitions literature , the creation of ‘protective spaces’ is a useful and important means of encouraging emerging innovations because they shield those innovations from the pressures imposed by the existing system and give them time to mature.Such protective spaces allow actors associated with innovations to address and reduce a wide range of uncertainties, including:Techno-economic uncertainties: There may be competing technical configurations, each with different advantages and disadvantages.Finance and investment related uncertainties: Often it is difficult not only to obtain the funding that is necessary for technical development and practical experimentation, but also to evaluate the rationality of investments in innovations.To attract funding, product champions often make positive promises and even expert analysts in technical areas often suffer from ‘appraisal optimism’ .Cognitive uncertainties: Actors developing niche innovations often have different views and perceptions about technical specifications, consumer preferences, infrastructure requirements, future costs, and so forth .This ‘interpretive flexibility’ gives rise to debates, disagreements, discursive struggles and competing visions .Social uncertainties: The networks of actors developing niche-innovations are often unstable and fluid.Actors may enter into partnerships for a few years, but then leave if difficulties arise or funding runs out .Start-up or spin-off firms may be attracted by new opportunities, but then may also exit when economic ventures fail.To address these uncertainties, the literature on ‘strategic niche management’ distinguishes three core processes in the development of niche-innovations:Articulation of expectations and visions: Expectations are considered crucial for niche development because they provide direction to learning processes, attract attention, and legitimate protection and nurturing ;,Building of social networks: This process is important to create a constituency behind an innovation, to facilitate interactions between relevant stakeholders, and to provide the necessary resources for further development and subsequent diffusion ;,Learning processes along multiple dimensions , including: technical aspects and design specifications; markets and user preferences; cultural and symbolic meaning; infrastructure and maintenance networks; production, supply chains and distribution networks; regulations and government policy; and societal and environmental effects.Niches can be said to gain momentum if: 
first, visions and expectations become more precise and more broadly accepted; second, the alignment of various learning processes results in shared expectations and a ‘dominant design’; and third, networks increase in size, including the participation of powerful actors that add legitimacy and expand resources. These processes of stabilisation, growing acceptance and support, and community building tend to occur over sequences of concrete demonstration projects, experiences and trials. Having summarised and characterised the niche-innovation literature, we now identify five research debates that are relevant to the emergence of low carbon innovations. One debate that has attracted significant attention is the composition of social networks and the question of which actors drive the innovation. Specifically, what are the roles of new entrants relative to actors within the incumbent regime, such as electric utilities and car manufacturers? The early SNM literature and the grassroots innovation approach suggested that start-ups, civil society organisations and grassroots innovators tend to pioneer radical niche innovations because they are less ‘locked in’ and willing to think ‘out of the box’. Incumbent actors, in contrast, were thought to focus on incremental innovations that fit more easily with existing capabilities, capital investments and interests. Recent work has questioned this simple dichotomy, identifying many instances where incumbent actors develop radical niche innovations. New entrants may also collaborate with incumbents in order to draw on their financial resources, technical capabilities and political connections. This may accelerate emergence but almost inevitably entails some ‘mainstreaming’ and weakening of the radical aspects of the innovation. While this may enhance the scalability of innovations – i.e. the potential for growth and wider diffusion – the risk is also that their critical edge and potential to bring about ‘deep’ changes to contemporary society are lost. A second debate concerns the scalability of niche-innovations. Some innovations may be scaled up through successively larger demonstration projects. But others may be more difficult to scale up and hence remain relatively small, catering to the needs of a specific user segment. This raises questions for policy. A focus on efficiency and effectiveness may result in greater support for scalable innovations that hold the promise of significant reductions in energy consumption, including electric vehicles and urban light rail. Yet less scalable innovations may provide significant wider and/or not easily quantifiable benefits. For instance, bicycle cooperatives can fulfil an important role in the maintenance of individually and publicly owned bikes and, like urban gardening projects, assist in the integration of disadvantaged youth, ex-convicts, etc.
into working life and mainstream society .This may make support for innovations with limited scalability worthwhile and raises hitherto largely unaddressed questions for SNM and multi-level thinking: is up-scaling of niches the only way through which regime shifts can come about?,A third debate concerns the significance of place and geography for sustainability transitions .While the sociotechnical literature is strong on temporal issues, it has paid less attention to spatial questions such as: Why do innovations emerge more often in some than in other places?,Why do transitions unfold faster in certain locations than in others?,What is the role of local and regional institutions, policies and forms of governance in the emergence and diffusion of innovations?,Much of the recent thinking on the geography of sustainability transitions focuses on cities and urban networks .This is largely because cities now house the majority of the world’s population; urbanisation is continuing apace ; cities have long been the cradle of innovations and creativity ; and economic and state restructuring under neoliberal capitalism have enhanced the economic and political significance of cities .Research on the role of place, cities and urban networks can advance understanding of how interactions within and across niche innovations, and indeed niche innovations themselves, are constituted by assemblages of regulation, funding, discourses, the pre-existing material fabric, collective values and customs that are both place-specific and networked across localities.This deepens insight into the locational ‘stickiness’,and spatial politics of niche innovations, and hence their scalability and potential transferability.A fourth debate is the need to further articulate the business and economic dimensions of niche-innovations.Reflecting their sociological origins, SNM studies have tended to focus disproportionately on socio-cognitive dimensions, such as visions, social networks and learning processes.This could be fruitfully complemented with more economic research on the development and evolution of new business models and the role of funding mechanisms .The latter is especially important since funding is a major constraint on the emergence of innovation in sectors such as domestic buildings and urban transport .Insights from political economy can also be utilised to analyse how powerful actors influence funding streams .Another important economic issue is how investments in emerging innovations may generate broader economic benefits, such as ‘green jobs’.Promises of such benefits form a key part of societal debates around such innovations, even though there are many uncertainties about the size of these benefits and to whom they accrue .A fifth debate concerns changes in user practices in relation to niche innovations.This topic has not been studied in great depth in the SNM literature, which has focused more on new technologies and services than on their consumption and end-use .Further development of the role of users in niche-innovation can draw on insights from various literatures.For example, the literature on domestication emphasizes the creative agency of consumers who do not just buy new technologies but also embed them in their daily lives.This requires cognitive work, symbolic work and practical work.Similarly, the literature on user innovation suggests that users play active roles in the development of new uses of technologies that were not foreseen by producers.Furthermore, interactions between supply 
and demand may be facilitated by intermediary actors and by institutional loci where users, mediators and producers can meet to negotiate and align technical design choices and user preferences. The widespread diffusion of low carbon innovations is necessary to achieve energy demand reduction on a substantial scale. However, large-scale diffusion in mass markets often means head-on competition with incumbent sociotechnical systems, which are stabilised through the alignment of existing technologies with the business, policy, user and societal contexts. Therefore, the diffusion of low carbon innovations does not happen in an ‘empty’ world, but in the context of existing systems that provide barriers and active resistance. Another problem is that many low carbon innovations are not intrinsically attractive to the majority of consumers since they are often more expensive and perform less well on key dimensions. Much of the recent policy interest in low carbon innovation is driven by public good concerns rather than by private interests, which implies that diffusion is unlikely to be driven solely by economic mechanisms. Policy support, cultural discourses and social pressures are therefore likely to be important factors as well, which means that a multi-dimensional approach is required. The MLP conceptualises diffusion as entailing two interacting developments: 1) the creation of endogenous momentum of niche-innovations; and 2) the embedding of niche-innovations in wider contexts and environments. Both developments can be seen as processes of co-construction and alignment. Endogenous momentum arises gradually from the same processes that drive the emergence of innovations, namely: developing larger social networks with greater legitimacy and resources; aligning learning processes on multiple dimensions resulting in a ‘dominant design’; and forming clear and widely accepted visions of the future of the innovation. The gradual shift from the emergence phase to the diffusion phase is characterized by a reversal in which the innovation shifts from initial flexibility to ‘dynamic rigidity’. Hughes describes the emerging momentum of new systems in terms of an increasing ‘mass’ of technical and organizational components, emerging directionality and system goals, and an increasing rate of perceptible growth. Thus, endogenous momentum is driven by multiple and reinforcing causal mechanisms including: expansion of social networks and bandwagon effects; positive discourses and visions; learning by doing; increasing returns to scale; network externalities; strategic games between firms; and increasing support from policymakers who see the innovation as a way of solving particular problems. The diffusion of low carbon innovations also requires embedding within policy, social, business and user environments. This external fit may be difficult to foresee, as Rosenberg noted more than forty years ago: “the prediction of how a given invention will fit into the social system, the uses to which it will be put, and the alterations it will generate, are all extraordinarily difficult intellectual exercises”. Achieving this fit may be especially difficult for more radical niche-innovations that often face a ‘mismatch’ with the existing sociotechnical system. The process of societal embedding is conceptualised as a co-construction process that entails mutual adjustments between the innovation and wider contexts: “Technology adoption is an active process, with elements of innovation in itself.
Behaviours, organization and society have to re-arrange themselves to adopt, and adapt to, the novelty.Both the technology and social context change in a process that can be seen as co-evolution”.The degree of adjustment is a question for research, where one extreme is that the innovation is adjusted to fit in existing contexts and another extreme is that the contexts are adjusted to accommodate the innovation.4,The distinctive contribution of a sociotechnical approach to diffusion is to study the interaction between endogenous mechanisms and external embedding.Although adoption decisions by individual consumers remain important, the sociotechnical perspective focuses upon the activities of a broader range of actors.Within this literature, we highlight three debates that are relevant to the diffusion of low carbon innovations.First, a general debate is how the MLP-based view on diffusion relates to existing diffusion models that have been developed in the economic, sociological, geographical, and psychological literatures .We suggest that these existing models may be grouped into three broad families, namely: adoption models, socio-technical models, and spatial models.5,The MLP currently draws primarily upon the socio-technical models.A conceptual research challenge is therefore to consider if and how the socio-technical perspective can be enriched with insights from the adoption and spatial models.A second question is how relevant the various diffusion models are for different types of low carbon innovation and if their salience varies over time.Second, there is a debate regarding the diffusion of systemic innovations.The adoption models, which are the dominant approach in diffusion research, can be criticized for focusing on discrete artefacts or products such as televisions, computers, and consumer goods.The diffusion of systems or “systems of systems” poses particular challenges, which have not yet been systematically addressed.Lyytinen and Damsgaard , for example, note six specific shortcomings in the diffusion literature with regard to systems.8,Future research could therefore fruitfully investigate how systems such as district heating or trams diffuse across space and over time.Understanding this under-addressed topic is likely to require novel conceptual work.One additional puzzle is that not all systems need to follow a ‘point-source dynamic’, with change starting small and then diffusing.Some new systems may grow out of old systems.Intermodal or integrated transport systems, for instance, first require sufficiently developed train, bus, tram and/or bike systems that can then subsequently be linked together.Another puzzle is that existing systems may be reconfigured through the adoption of multiple innovations, which together lead to wider changes.Car-based systems, for instance, can be reconfigured through self-driving cars, congestion charges, on-board navigation tools, dynamic road management, and electric vehicles providing back-up capacity for electricity grids.So, rather than following the diffusion of single technologies, one could shift the unit of analysis and ask how multiple innovations can reconfigure existing systems .Third, there is a more general debate on how diffusion can be accelerated, which is especially relevant to low carbon transitions and the time-sensitive problem of climate change .The mainstream climate mitigation literature has identified a range of options where strengthened policies could help accelerate low-carbon transitions, such as R&D subsidies, 
feed-in tariffs, carbon pricing, performance standards and removing fossil fuel subsidies.While useful and important, these studies are instrumentalist, focused on analyses for policymakers, not on analyses of policy, power, or politics .This is problematic because scholars emphasise that the acceleration of low-carbon transitions is a deeply political challenge.The German Advisory Council on Global Change, for instance, states that while technical and policy instruments for low-carbon transitions are well-developed, it is “a political task to overcome the barriers of such a transformation, and to accelerate the change”.To understand such deliberate acceleration it is too simple to focus on ‘political will’ or ask policymakers to show courage, because such a voluntarist orientation overstates the importance of politicians’ own volition.We therefore agree with Meadowcroft’s that it is important to better understand the “political conditions required to bring into play”.Conditions for accelerated diffusion may derive from external shocks and crises that change socio-political priorities and create a sense of urgency to accelerate deployment .Pressure for stronger policies may also come from changes in public opinion or from companies that see commercial opportunities in low-carbon innovations .Diffusion may also be accelerated by incumbent firms reorienting themselves towards radical innovations, thereby making financial resources, technical capabilities and marketing expertise available .Such reorientation is not easy, and often requires both pressures and economic opportunities.Comprehending the impacts of low carbon innovations on energy demand is central to public policy: energy efficiency improvements are considered to be the most promising, fastest, cheapest and safest means to mitigate climate change, as well as providing broader benefits, such as improved energy security, reduced fuel poverty, and increased economic productivity .However, compared to the large body of work on emergence and diffusion, the analysis of the impacts of low carbon innovations has received much less attention from sociotechnical researchers.Authors often emphasise the limitations of linear, deterministic approaches to projecting impacts; the frequency with which expectations of impacts are confounded by real-world experience ; and the challenges associated with both anticipating impacts ex ante and measuring them ex-post .Quantification of impacts is difficult within complex social systems, but may nevertheless be feasible for more incremental kinds of innovation within restricted spatial and temporal boundaries, e.g. 
the adoption of condensing boilers and the retrofitting of loft and cavity wall insulation .In these examples, sufficient data exists for the historical impacts of these changes to be measured and the relevant systems are sufficiently stable for the future impacts to be modelled.But establishing the historical or potential future impact of more radical innovations over longer periods of time presents much greater difficulties.For example, commonly used modelling tools may not capture all of the relevant mechanisms ; there may be no basis for assigning values or ranges to relevant parameters; and certain types of outcomes may be difficult or impossible to anticipate.The impacts of any change within a complex system are necessarily mediated through multiple interdependencies, time-delayed feedback loops, path dependencies, and threshold effects.More fundamentally, the basic concept of ‘impact’ is problematic from a sociotechnical perspective, because of its connotations with technological determinism – with technology impacting on society in a linear and straightforward fashion .Hence, for radical and systemic innovations it is difficult to establish causality, assess historical impacts and project future ‘impacts’.While historical analysis can provide rich descriptions of the co-evolutionary processes involved, the primary lesson is the contingent nature of impacts and our limited ability to anticipate them in advance.In this context, authors in the sociotechnical tradition have focused more upon transition processes than on the ultimate impacts of those transitions.Against this background, we identify three important research debates that are relevant to the impacts of low carbon innovations.First, there is a critical debate on the rebound effects from low carbon innovations and the extent to which these may undermine the anticipated benefits of low carbon innovations .Such effects result from a number of mechanisms operating at different levels, across geographical scales and over different time periods, but only some of these are amenable to quantification.Moreover, attention to date has focused almost exclusively upon economic mechanisms to the neglect of other co-determinants.As an illustration, consider the following example from transport systems: a) fuel-efficient cars make travel cheaper, so people may choose to drive further and/or more often, thereby offsetting some of the energy savings; b) joint decisions by consumers and producers may channel the benefits of improved technology into larger and more powerful cars, rather than more fuel-efficient cars; c) drivers may use the savings on fuel bills to buy other goods and services which necessarily require energy to provide; d) the energy embodied in new technologies may offset some of the energy savings, especially when product lifetimes are short; e) reductions in fuel demand translate into lower fuel prices which encourages increased fuel consumption, together with changes in incomes, prices, investments and industrial structures throughout the economy; and f) more fuel-efficient vehicles deepen the lock-in to the sociotechnical system of car-based transportation, with associated and reinforcing changes in infrastructure, institutions, regulations, supply chains and social practices.Rebound is therefore an emergent property of a complex system.A growing body of research is exploring mechanisms a-b, and to a lesser extent mechanisms c-e in transport and other areas, but this research excludes non-economic mechanisms, tends to be 
confined to the short to medium term, and stops short of assessing the impacts of broader changes in the relevant systems. Nevertheless, such studies indicate significant departures from anticipated impacts. There is a need to apply the relevant techniques to other innovations, contexts, datasets and time periods, and to extend the analysis to include broader psychological, social, institutional and other factors that either offset, reinforce or contribute additional rebounds – for example, the phenomenon of ‘moral licensing’. However, methods for studying the longer-term impacts of sociotechnical transitions need much more development, along with methods for evaluating the claim that ongoing transitions are necessarily more sustainable. Second, there is an important set of debates about the construction of impact scenarios, including the economic, social and political influences on those scenarios and the societal impacts of the scenarios themselves. Scenarios of both technology diffusion and energy consumption are regularly produced by multiple public and private institutions, and it would be useful to examine and compare their underlying assumptions, the processes through which they are constructed, their historical accuracy and the perils and pitfalls that result. For example, Gross et al. show how reliance on ‘learning curves’ for forecasting the future cost of electricity generation technologies has led to over-optimistic estimates, exacerbated by the tendency of analysts towards ‘appraisal optimism’. The latter may be an endemic feature of technology appraisals owing to the powerful incentives to raise expectations in order to attract finance and social support. Economic, political and institutional influences can shape the choice of data, methodologies and assumptions within impact studies, and the results can legitimise political decisions. Exploring this dimension of impact can enhance awareness of how difficult it is to assess future developments, especially for innovations with substantial transformative potential, and how the process of commissioning, creating and communicating those assessments can influence the developments themselves. Third, there is an important debate on the use of quantitative modelling tools for forecasting future impacts and the feasibility of modelling broader sociotechnical transitions. The economic and policy analysis literature is replete with model-based projections of future transitions and energy-related impacts. While many traditional modelling tools struggle to accommodate the non-linear, disruptive characteristics of socio-technical transitions, relevant quantitative techniques can offer useful insights in appropriate circumstances, provided their limitations are acknowledged. Examples include understanding the impacts of specific innovations within circumscribed spatial and temporal boundaries, or clarifying the long-run relationships between aggregate measures of productivity, consumption and growth. Until recently, the sociotechnical approach has mostly been used for qualitative explorations of future transitions via sociotechnical scenarios and related techniques. As with the historical studies, these primarily focus upon the process of future transitions rather than their impacts. In light of both the complexity of the processes involved and our limited ability to anticipate future impacts, most sociotechnical researchers have avoided formal modelling and quantification. In recent years, however, a productive research stream has started to explore
combinations of socio-technical and quantitative modelling approaches .Some researchers use new techniques, such as agent-based models or stochastic system dynamics, to simulate socio-technical transitions .Other scholars have explored future energy transitions through recursive interactions and ‘dialogue’ between quantitative models and qualitative socio-technical storylines .These bridging attempts form an important new research stream that aims to combine quantitative rigour with processual socio-technical insights.The relational and co-constructionist nature of a sociotechnical approach can blur the boundaries between emergence, diffusion and impact.We identify two more synthetic cross-cutting debates that span the different themes.The first cross-cutting debate is how impacts of low carbon innovations are co-constructed by choices in the earlier processes of emergence and diffusion.It is easy to use this co-construction notion to criticize ‘traditional’ impact studies, but more difficult to develop a deeper understanding of relevant processes and mechanisms.There are some starting points in sociological theories of innovation, but these need to be further developed, especially for more radical and systemic low carbon innovation.Actor-network theorists, for instance, have argued that designers build a ‘script’ into new technologies, which shapes later behaviour in a non-deterministic manner .So the impacts that manifest themselves in later periods are already constructed in early design and emergence phases.By the time that the in-built impacts become apparent, it is too late or too difficult to make design changes.This problem is sometimes called the ‘Collingridge dilemma’ , and it has inspired some scholars to develop ‘Constructive Technology Assessment’.CTA emphasizes not only the importance of early thinking about future impacts, but also feeding the views about possible effects back into design decisions in the emergence phase of technologies.Social historians suggest that impacts arise from the way new technologies are societally embedded via specific policies, infrastructures, markets, and societal debates .So, the same innovation can have different impacts in different countries or localities, depending on choices during societal embedding processes.This idea can be further exemplified by exploring social justice impacts.The material and social transformations associated with the emergence, diffusion and impact of new innovations are imbued with contestations over what is just, equitable, and right.Thus, there is a need for studies that explore questions of ethics and justice across these stages, including concern for where, how and with whom new technologies are socially embedded.Without a focus on justice, an energy efficiency revolution may fail to acknowledge the burden of not having enough energy, where some individuals lack access, are challenged by under-consumption and poverty, and may face health burdens and shortened lives as a consequence of restricted energy choices .A second cross-cutting debate concerns the role of policy and governance in shaping the emergence, diffusion and impacts of low carbon innovations.Three topics are of particular interest.The first is the importance of the policy mix and the synergies and conflict between different instruments.In line with the systemic approach to innovation and impact taken throughout this article, we also take a systemic view of policies and policymaking.As Kern et al. 
identify, much of the ‘policy advice’ literature still focuses on individual policy instruments, pairwise instrument interactions or intended policy mixes, neglecting the analysis of complex, real-world mixes, their development over time, and their consistency and coherency.We agree with Sovacool and Kivimaa and Kern about the need for comprehensive policies rather than individual, isolated mechanisms that tend to operate in a non-predictable and non-synergetic matter.As an example, Givoni et al.’s exploration of the transport sector illustrates that the deliberate and careful combination of mutually supportive policy packages may result in more effective and efficient outcomes through increasing public and political acceptability and the likelihood of implementation.It is important therefore to look at the ‘whole system’ of policy instruments, to identify positive and negative interactions between policies, and to investigate how these hinder or stimulate the emergence, diffusion and impact of low carbon innovations.The second topic is the pervasive role of politics in the emergence and diffusion of low carbon innovations.Early transitions thinking was criticised for being too technocratic, with a failure to fully acknowledge the role of politics and conflict .Since then a research stream on the ‘politics of transitions’ has started to incorporate political science theories into socio-technical perspectives, e.g. Sabatier’s advocacy coalition framework , Kingdon’s multiple streams framework , political economy , and political coalition theories , with the aim of better understanding the conflicts and power struggles associated with the emergence and diffusion of low carbon innovations.Consequently, policymaking is not seen as a purely rational process, but as a political process involving multiple stakeholders and social groups.So, we perceive policymakers as part of sociotechnical systems rather than as steering them from outside.The interactions within governance networks entail agenda-setting, discussions, negotiations, as well as disagreements and conflicts that relate to different views, interests and positions.A third topic is multi-level governance, which refers to interactions between supra-national, national and local policies and policymakers.This issue is particularly important for low carbon innovations, many of which are not only implemented but also increasingly configured and governed locally.In Europe, such local processes are shaped by local policy makers, operating in the context of national and European framework policies such as targets, regulations and subsidy schemes .This literature usefully contextualizes some of the voluntarist tendencies of the urban transitions literature, discussed in Section 3.Schwanen also shows how successes in the implementation of urban transport innovations in UK cities are dependent on national and EU level support.Bulkeley and Betsill therefore advocate multi-level governance approaches in research on the role of urban planning for climate change protection, thereby blurring the boundaries between global goals and local actions in the presence of the nation-state.Alignments and tensions between supranational, national and local policies are therefore critical in shaping the success of low carbon innovations.This article has identified and described thirteen research debates in the socio-technical transitions literature.The focus throughout has been on theoretical and conceptual issues rather than specific empirical topics.With this in 
mind, we offer three broader conclusions.First, the dominant economic and psychological approaches to understanding energy efficiency and demand reduction only provide a partial picture which is reflected in the limitations of the current policy mix and its focus upon incremental change.Radical reductions in energy demand require more far-reaching transitions in the systems that provide energy services.Policies to encourage this must in turn be informed by a deeper understanding of the actors, innovations, and causal processes involved.Second, a sociotechnical approach on low carbon innovation offers such an understanding.This perspective focuses upon how radical innovation is about creating new sociotechnical systems through the co-construction of multiple elements.Informed by detailed case studies, this interdisciplinary perspective sheds new light on how sociotechnical systems evolve, stabilise and transform through the alignment of developments on multiple levels.The themes of emergence, diffusion and impacts are useful heuristic devices through which to understand the sociotechnical transitions that are required for drastic reductions in energy demand.In each case we have described the sociotechnical conceptualisation of the research theme and identified several research debates within the theme.These debates are summarised in Table 1.Third, a sociotechnical approach exposes several important characteristics about low carbon innovation and transitions, namely:Radical low carbon innovation involves systemic change: This extends beyond purely technical developments to include changes in consumer practices, business models and organisational arrangements.A sociotechnical transitions approach links multiple innovations and transforms broader sociotechnical systems.Radical low carbon innovation involves cultural change: Low carbon innovations are typically less ‘sexy’ then energy supply innovations, and garner less interest from policymakers and the wider public .Most people have little interest in demand reduction and the economic incentive to save energy is often weak.An energy efficiency and demand ‘revolution’ will therefore require dedicated campaigns to create a sense of urgency and excitement about low carbon innovations.To alter cultural preferences, such campaigns need to go beyond information provision and aim to create positive discourses and increase competencies and confidence among users.Radical low carbon innovation involves new policies and political struggles: Since many of the benefits of low carbon innovation can be considered a public good, incentives may be weak in the absence of collective action.The development and adoption of low carbon innovations will therefore require sustained and effective policies to create appropriate incentives and support.The development and implementation of such policies entail political struggles because actors have different understandings and interests, which give rise to disagreements and conflicts.Managing low carbon transitions is therefore not only a techno-managerial challenge, but also a broader political project that involves the building of support coalitions that include businesses and civil society.Radical low carbon innovation involves pervasive uncertainty: The technical potential, cost, consumer demand and social acceptance of new innovations are highly uncertain in their early stages of development, which means that the process of radical innovation is more open-ended than for incremental innovations.Such uncertainty 
carries governance challenges.Policy approaches facing deep uncertainty must protect against and/or prepare for unforeseeable developments, whether it is through resistance, resilience, or adaptation .Such uncertainty can be hedged in part by learning by firms, consumers and policymakers.Social interactions and network building and the articulation of positive visions all play a crucial role.This uncertainty extends to the impacts of low carbon innovations on energy demand and other variables, where unanticipated and unintended outcomes are the norm.Essentially, low carbon innovation demands we not only rethink the promise of both technology and behavioural change, but our assumptions concerning systems, culture, politics, and uncertainty as well.
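The scenario debate above highlights how reliance on 'learning curves' for forecasting technology costs, combined with appraisal optimism, can produce over-optimistic estimates. As a purely illustrative sketch (not drawn from any of the studies cited here), the single-factor experience-curve calculation below shows how sensitive a projected cost is to the assumed learning rate; all cost and capacity figures are hypothetical.

```python
# Minimal single-factor experience-curve projection (hypothetical numbers).
# Unit cost falls by the "learning rate" for every doubling of cumulative
# deployment:  C(x) = C0 * (x / x0) ** (-b),  with  b = -log2(1 - learning_rate)

import math

def projected_cost(c0, x0, x, learning_rate):
    """Unit cost after cumulative deployment grows from x0 to x."""
    b = -math.log2(1.0 - learning_rate)
    return c0 * (x / x0) ** (-b)

c0 = 100.0   # hypothetical current unit cost
x0 = 10.0    # hypothetical current cumulative capacity
x = 160.0    # assumed future cumulative capacity (four doublings)

for lr in (0.10, 0.15, 0.20):  # alternative learning-rate assumptions
    print(f"learning rate {lr:.0%}: projected cost {projected_cost(c0, x0, x, lr):.1f}")
```

Even in this toy example, shifting the assumed learning rate from 10% to 20% changes the projected cost after four capacity doublings from about 66 to about 41 (in the hypothetical units), illustrating how small differences in assumptions can compound into large differences in impact scenarios.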
Improvements in energy efficiency and reductions in energy demand are expected to contribute more than half of the reduction in global carbon emissions over the next few decades. These unprecedented reductions require transformations in the systems that provide energy services. However, the dominant analytical perspectives, grounded in neoclassical economics and social psychology, focus upon marginal changes and provide only limited guidance on how such transformations may occur and how they can be shaped. We argue that a socio-technical transitions perspective is more suited to address the complexity of the challenges involved. This perspective understands energy services as being provided through large-scale, capital intensive and long-lived infrastructures that co-evolve with technologies, institutions, skills, knowledge and behaviours to create broader 'sociotechnical systems’. To provide guidance for research in this area, this paper identifies and describes thirteen debates in socio-technical transitions research, organized under the headings of emergence, diffusion and impact, as well as more synthetic cross-cutting issues.
272
Qualifying the design of a floating closed-containment fish farm using computational fluid dynamics
The production of Atlantic salmon is the paramount activity in Norwegian aquaculture, accounting for more than 80% of the total aquaculture production in the country.With a thousand-fold growth over last four decades, Norway is currently contributing more than one third of the global salmon production.Aspiring to increase the salmon production by five times by 2050, Norwegian aquaculture has been evolving with new businesses and innovative technologies with a focus on the environmental performance of fish farms.However, there are many challenges facing this proposed five-fold expansion in production, which include sea lice, diseases, production losses etc.This necessitates innovative production systems such as closed-containment systems, where the fish are separated from the outside environment.With a better control on production, environmental impact and disease transmission makes CCS a promising alternative to open-cage production systems.There has been a growing interest in the Norwegian aquaculture industry in CCS solutions for post-smolts.Post-smolts are salmon being adapted to sea water life, and up to about 1 kg.Although the harvest size is about 5 kg, the post-smolt stage still amounts to approximately half the production time cycle in the sea due to the growth characteristics of salmon.By keeping the post-smolts in closed systems, this considerably reduces their exposure to sea lice, and also these systems are a more stable environment for fish production.At 4% annual growth, production in Norway should increase to 3,000,000 t by 2030.Thus, CCS plants by 2030 can be expected to account for a production of 500,000 t.The industry is therefore interested in innovative solutions to achieve this.Little research has been done to investigate flow hydrodynamics in CCS using computational methods.However, the subject of rotational flows in confined domains has been investigated for some time, but in different applications.For instance, the early experimental studies of Willingham, Sedlak, Rossini, and Westhaver, and Macleod and Matterson considered the flow behaviour in the rotary fractionation columns.Kloosterziel and van Heijst performed experiments to analyse the vortices in a rotating fluid.The study noted several observations on vortex stability, which imply that the characteristics of rotating fluid largely depend on the type of eddies that prevail.Eddies are influenced by the type of inflow and outflow settings employed in the system.The empirical study by Dyakova and Polezhaev on the steady flow in a rotating cylinder explains the complexity associated with these flows.Furthermore, a considerable effect of the geometry of container on the flow characteristics was experimentally studied by Pieralisi, Montante, and Paglianti.Although the observations made in the above literature are relevant to the present study, the experimental methods used in these investigations were enormously complex and time consuming to carry out at full-scale.On the other hand, the theoretical approach to understand the rotating flow patterns depend on far-reaching assumptions.Computational Fluid Dynamics has become a promising tool to create a platform for simulation-driven product development, without the need to produce working prototypes for testing.By solving the conservation equations for mass and momentum using CFD tools, comprehensive information on various flow features can be obtained and used to improve the flow conditions.The flow injected tangentially through a series of jets into a circular 
tank is inherently turbulent and several dynamic aspects associated with turbulence are experienced.Under steady inflow conditions, the rotational fluid in a closed domain experiences columnar vortical structures that align with the axis of rotational motion.Associated eddies with steep energy spectra play a major role in mixing and momentum transfer by reducing the turbulence dissipation rate that would otherwise occur.The coherent vortical structures in such complex flows can be detected using various criteria.Levine, Rappel, and Cohen gives an intuitive definition of a vortex structure by presenting it as the rotational motion of several particles around a common centre.This definition can be improved by also considering the vortex convection.Vortex formation and convection is primarily accompanied by fluid shearing and thus turbulent zones.Although there are several theoretical and experimental methods to determine the vortex characteristics in closed flow domains, one of the objectives of this study is to observe and compare the mixing characteristics in different geometrical designs of the closed-containment aquaculture system FishGLOBE.With an increasing concern about the environmental impact of aquaculture, technologies are being developed to manage the organic waste, including the deposition of waste material as well as control of water quality due to their presence.While the near-field deposition of wastes including solids removal and stabilisation has its own challenges, the motion of solid particles in the working fluid is one of the critical aspects of operation due to its two-way interaction with the hydraulic environment; the physical and motion properties of the solids are determined by the flow field, and particle dissolution influences the water quality.In fish culture environments, solids in the water column mainly consist of faecal material and uneaten feed pellets.Ideally, these particles should be flushed out much faster than the mean hydraulic retention time of the tank, otherwise they can adversely affect the water quality and the health and welfare of the fish.The internal fish tank geometry will also influence the flow pattern and thus the particles motion, in the context of confined flow domain.Through computational modelling, Shahrokhi, Rostami, Said, Yazdi, and Syafalni and Guo et al. 
identified that the efficiency of a settling tank increases with the flow uniformity.A uniform velocity field was found to increase the rate of suspended particle deposition.Active circulation zones create non-uniform conditions in the flow, which adversely affect the particle removal.Furthermore, it has been shown that Atlantic salmon growth, health and welfare is improved by increased water velocity which provides exercise training for the fish, usually in the range 1–1.5 body lengths per second.Therefore, any development of closed-containment systems must take this velocity requirement into account.While reviewing the circular tank technology for aquaculture, Timmons, Summerfelt, and Vinci specified that the overall flow pattern is largely dependent on the inflow characteristics, which was experimentally proved by Muller, Cesare, and Schleiss.However, quantifying studies on this topic in CCS are missing in the existing literature.This paper aims to investigate the existing design of floating, closed-containment aquaculture system-FishGLOBE, and improve the design by testing two different inlet configurations.Section 2 describes the rationale for the project, and how the design of FishGLOBE is superior to the land based farming systems.In this study, two models were investigated; a pilot globe and a post-smolt globe.While the flow field was investigated in both designs using turbulence modelling, the validation experiments were possible only for pilot globe, using Acoustic Doppler Velocimetry.In addition, the motion of biosolids in the post-smolt globe was studied using Lagrangian formulation.These investigation methods are illustrated in section 3.In section 4, a detailed information on the flow field that evolved in the pilot globe, including the effect of flow rate, is described.Based on analysis of pilot globe, which has the tangential inflow, a full-scale computational model of pilot globe was developed with two inlet configurations.Contrasting studies between both designs for flow physics and particle flushing in the post-smolt globe is presented in section 5, and conclusions made in section 6.FishGLOBE aims to develop a closed fish-farming facility for better fish growth and reduced production problems.The project is expected to provide a more protected environment for farming the salmon post-smolts.Combining the technology of FishGLOBE and regular open cages, it is expected to help solve some of the bigger environmental challenges, such as sea lice.The globe is equipped with diffusors, situated inside the water inlet pipes, for delivering oxygen.Furthermore, a system for CO2 removal from reused water, based on the use of ejectors, could be turned on if the facility experiences the difficulties with getting fresh water; thereby contributing to water quality and fish welfare during such emergencies.To achieve positive buoyancy, buoyancy tanks are located inside the upper portion of the globe, which also makes the construction stronger and resistant to the forces caused by waves.The buoyancy tanks can also be used as technical rooms, with an emergency power supply and a reserve oxygen tank.These chambers also have the equipment for filtration and handling of waste material and dead fish, as well as feed stores.The design also offers a unique solution to transport fish at high rates by creating a positive atmospheric pressure inside the structure.In addition, FishGLOBE offers most of the solutions that well boats can offer to treat the freshwater for parasites and sea lice at almost 
the same processing capacity.This drastically reduces the operating costs.All these operating systems are located above the rearing volume as shown in Fig. 1, where the design of the pilot globe is illustrated.The use of closed-containment systems in aquaculture is dependent on their safe, controllable and optimal operation and systems must be tested to verify designs and confirm the desired flow pattern.Proper dimensioning and management of the hydrodynamics in the facility is imperative for water quality, fish development, health and welfare, and operating costs.Good water quality depends on an optimal flow pattern to ensure not only better distribution of oxygen but also the efficient removal of waste products.A Nortek 10 MHz acoustic Doppler velocimetry probe was used in this study to measure the 3D velocity components at predefined locations within the globe.The instrument, as shown in Fig. 2, operates on the Doppler shift principle and is suitable for determining point specific velocity fluctuations but not for identifying coherent flow structures, such as the resolution of turbulent structures within a domain.ADV measurements can only be carried out within particle-laden flows.At the start of measurement, a short acoustic signal of known frequency is emitted from the transmitter.This signal is reflected in the water by the smallest particles moving with the speed of the water.The echo of the signals reflected from the measuring volume reaches the three receivers with a time shift Δt, is amplified in the signal conditioning module and digitized and analysed in the processor.The frequency change in the acoustic signal at the time of impact on and reflection from the measuring volume caused by a relative movement of the water flowing in all three directions is proportional to the flow velocity.A stable recording for 35 kB data for each measurement, consisting of individual velocity components was collected."The momentary velocity, measured over a period, is temporally averaged from a turbulent fluctuating variable v'.As noted by Strom and Papanicolaou, the signal filtering process involves the removal of low quality data within the time series through signal correlation and signal-to-noise ratio; the former is a statistical measure of how closely the reflected pulses are related, and the latter gives the ratio between the transmitted and received signal strength.A correlation coefficient of more than 90% indicates a reliable measurement.SNR should always have values of more than 15 dB, when the data is recorded with a sampling frequency of 25 Hz.If only the mean value of the measured values is considered in the data evaluation, a SNR of 5 dB is sufficient.A typical measurement is shown in Fig. 3.However, there is a variety of parameters that lead to uncertainty in ADV measurements, particularly in turbulent flows.These include random spikes in the data, Doppler noise, too close presence of fish, and unresolved turbulent scales.Due to the lack of exclusive measurement settings for each sampling point to validate the computational predictions, a reasonable requirement is to quantify the uncertainty in velocity magnitude.This uncertainty can be reduced to some extent by fine tuning and calibration of the measurement apparatus.However, the temporal fluctuations in the flow variables necessarily produce considerable deviation from the mean values.Referring to Fig. 
4, there are two openings, each of 800 mm, on opposite sides of the globe at approximately 1.7 m from the centre, which were the only possible practical locations to measure the velocity in the globe.This limited the number of velocity measurements to 11 along three vertical lines at each operating condition.When making field measurements, it is important to obtain reliable reference data.Due to a number of errors and uncertainties in the real-time measurements, appropriate data filtration techniques are necessary to remove wrong data sets.In the present study, a sufficiently large amount of data was collected, which reduced the degree of variation in the measurements.Figure 5 shows that the coefficient of variation at the chosen locations - lines a, b and c, is less than 1% at different pump speeds.A likely decreasing trend of CV with pump speed is observed.This means that the uncertainty associated with the equipment and measurement processes contribute more to the variations in the measurement than the uncertainty in managing an accurate flowrate into the globe.In the context of complex geometries used here, a reliable multi-physics analysis and a cost-effective workflow were required.Figure 6 shows the schematic workflow, used in this work.The transfer of CAD data from one computational engine to the other is a challenging task, particularly in the case of complex geometries.Older data formats such as STL and IGES/IGS offer a surface representation based CAD data transfer, which is not suitable for exchanging the product data structures and solid model definitions.On the other hand, the formats STEP/STP can efficiently transfer such metadata and provides a standard interoperability of data exchange between different computer programs.The software package CATIA V5 R21 was used to develop the geometry models in STEP/STP format.To create a control volume domain, an automated meshing process using Castnet was implemented.However, the meshing interface was limited only to the imports of Parasolid or STL models.Therefore, an efficient CAD translation from STP to X_T was performed using the conversion software, 3D-Tool V12.A finite volume based CFD tool that works with OpenFOAM technology, called BlueCFD, was used for simulations.While offering a wide range of viscous and multiphysics solvers, BlueCFD-Core 2.3, in association with a graphical interface, called RunGui, constituted the simulation environment for the present study.The results comprised both field representations and quantified parameters, which were post-processed using Paraview 5 and Matlab R17.In the context of increasing demands for high fidelity computational studies in designing and analysing flow systems, powerful computing machines, with a rapid technological evolution in their architecture, have been in use for about 30 years.Multicore processors with highly capable shared memory nodes have become the fulcrum of advanced computations.The development of an efficient computational model is not a straightforward process and it requires many different skills.The physics of the flow were first modelled using a robust numerical framework, followed by the implementation of solution process on parallel computing machine.Parallel processing involves the domain decomposition, where the computational grid and its associated fields are partitioned to be handled by separate processors.OpenFOAM employs process-level parallelism between the processors using the standard, message passing interface, which levers the communication between 
different tasks through data exchange.The parallel processing of MPI protocol in the present study used the Scotch heterogeneous decomposition, which requires no geometric information and thus reduces the number of patches between the processors.Because the amount of data communication is reduced, the performance could be increased.Figure 8 shows the resulting 26 subdomains after decomposing the computational grid.For the computational modelling of the hydrodynamics in the pilot globe of 74 m3 volume, a simple CAD model was developed by eliminating all internal supporting structures and operating systems inside the globe.The resulting geometry was discretised into finite volumes to solve the conservation equations.The fundamental aspects that distinguish computational grids are cell shape and size, which determine the solution accuracy.A widely-accepted fact is that the hexahedral cells yield better accuracy than the tetrahedral cells.However, a critical review of computational meshes is particularly important for industrial applications where the geometries are often complex, and a trade-off between the ease of mesh generation and solution accuracy is necessary.The computational study of Hosseini, Patel, Ein-Mozaffari, and Mehrvar on the multiphase modelling of an agitated tank used tetrahedral cells to create the unstructured mesh over complex surfaces.However, being the lowest order polyhedral construction, the tetrahedral cells occupy the space less efficiently for a given resolution, and thus demand more memory and high CPU time.Concerning the boundary layers, where the viscous effects lead the momentum transport, Ito and Nakahashi acknowledged that the hexahedral meshes are more effective in predicting the flow gradients than the tetra meshes.Kowalski, Ledoux, and Frey noted that the complicated domains could be covered by a full hexahedral mesh by ensuring the good quality in terms of cell dihedral angle and Jacobian measure.In order to benefit from the full hexahedral cells, the meshing process was considered to generate a dominantly structured hexahedral mesh.Although the transient turbulent simulations require sufficiently fine resolution in the node spacing, minimising the number of cells has a huge payback in terms of computational cost.Several mesh dependence studies were conducted by changing the base cell size until the velocity magnitude in the regions of high gradients did not change by more than 2%.As a result, a hexa mesh with 484,112 cells was developed.Because the mesh quality significantly affects the solution accuracy by influencing the discretisation error, different mesh quality parameters were examined.98% of mesh cells had the aspect ratio less than 10, and 99% had the dihedral angle between 70° and 130°.92% of cells had the skewness less than 0.8 with 55% less than 0.5.This confirmed good quality of discretized domain.The simulations were initiated with a first-order upwind discretisation in space and time until the solution was converged, and then switched to second-order accuracy.The pressure and velocity fields were coupled using the SIMPLE algorithm with second order interpolation.The transient formulation contains the time step 0.005 s with 30 sub-iterations.The residuals of computed flow variables were set to the order of 10−3 as convergence criteria, and no further change in the solution was observed with further reduction in the target residual value.The computations were started at low under-relaxation factors, which were raised to default values after 
a stable solution was witnessed.A 14-core Intel Xeon E5-2683 v3 2.00 GHz workstation with 28 processors was employed for the computations.Before analysing the flow field in the globe, the developed computational model was validated against the velocity measurements at predefined locations under different operating conditions.Figure 10 compares the CFD predictions of velocity magnitude along the lines, and from the ADV measurements.The standard deviation bars that accompany the experimental results, quantify the uncertainty in the measurements.This variation in the velocity does not necessarily represent the uncertainty in the measurement utilities alone, but it includes the flow rate through inlet pipes as well.Because the error in the flow rate into the globe is unknown, the analysis is therefore limited only to the magnitude of deviation but not its source.The spatial variation of velocity along the line ‘c’ is due to the jets from each flow inlet nozzle.In addition, the interactions between the local flow and near boundaries such as free surface, conical wall surface and bottom of the globe are not homogeneous.Irregularity in the velocity profile is preserved along the flow path as seen along the line ‘a’.On the other hand, the velocity magnitude and its scale of variation are comparatively lesser along the line ‘b’.This shows that strong velocity gradients exist in the radial direction.Also, a reduction in the velocity magnitude is observed as the flow travels from the location ‘c’ to ‘a’, which explains the momentum diffusivity in space.The flow velocity on free surface displays a different behaviour along the radius.The free surface velocity is higher than at depth along the line ‘b’, which is possibly due to flow suction by the outlet on the central vertical pipe.But, at extremely high flow rates, the stress-free wall surface boundary condition caused underestimated velocity predictions by CFD.Fluctuations in divergence of the free surface velocity field is associated with a range of turbulence scales.Better accuracy could be ensured using a two-phase flow model at the free surface but this would produce a small gain in accuracy and cost more in terms of CPU time.The conical bottom for fish tanks plays an important role in self-cleaning of the tank.FishGLOBE adopted this concept to create the secondary vortices in the flow by keeping the primary rotational flow free from perturbations and non-uniform flow structures.It is therefore interesting to investigate the effect of Reynolds number on the characteristics of secondary vortices in a confined flow domain.Figure 11 – show the streamline distribution across the central vertical plane at different operating conditions.For comparison purposes, the contour plot is coloured on the scale of a normalised velocity.Referring to Fig. 
11, region 1 is characterised by high velocity and hence strong shear stresses, which is likely to be narrowed down at higher pumping conditions. This is due to the increasing normalised velocity in the vicinity of region 1, implying more uniform flow in the tank. There is increased vortex formation in region 2 with pump speed, but this is at the expense of losing vortices in region 3. In addition, there is a new vortex growing in region 4 at high flow rates. Increasing flow velocity deforms the vortex combination at location 5, which tends to move to the bottom of the tank. The vortex position and strength in region 6 is also influenced by the pump speed, but limited by the confined flow domain. It is observed that the flow structure along the vertical line through region 7 is little affected by the increased flowrate, due to the equally dominant rotational flow near the tank's periphery and radial flow from the core to the centre of the globe. However, investigating the relationship between these two flow components is outside the scope of the present study. By and large, the obtuse-angled corners of the globe control the velocity field for any flowrate. It is thus concluded that the design of the pilot globe maintains effective 'tea-cup hydrodynamics' to promote mixing and self-cleaning of the tank. The vortices are characterised by peaks of low pressure. However, Jeong and Hussain noted that a local pressure minimum is a necessary but not sufficient condition to identify the vortices. This led to the definition of the tensor S² + Ω², where Sij = 0.5(∂ui/∂xj + ∂uj/∂xi) and Ωij = 0.5(∂ui/∂xj − ∂uj/∂xi) are the symmetric and antisymmetric components of the velocity gradient. The Q criterion, as computed by Gorle, Chatellier, Pons, and Ba, identified the vortices as the regions where the flow is dominated by the rotation tensor. Mathematically, it is defined as Q = 0.5(‖Ω‖² − ‖S‖²), where ‖·‖ denotes the Frobenius norm. Consequently, the vortex structures are identified by a representation of the positive Q iso-values, while their centres are identified by the maximum values of Q. The evolution of vortices at different pump speeds is visualised in Fig. 11– using iso Q-values. The intensity of circulation increases with flow rate. Two major regions of vortices are observed in all cases. One is the vortex ring around the central outlet pipe, and the other is an envelope of vorticity that covers the inlet pipes. On a general note, these two regions comprise turbulent energy cascade processes. The individual vortical structures advected away from the inlet pipes in Fig. 11 and along the outer envelope have merged to form a continuous columnar vortex in the globe at higher flow rates as shown in Fig.
11.The wake zones behind the inlet pipes can be stretched by the vortex core followed by a gradual entrainment into the envelope.Enriched flow dynamics in this region are often difficult to accurately model.Stripping, stretching and wrapping of vortices generate a wide range of turbulence scales, which challenge the ability of available turbulence models to capture the pertinent physics.In order to exploit the advantages of CCS as seen in the case of pilot globe, a post-smolt facility with an expected production of 250 t of salmon in a rearing volume of 3500 m3 is in the development phase.The design of the post-smolt globe was computationally tested to determine if the design produces the optimal flow conditions and mixing in the globe.Figure 12 shows the geometry and basic dimensions of the post-smolt globe.In addition to increasing the water volume by ∼50 times from the size of the pilot globe, there are two major changes in the geometries of the two designs.Firstly, the conical part wall of the pilot globe that has the largest radius was replaced by the straight vertical wall.This would expect to result in a change in the development of vortices in the region 2 as in Fig. 11.Secondly, the number of inlet pipes was increased from 3 to 6.However, the computational model was developed with only two inlet pipes operating to supply a total flow of 1.98 m3 s−1.From several possible configurations for the nozzles on the inlet pipes in terms of their size, shape and orientation, two nozzle designs were considered and compared for their performance.The first design has a standard series of 40 nozzles, each having the diameter of 125 mm, placed along the height of each inlet pipe.The nozzles are tangential to the wall of the globe and so is the inflow direction.In the second design, the flow is discharged through 160 nozzles, divided into two columns such that a V-type inflow feature is created.The inner column directs the flow at an angle of 20° towards the centre of the globe, while the outer column of nozzles discharges the flow at 10° away from the tangent.The nozzle size in this case is 63 mm.Figure 13 and show the respective nozzle designs that confirm an equal inlet velocity of 2 m s−1, when 0.99 m3 s−1 flow is discharged through each inlet pipe into the globe.The streamline patterns across the two vertical planes, XY and ZY in the globe with the proposed nozzles are illustrated in Figs. 
13–.The flow is accelerated around solid obstructions such as inlet pipes.Flow from the standard nozzles, which is purely tangential to the walls, follows a pure circular path in the globe.This feature displays a higher velocity around the obstruction, than that in case of V-nozzles.In addition, standard nozzle design creates a concentrated velocity distribution along the radial position of inlet pipes, which is more distributed across the plane in case of V-nozzle design.Both designs display the presence of secondary vortices along the vertical planes, which retain the ‘tea-cup’ effect and promote mixing activity.The rotational velocity across the globe was analysed using the velocity distribution across the horizontal plane.Figure 14 distinguishes both nozzles for planar velocity distributions at different heights from the base of the globe.All planar visuals display a common trait; a maximum velocity from the two inlet pipes and the major flow gradients along the ring of inlet pipes.This qualitative locus separates the velocity field; standard nozzles create higher velocities outside this ring and a lower velocity inside.V-nozzles on the other hand create higher velocities inside the ring than the outside.Comparing the wake region downstream of the inlet pipes, the inner series of nozzles deliver the flow with a radial component, and therefore a lesser tendency to interact with the pipe on downstream.This leads to a reduced wake area behind the pipes, which implies a reduced form drag.The wake area in case of standard nozzles is comparatively larger, and relatively more energy should be spent overcoming the drag force induced by low-pressure wake.Table 1 shows the γ indices across the horizontal planes at different heights for the two designs.Both designs have appreciable difference in the velocity distribution.With the standard nozzles, the peak velocities are gradually reduced from the periphery to the centre.In contrast, V-nozzles caused the velocity to jump suddenly from the lower values in the periphery to higher values in the core.This resulted in somewhat lower uniformity indices for the globe with V-nozzles, compared to standard nozzles.However, as observed by Gorle et al., this difference is of no practical effect because the overall uniformity was never below 90% in both cases.Another advantage of V-nozzles is that the jets emanating from 2 columns of nozzles into the upstream flow tend to organize themselves with the mean flow quickly.This controls the wake size behind the inlet pipes, which is not the case with standard nozzles.Wake consists of shear zones and characterized by severe turbulence.The knowledge of the fine structures of turbulence in the flow domain can help identify the critical regions, where flow field fluctuations are maximum.This is important because turbulence intervenes in the phenomena of flow uniformity, mixing and particle settling.Turbulent structures apparently appear downstream of each inlet pipe with the peaks near the inlet nozzles.The transport mechanisms increase in these regions due to the prevailing transient characteristics of eddies, which extract the kinetic energy from the mean flow.Figure 15 and depict the 3D contours of turbulent kinetic energy k in the globe of selected inflow designs with an iso-value of 0.007.It is clear that the standard nozzles create stronger velocity gradients than V-nozzles, which result in increased production of turbulent kinetic energy.The turbulent kinetic energy dissipates more quickly in the case 
V-nozzles.Negligible turbulent kinetic energy distribution was observed in both cases near the tank walls compared to the peaks of distribution.Although the volume fraction of solids in the globe is very small compared to the size of the globe, the particles adversely impact the water quality and hence the welfare and performance of the fish.Uneaten feed pellets and fish faeces, if left in the culture tank, are hydrolysed or decomposed by micro-organisms which reduces the dissolved oxygen in the water and increases CO2, NH3 and other mineral nutrients.Thus, the uneaten feed particles should be removed from the tank to prevent these wastes from further degrading into fine particulates and dissolved organic matter that exerts an oxygen demand, ammonia, and dissolved phosphorous, as well as to control the eutrophication and potentially hypoxic conditions in the receiving water.The solids in the flow domain should be treated to meet the quality standards.To facilitate an effective self-cleaning action, optimised flow conditions and structural design is necessary.A separate particle trapping system exists in the post-smolt globe, which along with the particles discharges 1% of the flow.A further investigation was carried out to measure the effectiveness of solids flushing in both the standard and V-nozzle designs.In the first step, the working fluid was characterized as the water-solids mixture.The liquid phase is defined by its density, viscosity, average velocity, while the solid phase is characterized by particle size, shape, density and particle cohesion, which determine the rate of sedimentation in the globe.When the particle size is relatively small with respect to the flow dimension, turbulence plays a major role in the flow of the water/solids mixture.In order to evaluate the particle motion in the selected designs, two types of particles, fish faeces and feed pellets, were used to investigate the motion of the solids in the globe.The density of fish faecal matter is likely to vary depending upon operating conditions.Suspended solid specific gravity values of 1.13–1.20 and 1.005 were reported by Timmons and Young, and Robertson, respectively.The study of Unger and Brinker on a variety of fish diets indicated a mean specific gravity of 1.036 ± 0.0018 of faecal matter from 0.3 to 0.4 kg sized rainbow trout.Also, the settling velocity of the particles varies with the size of the particles.These authors found that the settling velocity increases with particle size from 1 mm s−1 for 200 μm faecal particles to 6–9 mm s−1 for 600 μm size.These wide variations likely reveal the diversity of such complex hydrodynamic systems.In the present study, fish faecal particles with a mass-averaged diameter of 200 μm and a specific gravity of 1.036 were modelled as they were injected through the water surface at an initial settling velocity of 4 mm s−1.In order to track the particles through the longest travelling distance, the particles were injected at the water surface.For the sake of simple analysis, the number of particles injected was limited to 500 in a fully developed Eulerian flow field.It was also assumed that there were no collisions among the particles.A standard wall interaction with a coefficient of 0.5 for the particles was used in the solution process.Figure 16 shows the resulting distribution of particles in the globe for the selected inlet configurations at different times after the particle have been injected.As soon as the particles enter the flow domain, the particles attained the 
characteristics of the flow, as explained in Fig. 14. The higher peripheral velocity with the standard nozzles moves the particles in this region with higher momentum, which happens near the centre with V-nozzles. Particle settling occurs along the flow length under the influence of the kinetic energy of the flow, the gravity force and evacuation through the outlet holes. The snapshots at t = 11.6 min show that the particles in the V-nozzle design move swiftly towards the outlet and get discharged out of the tank, resulting in fewer particles left. The stronger central vortex forming in the standard nozzle design is likely to prevent the particles from getting close to the outlet. At t = 23.2 min, higher particle momentum continues near the periphery of the tank in the case of standard nozzles. This reveals that the loss of uniformity with V-nozzles, as explained in section 5.2, has become an advantage because fewer particles remain near the periphery of the tank, from where they are difficult to move towards the central outlet. Furthermore, the solids removal efficiency of both inlet configurations was compared in terms of particle residence time. Figure 17 shows the percentage of particles removed over time after the particles were injected at time t = 0. The standard nozzle design takes approximately 45 min to steadily flush 40% of the particles. The V-nozzle design flushes more than half of the particles in less than 17 min, which results in a steeper profile. This corresponds to a self-cleaning effectiveness of the V-nozzles that is approximately two times higher than that of the standard nozzle design. In the case of feed pellets, both nozzle designs displayed identical trends, although the V-nozzles took 65% less time than the standard nozzles to flush 40% of the particles. This paper presents the development of CFD models of a closed-containment aquaculture system. Such aquaculture systems are increasingly being focused on as a technology that can lead to further growth of the salmon farming industry. There is a lack of scientific information for the development of innovative solutions in the field of hydrodynamics in aquaculture systems. Using CFD, this study has developed new inlet designs to improve the flow patterns. The following three aspects are worth noting. Measurements and modelling: ADV was used for velocity measurements in the pilot globe at 11 predefined locations. Although the amount of information obtained from the experiments was enough to validate the computational model, the measurements are not sufficient to fully understand the turbulence features in the globe. However, it is possible to estimate the variance and covariance of velocity components using ADV, which can improve the quality of flow characterisation. Despite the assumptions made in relation to the instrument geometry, acoustic device efficiency and the target element positioning, Voulgaris and Trowbridge recorded a deviation of only 1% from true values in the mean velocity and Reynolds stresses measured using ADV. The deviation was 5% when turbulent boundary layer flow over a smooth bed was studied by Dombroski and Crimaldi. Future studies should empirically study the levels of turbulence in the globe. Flow characterisation: This study demonstrated the effect of inflow rate on the flow domain of a closed-containment aquaculture system. It was found that this flow field is characterized by enriched vortex dynamics, associated with the vortex column and rings. The conical corners of the pilot globe largely control the presence of secondary vortices. With the information obtained from the CFD studies of the pilot globe, a
much larger post-smolt globe was simulated with two different inlet configurations under the hypothesis that the inlet configuration has a major impact on the flow domain. It was discovered that the V-nozzle configuration for the inlet pipes displayed a superior performance to the standard nozzles in terms of vorticity distribution and energy preservation, and was only penalised by a 2% reduction in the mean uniformity of the flow. Other nozzle configurations that can improve the flow pattern are possible and this requires further study. In addition, the positioning of the inlet and outlet pipes could be a fruitful area for future research. Particles in the flow: The motion dynamics of faeces and uneaten feed pellets were well captured in this study. The inlet configuration was found to have a substantial influence on the particle distribution in the globe and their settling features. V-nozzles displayed approximately three times better performance in flushing the solids than the standard nozzles. The simple particle-tracking model used in the present study did not consider collisions. This could be addressed in the future using a stochastic collision formulation, which requires appropriate information on the mechanical properties of the solids concerned. This is a more challenging and as yet untouched research topic in the field of aquaculture and it requires multi-particle dynamics modelling of combinations of feed and faeces at high Re. The next step in the project FishGLOBE is to develop a grow-out globe to farm bigger fish, sized 100 g–3 kg. The expected rearing volume would be 29,000 m3, capable of holding 2300 t of fish. Such large constructions, with enormous amounts of flow transferring to and from the globe, would necessarily create high Reynolds number effects with complex flow dynamics. Future computational studies on this grow-out globe will consider the outcomes of the present study to create favourable flows and operating conditions.
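As a complement to the vortex identification discussed above, the Q-criterion can be evaluated directly from a velocity field exported from the solver and then visualised as iso-surfaces (for example in ParaView). The sketch below assumes a velocity field sampled on a uniform Cartesian grid and stored as NumPy arrays; the grid, spacing and the simple swirling test field are illustrative and are not taken from the FishGLOBE model setup.

```python
# Sketch: evaluate the Q-criterion, Q = 0.5*(||Omega||^2 - ||S||^2), on a
# velocity field (u, v, w) sampled on a uniform grid. Positive Q marks
# rotation-dominated regions, i.e. candidate vortex structures.

import numpy as np

def q_criterion(u, v, w, dx, dy, dz):
    """Return the Q field for velocity components u, v, w given as 3D arrays."""
    du = np.gradient(u, dx, dy, dz)   # [du/dx, du/dy, du/dz]
    dv = np.gradient(v, dx, dy, dz)
    dw = np.gradient(w, dx, dy, dz)
    grad = [du, dv, dw]               # grad[i][j] = d(u_i)/d(x_j)

    q = np.zeros_like(u)
    for i in range(3):
        for j in range(3):
            s = 0.5 * (grad[i][j] + grad[j][i])   # strain-rate tensor S_ij
            o = 0.5 * (grad[i][j] - grad[j][i])   # rotation tensor Omega_ij
            q += 0.5 * (o**2 - s**2)              # sum of Frobenius-norm terms
    return q

# Hypothetical usage: a solid-body-like swirl with a weak axial through-flow.
n = 40
x = y = z = np.linspace(-1.0, 1.0, n)
dx = dy = dz = float(x[1] - x[0])
X, Y, Z = np.meshgrid(x, y, z, indexing="ij")
u, v, w = -Y, X, 0.1 * np.ones_like(X)   # rotating column, constant axial flow
Q = q_criterion(u, v, w, dx, dy, dz)
print("max Q:", float(Q.max()))          # positive inside the rotating core
```

Thresholding this field at a positive iso-value reproduces the kind of vortex-column and vortex-ring structures described for the pilot globe.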
In order to overcome the environmental consequences of traditional net pens in producing Atlantic salmon, closed containment aquaculture systems are being developed, where the culture volume is separated from the ambient environment by an impermeable wall. However, several challenges in terms of construction and hydrodynamic properties must be solved before such systems can be used on a large scale. A study was thus performed on the design of a floating closed-containment fish farm at sea. This paper presents the design and flow analysis of two versions of the globe: the first is the pilot design of a 74 m3 globe, and the second is the design of a 3500 m3 globe for post-smolts of Atlantic salmon. The results of the turbulence model of the pilot globe were validated against velocity measurements using acoustic Doppler velocimetry. Computational assessment of various flow characteristics includes the velocity and vorticity fields. The streamline pattern confirmed the secondary vortices, creating the tea-cup hydrodynamics. Coherent vortices, identified by means of the Q-criterion, show the presence of a vortex column in the globe. Two inlet configurations were tested on the post-smolt globe for improved performance. Design 1 has the standard one-column nozzle configuration, and Design 2 has two-column nozzles to create a V-shaped inflow. The mixing action of the two designs was examined using Lagrangian particle tracking. A considerable influence of inlet configuration on the particle motion was observed. It was found that V-nozzles (two columns of inlet nozzles) are more effective than standard nozzles in flushing the solid particles.
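The velocity validation summarised above depends on filtering the raw ADV time series by signal correlation and signal-to-noise ratio before time-averaging, as described for the pilot-globe measurements. The sketch below illustrates that kind of quality control with the thresholds quoted in the text (correlation above 90%, SNR above 15 dB at 25 Hz sampling); the synthetic record and variable names are purely illustrative.

```python
# Sketch: quality control of an ADV velocity record using correlation and SNR
# thresholds, followed by a Reynolds-style decomposition into mean and
# fluctuating parts.

import numpy as np

def filter_adv(velocity, correlation, snr, cor_min=90.0, snr_min=15.0):
    """Keep samples whose correlation and SNR both exceed the thresholds."""
    good = (correlation > cor_min) & (snr > snr_min)
    return velocity[good], good

def mean_and_rms_fluctuation(velocity):
    """Time-averaged velocity and RMS of the turbulent fluctuation."""
    v_mean = velocity.mean()
    v_prime = velocity - v_mean
    return v_mean, np.sqrt((v_prime**2).mean())

# Hypothetical 25 Hz record, 60 s long, with a few low-quality samples.
rng = np.random.default_rng(0)
n = 25 * 60
u = 0.35 + 0.05 * rng.standard_normal(n)   # streamwise velocity, m/s
cor = rng.uniform(85.0, 100.0, n)          # beam correlation, %
snr = rng.uniform(12.0, 30.0, n)           # signal-to-noise ratio, dB

u_good, mask = filter_adv(u, cor, snr)
u_mean, u_rms = mean_and_rms_fluctuation(u_good)
print(f"kept {mask.mean():.0%} of samples, mean u = {u_mean:.3f} m/s, "
      f"u' rms = {u_rms:.3f} m/s")
```

The coefficient of variation reported for the pilot-globe measurements can be obtained from the same retained record as the ratio of the standard deviation to the mean velocity.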
273
Farm family effects of adopting improved and hybrid sorghum seed in the Sudan Savanna of West Africa
Sorghum is grown in harsh environments where other crops grow poorly, by farmers who are among the world’s poorest.Along with millet and groundnuts, sorghum “defines” the semi-arid tropics of West Africa.Since the devastating droughts of the 1970–80s, national and international research institutes have invested to improve sorghum productivity in this region.Globally, when combined with other farming practices, the diffusion of well-adapted, improved seed has enhanced the productivity of major food crops, including sorghum.Yet, area shares planted to improved sorghum varieties are estimated at only 3% in Burkina Faso, 15% in Niger, and 20% in Nigeria.In Mali, our country of study, estimates range from 13% to around 30%, depending on the measurement approach, geographical coverage, and time period.From a plant breeding perspective, one explanation for low adoption rates is that under farmers’ conditions, achieving sizeable yield gains with improved sorghum varieties has posed particular challenges.In the Sudan Savanna of West Africa, yield advantages have not attained 30% over farmers’ own best cultivars until the release in 2010 of the first sorghum hybrids based largely on Guinea-race germplasm that originated in this region.Earlier research had demonstrated the potential of hybrid sorghum seed to yield well under controlled conditions in Niger and Kenya, but with introduced germplasm.From a policy perspective, numerous structural constraints have impeded adoption, including a slow transition from entirely state-managed seed supply channels.Most smallholder farmers in the drylands have few resources other than seed, their labor, and their family lands to produce the cereal harvests they need to survive.From the perspective of a Malian farmer, one might argue that no seed value chain has existed for sorghum.Most of Mali’s numerous sorghum growers remain largely outside the formal structures of extension and program services, termed “encadrement”.By comparison, cash crops such as rice or cotton have vertically-integrated value chains that provide a range of services via registered cooperatives.Others argue that farmers in this region avoid paying cash for seed--and understandably so, if the advantages of using improved seed are dubious.Sorghum is a traditional food staple in Mali and the West African Savanna is one of the centers of the crop’s domestication.The fundamental role that sorghum seed plays in the well-being of smallholder farmers is reflected in rural cultural norms.These include a customary reverence for the seed of local varieties, a perspective that all farmers have a right to seed, and a preference for giving or sharing seed rather than exchanging or purchasing seed with cash.The sorghum seed system has remained ‘farmer-centered’, with farmers themselves diffusing much of the local and improved seed plant each season through associations, social networks and other relationships based on trust.There is some evidence that preferences regarding seed acquisition may be evolving, however.For example, our census of sorghum varieties grown by 2430 growers in the Sudan Savanna indicated that 38% of seed of improved varieties and 67% of seed of hybrids was initially obtained through a cash purchase as compared to gifts or exchanges.In this paper, we explain adoption and measure the impacts of improved sorghum seed on farm families at an initial point in the diffusion process for hybrids in the Sudan Savanna of West Africa, where they were initially introduced in Mali.We make several 
contributions to the literature on this topic.First, we measure the farm family impacts of the first sorghum hybrids developed and released in Sub-Saharan Africa that were based largely on local germplasm.In addition, these were bred through an on-farm, participatory process.We employ data generated by a statistical sample drawn from 58 villages in Mali and we are able to differentiate hybrids from other improved varieties based on expert identification.Second, instead of grouping improved varieties with hybrids, we differentiate the impacts of each seed type by applying a less frequently used, multivalued treatment effects approach.The approach is described by Wooldridge, Cattaneo, and Cattaneo et al.; we have found only one published example so far of its application in agriculture.Many treatment applications with observational data in agricultural development pertain to binary assignment, which is typically addressed with propensity score matching—an approach that has not been fully extended to multivalued treatments.We test three multivalued effect estimators with regression methods.Regression serves as a baseline estimator.We also apply the augmented, inverse-probability weighted estimator and the inverse-probability weighted, regression adjustment estimator, both of which are described as "doubly robust".Third, considering that like most of the smallholders in the Sudan Savanna of West Africa, Malian farm families both sell and consume their crop, we explore effects of improved seed use not only on plot-level yield but also household dietary diversity and the share of sorghum in consumption and sales.Finally, reflecting the social organization of production among sorghum growers in this region, we include the gender, generation and education of the plot manager in addition to plot, household and market characteristics among our control variables.These covariates also enable us to capture intrinsic, unobserved management factors related to the status of the plot manager within the household.Three aspects of sorghum production in the Sudan Savanna of West Africa are fundamental to understanding the process and potential of seed-based technical change: the nature of sorghum genetic resources in the region, including the history of sorghum improvement; the importance of the crop in dryland farming; and the social organization of sorghum production.Farmers in Sub-Saharan Africa grow several morphological forms or "races" of sorghum.These include caudatum, durra, kafir, and Sorghum bicolor, which is broadly distributed.The West African Savanna, where parts of the Guinea race of sorghum originated and still dominates, produces most of the sorghum in Sub-Saharan Africa.The Guinea race of sorghum is uniquely adapted to growing conditions in the West African Savanna.Photo-period sensitivity enables its plants to adjust to the length of the growing seasons, which is important for farmers when the beginning of the rainy season is unpredictable.The lax panicles and open glumes of this morphological form reduce grain damage from birds, insects and grain mold.Sorghum and millet continue to serve as the cereals base of the drylands agricultural economy in the Sudan Savanna of West Africa—destined primarily for consumption by the farming families who grow them, but also sold for cash as needed.Recognizing the importance of sorghum as a food staple in more arid zones, the governments of Mali, Burkina Faso and Niger, in particular, have long pursued a goal of raising sorghum productivity.During the Sahelian droughts of
the 1970–1980s, national and international research systems accelerated efforts to enhance sorghum yields, also introducing exotic germplasm from outside national borders.Nonetheless, growth rates of national yields have not been as impressive as might be hoped.Yields reported by FAOSTAT for Mali show an average growth rate of 0.49% from 1961 to 2013.From 1980 to 2013, which corresponds to an active sorghum improvement program, the average growth rate in sorghum yields was 2.3%.This growth is quite modest, especially when compared with the 7.6% average growth rate in rice yields over the same time period.National average yields have rarely exceeded 1 t per ha.In Mali, sorghum is extensively cultivated on degraded soils with low fertility and little to no chemical fertilizer.Early breeding approaches focused on “purification” of superior farmers’ varieties and the introduction of exotic germplasm, but this latter group often lacked resistance to insects and grain mold and farmers’ varieties were often preferred for their grain quality.In general, achieving more than marginal yield changes has been difficult without hybrid vigor.Since 2000, the national sorghum improvement program has pursued two additional directions.The first is a participatory approach to sorghum improvement, based on a network of multi-locational, farmer field trials managed by farmer associations.The second is the development of the first hybrid seed based primarily on germplasm of the local Guinea race of sorghum.Summarizing the results of trials conducted by smallholder farmers in the Sudan Savanna, Rattunde et al. reported yield advantages of individual hybrids of 17–47% over the local check, with the top three hybrids averaging 30%.Such yield advantages had not been previously achieved with improved varieties in this region.On-farm trials represented a broad range of growing conditions, including entries grown with and without fertilizer.Aside from these sorghum hybrids, no other hybrid sorghum seed based on local germplasm has been released to farmers in Sub-Saharan Africa.A private hybrid was introduced in the irrigated areas of Sudan and other exotic hybrids have been tested in Niger and Kenya, The development and release in Mali of hybrid seed based largely on local, Guinea-race sorghum is thus a pilot initiative for the broader region of the West African Savanna and a major step for sorghum improvement in Sub-Saharan Africa.Dryland cereals in the Sudan Savanna of West Africa are often produced by extended family members who are vertically or horizontally related to the head of the family farm enterprise.The head is usually an elder patriarch, or a work leader he designates.The head guides the organization of production on large plots worked collectively with the goal of meeting the staple food needs of the overall extended family.Custodian of the family’s land use rights, he also allocates individual plots to household members who cultivate them privately to meet personal needs.Wives obtain use rights on marriage into the family.Unmarried sons are generally also allocated plots.Status within the extended family household conditions rights to land, ability to negotiate for the labor commitment of other household members, and access to family-held equipment and other resources.The sampling frame is a baseline census of all sorghum-growing households in 58 villages located in the Cercles of Kati, Dioila, and Koutiala, in the regions of Koulikoro and Sikasso.All variety names reported by farmers and improvement 
status of varieties were identified with the assistance of field technicians and also verified by sorghum breeders working with the national program at the International Crops Research Institute for the Semi-Arid Tropics in Mali.Sikasso and Koulikoro regions have the largest proportions of agricultural land in the Sudan Savanna zone, and are the principal sorghum-producing regions in Mali according to area cultivated and total production.Thus, they are priority target areas for sorghum breeding and especially for hybrid seed development.Villages surveyed included all those listed as sites where the national research program and the International Crops Research Institute for the Semi-Arid Tropics have conducted testing activities via a network of farmer associations as early as 2000.Our findings are therefore representative of areas with at least some engagement by the national sorghum program.However, analysis of adoption rates in the baseline census shows variation from 0% to 80% and a distribution that does not differ significantly from normal—enabling us to treat villages as if they had been drawn at random rather than forming a separate control group.Some villages decided not to test varieties; in others, the sorghum program implemented only surveys to familiarize itself with farmer priorities.Village adoption rates depend on the diffusion strategies pursued by farmer associations and other underlying social networks through which seed is exchanged among farmers, rather than on a formally-managed government program with specified selection criteria.The enumeration unit in the baseline census, and generally in Mali, is the Entreprise Agricole Familiale.According to the national agricultural policy act, the EAF is a production unit composed of several members who are related and who use production factors collectively to generate resources under the supervision of one of the members designated as head of household.The head represents the EAF in all civil acts, including representation and participation in government programs.He or she may designate a team leader to supervise field work and manage the EAF on his or her behalf, or to assist the head when he/she has physical or other limitations.In our analytical sample, all but one of the heads is male and all team leaders are male.For more detailed analysis of adoption and effects of adoption on the well-being of farming households, a sample of EAFs was drawn with simple random sampling using the baseline adoption rate for improved varieties to calculate sample size.The sample was augmented by five percent to account for possible non-responses, and because of small numbers at this early stage of adoption, all 45 hybrid-growing EAFs were included.The final sample size for adoption and impact analysis is 628 EAFs, with an overall sampling fraction of 25%.Enumerators inventoried all plots operated by each sampled EAF, grouping them by crop and plot management type.One sorghum plot was randomly sampled per management type per EAF.The total sample of sorghum plots analyzed here, including those collectively and individually-managed, is 734.In this analysis, plot is defined by variety; that is, only one sorghum variety is grown per plot.The multi-visit survey was conducted in four rounds from August 2014 through June 2015, with a combination of paper questionnaires and computer-assisted personal interviews, by a team of experienced enumerators employed by the Institut d’Economie Rurale.Our analysis has two components.In the first, we explore the
determinants of plot-level variety choice.Estimation of two separate adoption equations is one feasible strategy, but this strategy would not account for interrelationships in either systematic or random components of the variety choice decision.Bivariate probit would be a modeling option that allows for correlations in the error structure between two separate decisions, but the choice farmers make is to grow one type of sorghum variety over another on each plot.Conceptually, we prefer an ordered logit model, differentiating between three types of sorghum varieties: local, improved, and hybrid.The order, which is sometimes referred to as "improvement status," recognizes several potentially important differences between the three categories.Many improved varieties grown by farmers in this region are popular older releases, for which the seed may have been reused and shared among farmers.By contrast, sorghum hybrids are new releases.Although on-farm trial evidence demonstrates that sorghum hybrids perform well with and without fertilizer, farmers and extension agents often state that hybrid seed "requires" fertilizer and may manage it differently.In addition, it is recommended that farmers purchase and replace hybrid seed each year, while annual replacement is considered to be less important for improved sorghum varieties as long as good seed storage practices are followed.While we consider that the method of improvement as well as the seed characteristics create an order of choice, we also estimated a multinomial choice model for purposes of comparison.In the second component of our analysis, we estimate a multivalued treatment effects model.In our context, adoption processes for improved sorghum seed have occurred naturally, with occasional programmatic interventions, over a period of years; treatment assignment is nonrandom because some farmers choose to adopt while others do not.Once introduced into a community by a development project or program, improved sorghum seed, like the seed of local sorghum varieties, has diffused from farmer to farmer based on relationships of trust, including kinship, social networks, formal and informal associations.Thus, we expect that adopters and non-adopters may differ systematically.Sorghum hybrids have been more recently introduced, but through similar processes.Various methods have been used to address the question of establishing a counterfactual with non-experimental observations, including the class of treatment effect models, most of which involve a single treatment level represented by a binary variable.The case of multivalued treatment effects has been developed by Wooldridge, Cattaneo and Cattaneo et al.Wooldridge presents the example of participation in a training program that occurs at different levels, or in different forms.Cattaneo develops a more general theory for semiparametric estimators and applies it to estimate quantile treatment effects.X is a vector of covariates that influence the outcome Y, and Z is a set of covariates explaining treatment assignment T; X and Z may overlap.By exploiting inverse-probability weights, these estimators control for selection.AIPW and IPWRA enable consistent estimation of treatment parameters when either the outcome model, the treatment model, or both are correctly specified.For this reason, both the AIPW and IPWRA are called "doubly robust".The AIPW has been termed the "efficient influence function" estimator; the IPWRA is also known as Wooldridge's "doubly robust" estimator.AIPW and IPWRA estimators can be more
efficient than RA.A recent example of the application of the binary AIPW estimator to analyze agricultural innovations in Malawi is found in Haile et al.Esposti applied the multivalued, Cattaneo model to evaluate the impact of the 2005 reform of the Common Agricultural Policy on farm production choices using treatment intervals.Identification of the treatment effect relies on satisfying the conditional mean independence assumption, which stipulates that, conditional on the covariates, treatment assignment does not affect the mean of the potential outcomes.The multivalued case relies on a weaker assumption than in the binary case.Among our estimators, weighting by the inverse of the estimated propensity score can achieve covariate balance and creates a sample in which the distribution of covariates is independent of treatment status.Potential bias generated by unobservable characteristics remains.We address this concern by introducing plot manager characteristics into the treatment model to control for intrinsic, unobserved factors related to management and access to resources within the extended family household.We also examine the common support condition.Models are estimated in STATA 14; a schematic command sequence for the adoption and treatment-effects models is sketched at the end of this article.The conceptual basis of our variety choice model is the non-separable model of the agricultural household, reflecting production by a farm family enterprise that primarily deploys its own labor supply and land in an effort to produce staple food requirements in a situation with market imperfections.In our survey data, we find virtually no evidence of land or labor markets; farm families consume, sell, and give their harvests to others.According to this framework, we expect household capital endowments and proximity to market infrastructure to affect transactions costs and thus the likelihood of acquiring inputs in sorghum production.Although we would argue that typically, sorghum seed is not viewed as a purchased input in the same way as fertilizer or herbicides, endowments also affect access to information and knowledge about new seed types.In Mali, access to formalized extension structures substitutes to some extent for commercial markets, influencing farmer access to inputs and services of various kinds, including subsidized fertilizer.To express "encadrement," we include a variable measuring the share of all plot managers in the village who are members of a registered farmer cooperative.The market network extends to weekly fairs conducted in villages.We include a dummy variable for the presence of a weekly fair in the village of the EAF.Finally, as described above, we recognize the social organization of production in this region of Mali, and add the characteristics of the plot manager among our explanatory variables.We also control for physical features of the plot, including time in minutes to travel from homestead to the plot and whether any structure has been built on the plot to offset soil and water erosion.Table 1 shows the definitions and means of our independent variables in the ordered logit model.In estimating multivalued treatment effects, we seek to quantify the potential outcomes that express changes in the supply of sorghum to the EAF and associated changes in the consumption patterns of the EAF.In the treatment equation, our specification is a multinomial logit with T equal to zero if a local variety is grown on the plot, 1 if an improved variety is grown, and 2 if hybrid seed was planted.The outcome models in Eq.
are specified as linear or fractional response in form, depending on the variable of interest.Outcome variables, definitions and means are shown in Table 2, including yield, dietary diversity, the value share of sorghum and other cereals in the food purchases during the week preceding the survey, the share of sorghum in the total quantity of cereals consumed from the harvest, and the share of sorghum harvested that was sold.Of particular interest, the Household Dietary Diversity Score represents a count of the different food groups consumed during the 7 days preceding the survey.The variable "FreqHDDS", used here, augments the score to reflect the number of times a food group is consumed.For each food group, the EAF receives a score of 0 for frequencies fewer than four times per week, a score of unity for frequencies from 4 to 6 times per week, and a score of 2 for frequencies of seven or more.With ten groups, the hypothetical range of the sum is 0–20.Without controlling for other factors, average yields and mean dietary diversity appear to increase with the improvement status of sorghum varieties.The mean share of sorghum in recent food purchases declines with improvement status, while the mean value share of other cereals in recent food purchases rises.The quantity share of sorghum in cereals consumed during the last cropping season is lower, and the share sold of the previous season's harvest increases, when improved varieties or hybrids are grown.For yield, we specify a model where Y is sorghum yield in kg/ha and X is a vector of agricultural inputs applied on sorghum plots, including quantities per ha, as well as physical plot characteristics.Z is a vector of the same plot manager covariates that are included in the adoption analysis.Definitions and means of control variables are shown in Table 3.For consumption outcomes, following the conceptual basis of the agricultural household, we consider that relevant factors include both the supply side and those that affect outcomes through a constraint on expenditures.The specification includes the production function inputs of the yield model and plot manager characteristics as treatment covariates, adding as outcome covariates those that are likely to condition consumption given the amount produced.These include household size, transfer receipts received in the year preceding the survey from absent household members, as well as household wealth in assets and the presence of a weekly market fair in the village, which affect transactions costs of purchasing consumption goods.The ordered logit regression model explaining the adoption of sorghum varieties by improvement status is shown in Table 4.Marginal effects of this model are shown in Annex Table 1.Plot manager characteristics, which reflect the social organization of farming in this region of Mali, are key determinants of variety adoption in sorghum production.These features are not often included in adoption studies, which usually focus on household characteristics.Individual management of a plot, compared to collective management of a plot by the EAF head, generally reduces the likelihood that improved varieties of sorghum or hybrid seed are grown.Thus, potentially higher-yielding sorghum seed is allocated in first priority to the collective plots of the extended family, which are destined first and foremost to their collective needs.However, management by the senior wife of the head increases the chances that improved varieties, and especially hybrids, are grown in the
plot.The effect of management by the son is also positive and significant but weaker in magnitude and significance.These results reflect their status relative to other plot managers, such as other wives, daughters-in-law, brothers and fathers of the head.Clearly, improved seed is distributed among plot managers, although some members appear to have better access within the household.Seed is an input that is neutral to scale when other complementary inputs such as fertilizer and irrigation are not used.Sorghum is not heavily fertilized in Mali and all of the plots surveyed were rainfed.Culturally, the right to seed is still an important social norm, so that constraints on overall access are not expected to be binding unless a seed variety is newly introduced.In addition, sorghum is a food staple, and qualitative work in this region has underscored the fact that women are increasingly planting sorghum on their individual fields, often grown as an intercrop with other crops they grew previously, in order to supplement the family food supply.As part of the current outreach strategy of the national sorghum improvement program, women farmers, in addition to men in the household, have been approached in recognition of their roles in sorghum production.Attainment of primary education by the plot manager is strongly significant for adoption of improved varieties, and even more so for sorghum hybrids.Despite that variety information is transmitted by word of mouth, in general, primary education broadens interest in and access to information and services, supporting innovation.We expect the ability to read and write strongly affects receptiveness and access to new information, techniques, or technologies.While plot location does not influence the likelihood of growing improved seed, erosion control structures on the plot are negatively associated with improvement status.Thériault et al. 
reported similar findings in Burkina Faso.Presence of anti-erosion structures on plots typically reflects the slope of the land and dissemination efforts by formal cooperative structures more than variety type.Hybrids have been more recently introduced; the average time since initial construction of stone bunds on sorghum plots in the sample is 10 years.Furthermore, while women managers in our sample grow hybrids, they are less likely to have anti-erosion structures on their smaller plots.As in the broad adoption literature, capital endowments are strongly significant in predicting the use of improved sorghum varieties.On the other hand, neither the extent of membership of village plot managers in a registered cooperative nor the presence of a weekly market fair in the village appears to influence the likelihood that improved varieties of sorghum are planted on a plot.The explanation for the first result is that registered cooperatives are primarily conduits for inputs and services related to cotton production, which also includes maize seed but not sorghum seed.Fertilizer subsidies, while facilitated by cooperatives, have also been facilitated by other associations and are in principle available to sorghum growers, though at a lower rate.Improved sorghum seed has been introduced occasionally by external organizations and programs, but directly and indirectly via farmers' associations.However, diffusion has occurred primarily from farmer to farmer, among those who are members of farmers' associations, but not exclusively.Concerning the local market, it is still the case that little of the sorghum seed planted by farmers in this region passes through commercial markets or agrodealers, despite efforts by donors to stimulate the development of seed markets.Estimates of average treatment effects are shown for all outcomes and the three estimators in Table 5.In terms of significance of effects, results are generally, but not always, consistent across models.The most noteworthy differences in both significance and coefficient magnitudes tend to be between the regression adjustment approach and the other two approaches, which explain both treatment assignment and outcome.If the overlap assumption holds, both the AIPW and IPWRA estimators have the "doubly robust" property.The density distributions of the estimated probabilities for the three groups, shown in Annex B, show little mass around zero or one, supporting the overlap assumption.Yield effects are strongly significant and of a large relative magnitude for sorghum hybrids, but not for improved varieties, relative to local varieties.Yield advantages are between 479 and 1055 kg/ha, depending on the model, which represents 79–180% of the untreated mean of the local varieties grown by farmers.This result confirms findings reported by Rattunde et al., which were generated by on-farm trials.The fact that the yield advantages of 34–35% estimated for improved varieties are not statistically significant probably reflects the underlying variability of yields under farmers' conditions for this heterogeneous category of materials that combines older and more recent introductions.Meanwhile, the expenditure share of sorghum is reduced by growing hybrids, as measured during the week preceding the visit when enumerators asked for consumption information.Since interviews occurred three to four months after the harvest, these do not represent either a post-harvest pattern or a "hungry" season pattern.At the same time, the effect of growing improved varieties on
the share of sorghum consumed from the harvest is negative and statistically significant.As yields rise, the share needed to satisfy consumption needs declines given no substantial changes in household size.This result could also be explained by the fact that higher sorghum yields can enable farmers to release land for the production of other cereals that they would like to consume.Among these villages, for example, maize is both grown and consumed more than was the case in the past.Notably, the share of the sorghum harvest sold rose by large percentages when improved seed was grown, and even more so for hybrid seed.This finding suggests that growing improved sorghum varieties or hybrids could contribute to commercializing a food crop for which no formally developed market channel has been developed.Assuming a consumer price of 150 FCFA in the hungry season, and 90 FCFA as the sales price just after harvest, improved varieties and hybrids give the farmers the potential to increase their sales revenue by between 4644 and 7740 FCFA and 6500–10,836 FCFA depending on model.Earnings from additional sales might be utilized to purchase other cereals or other food items.Consistent with this point, the effect of growing improved varieties is positive on the expenditure share of other cereals.Neither this nor the effect on sorghum consumption shares is apparent for hybrids, but small areas were planted to hybrid seed per household and despite higher per ha yields, the increment to total amounts harvested by the household may not have been large.When the frequency of consumption is taken into account, growing hybrids does appear to be significantly associated with dietary diversity in the households of hybrid growers, generating a 7–8% increase in the score in the AIPW and IPWRA models.In this analysis, we have contributed to a relatively sparse literature on the adoption and impacts of improved sorghum varieties in the Sudan Savanna of West Africa, including the first sorghum hybrids released in Sub-Saharan Africa that were bred largely from local germplasm.The analysis is based on primary data collected in Mali, where the sorghum hybrids were developed through participatory, on-farm testing, and released as a pilot initiative by the national program.We apply two econometric approaches that are infrequently used in the seed adoption literature, in order to differentiate improved varieties from hybrids.To identify the determinants of adoption, we applied an ordered logit model to reflect the differences in the attributes of the germplasm and the length of awareness and diffusion periods for improved as compared to hybrid seed in our study area.Reflecting the social organization of sorghum production in Sudan Savanna, which involves production by extended families on multiple plots managed by different household members, we tested the significance of plot manager characteristics, as well as the plot, household and market characteristics in our regressions.Status within these households, and access to inputs and other farm resources, is conferred to some extent by gender, and generation.We also include education among these factors.Then, we applied a multivalued treatment effect approach to evaluate the differential effects of adoption of sorghum varieties and sorghum hybrids on extended families.In terms of outcomes, we evaluated both supply outcomes and consumption outcomes.We applied three statistical models, including a baseline regression adjustment and two “doubly robust” approaches that model 
both treatment and outcomes, controlling for selection.We used plot manager characteristics to capture intrinsic, unobserved heterogeneity, while controlling for physical characteristics of the plot, household and market covariates.Compared with other adoption studies, we find that plot manager characteristics related to intrahousehold status are statistically significant.The improvement status of the sorghum grown is positively associated with collective plot management by the head.This finding suggests that heads prioritize the use of seed of new, potentially higher yielding varieties on fields destined to meet the starchy staple needs of the extended family as a whole.However, among plots managed by household members other than the head, the senior wives and sons of heads are more likely also to be growing improved seed or hybrids.Attainment of primary education by the plot manager is also a significant determinant, consistent with the notion that formal education is correlated with various information pathways that can affect access to improved varieties and hybrid seed.As expected in the broad adoption literature, household capital endowments are significant determinants of adoption.Contrary to common findings, being a member of a formal cooperative has no effect on adoption of improved sorghum varieties because these primarily facilitate access to fertilizer and other credit to cotton growers.Further, the fact that the presence of a weekly market fair in the farmer's village has no effect on improvement status is not unexpected.The commercial market for seed remains underdeveloped, and most improved seed and sorghum hybrids have been initially introduced through programs.Subsequently, improved seed has typically diffused from farmer to farmer through customary channels.The multivalued treatment effects model shows that yield effects are strongly significant and of a large relative magnitude for sorghum hybrids, but not so much for improved varieties, relative to local varieties.Growing sorghum hybrids also reduces the share of sorghum in food expenditures a few months after harvest.Greater production from growing improved varieties reduces the share of sorghum consumed from the harvest.The share of the sorghum harvested that is sold rises when improved varieties and sorghum hybrids are grown.Meanwhile, the impact of growing improved varieties is positive on the share of other cereals in consumption expenditures.Growing sorghum hybrids also has a meaningful and significant effect on household dietary diversity, reflecting the capacity of the household to plant or purchase other food items when yields rise.On the basis of these findings, we conclude that adoption has the potential to contribute to diversification of consumption patterns and to greater commercialization by smallholders.Generated with household survey data, our findings regarding the yield advantages of sorghum hybrids support previously published, on-farm research by Rattunde et al.We and Rattunde et al.
demonstrate the potential importance of heterosis for increasing grain yield under farmers' field conditions—which is of significance for agricultural research policy and for farmers elsewhere in West Africa.Results have clear consequences for sorghum improvement strategies.Improving local germplasm, including through the exploitation of heterosis, is a useful approach.These achievements reflect the fact that farmers were involved in the testing of hybrids which are both higher-yielding and considered to be acceptable for local use.Beyond productivity effects, the analysis also lends insight into the potential for improved varieties, and particularly, locally-developed sorghum hybrids, to contribute to dietary diversification and commercialization of a staple food crop.Like other starchy staples in the region, sorghum has not benefited nearly as much as many cash crops from investments in marketing programs and related infrastructure.Indeed, ready access to a local market and access to formal cooperative structures do not yet seem to play a role in the use of improved seed or sorghum hybrids in the Sudan Savanna of Mali.Given that sorghum is a food crop grown primarily by poor, dispersed smallholders in semi-arid areas, policy makers in West Africa must be realistic in their expectations of the extent and form of private sector interest in supplying sorghum seed.In Mali, parapublic models and decentralized means of seed supply to more remote areas through farmer seed producer associations appear to be workable in regions where the formal cooperative structures are inactive.Seed dissemination strategies should be integrated into locally acceptable norms and patterns of local seed distribution, including sales by locally trusted farmers.We are not aware of greater private sector engagement in neighboring Burkina Faso or Niger.In our analysis, we incorporate the social organization of production by controlling for plot management type and manager characteristics.However, we have not depicted the process of intrahousehold decision-making about seed use—a potential area of future research.In our study area, anecdotal evidence suggests that women are increasingly growing sorghum on the individual plots they manage to address the nutritional needs of their children and meet the costs of clothing, school, and health care.Where this proves to be the case, any policy aiming to promote the development and diffusion of improved varieties of sorghum should recognize the potential role of women members of the farm family enterprise in planting and producing sorghum.This pattern is clearly a change in cultural norms; the conventional wisdom has been that women were not involved in sorghum production outside the collective fields of the family.To encourage higher adoption rates, our results indicate that channels of introduction for seed should incorporate not only the household head but also all economically active members of the EAF.The role of senior women in the EAF is seen to be strong in our sample, which reflects a combination of factors, including that the national sorghum program in Mali has made an effort to ensure that women contribute to and benefit from variety testing programs and related activities.Concerning these substantive issues, testing findings in other regions of West Africa where improved varieties and sorghum hybrids have been introduced over time will be important for the formulation of national and regional policy.
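The adoption model described in the methods can be expressed as a short estimation sequence. The sketch below is illustrative only, assuming Stata 14 (the software named in the text) and hypothetical variable names standing in for the plot-manager, plot, household and market covariates of Table 1; it is not the authors' code.

* Plot-level adoption model: seedtype = 0 (local), 1 (improved variety), 2 (hybrid)
* Ordered logit over "improvement status", with errors clustered by village
ologit seedtype i.manager_type i.primary_educ plot_travel_min ///
    i.erosion_structure hh_assets coop_share i.weekly_fair, vce(cluster village)

* Average marginal effects on the probability that hybrid seed is grown
margins, dydx(*) predict(outcome(2))

* Multinomial logit estimated for comparison, as noted in the text
mlogit seedtype i.manager_type i.primary_educ plot_travel_min ///
    i.erosion_structure hh_assets coop_share i.weekly_fair, ///
    baseoutcome(0) vce(cluster village)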
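Likewise, the three multivalued treatment-effects estimators correspond to Stata's teffects commands, which fit a multinomial-logit treatment model by default when the treatment takes three levels. This is again a minimal sketch with placeholder outcome and treatment covariates, shown for the yield outcome; the same pattern extends to the consumption outcomes with the household-level covariates added to the outcome equation.

* Regression adjustment (baseline estimator)
teffects ra (yield_kgha fert_kgha labor_days i.erosion_structure plot_travel_min) ///
    (seedtype), ate

* Augmented inverse-probability weighting (doubly robust)
teffects aipw (yield_kgha fert_kgha labor_days i.erosion_structure plot_travel_min) ///
    (seedtype i.manager_type i.primary_educ hh_assets), ate

* Inverse-probability-weighted regression adjustment (doubly robust)
teffects ipwra (yield_kgha fert_kgha labor_days i.erosion_structure plot_travel_min) ///
    (seedtype i.manager_type i.primary_educ hh_assets), ate

* Inspect the overlap (common support) condition via the propensity-score densities
teffects overlap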
Uptake of improved sorghum varieties in the Sudan Savanna of West Africa has been limited, despite the economic importance of the crop and long-term investments in sorghum improvement. One reason why is that attaining substantial yield advantages has been difficult in this harsh, heterogeneous growing environment. Release in Mali of the first sorghum hybrids in Sub-Saharan Africa that have been developed primarily from local germplasm has the potential to change this situation. Utilizing plot data collected in Mali, we explain the adoption of improved seed with an ordered logit model and apply a multivalued treatment effects model to measure impacts on farm families, differentiating between improved varieties and hybrids. Since farm families both consume and sell their sorghum, we consider effects on consumption patterns as well as productivity. Status within the household, conferred by gender combined with marital status, generation, and education, is strongly related to the improvement status of sorghum seed planted in these extended family households. Effects of hybrid use on yields are large, widening the range of food items consumed, reducing the share of sorghum in food purchases, and contributing to a greater share of the sorghum harvest sold. Use of improved seed appears to be associated with a shift toward consumption of other cereals, and also to greater sales shares. Findings support on-farm research concerning yield advantages, also suggesting that the use of well-adapted sorghum hybrids could contribute to diet diversification and the crop's commercialization by smallholders.
274
Sequestration of C in soils under Miscanthus can be marginal and is affected by genotype-specific root distribution
Miscanthus is a favored perennial feedstock for bioenergy in subtropical and temperate regions due to its high potential productivity and benefits with regard to the carbon and greenhouse gas balance.Domestication of these perennials is in its infancy and genotypes may be found or bred that suit a wider range of ecological conditions and maximize efficiency of carbon sequestration.The increasing interest in Miscanthus should be accompanied by the exploration of the carbon budgets of other genotypes in addition to commercially grown Miscanthus × giganteus.This would clarify whether contrasting Miscanthus phenotypes act as an effective sink or even a source of atmospheric carbon.Key considerations in determining the soil organic carbon balance require measurement of the C fraction deposited into the subsoil, which is less likely to be remobilized than C deposited in the surface horizon.Measurements of 13C abundance can also be used to indicate the stability of these inputs in the surface and subsoil under commercially grown Miscanthus × giganteus.Existing studies of the genotype effect focus on carbon near the surface which ignores the potentially beneficial effect of deep roots as a mechanism to sequester carbon.It is of further interest how contrasting growth forms, e.g. phenotypes and carbon allocation patterns, e.g. different above- and belowground biomass allocation, and root densities, affect SOC.An integrative comparison of genotypes can inform about the relationships between productivity, carbon partitioning and carbon sequestration characteristics, including vertical and lateral root distribution in response to rhizome form and size.This may have practical implications, such as elucidating the potential for increasing sequestration by breeding or selecting varieties with deep roots.The relative contributions of AGB and BGB are easily confounded with annual litter inputs from M. × giganteus being between 1.5 and 7 Mg ha−1 yr−1.The contribution of roots to SOC is thought to be significantly greater than that of litter in grassland and woody ecosystems.Clifton-Brown et al. 
estimated that the C sequestered from Miscanthus into SOC after 15 years was equal to 10% of the BGB assuming a total input of 20 Mg dry weight ha−1, which contributed 14% to the total SOC in the first 10 cm layer.In deep soils the Miscanthus root-fraction was shown to accumulate initially at the net rate of >2 Mg ha−1 yr−1, which then decreased to about 1 Mg ha−1 yr−1 as a result of >3 Mg ha−1 growth and >2 Mg ha−1 decomposition.In the present work, we aim to characterise the distribution of Miscanthus-derived SOC throughout the soil profile with particular attention to contrasts between individual genotypes from the main phenotypic growth forms, and relate these differences to measurements of root distribution.Starting from a solely C3-cropped site we use the Miscanthus-induced change in δ13C signature to distinguish between the original C3-based organic carbon and SOCM under contrasting genotypes.From these quantities we then estimate C sequestration throughout the soil profile and relate this to the rooting and growth patterns of these genotypes on a marginal arable soil under low nitrogen input and climatic conditions typical of the site at Rothamsted, UK.The field experiment used in this study was established in 1997 as part of the European Miscanthus Improvement program conducted at five locations in Europe.The EMI field trial in England was established on a long-term arable field at Rothamsted Farm on a silty clay loam with sandy inclusions.C3 annual cereals and break-crops were grown exclusively on both the Miscanthus and reference arable sites and conventionally tilled for 50 years or more.The reference had remained under continuous arable management for all years since the Miscanthus was planted.The N input to a mixed arable crop rotation averaged 141 kg N ha−1 yr−1.The Miscanthus genotypes were planted as micro-propagated plantlets in 5 m × 5 m plots at a density of two plants per square meter in late May 1997.The trial had a fully randomised block design with three replicates.Plants had been drip irrigated during the first year.Details of fertiliser applications and management can be found in Riche et al.Over 14 years approximately 50 kg N ha−1 yr−1 was applied to support increasing annual yields of between 4.8 and 15.9 Mg ha−1 yr−1, which then declined, giving accumulated totals of 100–123 Mg ha−1.Out of the 15 genotypes included in the EMI program we selected five genotypes that represent four genetic groups: M. × giganteus (Gig-1), a vigorous natural hybrid of Miscanthus sinensis and Miscanthus sacchariflorus that is widely grown commercially in the UK and Europe; M. sacchariflorus (Sac-5), also grown in central Europe and originally obtained from Japan in 1992; Sin-H6 and Sin-H9, two genotypes from the M. sinensis hybrid collection, which are characterised by a higher leaf fraction and yield reduction under drought; and Sin-11, a M. sinensis from Japan, which showed the least yield variation among the chosen genotypes.These genotypes can also be grouped according to their aboveground growth habit or rhizomes.M. sacchariflorus has broad, thick-stemmed rhizomes which creep laterally from where shoots develop out of internodal buds while rhizomes of M.
sinensis genotypes are much smaller, do not exhibit the lateral creeping habit and aboveground shoots form dense centralised tufts made out of thinner stems.The annual dry matter allocation to rhizomes was estimated from earlier whole plant analysis and excavations.Based on the much larger fraction of rhizome accumulated under NT than T genotypes one could consider this an important phenotypic trait.The rhizome fraction ranged from 23% for Sac-5 to between 6 and 11% of total accumulated yield for the M. sinensis genotypes.The hybrid, M. × giganteus, allocates circa 15% of the C to intermediate rhizomes, which creep less than M. sacchariflorus.For investigating the effect of this phenotype contrast we grouped these into tuft forming and non-tuft forming groups.A corer with an inner sleeve that could be dismantled longitudinally was driven into the soil using a hydraulic jackhammer and extracted using a tripod ratchet.Two cores were taken from each plot to a depth of 100 cm, one central to the original planting site and one between plants, in the gap situated midway between plants.Cores were wrapped in polythene and stored at −18 °C pending root and soil analyses.A further three random cores were taken from the adjoining arable reference site, approximately 10 m from edge of the EMI trial as reference points for δ13C and total C.An equivalent soil mass of the A horizon of the Miscanthus plots was found in the 0–26 cm layer of the Reference Arable soil.In addition to the reference samples, archived samples from the site were retrieved for the period prior to planting the Miscanthus and analysed to obtain the baseline SOC content.This archived material always used to be sampled to a depth of 0–23 cm because this represented former ploughing depth.The cores were cut into three sections for each horizon composed of topsoil and subsoil, respectively.Due to high soil moisture at sampling, the cores were variably compressed, two thirds between 0 and 5%, and only three cores were exceeding 10% compression.A proportional adjustment was made for all sections of the compressed core before division.These sections were then divided into approximately equal half-cores and each half was weighed.One half-sample was kept for root washing while the other half was air-dried for the determination of soil moisture and chemical analysis.Stones were removed from both sub-samples.Dry bulk densities were determined from each segment using the volume and stone-free dry matter content to estimate the carbon content at each depth.The air-dried soils were gently separated from visible organic matter, litter, roots, rhizomes and large stones.The soil was then sieved and crushed using a disk mill.The root core half section was placed in a bowl of warm water to gently tease the soil apart from Miscanthus structures, and then to carefully separate plant roots from rhizomes and litter debris.Large roots were collected on a fine sieve to enable soil dispersion and small soil particles to be removed, and roots were then placed straight into a water-filled glass jar.Rhizomes and aboveground plant materials were removed from the top section of each core.Once the visible roots had been removed, the content of the bowl containing the fine roots was poured onto the sieve and rinsed thoroughly with water to remove any soil.Roots were subjected to a second rinse if the sample was not clean and then combined and stored in aqueous 10% ethanol in plastic bottles and kept in a dark cool room before scanning.Root samples were spread on an 
A4 plexiglass water tray of the WinRhizo flatbed scanner.Root length and diameter were quantified using the WinRhizo Pro software package, applying a standard set of acquisition parameters for black and white for the root length and diameter classes.Scans were saved as tiff files pending further image analysis.After scanning was complete, the roots were dried at 40 °C to determine the root dry matter.This was then converted to RDM per depth increment (Mg ha−1) using the respective stone-free dry bulk density of the cores.Root length density was calculated from the measured total root length and the volume of the stone-free soil in each segment.Carbonates in the soil from underlying chalk or added lime interfere with the determination of δ13C of soil organic matter because they exhibit a δ13C signature close to that of PDB.Therefore, carbonates were removed before isotope analysis by acid treatment.To avoid loss of soluble organic C by acid washing, Harris et al. proposed to expose moistened soil to concentrated HCl vapor for 6–8 h before isotope ratio mass spectrometry.However, this popular technique was found to deposit strongly acidic residues that remained even after repeated application of vacuum.The residual HCl appeared to interfere with the analyses and damage the mass spectrometer.We therefore applied the following method, similar to that developed for the removal of carbonates from coastal sediments, which combines the advantages of utilising an invasive aqueous phase without losing soluble organic C or accumulating problematic quantities of HCl.Subsamples of 20.00 ± 0.20 mg milled soil were weighed into Ag-foil capsules and placed in a random arrangement on micro-titre plates allowing adequate space between each sample.To each capsule, sufficient aqueous solution of trace analysis grade HCl was added to bring the soils approximately to field capacity.The plate was subsequently placed in an empty, carbon-free desiccator for 30 min to allow the acid to permeate throughout the sample.The desiccator was then fitted with a Viton seal and evacuated for 2–3 min until equilibrium was reached, then left at this pressure for 1 h to allow HCl to permeate throughout the sample and reduce the likelihood of trapped air.After carefully and slowly returning the desiccator to atmospheric pressure, a further 35 μL of HCl was added, before transferring to a clean oven set to 40 °C for 1 h.A final mobilisation of 35 μL of de-ionised water was applied.Samples were returned to the oven to dry at 40 °C after which the Ag-foils were closed and analysed by IRMS.For quantifying inorganic carbon an excess of hydrochloric acid was applied to 5 g of soil and the resulting CO2 measured using a pressure calcimeter.Soil organic carbon was calculated from the difference between total C and IC.From this the contribution of Miscanthus-derived carbon was estimated using the respective depth and stone-free bulk densities of the profile (the calculation is sketched at the end of this article).The δ13C values of nearby reference arable samples had a mean of −28.16‰ and did not show any systematic or statistically significant variation over depth.A single reference value was thus used to estimate Miscanthus-derived SOC accumulation.The δ13C marker value for Miscanthus (a C4 plant) used in this study was −11.7‰ (vs. PDB).All statistical comparisons were made using GenStat 14.Residual maximum likelihood methods were applied in preference to ANOVA as samples were not equally replicated.Variables were transformed where necessary after examining the residual diagnostic
plots for homogeneity of variance; the natural logarithm was selected as the most appropriate transformation.We present the transformed and back transformed data in Table 2 only for the variables with significant effects between interacting factors; all statistical comparisons were made using the transformed data.Natural means are presented in Figs. 3 and 4.Roots of all Miscanthus genotypes were up to an order of magnitude more abundant in the A than in the B horizon but varied greatly within each phenotype group.Statistical significance of the differences between varieties or phenotypes was reduced due to spatial variability.Mean log-transformed RLD showed a significant two-way interaction between phenotype and vertical distribution.Phenotype ‘T’ contributed much less RLD to the A horizon but more to the B horizon than NT phenotypes.Additionally, tufted varieties generally showed higher RLD in the plant position than did NT, and less in the gap than NT.This interaction was just outside the 5% confidence limit.A 3-way interaction between horizon, position and phenotype was found for root dry matter, which reflects the strong contrast in spatial distributions in RDM between phenotypes.Table 3 gives horizon TRL and RDM data on the natural scale.In the A horizon there was typically more than twice the RDM directly under plants of all genotypes except for M. × giganteus, where RDM appears to be evenly distributed.However, within the T phenotype the high contrast between allocation of RDM to the G vs. P position of Sin-11 and Sin-H9 was not seen with Sin-H6.Sin-H6 seems to be an exception among all genotypes.Nevertheless, the T ‘phenotype mean’ allocation of RDM to the B horizon was still nearly double that of NT.The δ13C signatures in almost all soil samples under Miscanthus were less negative than those found in the arable reference profile.Miscanthus cover had increased δ13C to as much as −25.39‰ in the B horizon and −16.37‰ in the A horizon.Statistical analysis of δ13C changes indicated the main effect was depth followed by an interaction between depth and genotype.The statistics demonstrate that Miscanthus genotype is a relevant factor in SOC distribution over depth.Although the sampling position effect was found to lie just outside the 5% confidence level, mean δ13C values over depth are presented for samples taken under both the original planting and gap positions.The SOCorig concentration in the A horizon under Miscanthus had declined to an average of 13.25 g kg−1 relative to the reference soil, but by less relative to what had been measured in the archived baseline.In the B horizon, there appears to be little change in the SOCorig.By comparison with archive SOC values it can be seen that Miscanthus more than compensated for losses in SOCorig through inputs of SOCM.For statistical comparison between genotypes, transformed and back transformed concentration data are presented in Table 4.Analysis of phenotype showed no statistically significant effect on SOCM stock.However, as with δ13C, there was an important interaction between horizon and genotype.Comparison of back-transformed means shows subsoil SOCM tended to be greatest under Sin-H6 and lowest under Sin-H9.Gig-1 showed the greatest residual contribution to SOCM in the A horizon while Sin-11 residual contributions were low.Determining the change in SOC storage over area is complicated by the large variation of bulk density due to the variation in stone, root, and rhizome content.Mean stone-free bulk densities in the A-horizon directly
under the plants were significantly smaller than in the gap.The difference was mainly attributable to the T-phenotype, where roots and rhizomes displaced the soil and raised the soil surface.No significant differences were found for the B horizons.Accordingly, statistical analysis of SOCM concentration showed a stronger interaction between horizon and genotype than estimates over area.Again however, there was no statistically significant effect of phenotype upon SOC stocks.Fig. 4a and b show natural mean stock estimates over area for SOCorig and SOCM combined.Due to high residual variability there were no statistically significant differences in transformed SOCorig data.Strictly speaking, no statistical calculation can be applied for the stock change in SOCorig by reference to the arable plot, as the reference arable is not within the randomised block design.However, Miscanthus inputs appeared to compensate for losses in SOCorig in all cases except with Sin-11.This was not seen on a concentration basis, indicating that the A horizon of Sin-11 had a lower average bulk density.Comparison indeed showed its bulk density to be the lowest among all genotypes.The results presented describe root distribution for a range of fundamentally different Miscanthus genotypes of the two major phenotypes and the impact of their long-term cultivation on the SOC of a silty clay loam in England.Similar experiments exist in Germany and Denmark but from those studies no data for roots and Miscanthus-derived SOC have been published.Our data for RLD, root biomass, and their distribution down the profile are consistent with those reported for M. × giganteus, the only commercially grown genotype.Our research illustrates the spatial heterogeneity of rooting and carbon allocation.Analysis also shows that SOC enrichment is more closely correlated with RLD than root biomass, which is consistent with findings that link carbon inputs to root exudates and rhizo-depositions from finer roots.The similarity in RDM distribution irrespective of position with M. × giganteus suggests that the mature stand had completely colonised the A horizon.For a young M. × giganteus stand RLD is considerably lower in the gap between plants than in the plant center.A recent comparison of different M. sacchariflorous × M. sinensis crosses found genotypes that spread their root system less than M. 
× giganteus and exhibited a growth form that was still very much akin to the T phenotype.However, their data do not distinguish between root and rhizome fractions of the BGB, although RDM traits could be approximated from biomass in their deeper horizon.RDM allocation to the subsoil is thought to be a desirable trait but rarely studied as sampling poses a challenge in terms of temporal and spatial variability.Here, C allocation to the B horizon was found to be lower in the NT than in the T phenotype, presumably because of the high allocation of C into rhizomes and associated roots.T types spread laterally to the B horizon in the gap.Spatial variability between replicates was large, and RLD did not show a statistically significant interaction between phenotype and sampling position whilst it exists for RDM.The T phenotypes included genotypes with contrasting AGB traits which could be mirrored in BGB accumulation.Sin-11 and Sin-H9 showed high proportional allocation of RDM to the B horizon, whilst Sin-H6 did not follow the same pattern.Litter accumulation was also low for this genotype but a causal link between these observations for Sin-H6 is unknown.The concomitant high SOCM and low residual litter, however, suggest potentially high SOC accumulation from more rapidly decomposing leaf and root litter.Parameters of Miscanthus residues differ but our results indicate a wide range for genotype-specific parameters.The relationship between RLD and Miscanthus-derived SOC identifies genotype Sin-H6 as an example showing greater SOCM accumulation per unit root length.This supports the hypothesis that faster turnover of roots and litter increase SOCM.It would therefore be of great interest to characterise the biochemical composition of roots and leaf residues to explore the reasons behind the observed contrast.The genotype-specific variation of carbon allocation was previously observed in controlled conditions.Data from the field are the products of greater functional complexity of C turnover, affected by temperature and soil hydrology, especially in the A horizon.The contrasting carbon allocation at depth for Sin-11 and Sin-H6 to the B horizon could reflect the different responses to water stress shown in laboratory experiments: In contrast to M. × giganteus and M. sacchariflorus, the M. 
sinensis type showed higher drought-tolerance, possibly due to increased RLD at depth.The high proportional RDM allocation to the B horizon and lateral spread under Sin-11 suggest that root density was the primary factor influencing the accumulation of SOCM.In contrast, Sin-H9 resulted in the lowest quantity of SOCM in spite of high absolute and relative RDM to the B horizon.The fact that there was a statistically significant interaction of SOCM allocation to soil horizon with individual genotype, but not with growth form, indicates that other factors affect the SOCM distribution between soil profiles.Production of root exudates and other rhizo-depositions can be of similar magnitude as roots.These root-associated biochemical factors may explain the differences between the effects of roots on SOCM and SOCorig in the B horizon for genotypes Sin-11 and Sin-H9.Priming of existing SOM in response to the input of easily decomposable organic substrate is a transient phenomenon.Such priming can accelerate the degradation of SOCorig, as suggested for Miscanthus phenotypes by Zatta et al.In this context it is controversial how the addition of mineral N affects SOC turnover.The differences we found for the impact of RLD on SOCM under different genotypes need further research, also with regard to the degradation of SOCorig.The comparison of SOC over area is affected by the change in bulk density over time.The change in ρSF affects sampling depth because soil cores taken to a fixed depth will not access the same mineral soil as before expansion.Such artefacts have been discussed by Palm et al. in the context of tillage effects on soil C stocks, proposing Equivalent Soil Mass sampling.In practical terms, an ESM-controlled sampling strategy is only possible once the bulk density has been measured; however, pre-emptive sampling of every plot would be prohibitively expensive.In the present study the SOC quantities over area incorporate variation of ρSF; thus, we also compared SOC change on a concentration basis.In this way, plot variability in ρSF was avoided and genotype-related effects were identified with greater statistical confidence, at p = 0.002 as opposed to p = 0.016.Measurement on a concentration basis also provided more precise estimates of inputs, as anticipated by Lee et al.Figs. 3a and 4a show that, while SOCorig declined from its baseline over 14 years of Miscanthus cropping, the total SOC increased above the baseline and was, in spite of much lower N inputs, similar to the high N reference arable.In contrast, neither a decline in SOCorig nor a net SOC increase was measurable in the B horizon.This is in accordance with the premise that deep C is more stable due to closer organo-mineral interactions.Three distinct groups of Miscanthus can be separated according to their effects on the A horizon: M. × giganteus, which contributes the greatest SOCM accompanied by the greatest loss of SOCorig; M. sacchariflorus, with the lowest contributions of SOCM and the greatest retention of C3-derived SOC; and M. sinensis, with intermediate and similar effects on the respective SOC concentrations.Recent analysis showed that the average fertiliser application of 50 kg N ha−1 yr−1 as ammonium nitrate to M. × giganteus, applied in the present study, would limit biomass production.This would be even more relevant for M.
sinensis genotypes as these have only a very small rhizome system, which would limit recycling of N within the plant.It is hypothesised here that under conditions of N limitation the production of labile Miscanthus root exudates increases to support a microbial community capable of mineralising soil organic N and thus depleting SOC.Similar ideas have been discussed by Kuzyakov, but empirical studies are now needed to investigate this and quantify its potential impact upon calculations of energy budgets for carbon crops.SOCorig varied greatly between plots and genotypes and no statistically significant differences were observed.However, any differences in expansion of the soil profile due to rhizome and root growth will physically protect SOC.Lower temperature and oxygen, and protection from freeze-thaw and drying-wetting cycles, will all affect microbial activity and contribute to the preservation of SOC.Sequestration of C in soil purely through burying is a commonly overlooked mechanism.In temporary perennial crops its sustainability will depend on the form of reversion to arable.Nevertheless, the expanding surface horizons could be a useful indirect trait to develop otherwise shallow soils on marginal land and complement the established benefits from protection against soil erosion.Interestingly, the SOCorig 'burying effect' could potentially contribute more to sequestration of C than the direct allocation of Miscanthus C at depth, because SOCorig is likely to be more closely associated with the mineral fraction and more stable than recent C inputs in subsoils.Furthermore, turnover rates of SOC are inversely related to concentration, which is inherently low in the subsoil.The apparent uncertainty about changes in SOCorig in the subsoil demands further evidence to support these concepts.We found variation in root distribution between genotypes, with the T phenotype allocating more biomass – relatively and absolutely – to roots at greater depth than NT phenotypes.Analysis of SOC concentration and isotope composition revealed allocation patterns for SOCM to be significantly different with respect to depth and genotype.Within the T phenotype, the higher dry matter allocation and lateral spread of roots in the subsoil observed for Sin-11 and Sin-H9 were not seen with Sin-H6, indicating high diversity within this phenotype.These results reveal a statistically significant link between RLD distribution and newly derived SOCM, which supports the premise that, for C sequestration, it is important to consider the effect of all carbon inputs, including short-lived rhizo-depositions.Furthermore, a subgroup of higher SOCM values associated with low root volumes in M.
sinensis could point to higher root turnover in some species, a hypothesis to be followed up in research that integrates N and C turnover.In view of the limited quantity of N fertiliser, the accumulation of SOCM in this trial is likely to be lower than would occur under optimum conditions for biomass production.The net SOC stock increase under low N input Miscanthus production was small.Although its C stock was similar to the SOC in a high input arable reference soil after 14 years of cultivation, its net carbon gain is by far superior when accounting for the respective average N fertiliser inputs.Future work could expand on the contribution of root exudates and OC from the litter to SOCM, and on whether the effects of chemical and physical properties on decomposition can be disentangled.The measured SOCM enrichment and SOCorig decline, as well as biomass production, litter and root accumulation, will be instrumental in the calibration and validation of models such as RothC for simulating soil C sequestration under Miscanthus.
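The split of total SOC into Miscanthus-derived (SOCM) and residual (SOCorig) carbon used throughout this section rests on the standard two-end-member δ13C mixing model. The R sketch below illustrates that calculation; the end-member signatures and the example values are illustrative assumptions, not the values measured in this trial.

```r
# Two-pool 13C mixing model: fraction of C4 (Miscanthus-derived) carbon in a
# soil sample from its d13C signature, then partition of total SOC.
# End-member signatures below are assumed for illustration only.
delta_c3 <- -27.0   # assumed d13C of C3-derived (original arable) SOC, permil
delta_c4 <- -12.5   # assumed d13C of C4 (Miscanthus) inputs, permil

soc_partition <- function(delta_sample, soc_total) {
  f_c4 <- (delta_sample - delta_c3) / (delta_c4 - delta_c3)  # mixing fraction
  f_c4 <- pmin(pmax(f_c4, 0), 1)                             # bound to [0, 1]
  data.frame(SOC_M    = f_c4 * soc_total,                    # Miscanthus-derived C
             SOC_orig = (1 - f_c4) * soc_total)              # residual C3-derived C
}

# Example: a sample with d13C of -22 permil and 20 g C per kg soil
soc_partition(delta_sample = -22, soc_total = 20)
```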
Miscanthus is a low input energy crop suitable for low fertility marginal arable land and thought to provide carbon sequestration in soil. We analysed a long-term field experiment (14-year) to determine whether differences in genotype, growth habit, and root distribution affected soil carbon spatially under different Miscanthus genotypes. Soil cores were taken centrally and radially to a depth of 1 m, and divided into six vertical segments. Total root length (TRL), root dry matter (RDM) and δ13C signature of soil organic carbon (SOC) were measured directly, and root length density (RLD), fractions of Miscanthus-derived soil organic C (SOCM), and residual soil carbon (SOCorig) were calculated. Genotype was found to exhibit a statistically significant influence on spatial allocation of SOC. Grouping varieties into 'tuft-forming' (T) and 'non-tuft-forming' (NT) phenotypes revealed that the respective groups accumulated similar amounts of RDM over 14 years (11.4 ± 3.3 vs. 11.9 ± 4.8 Mg ha−1, respectively). However, phenotype T allocated more carbon to roots in the subsoil than NT (33% vs. 25%). Miscanthus genotypes sequestered between 4.2 and 7.1 g C4-SOC kg−1 soil over the same period, which was more than the average loss of C3-derived SOC (3.25 g kg−1). Carbon stocks in the 'A horizon' under Miscanthus increased by about 5 Mg ha−1 above the baseline, while the net increase in the subsoil was marginal. Amounts of Miscanthus root C in the subsoil were small (1.2–1.8 Mg C ha−1) but could be important for sustainable sequestration as root density (RLD) explained a high percentage of SOCM (R2 = 0.66).
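The area-based stocks quoted above (Mg C ha−1) come from combining SOC concentration, bulk density and layer thickness, and the equivalent-soil-mass comparison raised in the discussion compares stocks at a fixed mineral soil mass rather than a fixed depth. A minimal R sketch of both steps, using invented layer values:

```r
# SOC stock per layer and a simple equivalent-soil-mass (ESM) cumulative stock.
# Inputs: SOC concentration in g/kg, bulk density in g/cm3, layer thickness in m.
# All numbers in the example are illustrative, not the trial's measurements.
layer_stock <- function(soc_g_kg, bd_g_cm3, thickness_m) {
  soc_g_kg * bd_g_cm3 * thickness_m * 10          # Mg C per ha in each layer
}

esm_stock <- function(soc_g_kg, bd_g_cm3, thickness_m, target_mass_Mg_ha) {
  mass  <- cumsum(bd_g_cm3 * thickness_m * 10000) # cumulative mineral soil mass, Mg/ha
  stock <- cumsum(layer_stock(soc_g_kg, bd_g_cm3, thickness_m))
  # interpolate the cumulative C stock at the target soil mass
  approx(x = c(0, mass), y = c(0, stock), xout = target_mass_Mg_ha)$y
}

# Example: three 0.1 m layers compared at 3000 Mg soil per ha
esm_stock(soc_g_kg = c(20, 12, 8), bd_g_cm3 = c(1.2, 1.4, 1.5),
          thickness_m = c(0.1, 0.1, 0.1), target_mass_Mg_ha = 3000)
```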
275
Inter- and intra-specific variation in drought sensitivity in Abies spec. and its relation to wood density and growth traits
Terrestrial plants have developed various strategies to avoid or reduce drought-induced stress.Such strategies encompass, amongst others, anatomical or physiological adjustments like reduction of leaf area, reallocation of biomass from the crown to stem and roots, stomatal closure or altering gene expression patterns of proteins.One important milestone in the evolution of terrestrial plants was the development of woody tissue, especially of xylem conduits, which allows them to transport water efficiently along a soil-plant-atmosphere continuum to greater heights.This mechanism also bears risks, especially when soil water is limited under strong evaporative demand: if the water flow within the xylem conduits is disrupted, cavitation may occur, causing irreversible damage to the water transport system.A tree's vulnerability to cavitation depends on many factors such as the pore diameter in conduit walls and conduit wall reinforcement.However, measuring these anatomical characteristics and estimating a tree's vulnerability to cavitation requires sophisticated preparation and measuring techniques and the ultimate destruction of the given sample.Thus, less time-consuming measures of wood traits were tested as indicators of cavitation- and drought-sensitivity.In Norway spruce, Rosner et al. found a negative relationship between wood density and the pressure potential necessary to induce 50% and 88% loss of hydraulic conductivity, respectively.Similar strong relationships were also found for Douglas-fir (Pseudotsuga menziesii (Mirb.) Franco).Other studies focused on growth performance instead of hydraulic performance and used ring width changes as response variable in order to avoid destructive measurements.The understanding of drought sensitivity of individual trees, provenances, tree species and entire forest ecosystems is an issue of utmost importance, as some ecosystems worldwide have experienced, and more are expected to undergo, significant increases in the frequency, duration and severity of drought periods due to climate change.In forest ecosystems, drought periods were found to result in reductions of gross primary productivity and led to carbon emissions as well as to increased tree mortality, either through direct die-offs or indirectly via boosted insect outbreaks.Consequently, among different silvicultural adaptation methods, the planting of different provenances of the present species or even the planting of alternative tree species has been suggested as an effective management option.The genus Abies encompasses, amongst others, ten species that are distributed around the Mediterranean, of which only one species, silver fir (Abies alba Mill.), is also part of temperate and alpine forests.In the low mountain ranges of western, southern, south-eastern and Central Europe, silver fir is a main component of the forest climax vegetation.Recent comparisons of tree species suggest that silver fir is more resilient to climate change than other conifers of temperate forests.Currently economically less important Mediterranean firs have small, partially disconnected distribution ranges, but they could substitute A. alba or other conifers in temperate European forests if increasing temperatures and decreasing precipitation endanger present tree species compositions.However, a systematic analysis of drought sensitivity within the genus Abies and among populations of silver fir across several drought periods is not available so far.In our study, data were taken from a long-term provenance trial comprising 10 provenances of A.
alba and 5 Mediterranean Abies species.This trial site was established in 1970 with the explicit objective to investigate the drought reaction of Abies spec. and is located in eastern Austria, where severe summer droughts occur frequently.In our analysis, we aim at the following questions: Do the various species and provenances of the genus Abies respond differently to drought situations? Can wood properties be used to predict the reaction of species or provenances to drought events? Does the climate at the origin of seed of a respective species or provenance explain its specific reaction?The trial site is located in eastern Austria at the border of the sub-pannonian Vienna basin.It is placed on a moderate south-west slope at 290 m a.s.l. Mean annual air temperature is 8.6 °C and the annual precipitation sum is 650 mm, with 270 mm during the vegetation period.Seed material of the Mediterranean fir species originated from Turkey and Greece, and the provenances of silver fir from across its natural distribution area.The trial was planted in 1970 as a randomized block design with plant spacing of 0.5 × 1.0 m using two- and three-year-old seedlings.In March 2012, all specimens for which at least eight trees were available were sampled by taking two cores per tree at breast height.This included ten provenances of Abies alba, four Mediterranean fir species, and the natural hybrid of A. alba and A. cephalonica: A. x borisii-regis.Approximately 1.4 mm thick cross sections of each core, produced with a double-blade circular saw, were placed on microfilms and exposed to a 10 kV X-ray source for 25 min.Microfilms were analyzed using WinDENDRO 2009.This procedure provided measurements of mean ring density, early-wood density, late-wood density, minimum density and maximum density for each year, as well as of ring width, earlywood width, latewood width and latewood proportion.Density parameters were measured in kg/m3, while ring width parameters were measured to the nearest 0.001 mm.LWP expresses the relative proportion of latewood compared to total ring width and is therefore given in percentages.Values from the two cores of the same tree were averaged to reduce non-climatic effects and to account for potentially missing data of the youngest and oldest tree rings.For further analysis and identification of drought years, we removed the biologically caused age trend occurring in the short time series of ring width by using a flexible 15-year cubic smoothing spline that was fitted and visually evaluated using the dplR package of the software R.To identify drought periods with subsequent effects on tree growth and wood density, we calculated the standardized precipitation index (SPI) according to McKee et al.The SPI is based on monthly precipitation time series and relates the actual precipitation deficit to the mean and standard deviation of the time series.In contrast to drought indicators with fixed time scales such as the PDSI, the SPI is able to identify and differentiate between frequently occurring short and longer drought events, because the duration of the tested drought/precipitation period can be modified from a few months up to years.We chose two different time scales since these time spans represent biologically meaningful periods of water shortage for trees and are likely to occur in central Europe.For calculation of the SPI we used the program SPI SL 6.Years were considered drought years when they showed a severe or extreme shortage in water supply within the vegetation period.To assess a tree's
performance during these drought events, we calculated four indices of drought reaction following Lloret et al.: resistance (Res), recovery (Rec), resilience (Rsl), and relative resilience (rRsl).Resistance can be characterized as a tree's ability to withstand a period of low water supply without showing a perceptible drop in ring width and is calculated as the ratio between ring width during and before a drought event (Res = Dr/preDr).Recovery describes the ability to restore from increment drops experienced during a drought and is given by the ratio of ring width after and during a drought event (Rec = postDr/Dr).Resilience is the capacity of a tree to reach pre-drought levels of ring width after a drought event.It is calculated as the ratio of ring width after and before a drought event (Rsl = postDr/preDr).The relative resilience indicates whether the effect of a drought is still persisting after the disturbance and gives information on how fast a tree is able to recover to a pre-drought level, taking the magnitude of growth reduction during the drought into account; rRsl is given by (postDr − Dr)/preDr.Pre-drought and post-drought ring widths were calculated as average values for a three-year period before or after a year with drought.All four indices were derived from the raw, untransformed ring width series of each tree, because the biological age-related growth decrease can be neglected within the relatively short time frame of each individual drought event.Firstly, we tested whether the observed variation of wood properties and drought reaction indices in the dataset is based on differences between Abies species or Abies alba provenances using a variance components analysis, where both species and provenances were treated as random variables within one analysis.This variance components analysis is able to account for the unbalanced contribution of species and provenances to our dataset and should help to justify further analysis steps.Thereafter, intra-specific variation was investigated only across A. alba provenances, while the analysis of inter-specific variation included all sampled Abies species.Because A. alba was highly overrepresented in the dataset for inter-specific comparison, we chose a random subset of A.
alba individuals across all provenances equal to the sample sizes of the other fir species in order to achieve a 'fair' comparison.Inter- and intra-specific differences for wood properties as well as for drought reaction indices were calculated by ANOVA using 'provenance' and 'species', respectively, as categorical fixed variables.Pairwise differences between species/provenances were assessed with Tukey's posthoc test using the package multcomp vers. 1.3–8 in R.In previous studies, wood properties were found to be correlated to drought sensitivity of species and provenances and thus were suggested to be suitable traits for the selection of drought-resistant trees.To test if this relationship also holds for Abies spec., we calculated Pearson's correlation coefficient between the wood properties averaged across the complete core age and the drought reaction indices of the identified drought events.The specific drought reaction of species or provenances may be a result of adaptation to the climate conditions of a species' distribution and local habitat.Therefore, we tested if the observed differences in drought reaction and wood properties are correlated to the climatic conditions and geographic coordinates of the seed origin.We used 19 bioclimatic variables from the WorldClim database, of which 11 variables refer to temperature and 8 to precipitation at the place of seed origin.Across the years 1983–2010 we found six years with SPI values indicating either a severe or extreme precipitation deficit.Furthermore, two drought periods appeared at the beginning and the end of the time series.The latter two droughts could not be included in the analysis, because for the first and the last years the data of many individuals were missing and thus calculations of pre- and post-drought periods were not possible.In 1986, 1993, 2000, 2003 and 2007, the observed droughts coincided with remarkable growth reductions of trees.In contrast, tree growth was only slightly affected by the drought in 1990, and some provenances even showed ring width indices above average growth in this year.The drought in 2007 can be characterized as a severe spring drought, although the total precipitation sum of this year was above the long-term mean.Since the drought in 2000 was closely followed by the drought in 2003, a two-year average period was used as reference period for preDr00, postDr00, preDr03 and postDr03 to avoid overlapping of the reference periods with a drought event.Comparing the contributions of species and provenances to the observed variation of wood traits and drought response measures revealed that Abies species explained 10–20% of the variance of the wood density parameters, but only a negligible amount of the variance of ring width measures.In contrast, A.
alba provenances are responsible for 10–15% of variation among ring-width measures but explain only a non-significant proportion of the variance in ring density measures.Moreover, species and provenances contributed differently to the variance of drought response indices: while inter-specific variation was found to explain variance of recovery, resilience and relative resilience in 2000 as well as of resistance and resilience in 2007, intra-specific variation contributes mainly to the variance of all four drought reaction indices in 1986 and 1993, as well as of resistance in 2000 and recovery and relative resilience in 2007.Due to these contrasting contributions of species and provenances to the analyzed wood traits and drought response indices, all further analysis has been made separately for Abies species and A. alba provenances.Analysis of variance confirmed the variance component analysis: among species, significant differences were found for latewood and maximum density, whereas among A. alba provenances, significant differences were found for ring width, earlywood width, latewood width and latewood proportion.For drought reaction indices, significant differences among species were found for all measures in 2003 and 2007, as well as for Rec and rRsl in 1993 and 2000.Among provenances, significant differences were only found for Res, Rec and Rsl in 1986, for Rec, Rsl and rRsl in 1990, for Res and Rec in 1993 and for Res in 2000.Pairwise posthoc comparisons among species and provenances, respectively, indicate that the differences in drought reactions follow a consistent pattern across species, but not among provenances.Among the different Abies species, A. nordmanniana showed the highest resistance across five of the six drought periods.In 2003, the observed differences were significant between A. nordmanniana and A. cephalonica, and in 2007, A. nordmanniana and A. cilicica showed significantly higher resistance than all other Abies species.In contrast, A. cephalonica was found to have the highest recovery in 1986, 1990, 1993 and 2007, while A. nordmanniana had the lowest in four of six analyzed drought periods.A. alba, as the most widespread of all analyzed species, showed intermediate ranks for drought reaction indices, with the exception of the year 2003, where it performed best in recovery, resilience and relative resilience.A. x borisii-regis, the natural hybrid of A. alba and A. cephalonica, did not reveal significantly different drought performance compared to its parent species for any of the four response indices.Pairwise comparisons of the drought response measures among provenances of silver fir also revealed significant differences, but no consistent pattern of 'best-performing' or 'worst-performing' provenances across all analyzed drought periods could be found.For example, in 1986, provenance 39 showed the second highest rank in resistance and provenance 19 the lowest, whereas in 2000 provenance 39 had the second lowest rank in average resistance and provenance 19 the highest.A comparison of the average ranks of resistance across all six drought events places provenance 34 in first and provenance 122 in last place.Focusing on recovery, provenance 122 had the second highest average rank, whereas provenance 34 had only the third lowest average rank.Identifying wood characteristics related to drought response that can be screened non-destructively even before drought events occur would be a valuable tool for breeding programs and adaptive forest management.When we tested for correlations between ring width parameters and drought response indices, positive correlations were obtained to Rec, Rsl and rRsl in both datasets, but higher and more significant correlations were found for the intra-specific data.The correlations between wood density parameters and drought response measures were more complex.Among species, density was negatively correlated with resistance for all drought events and positively correlated with recovery and relative resilience.Among provenances, variation of density parameters showed far fewer relations to drought indices: here, the latewood percentage was negatively correlated with Rec86, Rsl86, rRsl86 and rRsl03.Minimum density showed significant negative correlations to Rec, Rsl and rRsl in 2007.Only two significant positive correlations for intra-specific data were found: between maximum density and Res03, and between LD and Res03.Inter- and intra-specific differences were also found for
correlation patterns between drought response measures and geographic/bioclimatic variables.For Abies species, the strongest correlations were obtained between the longitude and various drought response indices: here species from the eastern Mediterranean showed higher resistance, but lower recovery, resilience and relative resilience.From the 19 bioclimatic variables tested, six precipitation variables showed positive correlations with Rec, Rsl and rRsl throughout the Abies species.Significance of these correlations at p < 0.01 was obtained for the drought events 2000 and 2007.For the same dataset, correlations with temperature variables were rather weak and significant only for isothermality.Correlations of the intra-specific variation to bioclimatic data revealed significant correlations only to Res86 and Res07 as well as to Rsl86.Wood properties showed only few significant correlations to bioclimatic and geographic variables for both datasets, whereas both inter- and intra-specific ring density variation was negatively correlated to longitude, suggesting that provenances and species from more eastern origin have lower wood density.This is accompanied by a tendency for higher ring widths toward eastern seed origin with significant correlations for A. alba provenances.Very few bioclimatic variables were correlated to wood properties, but most notably the seasonality of precipitation.A positive correlation was found both for RW and LW with mean temperature of the warmest quarter and mean temperature of the wettest quarter on intra-specific level.Significant correlations between bioclimatic variables and wood properties were generally less abundant for inter-specific data.Correlations between wood properties and seasonality of precipitation showed a complete reverse pattern compared to intra-specific data.Five severe or extreme drought periods with significant effects on tree growth occurred at the trial site in eastern Austria during the life span of the investigated trees.The drought events in 1986, 1993 and 2003 were characterized by the SPI as droughts of long duration but lower intensity on an intermediate time-scale, whereas the drought in early spring 2007 can be characterized as an event with high intensity and relatively short duration.Nevertheless, the negative consequences of this event for tree growth were of a similar order of magnitude as the drought in 2003, this being considered to be the most significant drought event in Central Europe and used as a reference for investigating drought effects on forest ecosystems and other biological systems.Despite the fact, that short-term drought events are rarely discussed in the current literature so far, our study strongly suggests that these drought events need similar attention in climate change studies.The drought in 1990 had no significant effect on tree growth despite its severe character and a likely explanation is that tree-ring formation was almost completed when this drought occurred.Indeed, monthly SPI data indicate, that this drought had occurred late in the growing season in August while the other five had occurred during April and June.We found species-specific and provenance-specific differences in wood properties and drought response measures during six drought events.The drought reaction of A. 
alba provenances differed significantly during the three severe drought events 1986, 1990 and 1993, while Abies species showed significant differences during the extreme events 2003 and 2007.This contrasting behavior could be caused by the severity and/or the seasonal occurrence of the drought events: in 1986, 1990 and 1993 drought might have been too weak to provoke a different reaction of the various species, but strong enough to reveal intra-specific differences among provenances.In contrast, the stronger drought events in 2003 and 2007 might have exceeded a certain threshold above which all A. alba provenances were equally affected, but caused contrasting reactions among the different fir species.The underlying genetic basis of the contrasting drought reaction is probably the inherent genetic variation of the species.Although A. alba provenances originate from a wide geographical range and from different phylogeographic lineages, they are phylogenetically much more similar and connected via contemporary gene flow than the sister Abies species.Thus, differences in adaptive traits related to drought reaction are more pronounced among rather than within species.This explanation is supported by the ranking pattern of drought response indices across the six drought events, which is consistent among Abies species but not among A. alba provenances.Abies species tended to keep their ranks for a specific drought response index across the different events while A. alba provenances often changed their ranks even between two consecutive drought events.Besides specific morphological or physiological drought adaptations within species or provenances, genetic correlations between the estimated drought response measures and other quantitative traits could be responsible for the observed drought sensitivity, too.For the present Abies trial, surveys of bud burst are available from the assessment on the young seedlings in 1971, showing that bud burst in A. nordmanniana took place significantly later than for all other species and provenances.Also Aussenac classified A. nordmanniana as a late flushing species.These differences in time of flushing can partly explain the high resistance of A. nordmanniana during the spring drought in 2007: while the early flushing species required high amounts of water to enfold the new shoots, the late flushing A. nordmanniana simply avoided drought stress because the metabolic processes during spring were still down-regulated.In forest trees, the time of bud burst is highly stable within individuals and shows high heritability, suggesting that genetic variation and plasticity of tree phenology need to be considered when selecting for drought adapted phenotypes.Generally, the quantitative and adaptive genetic variation of silver fir and the variation among Abies species have been investigated only in very few studies.Analyzing adaptive and non-adaptive traits in southwestern populations of silver fir in France on four-year-old seedlings, Sagnard et al. 
observed high trait variation within but low trait variation among provenances.For drought-resistance traits, the variation among provenances ranged only from 6.6 to 6.8%, while for growth traits, variation among populations accounted for 10 to 17.9%.This is very similar to our study, where significant differences among provenances were found for ring width measures, but only for few drought response measures.The high variation among provenances in growth performance has also been demonstrated by a wide number of provenance experiments with silver fir.In contrast to frost hardiness and cold adaptation, the genetic variation of drought sensitivity and its application in breeding programs for Abies spec., and generally for other conifers, has rarely been investigated.Only recently, the drought response of different tree species or provenances within species has become an important research objective as part of adaptive forest management in climate change.A final rating of drought sensitivity among and within species should consider all aspects of drought response, i.e. all four drought response measures, because species/provenances with high resistance might not necessarily show good recovery.Within the present paper, resistance strictly refers to the drought response measures Res as defined by Lloret et al. and should not be confounded with a general ability to avoid any drought symptoms.Species, which revealed high resistance like A. nordmanniana for Res1986–2007 also showed the lowest recovery and relative resilience, while those with low resistance showed higher recovery.These differences can be interpreted as different life-strategies or long-term adaptations: A. nordmanniana, originating from the edge of the Caucasian mountains, a region with relatively high precipitation and moderate temperature during the growing season, kept on growing during drought, while A. 
cephalonica, originating from a region with only 4 mm precipitation in the driest month, reduced its growth quickly.After a drought event, the resource budget of the high-resistance species might be depleted due to the dissimilation of stored carbohydrates during drought resulting in lower radial increment in the following years.This trade-off is also supported by the correlations between the climate conditions of the seed origin of species and provenances and the drought response measures.On inter-specific level we found the precipitation in the wettest month and in the wettest quarter, which are both decreasing by trend with longitude, to be best predictors of drought response, confirming the findings that species from the eastern Mediterranean basin showed higher resistance, but lower recovery and lower relative resilience.At the Mediterranean, the wettest month is outside the growing season in winter.Thus, our results are in accordance with findings from Williams et al., who have underpinned the importance of cold-season precipitation as a driving force for mitigating forest drought-stress in the southwestern United States.Trade-offs between growth reductions and growth recovery after drought events have also been discussed within and among other tree species.As our data do not include any estimate of tree mortality, neither our present study nor former investigations could relate the resistance-recovery trade-off to the real fitness of individual trees or populations.Reports from the drought-prone southwest North America provide contradictory arguments for the relation between drought response and mortality.While McDowell et al. found that ponderosa pine trees more sensitive to climate events showed higher mortality, various other studies suggested that trees with strong growth response in relation to drought are more likely to survive the drought event.It is speculative without long-term observations and more detailed physiological measures, if one of these relationships also holds for the observed variation among populations and species in case of European fir species.Nevertheless, if the predicted drought scenarios became reality, then the results could provide rough guidelines for future forest management.Firstly, species for future reforestation schemes need to be related to the respective forest site.In particular, the expected seasonal occurrence of drought would define which species were planted.Secondly, the proposed fir species might be selected in dependence of the expected primary forest function.If stable wood production with regular annual increment and density is expected, A. nordmanniana might be a good choice.Other ecosystem services or ecosystem stability might rather benefit from other fir species.Although A. alba, the most widespread fir species in our study, did not show strong intra-specific variation in drought response, it shows a similar response as A. 
cephalonica: low resistance and fast recovery.An immediate substitution of silver fir in drought-prone temperate forests of Central Europe with Mediterranean firs seems not to be required at present.However, given the small and scattered distribution of Mediterranean firs and the increasing climatic stress in their natural distribution, a managed translocation toward northern mountain ridges might be needed to safeguard endangered genetic diversity.Drought resistance certainly is an important quantitative trait and should be considered in translocation schemes, while being aware that, besides abiotic interactions, biotic interactions with the wider forest communities also need to be considered in risk and benefit analysis.Can genetic correlations between wood characteristics and drought response measures be used to develop screening tools in breeding programs or to identify drought-vulnerable individuals for forest management operations?Previous studies reported moderate to strong negative relationships between wood density and a tree's vulnerability to cavitation on intra-specific and inter-specific levels, suggesting that wood density might be a valuable selection criterion.When we tested for similar correlations among silver fir provenances, we could not confirm this correlation for mean ring density, and only found a few moderate negative correlations between the minimum density and recovery, resilience and relative resilience for the drought in 2007.We presume that the missing relationship between density measures and drought reaction within the species is mainly due to the low genetic variation among provenances.On the inter-specific level, average ring density revealed a negative relationship to resistance in 2003 and positive relationships to recovery, resilience and relative resilience for the drought in 2007.Drought response measures of the remaining drought events were not or only weakly correlated to ring density.Overall, our correlation analysis suggests that average wood density measures are poor predictors of drought sensitivity in the genus Abies and that more specific physiological and hydraulic parameters are required.Anatomical and physiological studies suggest that in case of water shortage an early closure of stomata results in
an efficient reduction of water loss in Abies spec.This strategy probably avoids hydraulic failure that is caused by destruction of the water transport system.Our results also highlight the need for a study comparing the effectiveness of destructive versus non-destructive methods for estimating drought sensitivity and for developing easy and cost-effective screening tools to identify potentially drought-resistant genotypes.
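The four drought response measures used throughout the Abies analysis (resistance, recovery, resilience and relative resilience after Lloret et al.) are simple ratios of ring widths around a drought year. A small R sketch of that calculation is given below; the ring-width series is invented, the default three-year reference window follows the text (a two-year window was used around the 2000/2003 droughts), and the function names are ours, not from the study.

```r
# Drought response indices after Lloret et al., computed from a raw ring-width
# series. 'rw' = annual ring widths, 'years' = corresponding calendar years,
# 'drought_year' = one of the identified drought years. Pre- and post-drought
# levels are means over 'window' years, three by default as in the text.
lloret_indices <- function(rw, years, drought_year, window = 3) {
  dr      <- rw[years == drought_year]
  pre_dr  <- mean(rw[years %in% (drought_year - window):(drought_year - 1)])
  post_dr <- mean(rw[years %in% (drought_year + 1):(drought_year + window)])
  c(Res  = dr / pre_dr,                 # resistance
    Rec  = post_dr / dr,                # recovery
    Rsl  = post_dr / pre_dr,            # resilience
    rRsl = (post_dr - dr) / pre_dr)     # relative resilience
}

# Illustrative (invented) series with a growth drop in 2003
years <- 2000:2010
rw    <- c(2.1, 2.0, 1.9, 0.9, 1.6, 1.8, 1.9, 1.2, 1.7, 1.8, 1.9)
lloret_indices(rw, years, drought_year = 2003)
```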
Understanding drought sensitivity of tree species and its intra-specific variation is required to estimate the effects of climate change on forest productivity, carbon sequestration and tree mortality as well as to develop adaptive forest management measures. Here, we studied the variation of drought reaction of six European Abies species and ten provenances of Abies alba planted in the drought prone eastern Austria. Tree-ring and X-ray densitometry data were used to generate early- and latewood measures for ring width and wood density. Moreover, the drought reaction of species and provenances within six distinct drought events between 1970 and 2011, as identified by the standardized precipitation index, was determined by four drought response measures. The mean reaction of species and provenances to drought events was strongly affected by the seasonal occurrence of the drought: a short, strong drought at the beginning of the growing season resulted in growth reductions up to 50%, while droughts at the end of the growing season did not affect annual increment. Wood properties and drought response measures showed significant variation among Abies species as well as among A. alba provenances. Whereas A. alba provenances explained significant parts in the variation of ring width measures, the Abies species explained significant parts in the variation of wood density parameters. A consistent pattern in drought response across the six drought events was observed only at the inter-specific level, where A. nordmanniana showed the highest resistance and A. cephalonica showed the best recovery after drought. In contrast, differences in drought reaction among provenances were only found for the milder drought events in 1986, 1990, 1993 and 2000 and the ranking of provenances varied at each drought event. This indicates that genetic variation in drought response within A. alba is more limited than among Abies species. Low correlations between wood density parameters and drought response measures suggest that wood density is a poor predictor of drought sensitivity in Abies spec.
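The drought years referred to in the abstract above were identified with the standardized precipitation index (McKee et al.). The R sketch below shows the idea in a deliberately simplified form: precipitation is aggregated over a k-month window, a gamma distribution is fitted by moments, and the cumulative probabilities are mapped onto a standard normal. The month-wise fitting and zero-precipitation correction of the full SPI procedure (as implemented in the SPI SL 6 program used in the study) are omitted, so this is only an illustrative sketch.

```r
# Simplified SPI: k-month running precipitation totals, gamma fit by the
# method of moments, then transformation to standard-normal quantiles.
spi_simple <- function(monthly_precip, k = 3) {
  agg <- stats::filter(monthly_precip, rep(1, k), sides = 1)  # running k-month sums
  agg <- as.numeric(agg)
  ok  <- !is.na(agg)
  m <- mean(agg[ok]); v <- var(agg[ok])
  shape <- m^2 / v                                            # moment estimators
  rate  <- m / v
  spi <- rep(NA_real_, length(agg))
  spi[ok] <- qnorm(pgamma(agg[ok], shape = shape, rate = rate))
  spi                                                         # negative values = drier than average
}

# Example with synthetic monthly precipitation (mm); data are invented
set.seed(1)
precip <- rgamma(120, shape = 2, rate = 0.04)   # 10 years of monthly totals
head(spi_simple(precip, k = 3), 12)
```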
276
Carcinogenic and non-carcinogenic health risk assessment of heavy metals in drinking water of Khorramabad, Iran
Supply of healthy drinking water is necessary to human life, and safe drinking water should not pose an appreciable risk to human health.The increasing trend of water shortage has various negative impacts on economic development, human livelihoods, and environmental quality around the world.Numerous contaminants, including heavy metals and organic and inorganic compounds, may contaminate water.Among the harmful and persistent contaminants found in water, special emphasis is given to heavy metals.Rapid economic development and industrialization in many parts of the world, including Iran, have led to high levels of heavy metal contamination in the soil and subsequently in surface water and groundwater.Heavy metals are released into water naturally or via human activities.Many heavy metals are natural elements of the earth's crust.Weathering and decomposition of metal-bearing rock and ores can transfer heavy metals into groundwater and have led to human exposure for the entire history of mankind.The levels of metals vary significantly from the soil of one region to another.Anthropogenic activities considerably affect the availability of heavy metals in ecosystems.Heavy metals may be released into water in large quantities via vehicle exhaust, poor waste disposal, fossil fuel combustion, fertilizer and pesticide application, untreated wastewater irrigation, and atmospheric deposition from various human activities including mining, smelting operations and agriculture, which can influence human health by affecting vegetation, the food chain and water quality.Once released into drinking water, heavy metals can be taken into the human body through several pathways such as direct ingestion, dermal contact, and inhalation through the mouth and nose.Heavy metals in water can cause extensive damage to the ecological environment and consequently to human health due to their unique characteristics such as toxicity, poor biodegradability and bioaccumulation.Some heavy metals are essential for metabolism in the human body, serving as both structural and catalytic constituents of proteins and enzymes, but can have adverse effects when their levels are greater than international guidelines.During prolonged exposure, heavy metals can accumulate in target tissues such as the brain, liver, bones, and kidneys, resulting in serious health hazards depending on the element and its chemical form.Health risk assessment of heavy metals is usually performed to estimate the total exposure to heavy metals among the residents of a particular area.Risk assessment of contaminants in humans is based on a mechanistic assumption that such chemicals may be either carcinogenic or non-carcinogenic.Generally, ingestion and dermal absorption are the major pathways of exposure in the water environment.In order to assess water quality in an area effectively, it is crucial to determine the possible human health impacts of contaminants in drinking water.The traditional technique for estimating health impacts is to directly compare the analyzed levels with guideline limits, but this is not adequately valid to provide comprehensive hazard levels and to identify the most important contaminants.Health risk assessment is an essential method for evaluating the possible health effects in water environments caused by numerous contaminants.This method has been extensively utilized by many researchers in the literature for the estimation of the adverse health effects possible from exposure to contaminated water.Although ingestion is the predominant
pathway of exposure to contaminants in drinking water, inhalation and dermal absorption should also be considered .Most health risk estimations associated with human exposure to contaminants in soil, water, and air are based on the exposure methods presented by the USEPA .With the increasing trend of population, economy, and industry growth in Iran, the study is required to determine the impacts of development on the surface and groundwater, before any preventive measures can be considered in the land-use systems and watersheds to decrease the contamination levels of heavy metals.The main objectives of the present research were to determine levels of eight heavy metals including Lead, Chromium, Cadmium, Molybdenum, Zinc, Copper, Barium, and Nickel in the drinking water of Khorramabad city and estimate health risks of non-carcinogenic and carcinogenic metals with respect to daily drinking of groundwater and dermal pathways for general adults in the community.The results of our research may provide some insight into heavy metal contamination in water and are useful for inhabitants in formulating protective procedures and health professionals in reducing heavy metal contamination of water environment, and also serve as a basis for comparison to other areas both in Iran and worldwide.The geographic coordinates of the study area are 33°29′16″N 48°21′21″E in DMS, located in the Khorramabad city, Lorestan Province in the west of Iran.Khorramabad is situated in the Zagros Mountains with a warm and temperate climate.Natural springs are the main sources of water supply in this city.At the 2016 census, its population was 373,416.Average annual rainfall in this region is 488 mm.This city stands at an elevation of approximately 1147 m above sea level .The location map of the study area is depicted in Fig. 
1.Analytical grade HNO3 purchased from Merck Company was used in this work.Deionized water was utilized for solution preparation and also for dilution objectives.All glassware was washed and dried in an oven at 105 °C.Sampling bottles were cleaned by rinsing in a metal-free soap and then by soaking in 10% HNO3 before sample taking.Finally, the bottles were washed with deionized water.Totally, forty water samples from 40 different sites along the distribution network were collected during 2017 in order to measure the levels of potentially toxic heavy metals such as Pb, Cr, Cd, Mo, Zn, Cu, Ba, and Ni in drinking water of Khorramabad city.These samples were then transported to the laboratory and stored at 4 °C until analysis.The collected samples were analyzed for eight heavy metals including Pb, Cr, Cd, Mo, Zn, Cu, Ba, and Ni using standard methods for the examination of water and wastewater .Concentrations of the heavy metals in all samples were measured using an inductively coupled plasma mass spectrometry.The limit of detection of individual metal was in the range 0.5–5 ng/L for water samples.Risk assessment is defined as the methods of evaluating the probability of occurrence of any given probable amount of the harmful health impacts over a determined time period .The health risk assessment of each contaminant is normally based on the estimation of the risk level and is classified as carcinogenic or non-carcinogenic health hazards .To estimate the heavy metal contamination and potential carcinogenic and non-cancer health risk caused via ingestion and dermal absorption of heavy metals in the water of the distribution network of Khorramabad city, Hazard Quotients, Hazard Index, and the Incremental Lifetime Cancer Risk were used.The studied group in this study was adults.The values of the RfD and cancer slope factor for different metals are listed in Table 2.The computed HI is compared to standard values: there is the possibility that non-carcinogenic impacts may occur in the residents when HI > 1, while the exposed person is unexpected to experience evident harmful health impacts when HI < 1 .The permissible limits are considered to be 10−6 and <10−4 for a single carcinogenic element and multi-element carcinogens .The minimum, mean, and maximum levels of heavy metals present in water samples in the distribution network of Khorramabad city are presented in Table 3.The minimum, mean, and maximum levels of CDI, as well as total CDI for adults through ingestion and dermal contact pathways in the study area, are given in Table 4.The minimum, mean, and maximum levels of HQ, as well as total HQ for adults through ingestion and dermal contact pathways, are presented in Table 5.The carcinogenic risk assessment for adults is given in Table 6.The heavy metal contamination in water distribution network can increase human health risks through various exposure routes.In the present work, non-carcinogenic and carcinogenic health risks caused by oral ingestion and dermal contact were explored.Based on Table 3, a wide variation in mean values of heavy metals was seen in the water where the maximum metal concentration was for Ba with a mean of 81.13 mg/L and the minimum metal concentration was for Cd with a mean concentration of 0.43 mg/L, respectively.The order of the toxicity heavy metals according to mean concentrations measured in drinking water of the studied area was: Ba > Zn > Cu > Cr > Ni > Pb > Mo > Cd.Human health risk assessment comprises the determination of the nature and magnitude of adverse 
health effects in humans who may be exposed to toxic substances in a contaminated environment. In the present work, exposure and risk assessments were carried out based on the USEPA methodology. Human exposure to heavy metals principally occurs via the pathways of drinking water, food, and inhaled aerosol particles and dust. The degree of toxicity of heavy metals to human health is directly related to their daily intake. However, only ingestion via drinking water and dermal adsorption were considered in this study. The first step in the non-carcinogenic analysis is the calculation of chronic daily intake values. As given in Table 4, the mean levels of total CDI in mg/kg-day are 1.00E-04 for Pb, 11.60E-04 for Cr, 1.34E-05 for Cd, 1.60E-05 for Mo, 1.48E-03 for Zn, 2.13E-04 for Cu, 2.55E-03 for Ba, and 1.09E-04 for Ni. Therefore, the mean values of CDItotal of heavy metal concentrations for adults were found in the order of Zn > Ba > Pb > Ni > Cr > Cu > Cd > Mo. As seen in Table 5, all the studied heavy metals had total HQs below 1. Accordingly, the health risk estimation of Pb, Cr, Cd, Mo, Zn, Cu, Ba, and Ni revealed mean HQs suggesting an acceptable level of non-carcinogenic health risk in all samples taken from Khorramabad's water distribution network. From the computation of total HQs, it can be concluded that the contribution of the eight metals to the non-carcinogenic health risk was in the order of Zn > Ba > Cr > Cu > Mo > Pb > Ni > Cd. Moreover, to estimate the total potential non-carcinogenic impacts induced by more than one metal, the HQ computed for each metal is summed and expressed as a Hazard Index. The mean values of HI through ingestion and dermal adsorption, as well as total HI, were 3.31E-03, 2.15E-06, and 3.32E-03, respectively. This indicates negligible non-carcinogenic risk to residents' health, as the value of HI is below 1. The values of HI for heavy metals for inhabitants of the study area are summarized in Table 5. Heavy metals such as Pb, Cr, Cd, and Ni can potentially enhance the risk of cancer in humans. Long-term exposure to low amounts of toxic metals could, therefore, result in many types of cancer. Using Pb, Cr, Cd, and Ni as carcinogens, the total exposure of the residents was assessed based on the mean CDI values given in Table 4. The carcinogenic risk assessment for adults is given in Table 6. The values of the cancer slope factor for the different metals used for carcinogenic risk assessment are listed in Table 2. For one heavy metal, an ILCR less than 1 × 10−6 is considered insignificant and the cancer risk can be neglected, while an ILCR above 1 × 10−4 is considered harmful and the cancer risk is troublesome. For the total of all heavy metals through all exposure routes, the acceptable level is 1 × 10−5. Among all the studied heavy metals, chromium has the highest cancer risk and nickel the lowest. The results of this research indicate that there was a cancer risk to residents from the contaminants through the cumulative ingestion and dermal contact routes in the drinking water of the region. This study was conducted to evaluate the health risks of exposure to heavy metals along the water distribution network of Khorramabad city in Iran. The risk assessment comprised computations of carcinogenic and non-carcinogenic risk of water through the ingestion and dermal contact pathways. The maximum and minimum concentrations of heavy metals measured were those of Ba and Cd, respectively. The order of the heavy metals
toxicity according to mean concentrations measured in the drinking water of the studied area was: Ba > Zn > Cu > Cr > Ni > Pb > Mo > Cd. The mean values of CDItotal of heavy metal concentrations in adults were found in the order of Zn > Ba > Pb > Ni > Cr > Cu > Cd > Mo. The HQs for the exposure routes examined in this work decline in the following order: ingestion > dermal adsorption, meaning that ingestion is the dominant pathway of exposure for every receptor. The mean values of HI through ingestion and dermal adsorption, as well as total HI, were 3.31E-03, 2.15E-06, and 3.32E-03, respectively. Among all the studied heavy metals, chromium has the highest cancer risk and nickel the lowest. The present study will be helpful both for inhabitants in taking protective measures and for government officials in reducing heavy metal contamination of urban drinking water. The authors of this article declare that they have no conflict of interest.
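The non-carcinogenic and carcinogenic risk quantities used above (CDI, HQ, HI and ILCR) can be reproduced with a short script. The sketch below is illustrative only: it uses the standard USEPA-style ingestion equation CDI = (C × IR × EF × ED) / (BW × AT) together with HQ = CDI/RfD, HI = ΣHQ and ILCR = CDI × CSF; the exposure parameters, RfD and CSF values shown are placeholder assumptions and do not reproduce the values in the study's Tables 2–6.

```python
# Hedged sketch of a USEPA-style drinking-water risk calculation.
# All parameter values below are illustrative placeholders, not the study's data.

ADULT = {
    "IR": 2.0,           # water ingestion rate, L/day (assumed)
    "EF": 365,           # exposure frequency, days/year (assumed)
    "ED": 30,            # exposure duration, years (assumed)
    "BW": 70,            # body weight, kg (assumed)
    "AT_nc": 30 * 365,   # averaging time for non-carcinogens, days
    "AT_ca": 70 * 365,   # averaging time for carcinogens, days
}

# Oral reference doses (mg/kg-day) and cancer slope factors (per mg/kg-day).
# Values here are placeholders; the study lists its own values in Table 2.
RFD = {"Pb": 3.5e-3, "Cr": 3.0e-3, "Cd": 5.0e-4, "Zn": 3.0e-1}
CSF = {"Pb": 8.5e-3, "Cr": 5.0e-1, "Cd": 6.1e0, "Ni": 9.1e-1}

def cdi_ingestion(conc_mg_per_L, p=ADULT, carcinogenic=False):
    """Chronic daily intake via drinking-water ingestion, mg/kg-day."""
    at = p["AT_ca"] if carcinogenic else p["AT_nc"]
    return conc_mg_per_L * p["IR"] * p["EF"] * p["ED"] / (p["BW"] * at)

def hazard_quotient(conc_mg_per_L, metal):
    return cdi_ingestion(conc_mg_per_L) / RFD[metal]

def ilcr(conc_mg_per_L, metal):
    return cdi_ingestion(conc_mg_per_L, carcinogenic=True) * CSF[metal]

# Hypothetical concentrations (mg/L) for illustration only.
water = {"Pb": 0.002, "Cr": 0.004, "Cd": 0.0004, "Zn": 0.05}
hq = {m: hazard_quotient(c, m) for m, c in water.items() if m in RFD}
hi = sum(hq.values())                 # HI < 1 implies negligible non-carcinogenic risk
risks = {m: ilcr(c, m) for m, c in water.items() if m in CSF}
print(hq, hi, risks)                  # compare each ILCR against the 1e-6 to 1e-4 range
```

A dermal term would be added analogously (using skin surface area, exposure time and a permeability coefficient) before summing the route-specific HQs into HI, as was done in the study; only the ingestion route is sketched here.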
The continuous urbanization and industrialization in many parts of the world, including Iran, have led to high levels of heavy metal contamination in soils and subsequently in surface water and groundwater. In this study, the concentrations of 8 heavy metals were determined in forty water samples along the drinking water distribution network of Khorramabad, Iran. The ranges of heavy metals in this study were lower than EPA and WHO drinking water recommendations and guidelines and so were acceptable. The mean values of CDItotal of heavy metal concentrations in adults were found in the order of Zn > Ba > Pb > Ni > Cr > Cu > Cd > Mo. The health-risk estimation indicated that total hazard quotient (HQing + HQderm) and hazard index values were below the acceptable limit, representing no non-carcinogenic risk to the residents via oral intake and dermal adsorption of water. Moreover, the results of total risk via ingestion and dermal contact showed that ingestion was the predominant pathway. This study also shows that the carcinogenic risks for Pb, Cr, Cd and Ni were higher than the acceptable limit (1 × 10−6). The present study will be helpful both for inhabitants in taking protective measures and for government officials in reducing heavy metal contamination of urban drinking water. The data analyzed in this study give a clear picture of the quality of drinking water in Khorramabad. The results of this study can be used to improve the quality of drinking water, which directly affects the health of consumers.
277
Sensory reactivity, empathizing and systemizing in autism spectrum conditions and sensory processing disorder
The ability of the brain to receive, integrate, and respond to an ongoing stream of external sensory information is critical for adaptive responses to the environment.Individuals with autism spectrum conditions,1 however, often report unusual sensory symptoms such as over-reactivity to sound or touch.Beyond anecdotal reports, questionnaires such as the Sensory Profile have estimated atypical sensory features in over 90% of children and adults with ASC.A recent observational study also confirmed sensory reactivity symptoms in over 65% of children with ASC.The growing interest in sensory processing differences in ASC is reflected by the most recent Diagnostic and Statistical Manual criteria for the condition, which now include over- and under-reactivity to sensory input as well as sensory craving.According to the new DSM-5, hyper-reactivity, over-reactivity here, is defined as an adverse response to sensory stimuli, hypo-reactivity, under-reactivity here, as an indifference to sensory stimuli and sensory craving as an excessive desire for sensory input.Atypical sensory symptoms, such as an adverse response to touch, are not unique to ASC.Sensory over- and under-reactivity are reported across many neurodevelopmental conditions including Obsessive-Compulsive and Related Disorder.A growing number of clinicians also have proposed atypical sensory symptoms in children be categorized with the diagnostic term Sensory Processing Disorder, or SPD, with a number of subtypes within the diagnosis.SPD, originally conceived as sensory integration dysfunction, is reported to affect between 5% and 16% of the general child population.SPD has been acknowledged in some diagnostic classification guides, but not others.We also use the suggested term of Sensory Processing Disorder here to refer to children who have sensory processing difficulties.Diagnostic confusion exists between ASC and SPD due to the lack of research investigating the distinctness of SPD and because many of their defining symptoms overlap.For example, an “apparent lack of interest in… engaging in social interactions” is part of the diagnostic criteria for the under-responsive subtype of regulation disorders of sensory processing in the DC:0-3R, which is very similar to the DSM-5 criteria for ASC which includes “absence of interest in peers”.Only a few studies have directly compared children with ASC and SPD.One study used the Sensory Challenge Protocol, in which children are presented with different sensory stimuli while electrodermal activity is measured, and the Sensory Profile, a parent report questionnaire: children with ASC showed significantly lower physiological arousal levels than the SPD group and the SPD group showed significantly higher reactivity in response to sensory stimuli than the ASC group.In addition, the Short Sensory Profile revealed group differences, with both children with ASC and SPD showing more sensory symptoms compared to typical developing children.Examining the differences more closely, children in the ASC group showed more taste/smell reactivity and more sensory under-reactivity compared to the SPD group, while sensory craving behaviors were more common in the SPD group compared to the ASC group.Brain-imaging studies have also investigated the differences between SPD and typically developing children and children with ASC, finding white matter abnormalities in children with SPD compared to typically developing children and differences in white matter tracts between ASC and SPD.This more recent study further 
found that both groups showed less connectivity in sensory related tracts but that only the ASC group showed difficulties in socioemotional-related tracts.Following these few studies, the first aim of this study was to examine the sensory similarities and differences between children with ASC and SPD using the Sensory Processing Scales Inventory.In addition to sensory symptoms, children with ASC display social and communication difficulties alongside unusual repetitive behavior and restricted interests.Sensory symptoms are likely associated with core features of ASC and may underlie some of the deficits associated with the condition e.g. repetitive behaviors, as well as some of the strengths e.g. attention to detail.The way in which sensory stimuli from the world around us is perceived has an impact on our behavior and cognition and impairments in how sensation is processed and experienced can lead to varied and multiple problems in daily life and mental health.Therefore, a second aim of the current study was to investigate whether children with ASC can be differentiated from SPD by their cognitive styles, specifically in terms of empathy and systemizing.Empathy comprises the drive to identify another person’s emotions and thoughts, and the appropriate emotional response.Systemizing is the drive to analyze or construct rule-based systems, whether mechanical, abstract, or any other type.Studies have shown that individuals with ASC have the tendency to show a greater drive toward systemizing combined with a lower drive toward empathizing.Clinical observation of children with SPD suggests they have fewer or less severe social and communication impairments than children with ASC but to our knowledge, these cognitive styles have yet to be examined in the SPD population.Since clinical observation of children with SPD suggests they have fewer, less severe social and communication impairments than children with ASC and are not as strongly attracted to lawful domains, we predicted that SPD children would have average empathy and average systemizing profiles.We also predicted that there would be a relationship between these cognitive profile and sensory symptomatology across groups.In summary, the goals of this study were to determine if children with ASC can be distinguished from children with SPD based on a) sensory reactivity symptoms and b) cognitive styles, specifically empathy and systemizing.Improved sensory and cognitive phenotyping is an essential first step towards reducing diagnostic confusion between ASC and SPD.Data were collected on-line via two websites: www.cambridgepsychology.com for parents of a child with SPD, and www.autismresearchcentre.com for those with a child with ASC.Both portals led to identical versions of the tests.The SPD group were recruited via the Sensory Processing Disorder Foundation website.Parents could choose a convenient time to complete the on-line tests, and could log out between tests.The study had approval from the Psychology Research Ethics Committee of the University of Cambridge and the Institutional Review Board at Rocky Mountain University of Health Professions.The study included 210 participants, of whom 68 had ASC, and 79 had SPD, and 63 were typically developing children.Parents completed on-line questionnaires and information concerning their child’s diagnosis, sensory symptoms and cognitive styles, specifically empathy and systemizing.In the ASC group parents had to indicate that their child was given a diagnosis of ASC.To screen for autistic 
traits the Autism Spectrum Quotient-Child was used.Criteria for inclusion into the ASC group were an AQ of 26 and above and a diagnosis of ASC in a recognized clinic by a psychiatrist or clinical psychologist using DSM-IV criteria.The criterion for inclusion in the SPD and TD group were an AQ of 25 or below and no previous diagnosis of ASC.Children who had a comorbid diagnosis of SPD and ASD were excluded from the analysis.For the SPD group, parents indicated if their child ever received clinical evaluations suggesting SPD, or Sensory Integration Disorder.Sensory symptoms were assessed using the Sensory Processing Scale Inventory including questions concerning Sensory Over-Reactivity, Sensory Under-Reactivity, and Sensory Craving.Cognitive styles were assessed using the child version of the Empathy Quotient and the Systemizing Quotient.The child version of the AQ is a short, 50-item questionnaire measuring autistic traits, with 5 subscales.A score of 0 was assigned to the responses ‘definitely agree’ and ‘slightly agree’ and a score of 1 for ‘slightly disagree’ and ‘definitely disagree’.Total scores could therefore range from 0 to 50, with higher scores indicating more autistic traits.Results from the AQ have been replicated cross culturally and across different ages.The AQ also shows good test-retest reliability.The Sensory Processing Scale) has two parts: an inventory report-measure, completed by parents, caregivers or self, and a performance measure or assessment, administered by an examiner.Only the inventory was administered in this study, specifically the subscales regarding Sensory Under-Reactivity, Sensory Over-Reactivity, and Sensory Craving.The SP Scale reflects sensory reactivity including over-reactivity, under-reactivity and sensory craving across all sensory domains.Previous research on the Sensory Over-Reactivity subscale showed high internal consistency reliability within each domain).In addition, the SOR inventory has strong discriminant validity, distinguishing between individuals with and without SOR within each domain and strong concurrent validity with the sensory sensitivity and sensory avoiding dimensions of the Sensory Profile.Cronbach’s alpha levels ranged from 0.69 to 1.00 and intraclass correlation coefficients ranged from 0.82 to 1.00.All have been shown to differentiate between individuals with and without sensory problems.Each item is scored as a ‘1’ if the parent ticks yes on the item.The number of questions on each Inventory varies by subscale: SOR = 76 items, SUR = 30 items, SC = 37 items.Total scores are then computed for each subtype, with higher scores reflect a greater number of atypical sensory symptomatology.The child version of the EQ and SQ were used."The 27 EQ items measure how easily the child can pick up on other people's feelings and how strongly they are affected by other people's feelings.The 28 SQ items assess the child’s interest in systems.Together these are assessed on a single 55-item questionnaire, the child EQ-SQ.The parent is asked to indicate how strongly they agree with each statement as a description of their child.Response options are the following: ‘definitely agree’, ‘slightly agree’, ‘slightly disagree’, or ‘definitely disagree’.Both agree responses are scored as 0, and both disagree responses are a 1, with some items reverse-scored and the items summed by subscale.Higher scores indicate a greater empathizing or systemizing drive.The test-retest reliability of this scale is high.The statistical software package SPSS 20 was used 
to analyze the data. To correct for multiple comparisons, Bonferroni corrections were used. There was no significant difference in age between groups. The ASD group had significantly higher scores on the AQ compared to the SPD and TD groups. The SPD group had a significantly lower AQ score compared to the ASD group and a significantly higher AQ score than the TD group. To analyze sensory symptoms, a MANOVA was performed with group as a fixed factor and all sensory subscales as dependent variables. Using Pillai's trace, there was a significant effect of group on the amount of atypical sensory behaviors (F = 9.0, p < 0.0001). Post hoc pairwise comparisons were next conducted to explore group-level differences. For the SUR subscale, the ASD group scored higher than the SPD group, which in turn scored higher than the TD group. On the SOR subscales, the ASC and SPD groups did not significantly differ from one another, but both scored significantly higher than the TD group. Both children with ASC and SPD also showed higher scores on Sensation Craving compared to TD children, but did not differ from each other. Regarding cognitive profiles, the EQ and SQ scores for typically developing children were in the average range as reported by Auyeung et al. A MANCOVA was conducted with group as a fixed factor and EQ and SQ as the dependent variables. Sex was entered as a covariate, since there is a reported sex difference in EQ scores for typically developing children. Using Pillai's trace, there was a significant effect of group (F = 31.3, p < 0.0001) and of sex on EQ and SQ scores. Tests of between-subject effects showed that the groups differed on the EQ and SQ scores. Sex had an effect on EQ scores, with girls scoring higher than boys, but not on SQ scores. Children with ASC showed lower EQ scores compared to children with SPD as well as TD children. Children with SPD scored marginally lower than TD children on the EQ. Children with ASC scored higher than both other groups on the SQ, with children with SPD and typically developing children showing similar mean scores. Correlations were calculated between EQ, SQ, and all sensory scales combined. Across groups, the EQ score was negatively correlated with the Sensory Total score, as well as within each group independently. In other words, individuals with higher empathy scores had fewer sensory symptoms. The SQ was not correlated with total sensory symptoms in any group nor across the groups. The AQ was correlated with total sensory symptoms in the SPD and TD groups, suggesting that greater autistic symptomatology is associated with more atypical sensory symptoms, but this did not hold in the ASC group. Sensory reactivity is a new DSM-5 criterion for Autism Spectrum Conditions. However, children who do not have ASC can also suffer from sensory reactivity symptoms: children with the suggested diagnostic term of Sensory Processing Disorder. The current study tested whether there are sensory and/or cognitive features that distinguish ASC from SPD. Children with ASC or SPD showed more sensory symptoms than typically developing children, as predicted. The ASC group was the most affected group overall, showing significantly greater symptoms of sensory under-reactivity than both the TD and SPD groups, although they did not differ from the SPD group on sensation craving or sensory over-reactivity symptomatology. Thus, given the overlap in sensory symptoms in ASC and SPD, sensory symptoms alone are not adequate to differentiate these two groups. In terms of cognitive style, children with ASC had difficulty in empathy
alongside good systemizing skills, versus children with SPD, who had lower systemizing skills but greater empathy compared to children with ASC.Typical developing children had no heightened sensory symptomatology and average levels of parent-reported empathy and systemizing.Children with SPD also had average levels of empathy and systemizing.This suggests that empathy and systemizing are useful cognitive dimensions for differentiating ASC from SPD and has implications for improving diagnostic accuracy, especially for the new DSM-5.Taken together children with ASC showed the greatest sensory symptomatology and lowest empathy.Children with ASC showed lower parent-reported empathy compared to children with SPD.In children with ASC the underlying disability to empathize may explain the social and communication difficulties.Given that in our current study individuals with higher empathy scores had fewer sensory symptoms, difficulties of understanding others might also impact the amount of sensory symptoms in children with ASC or vice versa.Children with SPD, who have not been characterized on empathy beforehand, had slightly lower empathy scores than typically developing children.In corroboration, while children with SPD in the current study scored below the cut-off on the AQ, they also had significantly higher scores compared to TD children.Indeed, therapists and parents have reported that children with SPD often have difficulty in the behavioral and emotional domains, particularly with regard to emotion regulation.When barraged by sensations that others would not notice such as a loud shopping mall, a child who is over-reactive to sensory stimuli might for example feel overloaded and exhibit dysregulated behavior.By the time a child with SPD enters school, relationships may be compromised and they may present with emotional and behavioral problems.Consequently, empathy may be impaired in SPD because these challenges make it difficult to respond appropriately to another person’s emotions.Future studies are needed to test if and how sensory reactivity problems affect social cognition and behavior or might represent a risk factor regarding establishing healthy foundation for emotional development, early relationships, and emotional maturity.Furthermore, the total numbers of sensory symptoms and social features were associated with one another across groups, specifically with greater sensory symptoms predicting lower parent-reported empathy.Even though children with SPD had augmented empathy scores compared to children with ASC, their scores were lower than the TD group.The association between sensory perception and social cognition is long known.In an early stage of development, infants seek physical contact and learn via their senses to form an attachment to their caregiver.Bowlby argued that through attachment, the infant develops mental representations that become templates for future relationships.However, attachment models do not take into consideration the dysregulating effect of atypical sensory reactivity.An effective and appropriate reaction to sensory stimulation, such as speech sounds, visual facial cues, and social touch, is especially important in order to attend to and decipher social cues and respond flexibly.Future studies should investigate what effect sensory reactivity issues have on social skills, attachment and later development.Limitations of this study include that it was a self-selected sample and data was collected online.Using an online survey allowed us to collect data 
from a larger group of participants, but lacks some control over variables and a laboratory study including an IQ measure is needed to test if the current findings can be duplicated.However, online data collection does confer the advantage of increasing diversity and minimizing experimental bias, and numerous studies have shown that online survey methodology and data are at least equivocal or even better in quality than performing the study in a traditional laboratory setting).In addition, it would be important to test if children with SPD can be differentiated from children with other conditions such as Attention Deficit Hyperactivity Disorder, OCD, or anxiety.Here, children with additional conditions were excluded from this study.Future research is needed to distinguish sensory symptoms in children with SPD from other childhood disorders such as ADHD.Recent work suggests that sensory symptoms differ in children with SPD and ADHD.The current findings are also worth further exploration using behavioral and performance-based tasks, which measure sensory reactivity and empathy.It would also be interesting to compare children with ASC, SPD and TD children on sensory and social tasks using neuroimaging.A recent DTI brain imaging study showed that both children with ASC and SPD had decreased connectivity relative to TD children in white matter tracts involved in sensory perception.However, only the ASD group showed decreased connectivity compared to TD children in tracts related to social processing.This suggest that even though sensory reactivity is affected in both groups on a behavioral and biological basis, social processing likely seems to be intact at least on a biological basis for children with SPD.This has direct implications for different treatment recommendations for children with ASC and SPD.This study sheds light on the similarities and differences between children with ASC and SPD, which could be helpful for distinguishing these two conditions.Taken together, our findings show that children with ASC are most affected by sensory symptoms, and show lowest empathy and highest systemizing scores.Scores for children with SPD fall in between those for children with ASC and typical developing children on these measures.Future longitudinal studies are needed to explore if children with ASC and SPD both start with the same amount or type of sensory symptoms in early childhood and whether there is a difference in the type of sensory symptoms they display.Children with ASC also have the greatest difficulties in empathy, which could lead to more severe overall symptoms.Children with SPD on the other hand might have an intact drive to empathize, but sensory issues might stop them from using these skills as much as typical developing children.Gathering as much information as possible by measuring cognitive profiles as well as sensory symptoms allows a broader characterization of each child.Identifying greatest areas of challenges, being low empathy or heightened sensory reactivity, can guide treatment.Future work is needed to validate these results using performance tests and to understand the neural basis of the similarities and differences between these two related conditions.
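The questionnaire scoring rules described in the Methods (agree responses scored 0, disagree responses scored 1, selected items reverse-keyed, and item scores summed per scale) can be expressed as a short scoring routine. The sketch below is illustrative only: the response labels follow the text, but the reverse_items set and the example answers are hypothetical and do not correspond to the published AQ-Child or EQ-C/SQ-C item keys.

```python
# Minimal sketch of parent-report questionnaire scoring (AQ / EQ-C / SQ-C style).
# Agree responses score 0, disagree responses score 1; reverse-keyed items flip this.
# Which items are reverse-keyed is an assumption here, not the published key.

AGREE = {"definitely agree", "slightly agree"}
DISAGREE = {"slightly disagree", "definitely disagree"}

def score_item(response: str, reverse: bool = False) -> int:
    if response in AGREE:
        raw = 0
    elif response in DISAGREE:
        raw = 1
    else:
        raise ValueError(f"unrecognised response: {response!r}")
    return 1 - raw if reverse else raw

def score_scale(responses: dict[int, str], reverse_items: set[int]) -> int:
    """Sum item scores; higher totals indicate a stronger drive on that scale."""
    return sum(score_item(r, item in reverse_items) for item, r in responses.items())

# Hypothetical example: three items, with item 2 assumed to be reverse-keyed.
answers = {1: "definitely agree", 2: "slightly disagree", 3: "definitely disagree"}
print(score_scale(answers, reverse_items={2}))  # -> 0 + 0 + 1 = 1
```

Totals computed this way per subscale are what the group comparisons (MANOVA/MANCOVA) and the correlations with the Sensory Total score operate on.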
Although the DSM-5 added sensory symptoms as a criterion for ASC, there is a group of children who display sensory symptoms but do not have ASC: children with sensory processing disorder (SPD). To be able to differentiate these two disorders, our aim was to evaluate whether children with ASC show more sensory symptomatology and/or different cognitive styles in empathy and systemizing compared to children with SPD and typically developing (TD) children. The study included 210 participants: 68 children with ASC, 79 with SPD and 63 TD children. The Sensory Processing Scale Inventory was used to measure sensory symptoms, the Autism Spectrum Quotient (AQ) to measure autistic traits, and the Empathy Quotient (EQ) and Systemizing Quotient (SQ) to measure cognitive styles. Across groups, greater sensory symptomatology was associated with lower empathy. Further, both the ASC and SPD groups showed more sensory symptoms than TD children. Children with ASC and SPD only differed on sensory under-reactivity. The ASD group did, however, show lower empathy and higher systemizing scores than the SPD group. Together, this suggests that sensory symptoms alone may not be adequate to differentiate children with ASC and SPD, but that cognitive style measures could be used for differential diagnosis.
278
An efficient method for measuring plasma volume using indocyanine green dye
Plasma is the water component of blood in which nutrients, hormones, and other biomarkers circulate.Plasma volume is the total amount of plasma in blood.It is an important biomarker in pregnancy, in chronic heart failure patients, as well as in situations where blood transfusion is critical.PV changes to a small extent from temperature changes and exercise , and some evidence has shown changes in PV at different points in the menstrual cycle .In pregnancy, PV increases on average by 50% from non-pregnant values, causing hemodilution .Abnormal PV expansion has been associated with adverse pregnancy outcomes .Because proteins and biomarkers of health are transported in the plasma, the amount of PV could impact how biomarker concentrations are interpreted.Despite the substantial value of PV assessment, it is not routinely measured in clinical settings or during pregnancy, partly because the method is cumbersome and time consuming.PV could be a useful diagnostic tool if it could be safely and easily measured.The method recommended by the International Committee for Standardization in Hematology for measuring PV uses radioiodine-labeled human serum albumin .131I–HSA has also been used for PV measurement .It is not ethical to use radioactive tracers in some populations including children and pregnant women.As well, it is challenging and expensive to perform measurements with such tracers.Other methods using dyes that bind to albumin, and therefore distribute throughout the vascular space with albumin, have been developed.The two main dyes used are Evans blue and indocyanine green , each of which have been validated against 125I–HSA in humans.Evans blue dye is no longer available for purchase in the US .ICG is currently produced and sold and is advantageous because it is rapidly cleared from circulation.ICG also has a short circulatory half-life of 2.5–3 min .These properties allow for quick assessment and repeated measurements of plasma ICG concentration and PV within a day , even as early as 30 min following the first injection .In clinical settings, ICG has been used extensively in ophthalmologic imaging to examine the eye structure .Most PV measurements specify that the dye be injected in one arm and blood collected on the contralateral arm to avoid possible contamination of the tracer with post-injection blood .This has been followed by others .The inconvenience and challenge of inserting an intravenous catheter line in both arms can be overcome by replacing all or parts of the blood collection system following the dye injection, before post dye-injection blood is collected .The objective of this study was to develop an efficient and less-invasive ICG method for measuring PV in women of reproductive age by combining best practices from the literature and further reducing 1) the number of blood samples needed and 2) the amount of plasma needed per subject.We conducted a single visit study at Penn State to develop and test an ICG method for measuring PV in non-pregnant women of reproductive age.Currently, PV measurement is not one of the Food and Drug Administration approved uses of ICG but the method uses ICG in the same manner as FDA approved uses such as cardiac output, liver function and hepatic blood flow.As a result, our work was granted an investigational new drug exemption from the FDA.ICG has been used safely in pregnancy, and does not appear to cross the placenta but is not currently approved for use in pregnant women in the US.Thus, our protocol in women of reproductive age includes 
a pregnancy test the same day of the measurement.Visits were conducted in the Clinical Research Center, a service unit in The Pennsylvania State University’s Clinical and Translational Science Institute, University Park, PA.Participants were healthy women of reproductive age that were non-pregnant, non-breastfeeding, and not using hormonal birth control methods.Eligibility also included normal blood pressure because we wanted healthy subjects, and because there are previous reports of blood pressure dropping after ICG use .The study visit was scheduled to occur within the early follicular phase of the menstrual cycle, aiming for cycle day 2 .Two days before the study visit, participants were asked to drink plenty of water and to abstain from any form of alcohol because we wanted each person to be well hydrated at the time of measurement.All participants fasted for 12 h before the study visit, which took place in the morning between 7 a.m. and 10 a.m.This component of the protocol was needed to standardize methods for biomarker measurement, but it also served to standardize the timing of PV measurement.At the visit, women were asked a series of questions about their health history and lifestyle.Blood pressure, height, and weight were measured using standard methods.Weight was needed to calculate the amount of ICG solution to inject.Participants provided a fresh urine sample for a human chorionic gonadotropin-based pregnancy test.We measured body fat percentage using the Tanita InnerScan Body Composition Monitor.After these measurements, the specific protocol for ICG measurement of PV began.Women rested for 15 min in a supine position in a hospital-style room with a small heating pad over the inside of the arm selected for IV insertion.At the end of 15 min, a temporary tourniquet was applied to aid in identifying an antecubital vein and an IV was inserted by a nurse.To standardize the point of entry, we only used the antecubital vein and did not consider other locations if the IV was unsuccessful in the arm.After IV insertion, blood was drawn into a 6 mL vacutainer blood collection tube coated with K2 ethylenediaminetetraacetic acid.Of note, a 4 mL tube would be sufficient for measuring PV, but we took extra plasma to aid in method development.We also collected blood into tubes for serum and whole blood at this time for measurement of other biomarkers.After ICG injection, 5 more EDTA tubes were collected as described below.We used 3 mL tubes for method development, but 2 mL would be more than enough for our final method.EDTA tubes were gently inverted 10 times immediately after the tube was filled.Tubes were centrifuged at 3200 rpm for 15 min to separate plasma from blood cells.The plasma samples were transported to the laboratory to determine the ICG concentrations.The PV was calculated from the laboratory-measured ICG values.PV determination was completed within 2 h of blood collection.ICG doses up to 5.0 mg/kg body weight have been reported to be safe in humans .This amount has been used in pregnant women without any adverse effects .Haneda et al. 
used between 5.0 and 10.0 mg in children and 10.0 to 15.0 mg ICG in adults. Other researchers have used 25.0 mg to study ICG clearance by the liver. The most commonly used doses for injection in the determination of PV and ICG plasma disappearance rate studies are 0.25 mg/kg body weight or 0.50 mg/kg body weight. Plasma disappearance rates of ICG using 0.25 mg/kg body weight are comparable to 0.50 mg/kg body weight. In this study, we chose the lower dose of 0.25 mg/kg because it is expected that lower doses will clear faster from the body than higher ones, and it uses less ICG overall. For this study, we used an ICG kit that contained a 10 mL ampule of sterile water and 25 mg of ICG powder in a vial. The study nurse added the water to the ICG vial to create a concentration of 2.5 mg/mL, immediately before injection. The solution must be used within 6 h of mixing, so we waited until the IV was successfully placed for each participant before mixing the ICG and water. A 10 mL syringe was rinsed with the ICG solution and the calculated volume (dose of 0.25 mg/kg × body weight ÷ 2.5 mg/mL) was drawn for injection. The 10 mL syringe was weighed with the full content for injection, then reweighed after the injection to determine the exact weight of the ICG injected, using a high-precision scale. The weight of the ICG injection was used later for PV determination. A bolus dose of ICG was injected evenly over 5 s into the antecubital vein through an IV line with a 3-way stopcock attached, and flushed with 10 mL saline. The 3-way stopcock was replaced after the dye injection to prevent contamination of the post-injection blood samples with residual ICG. A timer was started at the beginning of ICG injection. Starting at 2 min, blood samples were collected into 3 mL EDTA vacutainer blood collection tubes every 45 s, up to 5 min (at 2:00, 2:45, 3:30, 4:15, and 5:00). The time in seconds was recorded at each draw. The method is still successful if the draws are not evenly spaced at each interval. Blood was drawn into 2 mL syringes before each draw and pushed back immediately after the draw to keep the IV line clear. The blood collection tubes and syringes were purchased from BD. Blood samples were processed the same way as the 6 mL EDTA tubes described above, and were used for the PV determination described below. This is an overview of the full process:
1. Collect urine sample for pregnancy test; weigh participant; take blood pressure.
2. Have participant rest in a supine position for 15 min, with a small heating pad over the antecubital vein.
3. Trained phlebotomist inserts an intravenous catheter with a 3-way stopcock.
4. Participant rests supine for 5 min.
5. Collect pre-injection blood sample – minimum 4 mL blood in EDTA vacutainer tube needed for plasma. Collect additional blood here if other biomarkers will be measured.
6. Start timer and inject ICG through the IV in a bolus dose over 5 s.
7. Flush with 10 mL saline solution.
8. Remove and replace the 3-way stopcock.
9. Attach a 2 mL syringe to the 3-way stopcock.
10. Before each blood draw, draw blood into the syringe and replace this after each tube is drawn.
11. At exactly 2 min from the start of ICG injection, draw 2 mL blood using an EDTA vacutainer tube; continue with 4 more blood draws every 45 s. Use a larger blood tube here if other biomarkers will be measured.
12. Remove the IV and let the participant get up when comfortable; take blood pressure.
13. Process blood tubes per standard centrifugation methods; aliquot plasma.
14. Set up the standard curve and 96-well plate per the details below.
15. Measure ICG absorbance in a standard plate reader.
16. Plot the standard curve and the decay curve for plasma ICG concentration.
17. Extrapolate the decay curve for plasma ICG concentration to estimate plasma volume.
PV was measured by applying the indicator-dilution principle. Plasma obtained from the participant before injection and the five plasma samples obtained after the injection of ICG were used to determine PV for each subject. Calibration curves were prepared by diluting the initial concentration of ICG with MilliQ water to concentrations of 5 mg/L–30 mg/L. Solutions of 200 μL of ICG standard and 200 μL of the participant's blank plasma were mixed together to obtain final standard concentrations of 2.5 mg/L–15 mg/L. We chose this range so that absorbance readings from different subjects and across the clearance period could be captured. The linear relationship between absorbance and concentration of ICG in plasma follows the Beer-Lambert law up to 15 mg/L. All samples, including standard solutions, blank samples and post-injection plasma samples, were vortexed at high speed for 10 s for even mixing of solutes, and 100 μL of each was pipetted into 96-well plates, in triplicate. Absorbance was read on an Epoch plate reader powered by Gen5™ Software, set to a wavelength of 805 nm. Triplicate readings were taken for each blank, standard, and sample. The mean of the 3 readings was calculated and used as the result. We constructed a standard curve of absorbance against standard concentrations and used it to estimate the concentrations of the serially collected plasma samples, obtained from t = 2–5 min, for each participant. The concentrations of ICG in the serial plasma samples were transformed into natural logs and plotted against the time they were collected, from t = 2–5 min post-injection. Traditionally, plasma volume estimation with ICG is made by back-extrapolation to time t = 0 min. However, this method has been shown in some studies to underestimate PV, partly due to incomplete ICG mixing at this time. To overcome this problem, other researchers resorted to using a tourniquet to create a state of reactive hyperemia to speed up intravascular mixing and distribution of ICG. Another solution, shown by Polidori and Rowley, is to use backward-extrapolation to time t = 1 min, which produces consistent and more accurate PV than back-extrapolating to t = 0. In this paper, we back-extrapolated the ICG concentration to both t = 0 and t = 1 min so that the data are comparable to either approach for future work. We also showed back-extrapolation graphs for both timepoints. The PV for the participant was thus calculated as PV = D/C0, where D = the mass of ICG injected (mg) and C0 = the plasma concentration of ICG back-extrapolated to t = 0 min (back-transformed from the natural log). This procedure was repeated for each participant with a new calibration curve constructed from the participant's plasma. PV was also calculated relative to body weight and body surface area (BSA). We also examined the relationship between PV and BSA, because a strong association between the two has been reported. Overall, ~1.5 mL of pre-injection plasma and 0.2 mL of each post-injection plasma sample were sufficient to measure PV. This included five standard concentrations, which were sufficient for the estimation of PV for all subjects. The total of ~2.5 mL of plasma needed per participant was much lower than the amount used in other studies. While this is a small amount of plasma, we recommend a total of 14 mL of blood
collection to allow for repeat testing if needed.As well, although we used 5 post-injection samples, we found that 3 samples would be sufficient.The laboratory work for each participant measurement of PV took approximately two hours to complete, after blood collection.A total of nine women enrolled and were included in the analyses.Eight women self-identified as white and one as African-American.Participants were college educated or were undergraduate students.Two women were married; all women were nulliparous except one.The mean ± SD age of participants was 25.0 ± 4.5 years, BMI was 23.5 ± 2.9 kg/m2, and total body fat was 28.6 ± 5.0%.At the beginning of the visit, mean systolic blood pressure was 106 ± 8 mmHg and mean diastolic blood pressure was 71 ± 4 mmHg.Table 3 presents the PV of each participant sorted from low to high.The mean ± SD of the correlation coefficient for the standard curve was 0.989 ± 0.023; 6 out of 9 were >0.99.The correlation coefficient for the decay curve was 0.991 ± 0.013; 7 out of 9 were >0.99.The mean ICG elimination rate constant was 0.25 ± 0.06 /min and the clearance of ICG was 402 ± 119 mL/min.The mean coefficient of variation for PV across the nine participants was 1.7%.The mean PV for t = 0 was 1608 ± 394 mL.PV by body size was 25 ± 5 mL/kg body weight and 941 ± 193 mL/m2 body surface area.The relationships between PV and anthropometric measures are presented in Fig. 4.The correlation was particularly strong for BSA and plasma volume.The correlation coefficients were high for both the calibration curves and ICG decay curves.A sample standard curve and ICG decay curve from the study are shown in Figs. 2 and 3, respectively.The dye had a circulatory half-life of 2.9 ± 0.9 min and the ICG-plasma disappearance rate was 25.4% per minute.No participant reported any adverse events for the duration of the study.This study developed and field tested an efficient method for measuring PV using ICG dye among healthy, non-pregnant women.We have shown that the measurement of PV can take less than 2 h to accomplish compared to commonly used methods that can take several hours to obtain results .This makes the method practical for research, however challenges for use in clinical settings remain.Some have used the non-invasive pulse dye densitometry to avoid the need for blood draws when using ICG, which deserves further consideration for application to PV estimation.This densitometry method has not yet been approved by the FDA for use in the US.Similarly, BVA-100, a semi-automated system for blood volume analysis reduces the time required to measure blood volume using 131I-labeled HSA as tracer.This method has been approved by the FDA but because it uses a radioactive iodine isotope, there are concerns using it in some populations like pregnant women and young children.We avoided using radioactive isotopes because our long-term goal is to have a method that works for maternal and child health research.Another concern in clinical settings is overestimation of PV due to the escape of albumin-bound ICG into the interstitial space.This has been documented in patients with capillary leaks .In healthy patients, losses are minimal and do not appear to affect PV estimates .By adjusting methods and concentrations, we found that a small quantity of plasma can be used to measure PV – needing only 14 mL of blood to be drawn from each person.Further, our method reduced the number of post ICG-injection samples needed to 5, which is fewer than other methods that require 7–10 
post-injection samples .PV can be estimated with as few as 3 post-injection plasma samples, but having 5 samples improves the accuracy of estimates.Though we collected blood every 45 s, other time intervals could be used.The most important factor is that timing of blood collection should be precisely recorded.Altogether, we have reduced the total amount of blood needed, which is helpful in all populations but can be important in certain clinical cases.Currently, no universally accepted standard values for comparing PV across age and gender exist.However, our measured values are comparable to estimates from Pearson et al. who estimated PV among more than 400 males and females from different equations .Although we did not compare our results with the method recommended by ICSH or other common methods for measuring PV, our estimates are comparable to what is commonly reported in literature for the age group examined.Furthermore, previous studies have shown that PV estimates from ICG were comparable to the ICSH recommended method – 125I-HSA and/or that of Evans blue dye and our goal was not to re-validate the ICG measurement.Taken together, the short circulatory half-life and high plasma disappearance rate give further support of ICG rapid clearance from the plasma, and safety when used in humans.This also makes it an ideal tracer for repeated use in plasma volume determinations.The half-life reported in this study was consistent with previous estimates .125I–HSA, has a circulatory half-life of 60 days .This makes repeated use of the method unacceptable because of the possibility of accumulation in the body.Over the years, interest in and use of ICG has been increasing particularly in clinical settings.A review in 2012 showed that in 1970 there were 38 studies on the use of ICG in PubMed, compared to 397 studies in 2010 .As more evidence becomes available in PV measurements, ICG use will become more common.In conclusion, ICG is a safe, efficient method for the measurement of PV in adults.As reported here, we successfully piloted the method in nine participants.Since then, we have used the same method in a longitudinal study of 35 women with 3 visits across the menstrual cycle, resulting in over 100 PV measurements.In our method development, we have further improved the method by reducing the amount of time and the volume of plasma needed to measure PV.PV can be estimated within 2 h using only 5 post-injection blood draws and only ˜2.5 mL of plasma per participant.ICG should be a recommended method for PV measurement in future research.The Pennsylvania State University, College of Health and Human Development.Authors declared no conflict of interest exist.The protocol was approved by The Office for Research Protections at The Pennsylvania State University and conducted in line with the Declaration of Helsinki.All participants provided written informed consent before enrolling into the study.
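The calculation chain described above (standard curve, natural-log transformation of the post-injection concentrations, back-extrapolation, and PV = D/C0) can be scripted in a few lines. The sketch below is a minimal illustration assuming a linear standard curve and an ordinary least-squares fit of ln(concentration) against time; all numbers in the example are invented and are not data from this study.

```python
# Hedged sketch of the ICG plasma-volume calculation: calibration, log-linear
# decay fit, back-extrapolation to t = 0 (or t = 1) min, and PV = dose / C0.
# Example values are invented; they are not measurements from the study.
import numpy as np

def calibrate(absorbance, concentration):
    """Fit absorbance = slope * concentration + intercept (Beer-Lambert range)."""
    slope, intercept = np.polyfit(concentration, absorbance, 1)
    return slope, intercept

def to_concentration(absorbance, slope, intercept):
    return (np.asarray(absorbance) - intercept) / slope

def plasma_volume(times_min, conc_mg_per_L, dose_mg, t_extrap=0.0):
    """Back-extrapolate ln(C) vs time to t_extrap and return PV (L), k, half-life."""
    k_neg, ln_c0 = np.polyfit(times_min, np.log(conc_mg_per_L), 1)
    c_at_t = np.exp(ln_c0 + k_neg * t_extrap)      # mg/L at the chosen time
    elimination_rate = -k_neg                      # per minute
    half_life = np.log(2) / elimination_rate       # minutes
    return dose_mg / c_at_t, elimination_rate, half_life

# Invented example: 5 standards, 5 post-injection samples, 15 mg ICG injected.
std_conc = np.array([2.5, 5.0, 7.5, 10.0, 15.0])           # mg/L in plasma
std_abs = 0.05 * std_conc + 0.02                            # idealised readings
slope, intercept = calibrate(std_abs, std_conc)

t = np.array([2.0, 2.75, 3.5, 4.25, 5.0])                   # min post-injection
sample_abs = np.array([0.38, 0.32, 0.27, 0.23, 0.20])
conc = to_concentration(sample_abs, slope, intercept)

pv_L, k, t_half = plasma_volume(t, conc, dose_mg=15.0, t_extrap=0.0)
print(f"PV = {pv_L * 1000:.0f} mL, k = {k:.2f}/min, half-life = {t_half:.1f} min")
```

Setting t_extrap=1.0 gives the back-extrapolation to t = 1 min discussed above; dividing the resulting PV by body weight or BSA yields the normalized values reported per kilogram and per square metre.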
Plasma volume (PV) can be an important marker of health status and may affect the interpretation of plasma biomarkers, but is rarely measured due to the complexity and time required. Indocyanine green (ICG) is a water-soluble tricarbocyanine dye with a circulatory half-life of 2–3 min, allowing for quick clearance and repeated use. It is used extensively in medical diagnostic tests including ophthalmologic imaging, liver function, and cardiac output, particularly in critical care. ICG has been validated for measuring PV in humans, however previous work has provided minimal published details or has focused on a single aspect of the method. We aimed to develop a detailed, optimal protocol for the use of ICG to measure PV in women of reproductive age. We combined best practices from other studies and optimized the protocol for efficiency. This method reduces the time from blood collection to PV determination to ˜2 h and the amount of plasma required to estimate PV to 2.5 mL (1.5 mL before ICG injection and 1.0 mL post-injection). Participant inconvenience is reduced by inserting an intravenous (IV) catheter in only one arm, not both arms. Five post-injection plasma samples (2–5 min after ICG bolus) are enough to accurately develop the decay curve for plasma ICG concentration and estimate PV by extrapolation.
279
Zinc toxicity stimulates microbial production of extracellular polymers in a copiotrophic acid soil
Extracellular polymeric substances are complex high-molecular-weight mixtures of polymers synthesized by microbial cells."EPS play cardinal roles in nutrient acquisition, stabilization and protection of biofilm structure, microbial adhesion to the habitat matrix, and impart resistance to toxicity.Indeed, stress seems to be a common factor underlying many of the triggers to production of EPS.These stressful triggers can include physical shear, bacteriophage abundance, organic contaminants, biocides and antibiotics.In the past few decades, the majority of reports on the production and roles of microbial EPS focus on aqueous environments, such as marine and wastewater treatment systems.Exogenous organic substrate and heavy metal ion concentration are crucial factors influencing microbial EPS production and biofilm formation in these systems.For instance, silver ions and nanoparticles affect the composition of phototrophic biofilm in operated bioreactors, and copper and iron concentrations affect the profile of phenolic compounds exuded by marine microalgae.Zinc is a heavy metal of particular concern.Zn2+ can be highly damaging for the environment and organisms exposed to it, and is routinely discharged during anthropogenic activity in the mining, chemical, pulp and paper industries.A recent study of wastewater treatment systems investigated the biological complexation of Zn2+ by EPS and found that stretching vibration of OH, NH groups and CO bonds were implicated.Elsewhere, in agricultural technology, the Zn-tolerant plant-pathogen was shown to produce large amounts of EPS-polysaccharide in response to additions of Zn2+ to flow cells.Indeed there is a growing interest in the responses and roles of microbial exudates in porous media, especially soils, for the purposes of bioengineering and agronomy but investigations of native and community-wide EPS responses in soils directly are rare.While easily accessible/labile C is now understood to be a pre-requisite for substantial production of EPS from soil biota, the influence of any heavy metal contamination on proteinaceous and polysaccharide exudate production by soil native microbial populations in situ has not yet been reported.Besides the knowledge-gap in soil systems, the practical significance of EPS in other environments as a tolerance mechanism against heavy metal contamination suggests there may be further un-explored value in the understanding of soil EPS dynamics such as for more efficient bioremediation of contaminated soils.EPS is also thought to be vital for restoring a range of other important soil ecological and agronomic functions that are related to altered hydraulic dynamics and soil structure.Recently proposed methods to measure EPS in soil were applied by Redmile-Gordon et al.The authors used 15N isotope probing -and measures of soil ATP- to demonstrate that extraction with cation exchange resin could be used to contrast changes in total polysaccharide and protein fractions exuded by the native soil microbial biomass.This approach builds upon one of the most frequently used ways to extract EPS in saturated aqueous systems.Through the application of this method, biodiesel co-product was subsequently shown to be an efficient and sustainable choice of substrate to support EPS production in soil.BCP was selected as a C substrate to support microbial metabolism owing to a global and pressing need to reconcile issues of food security and bioenergy through integrated synergies.The growing range of uses for BCP in soils ranges from 
the capacity to reduce direct N2O emissions preventing NO3− contamination of groundwater and supporting production of EPS via the native soil microbial biomass.Here, we present a laboratory experiment to investigate the responses of microbial EPS production to BCP in a soil contaminated with Zn2+.The objective of this study was to quantitatively determine how polysaccharide and protein exudate fractions of a heterotrophic soil microbial biomass was affected by Zn stress.A further objective was to categorise these responses broadly as either belonging to the highly soluble fraction or the relatively insoluble but CER extractable fraction: EPS.Samples of sandy soil were collected from the surface horizon of a permanent grassland area adjoining plots of the ‘Market Garden Experiment’ at Rothamsted Experimental Farm, Husborne Crawley, Bedfordshire, UK.The soil contained 16.86 mg g−1 organic carbon, 1.55 mg g−1 total nitrogen and a pH of 5.95.Twelve moist portions of soil were placed in glass funnels.These were arranged randomly to compare four treatments with three replicates of each.The treatments compared were: glycerol addition, biodiesel co-product addition, BCP plus ZnCl2 addition, and no C addition.The Glycerol-C and BCP-C were provided at the start of the experiment at rates of 20 mg C g−1 soil.To ensure that the growth of native microbes and EPS throughout the soil were not limited by nutrient availability, ammonium nitrate and monoammonium phosphate were added to all soils at concentrations of 1.50 mg N g−1 soil and 0.35 mg P g−1 soil, respectively, other nutrients were provided at a concentration of 0.10 mg g−1 soil.After 24 h of C and nutrient addition, 20 mL of 0.01 M CaCl2 was applied to the soil surface in each funnel.This step was repeated each day thereafter to simultaneously re-moisten the soil, remove excess substrate C from soil pores, and redistribute solutes as would occur in a more natural system exposed to weather in an open environment.Dilute CaCl2 is commonly used in preference to deionized water in soil laboratory studies as a surrogate for rainwater owing to osmotic similarity.A contrasting three portions of 0.01 M CaCl2 were spiked with ZnCl2 to deliver 300 μg Zn2+ g−1 soil.These were added to three replicates of the BCP-amended soils each day and allowed to drain freely.This daily addition of Zn2+ is similar to the difference in Zn concentration between contaminated and uncontaminated soils taken from the same site described previously by Chander and Brookes with ‘uncontaminated’ soil yielding a concentration of 107 μg Zn g−1 soil by digestion HCl:HNO3).All treatments were incubated in the dark at 25 °C for 7 days.While 10 days has previously been given for development of EPS in these conditions 7 days was chosen in the present study for analytical convenience.Importantly, it is around this time that EPS responses are likely to be detectable because EPS production is typically greatest around the transition between ‘log’ and ‘stationary’ growth phases.From our previous observations of inflection points for cumulative CO2 release curves this point appears to occur sometime between 4 and 12 days in the conditions specified above.At the end of the 7 day incubation period, excess pore-water was removed by applying a 40 cm of mercury-equivalent vacuum to the funnel-outlet and the mesocosms were destructively sampled.The SMP and EPS extraction protocols were followed as described in the open access article by Redmile-Gordon et al.Accordingly, the residual SMP fraction 
was extracted from moist subsamples placed in 50 mL polypropylene centrifuge tubes on an end-over-end shaker set to 2 cycles per second at 4 °C, using 0.01 M CaCl2 at a 1:10 soil:solution ratio. Extracts were then centrifuged at 3200×g for 30 min; the SMP solution was decanted and frozen for subsequent analyses. EPS was then extracted from the remaining pellet by re-suspending it in new tubes containing 25 mL of EPS extraction buffer. Buffer was prepared in 18 MΩ H2O to: 2 mM Na3PO4·12H2O, 4 mM NaH2PO4·H2O, 9 mM NaCl, 1 mM KCl, adjusted to pH 7 with 1 M HCl and cooled to 4 °C. Sufficient CER was prewashed twice in the above buffer and then added in an amount equal to 178 mg per mg organic carbon in the untreated soil; therefore, 7.50 g CER per 2.5 g soil sample in the present case. This was shaken at the same speed as for SMP removal, but for 2 h at 4 °C. Samples were then centrifuged at 4000×g for 30 min and the supernatant transferred into new tubes. These were frozen and stored at −20 °C prior to analysis. Total polysaccharide and uronic acids were quantified as described by DuBois et al. and Mojica et al., respectively. The extracted protein content, accounting for colorimetric interference from humified organic material in the extracts, was measured using the Lowry technique as modified for microplate format and described in the open access article by Redmile-Gordon et al., except that no dilution of extracts was required, i.e. 100 μL of EPS extract was analysed by direct comparison against the absorbance of one set of standards containing 0–100 μg Bovine Serum Albumin mL−1 EPS buffer. For the SMP extracts, the standard concentration range was identical except that it was made in a matrix of 0.01 M CaCl2. All data were normally distributed, meeting the required assumptions for a one-way ANOVA without transformation. The LSD test was subsequently applied for comparison of means at a 0.05 significance level using GenStat (an illustrative sketch of this type of analysis is given after the main text below). We found that the concentrations of both EPS-polysaccharide and EPS-protein were significantly increased by the addition of organic C, especially through unrefined BCP addition. The same trend of an increased EPS response to BCP vs. refined glycerol was seen previously by Redmile-Gordon et al., albeit in a clay-loam soil of neutral pH. The aforementioned study also found that more EPS was produced from BCP made with recycled cooking oils compared to either refined glycerol or BCP produced from virgin oilseed rape. This is an important consideration in the selection of C substrates to support microbial processes either in soils or in bioreactors, as it enhances the wider benefits of making biofuels from waste oils. In the above case, the increase in EPS production was not due to any heavy metal content in the BCP, as heavy metal concentrations in all the C substrates applied were consistently below the metal content of straw biomass harvested from a reference standard of pristine grassland. In batch culture conditions, Freitas et al.
also observed greater increases in EPS production from unrefined co-products of biodiesel when compared to pure glycerol. While the reasons for this are currently unknown, glycerol is a completely hydrophilic C source, whereas BCP also comprises salts of fatty acids and small quantities of hydrophobic fatty acid methyl esters and unreacted mono- and di-glycerides. The uses of organic materials in the remediation of Zn-contaminated acid soils or sites have been widely reported. However, BCP shows additional potential in remediation technology by augmenting the production of microbial EPS and SMP. Importantly, the production of BCP requires less energy than refining it to extract components such as high-purity glycerol, yet refining is still routinely performed in higher-tech biodiesel plants. Direct uses for BCP in remediation would negate the need for expensive biodiesel-plant machinery and thus improve the feasibility of energy security projects for community-scale biodiesel enterprises in developing countries. On the larger scale, international policy for biofuels is negatively affected by the notion that biofuels are the driver of indirect land-use change (iLUC). iLUC is a genuine problem threatening dwindling wildlands, especially in developing countries. While the logic behind resting the blame for iLUC with biofuels is questionable, the use of BCP as a soil improver in marginal areas could decrease land-use pressure on already high-functioning soils, which would be a quantifiable reversal of iLUC. The use of BCP to support microbial growth and EPS production in marginal soils is therefore of particular interest for sustainable intensification. Evidence pointing towards EPS as a microbial tolerance mechanism against Zn stress is more established in non-soil disciplines such as rivers and clean, simplified systems, with visualisation techniques in soils being almost impossible due to the plethora of densely packed opaque minerals and decaying organic materials. Extraction approaches remain highly challenging due to the range of potentially interfering organics typical of most soils. However, in the present study we show that the total EPS-polysaccharide fraction increased in response to Zn2+ addition to a soil already containing a complex microbial community. These results are in accordance with, and expand upon, the environmental relevance of findings in other studies. For example, the increased cell-specific production rates of EPS-polysaccharides previously observed in flow-cells of monocultures spiked with Zn2+ appear to hold true, both a) in the highly complex soil environment and b) as a general dynamic in our soil at the scale of the microbial community. However, in the study of Wei et al.
it was found that the measured EPS concentration had decreased after exposure to Zn. In this case the authors had used much higher concentrations of Zn, and EPS concentration was estimated by application of hot NaCl. The application of hot extractants is known to cause heat shock, cytoplasm leakage and even lysis. Hot extraction of EPS is thus likely to co-extract internal cell biopolymers, and so results should be interpreted with care: they potentially reflect toxicity to microbial cells rather than a genuine reduction in cell-specific production of EPS. Previously in soil science, Zn has also been demonstrated to reduce the size of the soil microbial biomass and inhibit nutrient turnover. However, increased CO2 evolution per unit of microbial biomass typically accompanies this phenomenon: Chander and Brookes attributed this to "less efficient utilization of substrates for biomass synthesis" and thus implied a redirection of C towards tolerance mechanisms. Smolders et al. suggested that, in locations where they found total cell biomass was not affected by the availability of Zn, some unmeasured tolerance mechanisms must have existed to prevent the expected toxicity. In the study of Smolders et al., soils were also spiked with ZnCl2, at concentrations up to 1000 μg Zn g−1 soil. Importantly, no studies are known to show an increase in microbial biomass due to contamination with Zn2+. The EPS production efficiencies in uncontaminated soils were previously found to depend on the C/N ratio of the substrates available for microbial growth. While the effect of Zn2+ addition on EPS production efficiency per se was not an objective of the present study, the addition of excess Zn2+ is highly unlikely to have increased the microbial biomass, and so the clear increase in EPS and SMP production between soils given BCP and those given BCP plus Zn2+ points towards microbial allocation of metabolites into Zn tolerance mechanisms. The increased SMP-uronic acid and EPS-polysaccharide production represent more mobile and less mobile fractions, respectively. These are therefore useful starting points from which to investigate the ecological significance of exudate dynamics, with practical potential, for example, in the bioremediation of Zn from acid soils. EPS are considered a highly useful adsorbent for heavy metal contamination owing to their many functional groups (including carboxyl, amine and hydroxyl groups, for example), earning them a role in treating metal-loaded wastewater. However, spatial analyses in artificial systems have shown that Zn2+ exhibits greater affinity for cell surfaces than for glycoconjugates in the EPS; in such cases it follows that more mobile complexing components such as SMP would be required to enable translocation away from the cell. We found that uronic acid concentrations in the EPS did not vary between any of the treatments. Pereira et al. claimed that EPS uronic acids were mostly indicative of cyanobacterial activity, while in the present study no light was provided, so cyanobacterial activity would have been negligible. Nonetheless, we found statistically significant increases in the uronic acid content of SMP, amounting to a 100% increase over the control with the addition of BCP, which became an increase of more than 130% when Zn2+ was subsequently added. This points towards mobile uronic acids being a more responsive component to Zn2+ than the uronic moieties in the EPS. Kaplan et al.
investigated Zn association with exudates of Chlorella stigmatophora and found that complexes were formed, with complex concentration directly proportional to the quantity of dissolved polysaccharides; they speculated this was due to complexation by the mobile uronic acid fraction. Indeed, aqueous solutions of naturally produced uronic acids have since been used in bioremediation efforts to remove Zn and other heavy metals from contaminated soils. However, other work has also shown that the capacity of uronic moieties for Zn complexation can be low, with the efficacy of heavy metal sorption depending on the concentrations of competing cations and the oxidation status of the environment being studied. Further studies in diverse media such as soils therefore have much to offer in this regard. The present study concerns the EPS production of a native soil microbial community exposed to Zn2+ contamination in the laboratory. Additional studies in field soils of more complex systems are needed to determine whether similar dynamics occur in the field. Meanwhile, laboratory studies of the mobility of Zn and other heavy metals, both independently and as mixtures, would be informative, including rates of exchange between the soil solution, soil colloids, and the more mobile and less mobile fractions of exudates. Currently the interplay between metals and biofilms in soils is still poorly understood, which limits the efficacy of remediation technologies such as phytoextraction. Indirect mechanisms of Zn transport in the extracellular habitat, not related to complexation of Zn, have also been postulated to explain the removal of Zn from soil. For example, inoculation of soil with EPS-producing Pseudomonas decreased the sorption of Zn to soil surfaces; here, the authors ascribed the effect to the shielding of active sites of soil organic matter by inert bacterial polysaccharides. The wealth of unknowns in this area raises many questions, such as where and when EPS and SMP fractions increase or decrease the bioavailability and mobility of heavy metals. Finally, regarding the proteinaceous fraction of exudates, in the present study EPS-protein was enhanced by organic C addition, but not by the inclusion of Zn2+. In the SMP fraction, protein was almost undetectable. Our findings are in line with other reports proposing that enzymes are not released indiscriminately into solution as soluble biopolymers, but are instead retained by the EPS. While Tonietto et al.
found that Zn had a strong affinity for proline and hydroxyproline residues, the apparent lack of any association between Zn and proteinaceous exudates in our data does not suggest that this dynamic holds true in this acid soil. As a consequence, it might be interpreted that EPS-protein had little to do with any tolerance mechanism against Zn toxicity. However, it remains a possibility that a pool of non-extractable proteinaceous Zn complexes was accumulating in the soil. Future work tracking EPS dynamics together with heavy metal fluxes into stabilized soil organic matter would be informative in this regard. The data presented here indicate that the EPS differences between the BCP and BCP plus ZnCl2 treatments can be confidently ascribed to Zn2+, suggesting that in acid soils native microbial communities are likely to tolerate heavy metal toxicity by two modes of action: i) stimulating production of EPS-polysaccharides, and ii) exuding soluble uronic acids. Given that altered EPS dynamics will affect the mobility of toxic metals via soil physical and biochemical changes, further investigation of the movement and fate of Zn2+ as affected by EPS production could help to better inform current practice and illuminate new opportunities for soil management and remediation. We therefore propose that measurements of soil microbial EPS and SMP are included in studies investigating the progress of remediation of soils that have been deleteriously exposed to heavy metals. As the soils dataset expands, we envisage that models able to predict EPS responses to metals, and the subsequent impacts of EPS on spatial and physical aspects of soil function, will yield useful data for biotechnological application in non-soil environments. The method of EPS extraction from soil using CER described here was adapted from, and is sufficiently similar to, existing approaches applied in wastewater treatment systems. Method similarity will ensure that data obtained from soils are cross-relevant to other environments and to the advancement of technology in biotechnological applications such as continuously operated bioreactors. In this way, the outputs of each scientific discipline can synergistically contribute to the other, and to the sustainability of science generally: by magnifying returns, and by deepening our understanding of the triggers to community-wide biofilm formation and dispersal.
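Illustrative note: the treatment comparison above was carried out in GenStat. As a point of reference only, the sketch below reproduces the same style of analysis, a one-way ANOVA followed by pairwise least-significant-difference (LSD) comparisons, in Python. The treatment labels, replicate numbers and EPS-polysaccharide values in the sketch are entirely hypothetical and are not data from this study.

```python
# Illustrative sketch only: one-way ANOVA followed by LSD-style pairwise comparisons,
# mirroring the GenStat workflow described in the text, with hypothetical triplicate
# EPS-polysaccharide values for the four treatments.
import numpy as np
from scipy import stats

treatments = {
    "control":  np.array([38.0, 42.0, 40.0]),   # hypothetical values
    "glycerol": np.array([55.0, 60.0, 58.0]),
    "BCP":      np.array([82.0, 88.0, 85.0]),
    "BCP+Zn":   np.array([101.0, 95.0, 99.0]),
}

groups = list(treatments.values())
f_stat, p_value = stats.f_oneway(*groups)          # one-way ANOVA
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Least significant difference at alpha = 0.05, assuming equal replication (n = 3)
n = 3
k = len(groups)
df_error = sum(len(g) for g in groups) - k
ms_error = sum(((g - g.mean()) ** 2).sum() for g in groups) / df_error
lsd = stats.t.ppf(0.975, df_error) * np.sqrt(2.0 * ms_error / n)
print(f"LSD (alpha = 0.05) = {lsd:.2f}")

# Flag treatment pairs whose mean difference exceeds the LSD threshold
names = list(treatments)
for i in range(k):
    for j in range(i + 1, k):
        diff = abs(groups[i].mean() - groups[j].mean())
        verdict = "significant" if diff > lsd else "not significant"
        print(f"{names[i]} vs {names[j]}: difference = {diff:.1f} ({verdict})")
```

The LSD threshold is derived from the pooled within-treatment mean square, so any pair of treatment means differing by more than this threshold is declared significantly different at the chosen significance level.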
The production of extracellular polymeric substances (EPS) is crucial for biofilm structure, microbial nutrition and proximal stability of habitat in a variety of environments. However, the production patterns of microbial EPS in soils as affected by heavy metal contamination remain uncertain. Here we investigate the extracellular response of the native microbial biomass in a grassland soil treated with refined glycerol or crude unrefined biodiesel co-product (BCP), with and without ZnCl2. We extracted microbial EPS and more readily soluble microbial products (SMP), and quantified the total polysaccharide, uronic acid, and protein content in these respective extracts. Organic addition, especially BCP, significantly stimulated the production of EPS-polysaccharide and protein but had no impact on EPS-uronic acids, while in the SMP fraction, polysaccharides and uronic acids were both significantly increased. In response to the inclusion of Zn2+, both EPS- and SMP-polysaccharides increased. This implies firstly that a tolerance mechanism of soil microorganisms against Zn2+ toxicity exists through the stimulation of SMP and EPS production, and secondly that co-products of biofuel industries may have value-added use in bioremediation efforts to support in-situ production of microbial biopolymers. Microbial films and mobile polymers are likely to impact a range of soil properties. The recent focus on EPS research in soils is anticipated to help contribute to an improved understanding of biofilm dynamics in other complex systems, such as continuously operated bioreactors.
280
Does rare matter? Copy number variants at 16p11.2 and the risk of psychosis: A systematic review of literature and meta-analysis
There continues to be a debate as to whether genetic influences on schizophrenia are better explained by a "common disease-common allele" model or by a multiple rare variant model in which mutations are highly penetrant, individually rare, of recent origin and sometimes specific to individuals or families. There has also been growing interest in the study of different psychiatric conditions and copy number variants (CNVs). CNVs are micro-deletions and micro-duplications of segments of the genome ranging from a few hundred base pairs to several megabases. Genome-wide screening for CNVs has become possible with the development of micro-array based technologies, namely array comparative genomic hybridization and genome-wide SNP chips. Micro-deletions at 1q21.1, 15q11.2, and 22q11.2, and micro-duplications at 16p11.2, have been associated with an increased risk of schizophrenia. Furthermore, it appears that some of the CNVs associated with schizophrenia have a pleiotropic effect: the same CNV can be associated with several different clinically defined conditions such as epilepsy, ADHD, obesity, intellectual disability, schizophrenia, bipolar disorder, autism and even a normal phenotype. Our main aim was to synthesize the current evidence for the association of CNVs at 16p11.2 and psychosis. Our secondary aim was to investigate the association between schizophrenia sensu stricto and 16p11.2 CNVs. We applied the PRISMA Statement criteria to our systematic search of the literature. All primary genetic studies were included. Primary genetic studies were defined as studies where CNVs were investigated in new case control samples, historic case control samples or a combination of both. We also included meta-analyses of the association between CNVs at 16p11.2 and psychosis. We did not limit our search based on the age of participants and we did not apply any publication date restrictions. We identified all relevant studies by searching PubMed, Web of Knowledge and OMIM. The search was run on 13 October 2013 and re-run on 5 March 2014. Database-specific search terms were used to browse the three databases: PubMed, OMIM (searching for 16p11.2 and 16p11 2), and Web of Knowledge. The eligibility assessment was performed by GG. We excluded studies published in languages other than English, studies that focused on animal samples, narrative reviews, systematic reviews, commentaries, letters to the editor, editorials, PhD theses, book chapters and any data presented orally or in the form of posters. We developed our own quality control grid as we could not find any standardized methodology applicable to psychiatric genetic studies. GG conducted the initial quality assessment of the included studies, and afterwards NB checked the quality-controlled data. Any disagreement was resolved by discussion between the two authors and, if an agreement could not be reached, a third author adjudicated an outcome. There is increasing interest in CNVs at 16p11.2 because of their association with psychosis. Of the 15 studies that we retrieved, seven had reported a significant association between adult onset schizophrenia (SCZ) or schizoaffective disorder (SCD) and 16p11.2 duplications, with frequencies ten times higher in cases compared to controls. They also found an increased risk of SCZ or SCD that was between 8 and 25 times higher in individuals with 16p11.2 duplications. On the other hand, three studies which showed increased frequencies of duplications in cases still failed to find statistical significance, possibly due to smaller datasets. Walsh et al. and Ahn et al.
focused their analysis on childhood onset schizophrenia (COS), a more severe and possibly more genetically loaded form of SCZ. Despite finding a greater increase in the frequency of duplications in cases compared with controls than was seen in similar studies of adult onset SCZ, their results did not reach statistical significance. The role of deletions at the distal region of 16p11.2 in psychosis is still highly uncertain, with one study finding an association and one study failing to replicate this finding. Similar uncertainty is also seen in the evidence for the role of duplications at proximal 16p11.2 in the risk of bipolar disorder (BD). The concept that certain recurrent CNVs, including 16p11.2, are important risk factors for a small proportion of patients with schizophrenia is rapidly gaining credence. A number of rare CNVs that appear to have pleiotropic CNS effects can be considered strong susceptibility loci for a broad range of neurodevelopmental disorders. In other words, the risk associated with these CNVs is not exclusive to schizophrenia. It will be important to investigate whether healthy controls with 16p11.2 duplications have neuropsychological intermediate phenotypes. In our meta-analysis of 11 studies we robustly confirmed a ten-fold increased risk of psychotic illness in patients with proximal 16p11.2 duplications. Moreover, in our "post-screening for risk of overlapping samples" analysis we found a fourteen-fold increased risk of psychosis in patients with the duplications. We found no statistically significant association between micro-deletions and psychosis at proximal 16p11.2. Guha et al. and Rees et al. were the only studies that explored, in two independent samples, the distal portion of the same region, the first finding a strong association between micro-deletions and psychosis but the other failing to observe any deletion in 6882 cases with SCZ. The robust association of 16p11.2 duplications with psychosis argues for a detailed study of the duplicated region. It is important to determine how the duplication confers this increased risk of psychosis, e.g. by a gene or microRNA dosage effect. We extracted relative risks and calculated their standard errors and confidence intervals. The pooled relative risk was estimated using a fixed effect model, weighting by the inverse variance. The fixed effect approach assumes that all the studies are estimating the same effect and that only random variation between subjects causes the observed study effects to vary. This approach has been shown to be more conservative compared to using a random effects model. We explored heterogeneity using a forest plot. We tested for the presence of heterogeneity amongst studies using Cochran's Q statistic, where a value close to 0 indicates that there is no heterogeneity, and we used the I2 statistic to quantify the degree of heterogeneity (the standard formulas behind these quantities are sketched after the main text below). The I2 statistic ranges from 0% to 100% and provides a measure of the level of inconsistency across studies. Sensitivity analyses included combining studies with a low risk of overlapping. We also assessed the impact of each study on the pooled estimate by omitting one study at a time to see the extent to which inferences depended on a particular study. We visually examined estimated effect sizes against their standard errors using funnel plots, as recommended by Sterne et al.
for evidence of bias and heterogeneity. Analyses were carried out in Stata V.13. A total of 15 studies were identified for inclusion in our review. The search of the Web of Knowledge, PubMed and OMIM databases provided a total of 100 citations. After adjusting for duplicates, 76 remained. After reviewing by title and abstract, 56 were discarded as they clearly did not meet our inclusion criteria. Four articles were retrieved after hand searching references in articles already selected and by hand searching references in previous reviews of the literature on the topic. One further article, which was originally excluded because it was a letter to the editor, was later retrieved as it communicated original results. Therefore, a total of 25 articles were retrieved and fully analysed with their supplements. Of these 25 articles, one was excluded as it did not contain original data. Two studies were excluded because they focused on the detection of CNVs in intellectual disability. One article was excluded as it repeated data from a previous study by the same author. Four studies were excluded either because we could not retrieve specific information regarding the association between 16p11.2 CNVs and schizophrenia or because the data were a mixture of both historical and new data focused on a different mental health disorder than the ones in our search criteria. One study was excluded as it presented data on a single nucleotide polymorphism in 16p11.2. A further study was excluded as its full data set was included in the study by Guha et al. For a summary of the results of each individual study please see Table 1. For the meta-analysis we explored deletions and duplications at proximal 16p11.2. Eleven studies were included. The studies by Guha et al. and Rees et al. were excluded because they focused on a distal region of 16p11.2. The study by Ahn et al. was excluded because its design differed from all the others, i.e. it hypothesized a higher frequency of 16p11.2 CNVs in COS vs healthy siblings and vs adult onset SCZ, and did not therefore represent a straightforward case control study design. Likewise, the study by Levinson et al. was excluded because it was not a case control study, and the Grozeva et al. study was excluded because it focused its analysis on a healthy control group compared with historical results in patients with SCZ. In our pre-quality control meta-analysis, we utilized a fixed effect method and found a pooled OR = 10.0 (heterogeneity p = 0.647; I2 = 0%; test of OR = 1: z = 9.87, p < 0.001) for duplications and psychosis, whereas for deletions we found a pooled OR = 0.736 (heterogeneity p = 0.694; I2 = 0%; test of OR = 1: z = 0.76, p = 0.447). Two studies focused on CNVs in a distal region of 16p11.2. Whilst Guha et al. showed a six-fold increased risk of SCZ and SCD for deletions, Rees et al. failed to replicate these findings. The latter provided a combined analysis which showed an overall OR = 3.39. Of the four studies that analysed the relationship between 16p11.2 CNV duplications and BD, only the meta-analysis of McCarthy et al.
found a statistically significant four-fold increased risk in patients with the duplication, whilst the others failed to replicate significant results. For a summary of the quality check list and the risk of overlap for each study, please see Table 2. We defined "high quality" studies as those which fulfilled any three of the first five quality criteria in the grid. If a study did not meet the aforementioned standard then it was defined as "low quality". In total, ten of the studies met the threshold for high quality. As this quality appraisal did not screen for the risk of repeated measures, we therefore analysed only those studies which showed a low likelihood of overlapping results. We also included the most recent and largest study, which, despite presenting a high risk of overlapping results, actually showed a low likelihood of overlapping with the four aforementioned studies. This was done in order to maximize the pool of patients and controls selected with a minimum risk of overlapping results. From the 5 studies which passed our quality control measure for low risk of overlapping, we found an M-H pooled OR = 14.4 (heterogeneity p = 0.827; I2 = 0%; test of OR = 1: z = 5.13, p < 0.001) for duplications. The meta-analysis was then repeated focusing only on cases with schizophrenia, and therefore the study by Priebe et al. was excluded. For this meta-analysis we found an M-H pooled OR of 16.0 (heterogeneity p = 0.836; I-squared = 0.0%; test of OR = 1: z = 5.00, p < 0.001). To our knowledge, we are the first to have rigorously applied the PRISMA statement criteria to conduct a meta-analysis of the risk for CNVs at 16p11.2 and psychosis. Furthermore, in doing so, we believe this to be the first study to have applied systematic quality control and to have incorporated the risk of overlapping results as a criterion in the meta-analysis. Several previous publications have provided original results together with combined or meta-analytical results incorporating previous datasets; however, we argue that the choice of previous dataset appeared at times arbitrary and not supported by a rigorous selection. Despite the low heterogeneity within our selected studies, there are intrinsic limitations in combining observational studies, as reported by Stroup et al. We encountered a systematic positive results bias; in fact, by utilizing the standard search methods we have only been able to detect studies which showed positive results either in the cases or in the controls. Hence, studies which find no CNVs at 16p11.2 usually do not report this analysis in their text or tables, thereby escaping the database search. For example, our own study of CNVs in BD was not detected by the search parameters utilized here. We have, however, investigated through rigorous criteria all the literature in different search engines. Finally, we did not go further than searching data presented in the original article or in the supplementary information published in the same journal. To access the entire database was beyond the scope of our study; unfortunately, this causes an intrinsic risk of re-counting the same findings more than once, with a risk of over-inflation of positive findings. However, the frequency in cases and controls is fairly consistent across studies, and large numbers have been included in the meta-analysis, giving the best possible estimate of the effect sizes of CNVs at this locus in SCZ. This study was not funded by any public, private or not-for-profit grants. First author: Giovanni Giaroli MD, Division of Psychiatry,
UCL. Second author: Nicholas Bass MD, Division of Psychiatry, UCL. Third author: Andre Strydom PhD, MSc, MBChB, Division of Psychiatry, UCL. Fourth author: Khadijia Rantell PhD, Division of Psychiatry, UCL. Senior and corresponding author: Andrew McQuillin PhD, Division of Psychiatry, UCL. Dr Giaroli has received honoraria for serving on a speaker bureau for Eli Lilly, Shire and FlynnPharma; he has also served as a board advisor for Shire. The other authors declare no conflicts of interest.
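Illustrative note: the methods above describe inverse-variance fixed-effect pooling together with Cochran's Q and the I2 statistic. For reference, the block below sketches the standard textbook forms of these quantities on the log odds-ratio scale; the notation (k studies, with OR_i and SE_i from study i) is generic rather than taken from the paper, and the quality-controlled analyses quoted above report Mantel-Haenszel rather than inverse-variance weights.

```latex
% Standard fixed-effect meta-analysis quantities on the log odds-ratio scale.
% Generic notation (k studies, OR_i and SE_i per study); not taken from the paper itself.
\begin{align*}
  w_i &= \frac{1}{\mathrm{SE}_i^{\,2}}, \qquad
  \log\widehat{\mathrm{OR}} = \frac{\sum_{i=1}^{k} w_i \log \mathrm{OR}_i}{\sum_{i=1}^{k} w_i}, \qquad
  \mathrm{SE}\!\left(\log\widehat{\mathrm{OR}}\right) = \frac{1}{\sqrt{\sum_{i=1}^{k} w_i}},\\
  Q &= \sum_{i=1}^{k} w_i \left(\log \mathrm{OR}_i - \log\widehat{\mathrm{OR}}\right)^{2}, \qquad
  I^{2} = \max\!\left(0,\ \frac{Q-(k-1)}{Q}\right)\times 100\%, \qquad
  z = \frac{\log\widehat{\mathrm{OR}}}{\mathrm{SE}\!\left(\log\widehat{\mathrm{OR}}\right)}.
\end{align*}
```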
Background: In the last 5 years an increasing number of studies have found that individuals who have micro-duplications at 16p11.2 may have an increased risk of mental disorders, including psychotic syndromes. Objective: Our main aim was to review all the evidence in the literature for the association between copy number variants (CNVs) at 16p11.2 and psychosis. Methods: We conducted a systematic review and a meta-analysis utilising the PRISMA statement criteria. We included all original studies (published in English) which presented data on CNVs at 16p11.2 in patients affected by schizophrenia, schizoaffective disorder or bipolar disorder. Results: We retrieved 15 articles which fulfilled our inclusion criteria. Eleven articles were subsequently selected for a meta-analysis that showed a 10-fold increased risk of psychosis in patients with proximal 16p11.2 duplications. We conducted a second meta-analysis of those studies with low risk of overlap in order to obtain the largest possible sample with the lowest risk of repeated results: 5 studies were selected and we found an odds ratio (OR) of 14.4 (CI = 5.2-39.8; p < 0.001) for psychosis with proximal 16p11.2 duplications. The results were not significant for micro-deletions in the same region. Finally, extracting only those studies that included patients with schizophrenia, we found an OR = 16.0 (CI = 5.4-47.3; p < 0.001). Conclusions: There is a fourteen-fold increased risk of psychosis and a sixteen-fold increased risk of schizophrenia in individuals with micro-duplications at proximal 16p11.2.
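Illustrative note: the odds ratios and confidence intervals quoted in this abstract follow from standard 2x2 contingency-table arithmetic. The sketch below shows how a single study-level OR and its 95% confidence interval (Woolf method) would be computed; the carrier counts used are entirely hypothetical and are not taken from any study in the review.

```python
# Hypothetical example only: odds ratio and 95% CI from a 2x2 table of duplication
# carriers in cases vs. controls. The counts below are illustrative, not study data.
import math

cases_with, cases_without = 12, 4988          # hypothetical case sample
controls_with, controls_without = 5, 20995    # hypothetical control sample

odds_ratio = (cases_with * controls_without) / (cases_without * controls_with)

# 95% CI via the standard error of the log odds ratio (Woolf method)
se_log_or = math.sqrt(1 / cases_with + 1 / cases_without
                      + 1 / controls_with + 1 / controls_without)
log_or = math.log(odds_ratio)
ci_low = math.exp(log_or - 1.96 * se_log_or)
ci_high = math.exp(log_or + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.1f}, 95% CI = ({ci_low:.1f}, {ci_high:.1f})")
```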
281
Vaccines, our shared responsibility
The fifteenth annual Developing Countries Vaccine Manufacturers' Network (DCVMN) meeting, held from October 27–29, 2014 in New Delhi, India, and co-organized by Panacea Biotec, marked another year of progress in global vaccination. The DCVMN is a public health driven, international alliance of manufacturers working to strengthen vaccine supply through information sharing and professional training programs, technology improvements, innovative vaccine research and development, and technology transfer initiatives, in order to improve the availability of safe, effective and affordable vaccines for all people. Engagement and participation in this field were exemplified by the high attendance of vaccine manufacturers: the meeting included over 240 delegates from 36 countries, notably 25 percent of whom were female. Delegates represented major global health organizations such as the World Health Organization (WHO), Pan American Health Organization (PAHO), United Nations International Children's Fund (UNICEF), and Gavi, the Vaccine Alliance; governmental agencies such as the National Institutes of Health, Japan International Cooperation Agency (JICA), National Institute for Biological Standards and Control (NIBSC), United States Pharmacopeia (USP), and United States Department of Health and Human Services; and non-governmental organizations including PATH, Clinton Health Access Initiative (CHAI), Médecins sans Frontières, Aeras Foundation, Hilleman Laboratories, and the Bill & Melinda Gates Foundation (BMGF); as well as more than 50 life sciences corporations, including 33 vaccine manufacturers from developing countries, all working to support the mission of increasing the quality and availability of affordable vaccines for all people. The meeting was jointly opened by M. Suhardono, President of DCVMN, and Dr. R. Jain, Joint Managing Director of Panacea Biotec; M. Suhardono praised all vaccination partners for achieving polio-free status in the South East Asia region through the targeted and consistent supply of polio vaccines. Dr. R. Jain congratulated and thanked Panacea employees for over a decade of work to produce and supply more than 10 billion doses of vaccines to developing countries. Dr. G.N. Singh, Drugs Controller General of India, subsequently extended a warm welcome to the global stakeholders in attendance. Dr. G.N. Singh especially commended the contributions of developing country vaccine manufacturers to maintaining healthy populations through the global supply of vaccines. The leaders noted the growing number of corporate members that have successfully achieved WHO pre-qualification of vaccines and thanked all involved stakeholders for their dedication to a healthier future. The initial session focused on the role of vaccines in addressing priorities in public health within developing countries, specifically facilitating access to high-quality and affordable vaccines. Presentations were led by Dr. Bahl, S. Guichard, N. Dellepiane, L. Slamet, and S. Inglis. The more than twenty-year history of polio eradication in India was summarized by Dr. Bahl from the WHO South East Asia Regional Office (SEARO), highlighting the importance of vaccines to the health of the public. Dr.
Bahl attributed the dramatic decline and ultimate eradication of polio to national vaccination campaigns involving community engagement, diligent work by millions of health workers, an extensive surveillance system to detect incident cases of poliovirus, and strong governmental support. Efforts began in 1978 and continued until India was finally removed from the list of polio-endemic countries on March 27th, 2014, defining the entire WHO South East Asia Region as polio-free. India continues to maintain national educational campaigns, routine immunization, and strong surveillance and travel regulations to mitigate the risk of importation of poliovirus. Highlighting another success story of vaccination in reducing disease burden in developing countries, S. Guichard from WHO SEARO discussed the Global Measles Elimination Programme initiated in 2000. Countries participating in this programme aim to achieve over 90 percent vaccination coverage at the national level and 80 percent coverage at the district level by 2015. This coverage level is sought in order to reduce the incidence of measles to fewer than 5 cases per million total population and to reduce measles mortality by 95 percent. In the South East Asia Region, countries have included, alongside measles elimination, the prevention of rubella and congenital rubella syndrome using a combined measles and rubella (MR) vaccine. The supply of MR vaccine currently does not meet the amount required to implement measles elimination and prevent rubella and congenital rubella syndrome, particularly in the South East Asia Region. Vaccine manufacturers are addressing the challenges posed by this public health priority by developing new collaborations and building new facilities to increase MR vaccine supply, and by seeking WHO pre-qualification of available products as soon as possible. N. Dellepiane, representing WHO's Regulatory Systems Strengthening team, discussed the regulatory requirements to supply vaccines to international markets. Dellepiane elaborated on the five major challenges faced by manufacturers in registering vaccines in producing and receiving countries. These challenges consist of: limited expertise to review technical product information in vaccine regulatory dossiers; a lack of appropriate expertise and certified personnel to perform good manufacturing practice inspections in many countries; additional clinical trials being requested despite the availability of sufficient data from trials conducted in other regions, epidemiological settings, and socio-economic conditions; limited compliance with good clinical practice standards for some clinical trials; and lengthy registration/review processes, which delay registration even in emergency situations. WHO supports governments and manufacturers to facilitate effective regulatory processes by: developing briefing workshops and guidance documents regarding the pre-qualification process for vaccines; holding pre-submission meetings with manufacturers to discuss product-specific dossiers; supporting countries in the rapid review and registration of priority vaccines such as bivalent oral polio vaccine and inactivated polio vaccine; and supporting global collaborative networks, such as DCVMN and the Developing Countries Vaccine Regulators Network. S.
Inglis, Director of NIBSC, provided his insights into the importance of pre-licensure dialogue between regulators and the vaccine industry regarding regulatory processes. Inglis shared the perspective that "Good regulators make good manufacturers, and good manufacturers make good regulators", since early regulatory advice can be built into product design phases, saving time and resources for both manufacturers and regulators. Highlighting the importance of vaccine safety, L. Slamet discussed post-licensure communication among regulators and industry to ensure that vaccination benefits outweigh foreseeable risks for users. Slamet encouraged manufacturers from developing countries to clearly communicate benefits and risks to governments, NGOs, regulators, health professionals, and patients in order to increase and maintain confidence in vaccines. The DCVMN annual meeting continued with a discussion of collaborations and partnerships for vaccine supply. Presenters for this topic were O. Levine, D. Mulenga, M. Pereira, M. Malhame, J. Schafer, C. Egerton-Warburton, Y. Ikeda, D. Saha, R. Salerno, and K. Sampson. The topic was opened by O. Levine, Director of Vaccine Delivery at BMGF. BMGF aims to support the prevention of 11 million deaths, 3.8 million disabilities, and 230 million illnesses by 2020 through high, equitable, and sustainable vaccine coverage. Levine discussed three main goals proposed by BMGF for collaborations and partnerships across the vaccine market: ensuring an uninterrupted supply of affordable and suitable vaccines for Gavi; improving market dynamics information and expertise to solve vaccine access challenges; and strengthening global health and manufacturers' partnerships to enable better alignment of goals, alignment with global strategy, and coordination of internal investments. D. Mulenga, Deputy Director of the Supply Division at UNICEF, updated attendees about the vaccine procurement required to achieve a sustained and uninterrupted supply of affordable, high-quality vaccines. Echoing the prior presenters, Mulenga pointed to the rapidly increasing demand for vaccines such as pentavalent, Bacillus Calmette–Guérin, pneumococcal, rotavirus, and measles-rubella, and to the insufficient supply. Further, Mulenga pointed to the challenge of achieving the Decade of Vaccines' goal of 90% vaccination coverage nationally, since coverage is stagnating in some countries. This challenge is further compounded by the fact that less than 30% of the poorest countries meet the WHO standards for an adequate vaccine supply chain. Opportunities exist for new manufacturers to leverage the supply of vaccines already licensed and available in domestic markets to other developing countries through UNICEF pooled procurement; yet these vaccines must be pre-qualified by WHO, which is a condition for international procurement. M.
Pereira from PAHO discussed the global collaboration through the PAHO Revolving Fund for vaccine procurement in the Americas. Launched in 1979 as a technical cooperation programme and mechanism to procure essential vaccines, syringes, and other related supplies, the Revolving Fund relies on strong principles of regional solidarity and financial self-sustainability. The achievements of regional polio eradication and of measles and rubella elimination reflect its success. Further, between 80 and 90% of countries in the Americas have already introduced new pneumococcal, human papillomavirus, and rotavirus vaccines. Challenges remain to be resolved, such as disruptions of vaccine supply, limited competition for vaccines produced by few manufacturers, and demand increasing faster than supply. PAHO's efforts to improve the reliability of demand forecasts, increase planning capabilities, and award longer-term contracts will improve supply security. Discussing the Gavi, the Vaccine Alliance strategy, M. Malhame provided an overview of Gavi's supply and procurement initiatives. Malhame's presentation highlighted the many new vaccine introductions in 2014 and the routine availability of pentavalent vaccine in all 73 Gavi countries. Specific vaccine activities were discussed, including new yellow fever campaigns, the achievement of a stockpile of cholera vaccine, the current window of opportunity for malaria vaccines, the assessment of rabies prophylaxis, and the impact of maternal influenza vaccination. Malhame also focused on the remaining uncertainties concerning the pricing of vaccines after Gavi financing ends. Temporarily dispelling this concern are the commitments of two manufacturers to a five-year price freeze for pentavalent vaccines for countries graduating from Gavi's financial support. J. Schafer from BARDA discussed approaches to ensuring vaccine preparedness for global pandemic influenza. Preparedness entails generating the capacity to supply vaccines to a majority of the population within six months of a pandemic declaration, establishing and maintaining stockpiles of vaccines to cover about 10 percent of the population, and enhancing sustainable influenza vaccine production capacity in developing countries. Preparedness should be created through technical support of developing country vaccine manufacturers, facilitating license agreements for technology transfer, and innovation such as the use of adjuvant formulations to increase the number of vaccine doses delivered. Schafer suggested using a tool developed by the United States Centers for Disease Control and Prevention to assess risk factors of the virus, attributes of the population, and environmental and epidemiological components of the pandemic. The Global Health Investment Fund (GHIF) was introduced by C. Egerton-Warburton, who described the fund as providing a new stream of capital to projects or product development efforts aimed at addressing infectious diseases. The main objectives of GHIF are to enable new products, encourage affordable prices, promote greater product supply, develop new markets, and encourage the collection of data. To date, 70 investment opportunities have been reviewed by the Fund's investment committee, such as tuberculosis diagnostic products and oral cholera vaccines from EuBiologics. GHIF is actively seeking additional investment opportunities. Y.
Ikeda discussed the collaborations and partnerships in which JICA is involved regarding vaccine manufacturing and immunization. Control of infectious diseases is a high priority for JICA both domestically and internationally. The transfer of an oral polio vaccine technology from Biken to Biofarma was instrumental in satisfying worldwide demand and provided a foundation for global polio eradication efforts. In partnership with BMGF, JICA also supports vaccine procurement through a "loan conversion mechanism" for polio eradication in Pakistan and Nigeria. JICA is currently supporting the transfer of an MR vaccine manufacturing technology from Japan to Polyvac in Vietnam, following the precedent set by multiple previous successful projects. JICA also supports the UNICEF supply of vaccine cold chain equipment to India, Afghanistan, Angola, Liberia, Zambia, and Zimbabwe. D. Saha introduced the work of the USP, which is recognized as an official compendium determining and compiling the quality standards for drugs and biological products enforced by the United States regulatory authorities. In collaboration with global institutions such as NIBSC and WHO, USP develops product monographs, reference material standards, and training programs in microbiology, biotechnology, and pharmacopeial analysis. D. Saha concluded by inviting vaccine experts to volunteer as candidates for the USP Council of Experts and the Expert Committees for the 2015–2020 Convention cycle. R. Salerno from Sandia National Laboratories highlighted the recent Ebola outbreak in West African countries as exemplary of the risks posed to the vaccine manufacturing industry by emerging and re-emerging infectious diseases. It is the ultimate aim of vaccine manufacturers to protect vaccine users from unsafe products, protect employees and the environment from harmful agents, and prevent dangerous materials and proprietary information from malicious uses. Sandia has developed a tool to assist decision makers in defining risk criteria and making informed decisions in the planning, mitigation, and communication of biorisks. Sandia is partnering with DCVMN to offer risk-assessment training to vaccine manufacturers. This session concluded with a presentation by K. Sampson, who introduced APACI. APACI serves as a trusted and independent source of information for decision makers and the public to foster best practices in the prevention and treatment of influenza. Initiated in 2002 as a working committee modeled on the Influenza Specialist Group in Australia, APACI educates key opinion leaders and works with governments and institutions to enhance pandemic planning. Through educational awareness and public information, the number of influenza vaccine doses deployed in Australia has increased from 500,000 in 1991 to over 7 million in 2014. Overall, these presentations encouraged successful global collaborations and partnerships to continue and urged new ones to be initiated, all with the goal of increasing vaccine supply. New frontiers in vaccination were discussed in presentations on vaccine innovations and delivery technologies offered by G. Madhavan, J. Kalil, R.K. Suri, R. Mehta, A. Nanni, K. Ella, Dr. T.S. Rao, W. Meng, X. Liao, R. Steinglass, D. Zehrung, S. Jadhav, D. Kristensen, C. Collins, and T. Cernuschi. Opening the discussion, G.
Madhavan from the Institute of Medicine spoke about the importance of strategic planning for prioritizing new vaccine development and policy. Historically, vaccine development projects were prioritized by infant mortality equivalents or cost-effectiveness. Madhavan demonstrated the use of a decision-support software tool called the Strategic Multi-Attribute Ranking Tool for Vaccines, or "SMART Vaccines", to assist in vaccine priority-setting efforts. Developed over multiple phases, this software provides transparency in vaccine comparison, thus facilitating discussions among various stakeholders in the vaccine enterprise. J. Kalil from Butantan focused the dialogue on vaccines that remain difficult to develop, yet are necessary to ensure health worldwide. The discussion initially focused on the difficult process of developing a vaccine against Streptococcus pyogenes to prevent rheumatic heart disease. Another example provided was a T-cell multi-epitope based HIV vaccine, which is already being tested in a primate animal model. Finally, Kalil discussed the live attenuated tetravalent dengue vaccine candidate under evaluation and the trial design for the related phase II and III studies. Another innovative vaccine, the Sabin-IPV project, was presented by R.K. Suri from Panacea Biotec. Under a charge by the World Health Assembly to develop "safer processes for production of IPV and affordable strategies for its use in developing countries", the WHO, BMGF, and a Netherlands government laboratory have collaborated to ensure the availability of Sabin-IPV through public sector channels in developing countries. Following promising performance in preclinical and clinical studies, Panacea Biotec was selected as one of the manufacturers to receive technology transfer of the vaccine manufacturing process. R. Mehta from Cadila Biotech presented virus-like particle (VLP)-based recombinant technology for vaccine development. The self-assembling feature of recombinant VLPs offers efficient expression of 3-dimensional structures, good stability, high immunogenicity, and the safety of a non-infectious product. This innovative technology platform can be used to speed up the delivery of pandemic influenza vaccine, with doses released 10 to 12 weeks after cloning, while traditional production methods require 20 weeks after virus inoculation. The VLP technology has been transferred to an Indian facility and has been validated for the production of influenza vaccines. From the Aeras Foundation, A. Nanni outlined TB as the top infectious disease killer of the past century. While TB treatments cost the global economy an estimated 1 billion dollars daily, funding for new vaccine development is insufficient to produce viable solutions. Additionally, antibiotic resistance confounds global efforts to control the epidemic, allowing some evolving strains to become virtually untreatable. Engagement of large vaccine manufacturing institutions within developing countries is vital to support effective TB vaccine development, future vaccine supply, and ultimately reduction of the disease burden. A presentation by K. Ella from Bharat Biotech International provided an overview of domestic and international partnerships to develop innovative vaccines. Ella discussed vaccines for neglected diseases such as Chikungunya, and discussed the company's launch of a novel typhoid conjugate vaccine. This institution also supported the development of the first indigenous rotavirus vaccine, called ROTAVAC®, recently approved for pilot introduction in India. T.S.
Rao, from the India Department of Biotechnology, asserted that vaccines are the fastest growing area within the pharmaceutical and biological sectors in India. Rao announced a new Vaccine Grand Challenge Program with the objective of accelerating the development of promising candidate vaccines through pre-clinical and clinical development and commercialization. The Enterovirus 71 vaccine was discussed by W. Meng from Sinovac. This vaccine addresses hand, foot, and mouth disease, which was first reported in New Zealand in 1957 and has continued to present an increasing number of cases globally. Development of a vaccine against this disease is necessary to prevent the related morbidity and mortality. A Vero cell-based, inactivated vaccine candidate has been tested in a clinical trial with over 11,000 subjects, demonstrating safety, immunogenicity, and high efficacy. The vaccine is currently being optimized for large-scale manufacturing and is expected to be available soon. X. Liao from Innovax talked about innovations in the HPV vaccine, a vaccine that is increasingly important as the incidence of cervical cancer tends to rise, especially in developing countries. While two HPV vaccines have been available since 2006, only 30 percent of countries worldwide have introduced them into national immunization programs. The high cost of these HPV vaccines remains a barrier for poor countries. Furthermore, immunogenicity requires three doses of the currently available HPV vaccines, while WHO recommends a two-dose schedule, provided vaccination is initiated prior to 15 years of age. Innovax aims to launch a new vaccine, currently in phase III clinical trials, by 2018, with the aim of accelerating access to affordable HPV vaccines and reducing the incidence of cervical cancer globally. New approaches to vaccine delivery were shared by R. Steinglass from John Snow Inc. Steinglass noted that immunization managers have become more informed customers, with preferences for vaccine formulations, presentations, and packaging that fit well with their programs. Additionally, they are concerned about heat stability, storage temperatures, storage volumes, waste volume, ease of preparation and administration, and the volume of dose administered. Steinglass encouraged vaccine manufacturers to research their markets to learn product preferences directly from prospective clients and discussed the importance of incorporating these preferences earlier in the manufacturing process. Steinglass suggested consulting WHO's "Assessing the Programmatic Suitability of Vaccine Candidates for Pre-Qualification", which lists the preferred characteristics of vaccines. D. Zehrung from PATH provided an overview of new vaccine delivery technologies, including delivery devices and novel primary packaging and formulations for traditional and novel vaccines. New delivery technologies include needle-free devices to improve the efficacy, safety, cost-effectiveness and public health benefits of vaccines. Zehrung discussed how new immunisation technologies may bring the potential benefits of increased access and coverage and lower cold chain capacity requirements. Ideally, vaccine manufacturers may consider early integration of innovative technologies by aligning the vaccine development process with the preferred product profile recommendations of the Vaccine Presentation and Packaging Advisory Group (VPPAG) and with the requirements of WHO guidelines on the programmatic suitability of vaccine candidates for pre-qualification. S.
Jadhav provided an overview of nasal and aerosol vaccines currently in development at the Serum Institute of India. Jadhav discussed lessons learned from OPV campaigns, which emphasize the importance of ease of use, acceptability, affordability, and safety. The characteristics of being painless, easy to administer, and safer than needle administration make intranasal vaccine delivery technologies potentially more acceptable, while the fact that these vaccines mimic natural infection and induce the appropriate immune response makes them potentially more effective. Multiple intranasal vaccine delivery technologies were discussed, including an inhalable dry powder vaccine for measles applied with a PuffHaler device and a lyophilized nasal spray for live attenuated influenza vaccine. D. Kristensen from PATH presented three trends in vaccine packaging and labeling: improvement of tracking and tracing capability through added barcoding on secondary and tertiary packaging; improvement of storage capability through minimization of container dimensions for primary to tertiary packaging; and efficiencies in delivering vaccines in a "controlled temperature chain" by labeling the vaccines for limited storage at higher temperatures. The VPPAG has been advancing public and private sector dialogue and work in all three areas while updating the preferred product profile for vaccines to reflect the consensus recommendations reached by the group. Improving the vaccine supply chain in developing countries was discussed by C. Collins from CHAI, who suggested increasing the use of freeze-protected cold chain equipment. A CHAI review of existing data found that freeze exposure occurred in 18 to 67 percent of vaccine shipments throughout various stages of storage. Such exposure may reduce vaccine potency, ultimately providing recipients with potentially less effective vaccines. Total demand for vaccine refrigerators is expected to reach 110,000 units by 2018 in the 53 Gavi-eligible countries, making it increasingly vital that freeze-protected equipment is used to prevent damage to vaccines. The closing presentation, by T. Cernuschi from WHO's Expanded Program on Immunization, focused on sustainable vaccine supply in middle-income countries (MICs). MICs continue to report high mortality from diseases that are preventable by immunization. The vast majority of the world's unvaccinated children reside in MICs. While a large share of these MICs is well supported by donors, sixty-three countries are not benefitting from a unified international strategy for immunization. In these countries the vaccine-preventable disease burden and the number of unvaccinated children are currently relatively low, but substantial and unacceptable nonetheless. Many of these countries have strong systems and the potential to make rapid gains in vaccination coverage if key barriers are removed. WHO established a task force to investigate obstacles to new vaccine adoption and to mobilize resources for improving immunization in neglected MICs. A recent analysis revealed that a significant reduction of deaths from vaccine-preventable diseases can be achieved in middle-income countries both through the introduction of new vaccines and through increased coverage of traditional vaccines to 90 percent by 2025, as illustrated in Fig.
Attendees of the 2014 DCVMN annual conference left the meeting reinvigorated to continue their collaborative efforts to prevent the spread of infectious diseases worldwide by improving vaccination coverage. Four areas of action were jointly identified to strengthen and foster sustainable vaccine supply from DCVMs: review manufacturing facilities' design, layout, and infrastructure; provide adequate training on evolving good manufacturing practices, quality management systems, and the WHO prequalification process; encourage dialogue to resolve regulatory challenges; and facilitate access to independent experts able to resolve vaccine industry issues. United by a shared responsibility for a global community free of infectious diseases, DCVMN members and partners foster the development and supply of safe, effective, and affordable vaccines for future generations. Presentations from this conference are available on the DCVMN website at http://www.dcvmn.org/event/dcvmn-15th-annual-general-meeting. The authors are employees of the respective indicated organizations and have no conflict of interest to declare. DCVMN International did not provide any financial or travel support to speakers or moderators to participate in this meeting. Important note: This report summarizes the views of an international group of experts as presented at a scientific conference at a given point in time and context, and does not necessarily represent the decisions or the stated policy of any institution or corporation.
The Developing Countries Vaccine Manufacturers' Network (DCVMN) held its fifteenth annual meeting on October 27-29, 2014, in New Delhi, India. The DCVMN, together with the co-organizing institution Panacea Biotec, welcomed over 240 delegates representing high-profile governmental and nongovernmental global health organizations from 36 countries. Over the three-day meeting, attendees exchanged information about their efforts to achieve their shared goal of preventing death and disability from known and emerging infectious diseases. Special praise was extended to all stakeholders involved in the success of polio eradication in South East Asia, and challenges in vaccine supply for measles-rubella immunization over the coming decades were highlighted. Innovative vaccines and vaccine delivery technologies indicated creative solutions for achieving global immunization goals. Discussions focused on three major themes: regulatory challenges for developing countries that may be overcome with better communication; global collaborations and partnerships for leveraging investments and enabling uninterrupted supply of affordable and suitable vaccines; and leading innovation in vaccines that are difficult to develop, such as dengue, chikungunya, typhoid-conjugate, and EV71 vaccines, as well as needle-free technologies that may speed up vaccine delivery. Moving further into the Decade of Vaccines, participants renewed their commitment to shared responsibility toward a world free of vaccine-preventable diseases.
Evaluating improvements in a waste-to-energy combined heat and power plant
Energy resources are essential for the social and economic development of all nations.A rise in the energy demand is inevitable as the populations of the world increase with improved lifestyle and industrial development .An adequate management of energy resources and protection of the global environment are vital to achieve sustainable economic development and thereby alleviate poverty, improve human conditions and preserve biological systems.Enhancing the efficient use of energy resources promotes sustainable development because it reduces the environmental and economic costs of expanding energy services .Furthermore, the improvement in the energy efficiency of a process is important for the advancement of energy production.It is the most cost-effective method of abating CO2 emissions, which is the main greenhouse gas that contributes to global warming .Moreover, it reduces the cost of producing heat and power, thus decreasing the cost of energy to consumers, improves the quality of the environment and the standard of living, upholds a stronger economy and secures the source of energy .Waste-to-energy technologies have helped reduce the amount of waste being dumped in landfill sites and in converting non-recyclable waste materials into useful energy resources in the form of heat and electricity .However, the efficiency of energy conversion in solid-waste plants is low when compared with other solid fuels, such as coal and biomass due to the low steam properties used in order to prevent high corrosion rates .Exergy analysis has been shown to be an effective tool in furthering the goal of attaining a more efficient use of energy resources .Its aim is to identify the locations and magnitudes of thermodynamic irreversibilities in a process.Exergy analysis explicitly takes the effects of the surroundings into account: it provides a more realistic picture of improvement potentials compared to a pure energy analysis.On the other hand, the definition of the state of the surroundings is not always unambiguous, leaving some uncertainties in the analysis.The term “exergy” was proposed by Zoran Rant, who used it to develop a model for the chemical exergy of a fuel material that was structurally complicated ."Szargut and Styrylska improved Rant's model by considering the chemical composition of the fuels and obtained a correlation between the ratio of the chemical exergy and the lower heating value.Bejan et al. investigated the application of the exergy method in thermal process in a cogeneration system including gas turbine and heat-recovery steam generator.They found that combustion chamber was the component with highest thermodynamic inefficiency and it can be reduced by preheating the combustion air and reducing the air-fuel ratio.Reguagadda et al. performed an exergy analysis of a coal-fired power plant; their investigations showed that the greatest exergy destruction occurred in the boiler due to heat transfer to the working fluid, flue gas losses and the combustion reaction.Taniguchi et al. , who used an exergy method to evaluate the temperature level of air combustion in a coal combustion process, confirmed that using air that was warmer than the ambient temperature enhanced the exergy efficiency of the system.Srinivas et al. 
analysed the steam power cycle with feedwater heaters from an exergy perspective.They found that the temperature difference between the working fluid and the flue gas could be decreased by the installation of feedwater heaters, which helps to reduce the entropy generated in the boiler.Kamate and Gangavati applied an exergy method to a co-generation plant based on bagasse to compare the performance of two types of steam turbine.They found that the efficiency of the plant was higher when a non-condensing steam turbine was used rather than an extraction steam turbine, due to the non-rejection of heat in the condensation process in the former.The latter is, however, preferred as it produces more electricity.Solheimslid et al. performed an exergy analysis of a municipal solid-waste combined heat and power plant located in Bergen, Norway.They compared different methods to calculate the chemical exergy of the waste; their investigations showed the methods to be in good agreement.The exergy efficiency of the plant was calculated to be 17.3%.Grosso et al. found that exergy analysis was a more reliable measure of performance criteria in waste incineration plants in Europe than the energy recovery efficiency analysis proposed in the Waste Frame Directive.To the best knowledge of the authors, no research work has been done on the process improvement evaluations and comparisons of state-of-the art techniques applicable in waste-to-energy facilities using an exergy method.Therefore, the aim of this paper is to evaluate improvements that can be made in a municipal heat and power plant fired by solid waste, considering the most recent development in this technology.Although a conventional method of exergy analysis to a particular process identifies exergy destructions in a system it does not, however, consider either the constraints in the conversion method or the impact of each individual component.It has been applied for evaluations in the energy system in UK , in a coal thermal plant and for solar energy .In this method, the improvement potential relates the inefficiency experienced in a system with its exergy efficiency.Nevertheless, the improvement is limited to the current performance of the specific real process system without taking any future development in the system into consideration.The method does not compare the specified process with its theoretical process for available advancement and relative progression in the system.This method has been applied to a gas turbine co-generating system , in a combined cycle power plant , fluidized bed boiler and geothermal power plant .The unavoidable exergy destruction rate is determined by selecting the most important thermodynamic parameters of the studied component to give its maximum achievable efficiency .Though this method compares the real process with an advanced process, their efficiency improvement is limited to technological constraints.Moreover, efficiency limited by technology is not predictable and may change over time for a given process as a result of subjective decisions .Hence, a modified exergy-base improvement evaluation method is introduced.In this study, the improvement potential of a process is determined by comparing the exergy destructions of the real process with the equivalent theoretical process.The theoretical process is defined as the conditions when the thermodynamic limits and maximum performance of the real process have been reached.The maximum performance is achieved by optimizing the entire system and using the 
parameters of each component of the process plant that give its maximum efficiency.No considerations are taken regarding cost and material properties in the theoretical process.Although it is not anticipated that technological enhancements will reach their theoretical limits, the latter do, however, provide information of the progress that is possible and the improvements that are needed in the former.The conventional exergy method is used to investigate the exergy destructions in the components and the entire system, while the improvement potential introduced in this study is applied to assess the possible future enhancement of the components when compared with their theoretical processes.The process exergy efficiency, given in Equation, is normally used to compare the useful output with the required input of a particular system, even though it does not provide a benchmark for process improvement.The improvement potential is therefore introduced here: it compares the exergy destructions of the real process with that of the theoretical process, thereby providing a more realistic description of the changes that are possible with respect to the constraints of the conversion pathway selected.Furthermore, the theoretical limit of maximum efficiency is not subjected to change over time for a given process, unlike the limits of technology efficiency evaluated by previous researchers.Two variations of a process plant are used in this work: the case and the theoretical study.The case study process is based on the design parameters of a heat and power plant fired by solid waste that is currently under construction.The plant has a fuel energy input of 100 MWth.The waste fuel used in the process has a lower heating value of 11.6 MJ/kg as received and a moisture content of 33.1 wt-% ; a chemical analysis of the fuel on a dry basis is presented as;;;;; and .A flow diagram of the process in the plant used in the current study can be seen in Fig. 1.There are two air heaters, a boiler with a combustion section and a heat exchanger section, a turbine, a condenser, a condensate pump, a feed-water pump, a deaerator and a feed-water heater.A flue gas recirculation process is employed in the plant to reduce the temperature of the combustion chamber, with the gas being discharged later through the chimney stack.The isentropic efficiency of both the turbine and the pump were selected from the typical range of 70–90% and 75–85%, respectively, for the real process plant .The theoretical process, in contrast to the real case, is not limited by technological conditions such as physical and economical constraints.It gives the highest efficiency of the process; even though its efficiency cannot be achieved in practice it does, however, provide a benchmark or target for the design of the process .Here, the greatest improvement in the plant is achieved by optimizing the entire system, using the parameters of the component that give the greatest efficiency.In the boiler combustor, the temperature is taken as being the adiabatic flame temperature of the waste fuel, i.e. 
1677 °C, operating under stoichiometric air conditions.In the boiler heat exchanger and other heat exchangers of the plant, a pressure drop of zero and a minimum temperature difference of 0.1 °C is assumed.An isentropic efficiency of 100% is assumed for the pumps and the steam turbine.The assumptions in the theoretical process are based on the minimum exergy destruction of the component .Optimization of the system is performed using an Aspen Plus software simulator and by considering each of two variables: extraction pressures and steam pressure.Here, one variable is adjusted while the other is kept constant until the maximum value is achieved.This procedure is repeated for each variable and iterated until the overall maximum efficiency of the plant is reached.The flue gas recirculation process was not considered, however, as it reduces the maximum temperature in the combustion zone and thus increases the destruction of exergy.Modelling and simulation of the case study and theoretical processes of the heat and power plant fired by solid waste were performed with Aspen Plus.The Peng-Robinson property model was chosen for the estimation of the flue gas because it contains conventional components, namely N2, O2, H2O and CO2, at atmospheric pressure and high temperature regions; the IAPWS-95 property method was used to model the properties of water and steam .A method for improving the potential of a system has been developed so that the enhancement of the process can be evaluated efficiently.It has been applied to the energy conversion process of a municipal heat and power plant fired by solid waste whilst under construction.Here, the exergy destructions of the case study process plant is compared with the theoretical process.Table 1 shows the results of the evaluations made of improvements in performance conducted on the components of the case study process plant.The theoretical efficiency was achieved at the optimal values of 37, 2.3 and 1.32 bar for the first, second and third extraction pressures, respectively.The hypothetical component was introduced in order to convert the flue gases emitted from the stack into the environmental condition.The overall exergy efficiencies of the case study and theoretical processes are 25% and 56%, respectively.The exergy efficiencies of the processes determined from Equation and presented in Table 1 were found by using the conventional method of comparing the available exergy with the input exergy.The method used for the system analysis of the case study process shows that significant improvement should focus primarily on the boiler, followed by the steam turbine because of the high destruction of exergy in these components.The boiler has been identified as the component with the largest exergy destruction, which is due to irreversible combustion reactions: this is in agreement with previous exergy efficiency evaluations of a thermal power plant .Although the conventional exergy method identifies the components and processes with the highest exergy destruction it does not, however, account for the relative efficiency that determines the maximum possible improvement of a particular component in the system.The exergy efficiency of the case study process was therefore compared with the theoretical process in Equations and for efficient performance evaluation of the energy conversion processes.Table 1 shows that the boiler is the component with the highest improvement potential.The method for improving potential that is developed in this work substantiates 
the fact that this component should be targeted in the quest to improve the overall performance of the system, as also indicated by the conventional exergy method. In addition, the present study investigated the improvement that may be possible by determining the maximum efficiency of the components. For example, in the case study process plant investigated, the efficiency of the boiler will never exceed 62% due to constraints in the combustion of fuel. This indicates that even though this component has the largest exergy destruction, 66 MW, an improvement of 64% can theoretically be achieved. In the overall process plant, on the other hand, 53% of the total exergy destruction can be improved in the boiler. The improvement potential relative to the total exergy destruction in the case study process plant, calculated using the method developed in this study as well as the van Gool and the Tsatsaronis and Park methods, identified the boiler as the component with the highest improvement potential in all cases, at 53%, 53% and 36%, respectively. Furthermore, the three methods agree that over 80% of the total improvement potential lies in the boiler. Although the improvement potential calculated for the boiler using this method is similar to that of the van Gool method, the van Gool method does not identify the maximum theoretical conditions that limit the process efficiency. The Tsatsaronis and Park method showed a lower improvement potential than the current method as a result of the technological constraints their method employs. Technological limitations are subject to change over time for a given process, whereas the method developed here is based on theoretical limits that are fixed for a given process. The boiler was identified as having the highest improvement potential in the base case plant, as shown in Table 1. Different modification methods were therefore applied to this component for efficiency enhancement, i.e. changing the bed material, converting the waste boiler into a gas boiler, and using Inconel, a corrosion-resistant material, in the boiler walls. In addition, flue gas condensation and changing the air heating medium to reduce the stack temperature were also considered in the quest to better utilize the exergy in the stack. The efficiency enhancement of the WTE plant producing only electricity was also examined and compared with that of the combined heat and power plant. The design parameters of the base plant and the variables in the different modifications made are shown in Table 2; in both cases, waste was used as the fuel with the same energy input of 100 MW. Modification 1 involves reducing the excess air from 39% to 11% and assumes that the bed material in the combustion chamber is changed to ilmenite, an oxygen-carrying metal oxide. The bed material has the ability to absorb, release and distribute oxygen uniformly in the boiler furnace. Less air is therefore required to reduce the amounts of carbon monoxide and unreacted hydrocarbons. The use of this material has been investigated and applied to a CFB boiler/gasifier reactor at Chalmers University of Technology, Gothenburg, by Thunman et al.
Modification 2 integrates flue gas condensation. Here, the flue gas in the stack is cooled from 160 °C to 110 °C and a portion of the energy in the flue gas outside the system boundary is recovered in a heat exchanger for use as district heating. Flue gas at 160 °C is first condensed below the dew point temperature of about 50 °C for the separation of water vapour. It is then reheated to 110 °C before being discharged via the stack. A flow diagram of this modification is shown in Fig. 3. In Modification 3, the temperature and pressure of the steam in the case study process are increased from 420 °C to 440 °C and from 50 bar to 130 bar, respectively. In addition, an intermediate reheater is integrated into the system. It reheats the wet steam after the first turbine extraction from 180 °C to 320 °C. The high steam parameters are those used in the waste-to-energy plant of Afval Energie Bedrijf, Amsterdam. Here, the furnace membrane walls are protected by Inconel, a corrosion-resistant material for use in high-temperature applications. Modification 4 is a combination of Modifications 1, 2 and 3. Modification 5, as shown in Fig. 5, integrates waste gasification with a gas boiler and is used in the waste gasification plant in Lahti, Finland. The waste is gasified at about 900 °C and the gas is then cooled to 400 °C before being subjected to the gas cleaning process. The energy from this waste heat is used for evaporating part of the water from the economizer. The product gas is combusted in a gas boiler operating with a steam temperature and pressure of 540 °C and 121 bar, respectively. Whilst Modification 6 has the same structure and operating variables as Modification 5, it also incorporates a flue gas condensation process. In Modification 7, the two air heaters in the base plant were removed and replaced by a high-pressure feedwater heater and a new air heater. Here, the temperature of the flue gas in the stack was decreased from 160 °C to 130 °C. The air heater, which was integrated into the system after the economizer, was heated by flue gas. It should be noted that both the base plant and Modifications 1–6 use steam to preheat the air entering the combustion chamber. Table 3 presents the generation of electricity and the production of district heating, together with the energy and exergy efficiencies, for the case study process plant and its modifications. The results show that reducing the excess air increases the exergy and energy efficiencies by 0.9% and 1.6%, respectively, for the overall plant and by 0.4% and 1.4%, respectively, in the boiler, when compared with the case study process. This is due to a decrease in the flue gas loss and to less steam being extracted from the turbine to preheat the incoming air. As a result, more heat is transferred to the water/steam in the boiler heat exchanger sections, which increases the production of both electricity and district heat. The introduction of flue gas condensation in Modification 2 decreases the exergy loss from the flue gas to the surroundings from 2.1 MW to 1.5 MW. Here, 30% of the exergy content in the flue gas was recovered and used for district heat. This includes both the actual heat of condensation and the net decrease in the temperature of the flue gas in the stack, which was cooled from 160 °C to 50 °C and reheated after condensation to 110 °C. The greatest amount of district heating is produced here, yielding an increase of 4.3% and 11.4% in overall exergy and energy efficiencies, respectively. Although this modification did not have any effect on the
production of electricity, the electricity demand of the plant may, however, increase due to the large pressure drop witnessed during the condensation process.Modifications 3–6 have the highest electrical generation and exergy efficiencies because of the high steam temperatures and pressures used in their respective processes.Modifications 3 and 5 showed the lowest production of district heat and Modifications 2, 4 and 6 showed the highest, which was due to the integrated flue gas condensation process.In addition, the greatest reduction of exergy loss in the flue gas, of about 38%, was noted in Modification 4: this was a result of combining flue gas condensation and reducing the amount of excess air.Modification 7 enhances the production of district heating without integrating condensation of the flue gas.It also helps to reduce the loss of flue gas in the stack by decreasing the temperature from 160 °C to 130 °C: the temperature must be sufficiently high to avoid low-temperature corrosion.Furthermore, although Table 3 shows that Modification 3 does not change the energy efficiency of the boiler and the overall process, exergy efficiency increments of 8% and 9% were nevertheless observed in the respective processes.This confirms that the energy method is not a reliable tool for evaluating a system.The improvement in efficiency for the electricity production only is shown in Table 4.Here, the flue gas condensation process is not considered as this does not increase the production of power.In order to accomplish this process, the condensing pressure after the steam turbine was reduced from 1 bar to 0.08 bar.Comparison of the base case plant and the different improvement modifications shows that Modifications 3 and 5, with the highest steam temperatures and pressures, have not only the highest production of electricity but also the greatest energy and exergy efficiencies.Different methods of improving the efficiency of a heat and power plant fired by solid waste have been investigated and evaluated.They are based on the component with the highest improvement potential, which compares the exergy destructions of the plant with its theoretical process in order to identify the parts in which improvements may be made, as well as their significance.The analysis made in this study identifies the maximum limits for improving the efficiency of the system.It was found that 64% of the total exergy destruction in the case study process can be improved.The boiler was identified as being the component with the greatest potential for making improvements to the plant, with a theoretical efficiency of 62%.Constraints in the combustion process, however, mean that 53% of the improvement possible in the overall process plant can be achieved theoretically in this component.Based on the component with the highest potential for improvement, the different methods that were investigated showed Modifications 2, 4 and 6, involving flue gas condensation to be the best options for enhancing the efficiency of the district heating process in a combined heat and power plant.Modification 7, which involves changing of air heating medium from steam to flue gas is the best method for the production of heat without flue gas condensation.Modifications 3 and 5 with reheating process and waste gasification were found to be the best for the production of electricity only, with exergy efficiency of 26% and 28%, respectively.The authors declare no conflict of interest.
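As a concrete illustration of the improvement-potential accounting used above, the short sketch below computes per-component improvement potentials as the gap between the real and theoretical exergy destructions and expresses them as shares of the total destruction. It is written in Python for illustration only; the function names and all numerical values are assumptions (the boiler figure is loosely inspired by the 66 MW quoted above), not data taken from the paper's tables.

```python
# Minimal sketch of the improvement-potential bookkeeping described above.
# All numbers are illustrative placeholders, not values from the paper's tables.

def exergy_efficiency(ex_out_useful, ex_in):
    """Conventional exergy efficiency: useful exergy output over exergy input."""
    return ex_out_useful / ex_in

def improvement_potential(exd_real, exd_theoretical):
    """Improvement potential as the gap between the real process's exergy
    destruction and that of the equivalent theoretical process."""
    return exd_real - exd_theoretical

# Assumed per-component exergy destructions in MW (real vs. theoretical case).
components = {
    "boiler":        {"real": 66.0, "theoretical": 24.0},
    "steam turbine": {"real": 6.0,  "theoretical": 0.0},
    "condenser":     {"real": 3.0,  "theoretical": 1.0},
}

total_exd_real = sum(c["real"] for c in components.values())
total_ip = sum(improvement_potential(c["real"], c["theoretical"])
               for c in components.values())

for name, c in components.items():
    ip = improvement_potential(c["real"], c["theoretical"])
    print(f"{name:13s}  ExD={c['real']:5.1f} MW  "
          f"IP={ip:5.1f} MW  ({100 * ip / total_exd_real:4.1f}% of total ExD)")

print(f"Share of total exergy destruction that is theoretically avoidable: "
      f"{100 * total_ip / total_exd_real:.0f}%")
```

With real per-component exergy destructions taken from a process simulation (for example from an Aspen Plus model, as used in the study), the same bookkeeping reproduces the kind of component ranking reported in Table 1.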
Evaluation of different alternatives for enhancement of a waste combustion process enables adequate decisions to be made for improving its efficiency. Exergy analysis has been shown to be an effective tool in assessing the overall efficiency of a system. However, the conventional exergy method does not provide information about the improvements possible in a real process. The purpose of this paper is to evaluate state-of-the-art techniques applied in a municipal solid-waste fired heat and power plant. The base case plant is evaluated first; the results are then used to decide which technical modifications should be introduced, and these are thereafter evaluated. A modified exergy-based method is used to determine the improvement potential of both the individual components and the overall base case plant. The results indicate that 64% of the exergy destruction in the overall process can theoretically be improved. The various modifications selected involve changing the bed material, using a gasifier followed by a gas boiler, and incorporating a more durable material into the boiler walls. In addition, changing the heating medium of the incoming air (from steam to flue gas) along with a reduction in the stack temperature and the integration of flue gas condensation were considered for utilizing the exergy in the flue gases. The modification involving a gasifier, a gas boiler, and flue gas condensation proved to be the best option, with the highest exergy efficiency increment of 21%.
Improved predictions from measured disturbances in linear model predictive control
The very foundation of model predictive control is to predict the future behavior of a system based on a model .In order to improve the control performance, feedforward from measured disturbances may also be included in this prediction model.This requires that the prediction model, in addition to the dynamics from the control input to the output, also includes the dynamics from the measured disturbance to the output.Predictions of the output response from measured disturbances may then be made in the same manner as with the inputs .However, there are two fundamental differences between control inputs and measured disturbances in the MPC framework:While future control inputs are decision variables in the MPC formulation, and thus are known, future measured disturbances are unknown to the controller.Control inputs typically change only once per sampling interval, while disturbances are typically sampled from variables that change continuously between samples.To cope with the first difference, common practice is simply to assume that future disturbances will remain constant at the last available measurement, though other assumptions may be more appropriate if a better knowledge of the disturbance dynamics is available .A practical example of the latter can be found in , where a measured disturbance was extrapolated into the future using an autoregressive model.In , it is described how a model of disturbance dynamics, if available, may be included in the MPC predictions in a state-space formulation.The second difference is usually ignored both in practical implementations of MPC and in the theoretical literature.This paper addresses implementation aspects and performance improvements from feedforward of measured disturbances with focus on this difference.We have not found any previous literature addressing this issue.MPC arose from the industrial applications IDCOM and DMC , where the prediction model is based on finite-impulse or step-response models.MPC with state-space models soon dominated academic research .In , it is even stated that “There is really no good reason for working with step response models.,However, step-response models have remained popular in industrial applications, the main reasons being that step-response models are intuitive, easy to maintain, and allows for easy and straight-forward system identification .It should be noted, though, that there are standard algorithms that translate step-response models into state-space form.The results in this paper do not rely on which model representation is used, and both state-space and step-response model representations are considered in this paper.To achieve offset-free tracking of setpoints in MPC, and counteract the effect of various uncertainties, a feedback mechanism must be included in the prediction model.The most widely-used industrial implementations of MPC use a constant output step disturbance model to achieve offset-free tracking .The current measured output is then compared to the output of the prediction model, and the error is added to the future predictions.For state-space MPC formulations, there are many alternative methods for offset-free tracking, see e.g. 
, and no particular method seems to have become “standard practice”.The purpose in any case is to estimate and counteract the effect of uncertainties in the system, such as plant-model mismatch and unmeasured disturbances .The method proposed in for systems with full state measurements is implemented for state-space systems in this paper.The paper is organized as follows: First, in Section 2, the main issue considered in this paper is discussed, and a method to address this issue is proposed.In Section 3, a typical MPC formulation for a SISO system with a measured disturbance is given, considering both state-space and step-response prediction model formulations.The implementation of both the conventional and the proposed method is described in Section 4, and some of the conceptual differences between the two methods are discussed analytically.How the prediction of future disturbances affects the control performance differently for the two methods is also discussed here.In Section 5, closed-loop simulations of a SISO example system are presented comparing the method proposed in this paper to the conventional implementation method.Both the disturbance dynamics, prediction of the measured disturbance, measurement noise, and tuning of the controller is considered in this example.In Section 6, the two methods are implemented in simulations of a realistic industrial MIMO example; a petroleum production well with an electric submersible pump installed.Finally, the main conclusions and results in this paper are summarized and discussed in Section 7.The standard approach for MPC implementations based on state-space models is to discretize the system model using zero-order hold.It is then assumed that all control inputs are piecewise constant, and only change at the exact time of the samples.As discussed e.g. in , by using ZOH, an exact discretization of the continuous-time system is obtained, implying that the dynamics of the discrete-time system will coincide perfectly with the continuous-time system at the sampled points in time, given that the input signals are in fact applied using ZOH.In MPC, the applied control input is calculated once every sampling interval, and kept constant between samples, and is thus in fact implemented using ZOH.Using ZOH in the discretization will thus provide a near perfect match between the discretized and the continuous-time system models.Also for a measured disturbance, the discretization is typically performed using ZOH, basically treating the measured disturbance just like another control input to the system.However, while ZOH is very suitable for a control input, an assumption that also a measured disturbance remains constant between samples is usually inaccurate, as measured disturbances typically are sampled from variables that change continuously.If ZOH is applied to a measured disturbance when discretizing the model, a sampled continuous disturbance signal will consequently be interpreted as a piecewise constant signal by the resulting prediction model, as illustrated in the top plot in Fig. 
1.As seen in this illustration, the “ZOH signal” dZOH does not match the continuous-time signal d very accurately, but suffers from what could be considered a “time delay”, as the ZOH signal ignores any change in the continuous signal between samples, and is only updated when a new sample is taken.The consequence of introducing this time delay in the prediction model is that the effect of the measured disturbance is in fact predicted to occur later in time.This reduces the accuracy of the prediction model, and the dynamics of the discrete-time model do not match those of the continuous-time model, even if the exact same disturbance is applied to both systems.This is demonstrated later, in the illustrative example in Section 2.4.As discretizing using ZOH provides a poor match for a continuous signal, one might instead consider discretizing the model using first-order hold for the measured disturbance.This is equivalent to assuming that the measured disturbance changes linearly between samples, as illustrated in the second plot in Fig. 1.According to , “The First-Order Hold method provides an exact match between the continuous- and discrete-time systems in the time domain for piecewise linear inputs” and “is generally more accurate than ZOH for systems driven by smooth inputs.,This is quite clear from the illustrations in Fig. 1.This implies that discretizing using FOH would generally provide a more accurate discrete-time model than with ZOH, which is also demonstrated later, in the example in Section 2.4.First, most MPC formulations, both in practical implementations and in the literature, assume that there is no direct feedthrough from the control input or measured disturbance to the output, so that the prediction model is given in the form.Using a model discretized using FOH may thus in some cases make it difficult to implement known methods from the literature, such as methods for offset-free control.Even if a method can be reformulated to be used with a prediction model in the form, verifying the correctness of the new formulation may not always be straightforward.Second, the output function takes a different form with FOH than with ZOH, implying that the state vector is also different, and no longer holds.Since ZOH should be used for the control input, using FOH for the measured disturbance entails that the system must be discretized separately for the control input and the disturbance, and the superposition theorem must be used to combine the resulting sub-systems to obtain the complete discrete-time prediction model.This results in an augmented state vector, with twice the number of states as in the original system, with all the implications this has for the complexity of the resulting MPC optimization problem.There may exist a minimal realization of the combined system with the same number of states as the original system in some cases, though it has not been investigated in this study whether it does so in general.In conclusion, though FOH is far more accurate than ZOH for continuous disturbances, changing the discretization method for the disturbance may in practice prove to be more complicated than what can be justified from the possible advantage of addressing this issue in the first place.Thus, a much simpler approach that approximates FOH discretization is proposed in this paper, as discussed in the following section.Method B may thus increase the accuracy of the predictions in the MPC controller, and thus the control performance, with a minimal effort.This produces the 
following output from Matlab:Note how there is no direct feedthrough from the input to the output of the continuous-time system in this example, and how this is still the case for the system discretized using ZOH, but not for the system discretized using FOH.The discrete-time systems can be simulated in Matlab, using both method A and method B for the ZOH-system, as outlined below:The systems are now simulated with the continuous disturbance d = sin, and the discrete-time disturbance dk obtained by sampling the disturbance d with the same sampling interval T = 1 that was used in the discretization.The result of these simulations are shown in Fig. 2.The detailed implementation of method B for an MPC formulation is presented in Section 4, but a few observations should be made at this point in the discussion.From a continuous-time perspective, it is obvious that the system output y at time t =T will depend on the disturbance d in the time interval kT ≤ t ≤T.The fact that yk+1 with method B depends on dk+1 is thus only an indication that the goal to obtain a better match between the continuous- and discrete-time systems is achieved.As discussed later in Section 4 and demonstrated in the SISO example in Section 5, this dependency implies that method B will be able to take full advantage of any knowledge about the disturbance dynamics which can be used to predict/extrapolate the measured disturbance more accurately.On the other hand, the fact that yk+1 with method A does not depend on dk+1 implies that any such knowledge can only be exploited for the prediction of yk+2 onwards, which is in fact a major limitation for the conventional method A, as discussed in Section 4.4.The recursive prediction model can easily be extended to the MIMO case using the superposition principle and block matrices/vectors.The general idea behind method B is independent of how the measured disturbance is predicted.In Section 4.1, a general expression for ΔDk is first derived for the conventional method A, and then the corresponding expression for the proposed method B is derived in Section 4.2.The resulting general expressions for ΔDk for each method are applicable to any given prediction of the measured disturbance on the form.Implementing the methods for specific disturbance predictions is discussed in Section 4.3.The most common approach to predict the future measured disturbance is simply to assume that the measured disturbance will remain constant at the last measurement .This is referred to as the “constant disturbance assumption” in the sequel.Though this is the most commonly implemented prediction, it is often quite inaccurate, but this naturally depends on the disturbance dynamics.The constant disturbance assumption might be a suitable assumption if the measured disturbance is quite random and difficult to predict.An example of this is demonstrated in the SISO example in Section 5.The constant disturbance assumption may be suitable for disturbances that change fast and/or randomly, but by design, the dynamics of the measured disturbance are often quite slow compared to the sampling frequency of the controller.For a disturbance that varies slowly and smoothly, the change in the measured disturbance is often quite similar for consecutive samples.Consider for example the sine plotted in Fig. 
2.It is quite clear that a linear extrapolation of this signal is a much better prediction than the constant disturbance assumption, at least for a few steps.It may thus be more accurate to assume that the measured disturbance steps Δdk rather than the measured disturbance dk will remain constant in the future.This assumption is denoted the “linear change assumption” in the sequel.Although the linear change assumption might be suitable a few time steps into the prediction horizon, it is only bounded by the length of the prediction horizon, and might be quite inaccurate after a few steps, but this depends on the disturbance dynamics, the sampling frequency, and the length of the prediction horizon.In this section, one of the most fundamental differences between the considered methods is discussed.First consider the prediction model in the MPC formulation.As the MPC control action is calculated once every sampling interval and only the first input of the predicted input sequence is actually implemented, and since all predictions further into the prediction horizon through the prediction model rely on the first predicted output yk+1, a prediction error in the first step has a much larger negative effect on the control performance than a prediction error later in the prediction horizon.Looking at the last term of, it is clear that only the first element of ΔDk is used to predict yk+1.The accuracy of the first element of ΔDk may thus have a major impact on the control performance, and one of the main differences between method A and method B is in fact the first element of ΔDk.On the one hand, this is a major limitation of method A, as it is inherently restricted to the linear change assumption for the first time step of the prediction horizon, even if a more precise prediction/extrapolation of the measured disturbance is available.As this is not the case for method B, which relies on the disturbance prediction also for the first time step, this implies that method B has a much bigger potential to exploit information about the future measured disturbance to obtain a more accurate prediction of yk+1, and thus improve the control performance.This is demonstrated quite clearly in the SISO example, in Section 5.2.3.But on the other hand, the most commonly implemented disturbance assumption is the constant disturbance assumption, even though this assumption is often quite inaccurate.In most practical implementations, the sampling rate is intentionally chosen very fast compared to the disturbance dynamics, in which case, as discussed in 4.3.2, the linear change assumption is often a much better assumption than the constant disturbance assumption, at least for the first few time steps in the prediction horizon.The fact that method A with the constant disturbance assumption actually implements the linear change assumption for the quite important first time step of the prediction horizon, however unintentional, might in practice improve the control performance quite significantly compared to implementing the constant disturbance assumption with the more precise method B, where also the first time step is based on the explicitly stated assumption.This is also demonstrated in the SISO example, in Section 5.2.1.These results combined imply that, if the prediction of the measured disturbance can be chosen freely, it is always possible to obtain a performance with method B that is equal to or better than the best performance achievable with method A.While method B has a greater theoretical potential than 
method A, a more interesting question is which of the methods that performs better in practice, i.e. when the disturbance prediction is based on practical considerations such as known disturbance dynamics, and not simply considered a degree of freedom when deriving the controller.As the prediction model with method B is more accurate for continuous disturbances, it may be natural to assume that method B also will provide a better control performance than method A when a continuous disturbance is applied.However, there are a number of factors that affect the control performance other than the accuracy of the prediction model formulation, including:The disturbance dynamics and the accuracy of the predicted future disturbance,The feedback mechanism in the MPC formulation,The controller configuration and tuning,Uncertainties,Given all these factors, and especially the many possible approaches to derive a prediction/extrapolation of the future measured disturbance, one cannot conclude that method B will perform better than method A in general, but it should be clear from the discussion so far that method B is fundamentally more precise than method A, and has a greater potential.This is sought to be demonstrated through the examples in the following sections.Note that the simulations in this section are not intended to illustrate realistic control problems, but to clearly demonstrate the fundamental characteristics of the two considered methods and the theoretical results from Section 4.The results presented in this example are based on a state-space prediction model, as described in Section 3.2.1.Simulations with a step-response formulation show very similar results, and as the main findings are the same with both representations, the results with step-response models are omitted from this presentation.In the initial simulations in this example, the MPC controller is tuned as an unconstrained dead-beat controller, i.e. 
there is only a weight on the output and no weight on the input moves), there are no constraints on the input or the output– are omitted), and the prediction horizon is just one step, as the controller will always predict a zero output error after the first step.This is not meant to imitate a practical or realistic controller tuning.A dead-beat tuning is a very aggressive tuning, and usually a less aggressive tuning is required in practical implementations due to uncertainties and other considerations in the system.However, in this simple example, with a perfect system model, and no unmeasured disturbances, a dead-beat tuning is quite reasonable.Also, with this tuning, the system output will be equal to the prediction error of the MPC controller, which is very convenient when discussing the results and comparing the considered methods.This simulation shows that, even though method B is based on a more accurate prediction model, the conventional method A actually provides a better control performance for this scenario.However, it is quite obvious that the constant disturbance assumption is quite inaccurate for the smooth sine disturbance, and that the linear change assumption discussed in Section 4.3.2 would be a much better choice.As discussed in Section 4.4, method A actually ignores the explicitly stated disturbance assumption for the first time step of the prediction horizon, and instead implicitly implements the linear change assumption.As the linear change assumption is more accurate than the constant disturbance assumption, the result is that the predictions with method A are more accurate than with method B, even though the underlying prediction model is less accurate.That the prediction model is more accurate with method B is quite clear in the bottom plot of Fig. 3a, where it is shown that method A operates with a much larger calculated state disturbance dx than method B.Taking a closer look at the simulation results in Fig. 3a, keeping in mind that the output is equal to the prediction error with the dead-beat tuning implemented, it can be seen quite clearly that the output with method B is directly proportional to the disturbance steps Δd.Method B provides very good predictions when the disturbance is nearly constant, e.g. at samples 7 and 14, and poor predictions when the disturbance is changing rapidly, e.g. around sample 11.This is exactly the behavior one would expect from implementing the constant disturbance assumption.On the other hand, the prediction error with method A is directly proportional to the change in the disturbance steps, and the predictions are most accurate when the disturbance is changing steadily in one direction, e.g. 
at samples 5, 11 and 17, which is exactly the behavior one would expect from implementing the linear change assumption.This confirms that only method B is actually true to the explicitly stated constant disturbance assumption, while method A instead implicitly implements the linear change assumption.It should also be noted that the state disturbance with method A is in fact nearly identical to the prediction error with method B.This shows that with method A, the mismatch between the explicitly stated disturbance assumption and the implicit linear change assumption is interpreted as a plant-model mismatch by the controller.On the other hand, even though the prediction error with method B in this scenario is larger than with method A, the state disturbance with method B is nearly negligible.This shows that the prediction error with method B is correctly interpreted as a mismatch between the predicted and the actual disturbance.Given the results in Section 4.3.2, it is to no surprise that simulation results with the linear change assumption are identical for the two methods, and the same as the results with method A in Fig. 3a.The simulations in Sections 5.2.1 and 5.2.3 are now repeated, but this time measurement noise in the form of white Gaussian noise is added both to the measured disturbance and the measured output.In these simulations, the noise on the measured disturbance has standard deviation σd = 0.1, while the noise on the output has standard deviation σy = 0.002.The Matlab function ar is again used to obtain an AR-model to extrapolate the measured disturbance, but this time the model is estimated based on noisy measurements from 200 samples, and a model order of 10 is chosen to provide better noise filtering.The results of the simulations with measurement noise are shown in Fig. 3c and d. Due to the noise introduced in the system, the results are not as easily compared in the figures, but some measures of performance are given in Table 1.The column “Control error” shows the mean square error on the output for each of the simulations, and the performance increase/decrease for method B relative to method A, in percent.The column “Aggressiveness” shows how actively the control input is used in each simulation, i.e. 
the mean absolute value of the input steps Δu.The results with the constant disturbance assumption show that the performance of the two methods are now quite comparable, with only a 4.6% difference.Comparing the control error with and without measurement noise, it is clear that method A is affected a lot more severely by the measurement noise than method B.This may be attributed to the implicit linear change assumption, as method A will always predict that a change in the measured disturbance will be repeated in the next time step, and thus overestimate the effect of any measured change that is simply due to measurement noise.The result is a too aggressive controller, with large steps on the input.As seen in the last column of Table 1, the input is in this case used 26% less actively with method B than method A, even though the control error is relatively similar.Further, the results show that when the measured disturbance is extrapolated using the identified AR model, method B again by far outperforms method A, with a 74% smaller MSE on the output, while method A again does not benefit at all from the more precise extrapolation.The aggressiveness of the controller with method B is also further reduced, in this case by 39.7% compared to method A.These results show that the inaccuracy of method A in combination with measurement noise may cause an overly aggressive control, and if the measured disturbance is predicted accurately, method B may thus outperform method A both with respect to control error and aggressiveness.To contrast the smooth and slowly varying smooth sine disturbance, the system is now simulated with a lot more random disturbance based on filtered white noise.Due to the highly random dynamics of this disturbance, the constant disturbance assumption is presumably a very suitable disturbance prediction, and is the only disturbance prediction considered in this scenario.Noise free simulation results with this disturbance are shown in Fig. 4a, while measurement noise is added in the simulations shown in Fig. 4b.The measures of performance are given in the last two rows of Table 1.As this disturbance changes rapidly and randomly, the implicit linear change assumption in method A does not work well in this scenario, and excessively large spikes are experienced on the output.Both with and without measurement noise, the control error is about halved with method B, and the aggressiveness of the controller is reduced by about 40%.These results confirm that method B outperforms method A when the disturbance assumption matches well with the real disturbance dynamics.So far, only a dead-beat tuning has been considered, i.e. zero move penalty and a one-step prediction horizon).The dead-beat tuning is generally considered too aggressive for practical implementations, but was convenient when comparing the two methods.The effect of tuning with the two considered methods is now investigated.For this purpose, the simulations in Sections 5.2 and 5.3 are repeated with a move penalty p ranging from 10−5 to 102, and a prediction horizon Hp = 10.Considering again the smooth sine disturbance from Section 5.2, the control error and the aggressiveness of the controller with a varying move penalty p are shown in Fig. 5.,The results in Fig. 
5a and c show that with the constant disturbance assumption, method A performs better than method B regardless of the move penalty.Without measurement noise, the aggressiveness is quite similar with the two methods, but when measurement noise is introduced, method A is clearly more aggressive than method B, at least for relatively small move penalties, in which case the performance of method A is also just slightly better than method B.The results in Fig. 5b and d show that while method A does not benefit from a more accurate extrapolation of the measured disturbance with a dead-beat tuning, it does benefit from this when p > 0, especially when measurement noise is added.However, the optimal performance of method B is still far better than the optimal performance of method A. On the other hand, method B only has a better control performance than method A when the move penalty is relatively small, and method A performs better for large p.But it should also be noted that when method A performs better, method A is also more aggressive than method B. Considering the discussion in Section 4.4, this is quite reasonable.Due to the implicit linear change assumption, method A always overestimates the effect of the measured disturbance, and thus makes more aggressive moves to counteract this.When p is restrictive enough to reduce the control performance of method B, the increased aggressiveness of method A counteracts some of the restrictiveness of the move penalty, which results in a reduced control error.Another quite interesting side effect of the increased aggressiveness with method A is seen in Fig. 5d in particular.Here the performance with method A is actually improved by increasing the move penalty up to a certain point, and the optimal performance is achieved with the penalty p = 0.0316.This indicates that method A is inherently too aggressive and thus very sensitive to measurement noise, and actually benefits from being restricted by a move penalty p > 0, even though this is quite counter-intuitive.Similar tendencies are also seen in Fig. 5b and c. On the other hand, the results show that method B consistently shows a better performance with a smaller move penalty, which is a lot more intuitive.One exception is in Fig. 5b, where also method B shows a better performance with p > 0.This is, however, on a lot smaller scale, with a control error less than 10−6, which is really negligible in a realistic scenario with measurement noise, plant-model mismatch, etc.Also in Fig. 5d, the optimal performance with method B is achieved with p > 0, but the performance difference compared to p = 0 is negligible.The control error and aggressiveness of the methods with optimal tuning are shown in Table 2.These results show that, while the dead-beat tuning is optimal with method B, method A benefits a lot from an increased move penalty when measurement noise is added.Interestingly, the aggressiveness of the methods is a lot more similar with the optimal tuning than with the dead-beat tuning shown in Table 1, which again indicates that method A is inherently too aggressive with a dead-beat tuning.However, even with an optimal tuning, when measurement noise is added and the disturbance is extrapolated using the AR model, the control error is still nearly halved with method B. 
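The AR-based extrapolation of the measured disturbance referred to in these simulations can be sketched compactly. The paper uses Matlab's ar function; the snippet below is only a rough Python/NumPy stand-in (a least-squares AR fit with an illustrative order of 10 and a 10-step horizon), and the helper names fit_ar and extrapolate are hypothetical, not the authors' implementation.

```python
import numpy as np

def fit_ar(d, order):
    """Least-squares fit of an AR(order) model d[k] = a1*d[k-1] + ... + ap*d[k-p]."""
    rows = [d[k - order:k][::-1] for k in range(order, len(d))]
    X = np.array(rows)
    y = d[order:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a

def extrapolate(d, a, horizon):
    """Extrapolate the measured disturbance 'horizon' steps ahead with the AR model."""
    hist = list(d[-len(a):])
    preds = []
    for _ in range(horizon):
        nxt = float(np.dot(a, hist[::-1][:len(a)]))  # most recent sample first
        preds.append(nxt)
        hist.append(nxt)
    return np.array(preds)

# Illustrative use: a noisy, slowly varying measured disturbance.
rng = np.random.default_rng(0)
t = np.arange(200)
d = np.sin(0.1 * t) + 0.1 * rng.standard_normal(t.size)

a = fit_ar(d, order=10)            # model order chosen as in the example above
d_future = extrapolate(d, a, horizon=10)

# Method B can feed d_future into the very first predicted output step,
# whereas method A only benefits from it from the second step onwards.
print(d_future)
```

The design point is simply that any such extrapolation, however it is obtained, enters the prediction of yk+1 only under method B, which is why the more precise disturbance model pays off for method B but not for method A.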
On the other hand, method B performs 17.8% worse than method A when the much less accurate constant disturbance assumption is implemented.Once again, this confirms that method B outperforms method A when an accurate prediction of the measured disturbance is implemented, while method A may benefit from the implicit linear change assumption if the explicitly stated disturbance prediction is inaccurate.With the random disturbance from Section 5.3, the control error and the aggressiveness of the controller with a varying move penalty are shown in Fig. 6, and the measures of performance are given in the last two rows of Table 2.The main findings in this scenario are as follows: (1) method A is always more aggressive than method B if the same move penalty is used; (2) the optimal performance with method A is achieved with p > 0, while with method B the optimal performance is achieved with p close to 0; (3) the optimal performance with method B is better than the optimal performance with method A; and (4) the tuning of the controller is again a lot more intuitive with method B.These results are the same both with and without measurement noise.Again, it may be concluded that method B performs better than method A when the disturbance assumption matches well with reality, and also results in a more intuitive tuning of the controller.It should be noted, though, that in this case, method A with optimal tuning is actually less aggressive than method B with optimal tuning.But then again, for example in the simulations with measurement noise, with p = 1.633 · 10−3, the aggressiveness with method B is the same as the aggressiveness of method A with the optimal tuning, while the control error is 497 · 10−3, which is 9.7% better than the optimal performance with method A.The results without measurement noise are similar.This shows that when the aggressiveness is identical, method B still performs better than method A in this scenario.And since the performance of method B is even better with optimal tuning, one could simply say that method B is able to utilize more of the control potential than method A.In this section, the considered methods are compared in closed-loop simulations of a realistic industrial multivariable system.The system and the control problem were thoroughly presented and discussed in previous work, and only a brief summary is presented here.The system considered in this example is an oil production well with an electric submersible pump installed, as shown in Fig.
7.The ESP is installed inside the well to create artificial lift, in order to boost the production from the well, and improve recovery from the reservoir.There are many control challenges related to an ESP installation.Failure of an ESP installation has a huge economic impact, both due to production loss and the cost of replacing the pump.The main priority in this system is thus to maintain acceptable operating conditions for the ESP, to prevent failure or reduced life-time of the pump.There are many variables that affect the life-time of an ESP, but this example focuses on thrust forces acting on the pump shaft, and the power consumption of the pump motor, in addition to the production from the well.An outline of the control problem considered in this example is given below.More details regarding the system, modeling and associated control concerns may be found in and .The control inputs in the system are the pump frequency, denoted f, and the production choke valve opening, denoted z.The main control objective is to sustain a given production rate from the well, while maintaining acceptable operating conditions for the pump.As the inflow into the well is determined by the difference between the reservoir pressure and the bottomhole pressure inside the well, a desired production rate from the well may be sustained by keeping the bottomhole pressure at a desired setpoint.Under the assumption that a constant pressure at the inlet of the ESP will ensure a constant bottomhole pressure, this is achieved indirectly by keeping the ESP inlet pressure, denoted pin, at a desired setpoint.Finally, the power consumption of the pump must be limited with regard to the life-time of the ESP motor, and preferably minimized to reduce operation costs.This is achieved by limiting and minimizing the electric current through the ESP motor, denoted I, which is directly proportional to the power consumption.The main disturbance in this example is the manifold pressure pm, i.e. 
the pressure at the outlet of the well, which is a measured disturbance.This pressure may vary considerably due to other components in the production system, such as booster pumps, separators and other wells producing to the same manifold, especially when such components are started up or shut down.Measurement noise is added to the measured disturbance pm, while perfect measurements of the outputs are used in this example for an easier comparison of the results.Based on a model of the system derived in , a simulator for the considered system is implemented in Matlab.An MPC controller based on a step-response prediction model, as described in Section 3.2.2, is also implemented in Matlab.The step-response models are obtained from the simulator by applying steps in the inputs with the system at a steady state close to the desired operating point.The simulator model and the prediction model in this example are thus not the same.As shown in Section 5.4, the effect of tuning is quite different for the two methods.Specifically, due to the implicit linear change assumption in method A, the controller is usually more aggressive with method A than with method B.As reducing wear and tear on the installation is vital in this system, the aggressiveness of the controller is very important when evaluating the performance.To make the performance of the considered methods as comparable as possible, the controller is tuned differently for the two methods in this example, so that the level of aggressiveness is similar for the two methods.The difference in performance is then seen mainly on the outputs of the system, and the performance is thus more directly comparable.To achieve this, a tuning that provided a decent performance2 with respect to the control objectives defined above was first found for method A, and then the move penalties p on the control inputs were slightly reduced with method B to obtain a comparable aggressiveness.In addition, the weight on the output I was increased with method B to achieve a similar power consumption.The implemented control targets and tuning parameters for each method are given in Table 3.The sampling interval is set to T = 1 s, the prediction horizon is set to Hp = 10, and the measured disturbance is extrapolated using the constant disturbance assumption.Simulation results with the above controller configuration are shown in Fig. 8a.The top plot shows the measured disturbance, i.e. the manifold pressure pm.The real disturbance is plotted with a solid black line, and the measurements are plotted with red dots.In the remaining plots, method A is plotted with a solid blue line and method B with a dotted red line.The outputs are shown in the next three plots, i.e. the ESP inlet pressure pin, the relative pump flow q0 and the electric current of the ESP I.The outputs are plotted relative to their setpoints, so that the setpoint is at zero in the plots.The next two plots show the inputs, i.e. the pump frequency f and the choke opening z.The bottom plot shows the prediction error for the ESP inlet pressure.Some measures of performance are given in the column “Constant dist.assumption” in Table 4, which shows the results for each of the methods, and the difference in percent.The first row shows the mean square error for tracking of the setpoint for the ESP inlet pressure pin, the second row shows the MSE for the setpoint for the relative flow q0, the third row shows the average current I, the next two rows show the aggressiveness of the controller, i.e. 
the average step size of the inputs f and z, and the last row shows the MSE for the prediction error for the ESP inlet pressure pin.As seen in the table, the aggressiveness of the two methods as well as the power consumption are nearly identical with this tuning, but tracking of the inlet pressure setpoint is improved by 18.5% with method B compared to method A, though tracking of the relative pump flow is 1.7% less accurate.The performance of MPC is generally increased with a higher control update rate, but the update rate is often restricted by the computationally demanding MPC algorithm, which usually involves numerically solving a quadratic programming problem online.Depending on the application, the sampling rate of the measurements may also be a limitation.For example, instruments installed inside oil production wells often have a very slow sampling rate.If the measured disturbance is sampled more often than the control update rate, this could be exploited for better noise filtering and/or more accurate extrapolation of the measured disturbance.Considering the results in this paper, method B would probably benefit more from such extra information than method A.The results in this paper show that a system discretized using ZOH is quite inaccurate when a measured disturbances sampled from a continuous variable is applied as an input.This is nevertheless the standard method for MPC implementations based on state-space models, and the equivalent to step-response models.It was shown that discretizing the system using FOH would be more precise, but quite impractical in the MPC framework.Thus a much simpler approach that approximates FOH discretization, and is very easy to implement in the MPC framework, was proposed, denoted method B.It was shown that method B is much more precise for continuous inputs than the conventional method, denoted method A.In MPC, future values of the measured disturbance must also be predicted.This is often done simply by assuming that the measured disturbance remains constant in the future, denoted the “constant disturbance assumption” in this paper, though more accurate extrapolations may be more appropriate.It was shown through an analytical comparison of the methods that method B has a greater theoretical potential than method A, but simulation results show that the performance in practice depends on how accurately the measured disturbance is predicted.The analytical comparison also revealed that due to the inaccuracy of method A, in the first step of the prediction horizon, it does not actually implement the explicitly stated prediction of the measured disturbance, but instead implicitly predicts that a change in the measured disturbance always will be repeated, denoted the “implicit linear change assumption”.The proposed method B, on the other hand, implements the explicitly stated disturbance prediction correctly.It was shown in the SISO example that the implicit linear change assumption embedded in method A can in some cases actually be a benefit, if the implicit linear change assumption is a better match with the actual disturbance dynamics than the explicitly stated disturbance prediction.For example, the implicit linear change assumption is often much better than the commonly implemented constant disturbance assumption for smooth and slowly varying disturbances.But it was also shown that the linear change assumption may also be implemented with method B, if stated explicitly, and the performance of the two methods will then be identical, and further, 
that if the explicitly stated disturbance assumption is better than the linear change assumption, the more precise method B provides a much better control performance than the conventional method A, which benefits very little from a more accurate prediction.These results confirm that method B is conceptually more precise and has a greater potential than method A.The simulation results also indicate that due to the implicit linear change assumption in method A, method A will usually result in a controller that is more aggressive than method B with the same tuning, and also more sensitive to measurement noise.Due to this, method A relies on a more restrictive controller tuning than method B to achieve optimal performance, while the controller tuning with method B is a lot more intuitive.The performance of the two methods is thus not directly comparable with the same tuning.Very promising results for method B were shown also in the more realistic MIMO example.The control error for the main control objective was reduced significantly by implementing method B when the constant disturbance assumption was implemented, and even more when a more accurate extrapolation was considered.Extrapolating the measured disturbance does not increase the complexity of the MPC optimization problem that is solved online, and very little effort is often required to obtain a more accurate extrapolation of the measured disturbance than the constant disturbance assumption.But as shown in this paper, due to the fact that the conventional method A to some extent ignores the explicitly stated disturbance prediction, the effort to derive and implement a more accurate extrapolation is barely rewarded in a conventional MPC implementation.This might be one of the main reasons that the constant disturbance assumption has remained so popular, while a more accurate extrapolation of the measured disturbance is rarely implemented in practice, and very few examples of this exist in the literature.On the other hand, with method B, the MPC controller benefits a lot from a more accurate prediction of the measured disturbance, as it should.Implementing method B may thus enable significant improvements of the control performance through better disturbance predictions.Implementing the proposed method B only requires a minor modification in the prediction model, comparable to filtering the measured disturbance, and does not rely on which model representation is implemented in the prediction model.Both state-space models and step-response models were considered in this paper, with very similar results.Method B may thus improve the control performance significantly with a minimal effort.
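As an illustration of the kind of modification discussed above, the sketch below contrasts a conventional piecewise-constant (ZOH-style) disturbance sequence with an averaged, FOH-like sequence in which each prediction interval uses the mean of the disturbance values at its endpoints. This is an assumed reading of the "minor modification, comparable to filtering the measured disturbance", not the paper's exact formulation, and the function names and surrounding prediction model are placeholders.

```python
import numpy as np

def zoh_disturbance_sequence(d_measured, d_predicted):
    """Conventional handling (method A style): the measured value and the explicitly
    predicted future values are applied directly as piecewise-constant inputs."""
    return np.concatenate(([d_measured], np.asarray(d_predicted, dtype=float)))

def averaged_disturbance_sequence(d_previous, d_measured, d_predicted):
    """Assumed FOH-like approximation (method B style): each interval uses the average
    of the disturbance values at its start and end, which acts like a simple filter on
    the measured disturbance before it enters the prediction model."""
    d = np.concatenate(([d_previous, d_measured], np.asarray(d_predicted, dtype=float)))
    return 0.5 * (d[:-1] + d[1:])

# Example with the constant disturbance assumption over a 3-step horizon: the explicitly
# stated prediction simply repeats the last measurement.
d_prev, d_meas = 1.0, 1.2
d_pred = [d_meas] * 3
print(zoh_disturbance_sequence(d_meas, d_pred))               # [1.2 1.2 1.2 1.2]
print(averaged_disturbance_sequence(d_prev, d_meas, d_pred))  # [1.1 1.2 1.2 1.2]
```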
Measured disturbances are often included in model predictive control (MPC) formulations to obtain better predictions of the future behavior of the controlled system, and thus improve the control performance. In the prediction model, a measured disturbance is in many ways treated like a control input to the system. However, while control inputs change only once per sampling interval as new control inputs are calculated, measured disturbances are typically sampled from continuous variables. While this difference is usually neglected, it is shown in this paper that taking this difference into account may improve the control performance. This is demonstrated through two simulation studies, including a realistic multivariable control problem from the petroleum industry. The proposed method requires only a minor modification in the implementation of the prediction model, and may thus improve the control performance with a minimal effort.
284
Synthesis and tunable luminescent properties of Eu-doped Ca2NaSiO4F - Coexistence of the Eu2+ and Eu3+ centers
Phosphor-converted white light-emitting diodes have attracted much attention in recent years for their high efficiency, reasonable cost, long lifetime and environmental friendliness.As is well known, the pc-WLED fabricated by combining a blue LED chip with the yellow-emitting phosphor Y3Al5O12:Ce3+ has some important drawbacks.Consequently, w-LEDs fabricated with a near-UV LED chip and tri-color or two complementary-wavelength phosphors have been widely studied.In consideration of the merits and drawbacks in compatibility and cost for tri-color phosphors with different hosts, it is better to develop a single-component white-light phosphor for fabricating white LED devices.Generally, a single-component white-light phosphor can be obtained in two ways: co-doping two or more activators into the same host, or using different luminescence centers of the same ion in the host, for example, different Ce3+ emission centers in one host.This suggests that white light could also be realized with two activators of the same element in different valence states.Hence, we consider exploring a single-component white-light phosphor doped with Eu2+ ions with bluish-green emission and Eu3+ ions with red emission.Recently, some Eu2+/Eu3+ co-doped phosphors have been reported, such as in CaO, Sr2B5O9Cl, SrB4O7, Ca3Y2Si3O12, LiMgPO4, LiBaBO3, Sr1.5Ca0.5SiO4, and Ba2Lu2Cl.The detailed structure of Ca2NaSiO4F was first reported by Andac with an orthorhombic structure, and Krüger and Kahlenberg reported another, monoclinic structure.To the best of our knowledge, until now, very few phosphors with Ca2NaSiO4F as the host have been reported.Recently, You et al. reported the structure and photoluminescence properties of phosphors Ca2NaSiO4F:Re for wLEDs, and energy transfer mechanisms for Ce3+ → Tb3+ were studied systematically.In this work, we report the preparation and luminescent properties of Eu2+/Eu3+ co-doped phosphors Ca2NaSiO4F:Eu2+/Eu3+.White light emission can be realized in this case by adjusting the overall Eu concentration.It is believed that this phosphor Ca2NaSiO4F:Eu can act as a promising candidate for application in n-UV w-LEDs.The Ca2NaSiO4F:xEu phosphors were synthesized by a high-temperature solid-state reaction.The raw materials were CaCO3, SiO2, NaF, and Eu2O3.The raw materials were carefully weighed stoichiometrically and ground in an agate mortar.After mixing and thorough grinding, the mixtures were preheated at 600 °C for 3 h in a CO reducing atmosphere, then the temperature was increased to 950 °C, and kept at 950 °C for 4 h.The final products were cooled to room temperature by switching off the muffle furnace and ground again into white powder.The phase purity and structure of the final products were characterized by powder X-ray diffraction analysis using Cu Kα radiation on a PANalytical X’pert Powder X-ray Diffractometer at room temperature.The photoluminescence properties were measured on a HITACHI F7000 fluorescence spectrometer equipped with a 450 W Xenon lamp as the excitation source.The luminescence decay curves were measured on an FLS 920 steady-state spectrometer equipped with a fluorescence lifetime spectrometer, and a 150 W nF900 ns flash lamp was used as the flash-light source.All the measurements were performed at room temperature.The phase purities of the as-prepared samples were examined by X-ray diffraction at RT.Fig.
1 shows the XRD patterns of typical samples Ca2NaSiO4F:0.01Eu2+/Eu3+, Ca2NaSiO4F:0.06Eu2+/Eu3+, Ca2NaSiO4F:0.10Eu2+/Eu3+ and the standard data.The diffraction patterns of the samples agree well with the standard data for Ca2NaSiO4F.Hence, it can be concluded that the dopant Eu ions are completely incorporated into the host lattice by substituting for Ca2+ ions without making significant changes to the crystal structure.It has been reported that Eu3+ ions can be partially reduced into Eu2+ in air or in a weak reducing atmosphere.That is to say, Eu3+ and Eu2+ can coexist stably in a single host lattice.This offers a good way to design white-light-emitting phosphors with Eu3+ and Eu2+ ions in a single host lattice for solid-state lighting applications.The photoluminescence excitation and photoluminescence emission spectra of the Ca2NaSiO4F:0.01Eu phosphor are presented in Fig. 2.By monitoring the 520 nm emission, it can be seen that the excitation spectrum exhibits a broad band with a peak at around 356 nm, which corresponds to the 4f → 5d allowed transition of Eu2+.The emission spectrum under 356 nm excitation shows a broad band and some weak lines ranging from 570 to 700 nm.This observed broad-band emission is attributed to the 4f65d–4f7 transition of the Eu2+ ions, which dovetails with the work reported by You.Besides, three small narrow emission lines with peaks centered at ∼577, ∼614, and ∼700 nm exist in curve b, which correspond to the 5D0 → 7F0, 5D0 → 7F2 and 5D0 → 7F4 transitions of Eu3+.This indicates that the Eu3+ ions are not completely reduced into Eu2+.In order to prove the existence of Eu3+ ions in the Ca2NaSiO4F host, the 614 nm emission line is chosen as the monitoring wavelength to measure the excitation spectrum, as shown in curve c.A broad band with a maximum at ∼268 nm and several sharp lines can be seen in curve c.The broad band should be assigned to the charge transfer transition between the oxygen ligands and Eu3+.The sharp peaks in the range of 300−500 nm are attributed to the 4f6−4f6 intraconfiguration transitions of Eu3+ ions.Therefore, it can be confirmed that both Eu2+ and Eu3+ ions exist in the Ca2NaSiO4F host.Fig. 2 shows the emission under 268 nm excitation, which corresponds to the Eu3+ charge transfer band.The Eu3+ characteristic emissions can be observed clearly in the emission spectrum.It should be noted that the excitation spectrum shows no absorption at the 356 nm wavelength.This means that 356 nm light can hardly excite Eu3+ ions directly.So why can Eu3+ emissions be detected upon 356 nm excitation in Fig. 2b?We believe that energy transfer from Eu2+ to Eu3+ could be the only reason.However, we should also be aware that the Eu2+ excitation band cannot be detected by monitoring the Eu3+ 614 nm emission, as shown in Fig. 2c.So it is concluded that the energy transfer from Eu2+ to Eu3+ is by means of radiation and re-absorption.This is not surprising, because overlap between the emission spectrum of Eu2+ and the excitation spectrum of Eu3+ can be clearly seen at around 465 nm in this case.Luminescence spectra of samples Ca2NaSiO4F:xEu under 356 nm excitation are presented in Fig.
3.As mentioned above, the short-wavelength part of the spectra, a bluish-green broad band emission with a maximum at about 510 nm, is attributed to the 4f65d1 → 4f7 transition of Eu2+, while the series of sharp peaks located in the long-wavelength range is ascribed to the 5D0 → 7FJ transitions of Eu3+.Furthermore, the relative intensity of the Eu3+ versus Eu2+ luminescence varies with the overall Eu doping content.To observe directly the relative emission intensity variation, the intensities of Eu2+ and Eu3+ as a function of the overall Eu content are given in Fig. 4.With increasing overall Eu concentration, it can be seen that the relative emission intensities of the Eu2+ ions decrease systematically, while those of Eu3+ increase distinctly.There should be three reasons for this intensity variation: concentration quenching of the Eu2+ ions; the increasing difficulty of the Eu3+ → Eu2+ reduction with increasing Eu content; and energy transfer from Eu2+ to Eu3+.However, it should be pointed out that the reason why the reduction becomes more difficult is not clear from the present experiments, and further work is needed to clarify it.In general, if radiative energy transfer operates, the decay time of the sensitizer remains constant with increasing concentrations of the activator.Fig. 5 presents the decay curves of the Eu2+ emission in Ca2NaSiO4F:xEu upon excitation at 356 nm.We find that the two decay curves for the samples with different Eu concentrations overlap each other, with a similar decay time of about 235 ns, which further indicates that the mechanism of Eu2+ → Eu3+ energy transfer is radiation and re-absorption rather than resonant non-radiative energy transfer.The CIE chromaticity coordinates and the CIE chromaticity diagram for the Ca2NaSiO4F:xEu phosphors upon excitation at 356 nm were calculated from the emission spectra, and are shown in Table 1 and Fig. 6, respectively.It appears that the emission color can be tuned by controlling the overall Eu concentration.As the x value increases from 0.01 to 0.10, the corresponding emission color of the phosphors shifts from bluish-green to white and eventually to orange-red.In particular, by controlling the overall Eu concentration at x = 0.10, a white light emission is realized (CIE coordinates listed in Table 1), as shown by point 3 in Fig. 6.Even though the white light point in this case deviates slightly from the ideal white point, Ca2NaSiO4F:Eu may still be a potential single-component white-light phosphor for n-UV LEDs, because the CIE coordinates can be further improved by fine adjustment of the overall Eu concentration.In summary, bivalent Eu2+ and trivalent Eu3+ ions were detected together in the novel phosphor Ca2NaSiO4F:Eu by UV−vis luminescence spectroscopy.The bluish-green emission of Eu2+ at around 520 nm and the red emission of Eu3+ appear simultaneously upon excitation at 356 nm due to radiative energy transfer from Eu2+ to Eu3+.The relative intensity of the Eu3+ emission with respect to the Eu2+ emission increases with increasing Eu content.Hence, the emission color of Ca2NaSiO4F:xEu changes continuously from bluish-green to white and eventually to orange-red as the Eu concentration increases.The present results show that the Ca2NaSiO4F:Eu phosphor can act as a single-component white-light phosphor for wLEDs.
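The CIE chromaticity coordinates reported in Table 1 are obtained from the measured emission spectra. A minimal sketch of this standard calculation is given below (not taken from the original paper); it assumes the CIE 1931 colour-matching functions have already been interpolated onto the wavelength grid of the spectrum, and the spectrum loader is a placeholder.

```python
import numpy as np

def _trapz(y, x):
    """Trapezoidal integration (kept explicit to avoid NumPy version differences)."""
    y = np.asarray(y, dtype=float)
    x = np.asarray(x, dtype=float)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def cie_xy(wavelengths_nm, intensities, cmf):
    """CIE 1931 chromaticity coordinates (x, y) from an emission spectrum.

    cmf: (N, 3) array of the colour-matching functions x_bar, y_bar, z_bar sampled at
    wavelengths_nm (e.g. interpolated from the published 2-degree observer tables,
    which are not bundled here).
    """
    S = np.asarray(intensities, dtype=float)
    X = _trapz(S * cmf[:, 0], wavelengths_nm)  # tristimulus values
    Y = _trapz(S * cmf[:, 1], wavelengths_nm)
    Z = _trapz(S * cmf[:, 2], wavelengths_nm)
    total = X + Y + Z
    return X / total, Y / total

# Usage (placeholder data):
# wl = np.arange(380.0, 781.0, 1.0)          # nm
# spectrum = load_emission_spectrum(wl)      # measured PL intensity, placeholder loader
# x, y = cie_xy(wl, spectrum, cmf_1931)      # cmf_1931 interpolated onto wl beforehand
```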
Novel phosphors Ca2NaSiO4F:Eu were synthesized successfully by the conventional solid-state method in CO atmosphere, and their spectroscopic properties in UV-vis region were investigated. The photoluminescence properties show that Eu3+ ions were partially reduced to Eu2+ in Ca2NaSiO4F. As a result of radiation and re-absorption energy transfer from Eu2+ to Eu3+, both Eu2+ bluish-green emission at around 520 nm and Eu3+ red emission are observed in the emission spectra under the n-UV light excitation. Furthermore, the ratio between Eu2+ and Eu3+ emissions varies with increasing content of overall Eu. Because relative intensity of the red component from Eu3+ became systematically stronger, white light emission can be realized by combining the emission of Eu2+ and Eu3+ in a single host lattice under n-UV light excitation. These results indicate that the Ca2NaSiO4F:Eu phosphors have potential applications as a n-UV convertible phosphor for light-emitting diodes.
285
Sustainable Application of a Novel Water Cycle Using Seawater for Toilet Flushing
Freshwater supports life and is the most essential natural resource.However, its quantity and quality are currently threatened by the anthropogenic activities of a fast-growing global population .Approximately 80% of the global human population is affected by either water scarcity or water insecurity .Even when the estimation of freshwater scarcity is made using a blue water footprint instead of blue water availability, the worldwide water shortage remains a critical issue .In view of this issue, wastewater reuse/recycling, rain water harvesting, and seawater use have been extensively researched as viable solutions .Considering that, on the one hand, over half of the world’s population lives in coastal areas that cover only 10% of the earth’s land surface and, on the other hand, that seawater accounts for 97.5% of all water resources on the planet , the use of seawater appears to be the best solution.Seawater desalination—using reverse osmosis—for a potable water supply is a mature technology.Technologies for the optimization of seawater desalination have been the focus of much development in water research, but the wide application of this process is still hindered by its high costs and high energy consumption .Meanwhile, seawater for toilet flushing has been developed as a unique approach to alleviate water shortage in places such as Hong Kong .The SWTF system has been applied in Hong Kong since 1958, and today serves up to 80% of the city’s inhabitants, enabling the city to cut its annual freshwater consumption by at least 22% .The successful implementation—for more than 50 years—of the SWTF system to supply water for non-potable uses demonstrates that SWTF is an excellent way to increase water efficiency at a city level .In addition, the presence of sulfate in seawater has been exploited in the sulfate reduction, autotrophic denitrification, and nitrification integrated process that was invented for the treatment of saline wastewater .By integrating the microbial sulfur cycle into conventional biological wastewater treatment, which is based on the carbon and nitrogen cycles, the SANI process applies low sludge-yielding microbes such as sulfate-reducing bacteria and autotrophic denitrifiers to remove carbon and nitrates.Use of the SANI process reduces the space required for the process of wastewater treatment and sludge handling by 30%–40%, slashes biological sludge production by 60%–70%, lowers energy consumption by 10%, and reduces greenhouse gas emissions by 10%, compared with conventional activated sludge processes that are coupled with anaerobic sludge digestion and biogas energy recovery .The SANI process has been thoroughly investigated—and its advantages and suitability solidly demonstrated—in a 500-day lab-scale trial , a 225-day 10 m3·d−1 onsite pilot-scale trial , and a 1000 m3·d−1 demonstration-scale trial at the Sha Tin Sewage Treatment Works in Hong Kong .The results indicate that SWTF, coupled with the SANI process for wastewater treatment, may afford enhanced economic and environmental benefits over other urban water cycle systems.To this day, the applicability of the SWTF-SANI coupled approach for coastal water-scarce urban areas has not been evaluated by life-cycle assessment in the context of a full urban water system.Hence, an extensive environmental sustainability analysis of this novel hybrid water resource management system with respect to its impacts on climate change, energy consumption, and land occupation is deemed necessary.This study is therefore aimed at 
assessing the environmental performance of a city-scale water system incorporating SWTF and SANI in comparison with a conventional system that applies seawater desalination for partial potable water supply and/or reclaimed water for toilet flushing.To achieve this assessment, an LCA of these different systems is conducted to evaluate their respective environmental sustainability .A whole-system perspective is taken in this study in evaluating the environmental impacts of an urban water system over its entire life-cycle, covering aspects such as water eutrophication, energy consumption, climate change, ozone depletion, and land occupation.Since water security is a serious issue in some of the eastern coastal cities of the Chinese mainland , four representative cities—Hong Kong and Shenzhen, which purchase water originating from Dongjiang River, and Beijing and Qingdao, which depend on the diversion of water through the South-to-North Water Diversion Project—are selected for our case study.Six categories of environmental impacts are first evaluated in these four cities under five urban water scenarios.Next, a sensitivity analysis is carried out to identify the most important impact factors for a city, such as seaside distance, effective population density, and water scarcity condition, under the different water system scenarios.Finally, suitable conditions for applying the SWTF system are suggested.This study provides valuable information for the choice of water resources management systems to mitigate water scarcity in a sustainable manner.According to the ISO standard, an LCA consists of four phases: ① goal and scope definition, ② inventory analysis, ③ impact assessment, and ④ results interpretation .When seawater is used for toilet flushing, the discharge of the resulting saline wastewater to the sewer affects the subsequent wastewater treatment process.In addition, the treated saline wastewater should be discharged back to the sea instead of to a freshwater ecosystem.Based on this concept, the evaluation and comparison of water resources management approaches should comprehensively consider the water pumped from a water resource all the way to the final discharge of the effluent from a wastewater treatment plant to the ecosystem.The particular aspects that should be considered include the water catchment, water treatment, water supply system, wastewater collection system, wastewater treatment processes, and discharge of the treated wastewater , as shown in Fig. 1.The goal of this study is to assess the environmental impacts of alternative water resources and wastewater treatment methods for water-scarce cities.The functional unit is set as 1 m3 of supplied water.Fig. 
1 illustrates the chosen system boundary, which encompasses ① water abstraction from sources such as local freshwater, imported freshwater, or seawater; ② potable water treatment processes such as conventional freshwater treatment, RO desalination, or wastewater reclamation; ③ pipelines for freshwater distribution, seawater distribution or reclaimed wastewater distribution, and sewage collection systems; ④ wastewater treatment processes, namely conventional activated sludge processes or the SANI process; and ⑤ effluent discharge back to the sea.Both the construction and operation phases are considered to be within the scope of this study, while the material transportation and demolition phases are excluded, given that their impacts are generally considered insignificant.Five typical or potentially applicable scenarios are compared in this study, as shown in Fig. 1.The conventional freshwater system scenario (FWA) refers to a conventional freshwater supply, with a single pipe system coupled with the conventional activated sludge process for wastewater treatment.This scenario is set as the control for comparison.In the seawater desalination scenario (FRA), seawater desalination with RO is applied to replace the importation of freshwater from other regions in the FWA scenario.In the SWTF scenario (DSA), seawater is simply treated by screening and grit removal for large particle removal, followed by disinfection to produce a water supply for toilet flushing in a separate pipe system, in which used freshwater and seawater are collected together and treated using the conventional activated sludge process.In the SWTF-SANI scenario (DSS), the wastewater treatment process in the DSA scenario is replaced with the SANI process for comparison.Finally, the freshwater and grey water system scenario (DNA) is an example of applying RWTF, in which centralized nanofiltration is used for treating grey water and the treated water is supplied to the user in an individual pipe system.The life-cycle inventory consists of inputs of materials, chemicals, and energy, primarily based on data from the water utilities in Hong Kong or, if this is unavailable, on the most accurate data taken from the literature.All the inputs are determined based on the functional unit.For simplicity, the inventories of similar facilities in different scenarios are considered to be the same.Detailed input inventories are provided in Table S1 to Table S5 in the Supplementary Information.SimaPro 8.1 software is used to organize the inventory data according to the ISO 14044 standard procedure.The impacts are calculated using the ReCiPe Midpoint method provided by SimaPro 8.1 for the proposed harmonized impacts in the cause-effect chain; the midpoint indicators quantify the relative impacts that occur during the life-cycle of systems in terms of climate change, human toxicity, freshwater eutrophication, land occupation, and ozone depletion.The climate change impact is calculated using the Intergovernmental Panel on Climate Change equivalent factors for direct effect.Energy consumption is determined by the Ecoinvent 2.0 method because it is not defined in the ReCiPe Midpoint method.Hong Kong, Shenzhen, Beijing, and Qingdao are chosen as the specific study areas, given that they all suffer from water shortage under different geographical and water conditions.For example, Hong Kong and Shenzhen import water from Dongjiang River, while Beijing and Qingdao heavily rely on the SNWDP.Table 1 provides a preliminary summary of the water and geographical conditions for these four
cities.Detailed information and their sources are provided in Table S1 in the SI.The major variations considered for these different cities are the importing distance of freshwater, distance from the coast, availability of freshwater, ratio of water used for toilet flushing to total water consumption, and effective population density.For the effective population density, only the density in the city core is considered.The city core contains more than 75% of the city population; that is, the low population densities of the surrounding residential suburbs are excluded.In the sensitivity analysis of four indicators, the critical conditions were initially considered.These conditions represent the worst-case scenario for the application of an urban water system with SWTF.However, these initial values were found to be too specific, considering that cities vary in their endowments, such as in the availability of groundwater resources.Hence, the parameters were subsequently varied in relatively large but reasonable ranges, as listed in Table 2.Detailed computation methods and relevant equations are described in the SI.In addition to the measures described above, 5% and 10% uncertainties are adopted from a real system and from the recent literature, respectively, and an extra uncertainty of 15% is further considered for the estimated lengths of domestic pipe networks and the importing distance of freshwater .Uniform random distributions are assumed for the 10 000 iterations of Monte Carlo simulation in order to achieve high precision of the simulation results.Hong Kong, Shenzhen, Beijing, and Qingdao are four typical cities facing serious water scarcity because of quantitative deficiency in their water resources.Hong Kong and Shenzhen purchase non-local freshwater to meet their demands, while Beijing and Qingdao rely on water imported from southern China in 1432 km and 1467 km long canals, respectively .These two solutions are neither environmentally friendly nor economically ideal.Therefore, seawater desalination, SWTF, and reclaimed water are evaluated as alternatives for the water supply in the four cities in comparison with the conventional freshwater supply.The use of seawater or reclaimed water is intended to reduce reliance on freshwater.The freshwater withdrawals in the different scenarios in the four cities are summarized in Fig. 
2.Theoretically, seawater desalination in the FRA scenario can replace all non-local uses of freshwater, but it is unsustainable.Hence, the bulk of the freshwater demand in the FRA scenario for each city represents the amount of freshwater withdrawal locally, and the balance of the freshwater demand is considered to come from seawater.Given the differences in the amounts of freshwater used in the FWA and FRA scenarios for all four cities, Hong Kong and Shenzhen are found to be in a more precarious situation than Beijing and Qingdao.In the case of Hong Kong and Shenzhen, seawater and reclaimed water for toilet flushing in the DSA, DSS, and DNA scenarios can only replace the freshwater used for toilet flushing, which constitutes approximately 20%–30% of the total freshwater demand of a city.Therefore, neither SWTF nor RWTF can meet the water demand in Hong Kong or Shenzhen.This finding suggests that an additional amount of water must be imported from a distant water source, that is, Dongjiang River, to meet the current demands.However, in the case of Beijing and Qingdao, the amount of seawater or reclaimed water used for toilet flushing is sufficient to alleviate the freshwater shortages, indicating that the application of SWTF or RWTF in these cities can totally eliminate their reliance on the SNWDP.The environmental impacts of the five urban water systems studied in this paper—that is, freshwater only, seawater desalination, SWTF coupled with the conventional activated sludge process, SWTF coupled with the SANI process, and reclaimed water—were analyzed for each of Hong Kong, Shenzhen, Beijing, and Qingdao.Fig. 3 shows the results.Overall, the FRA scenario with seawater desalination using RO yielded the worst environmental impacts in terms of all six aforementioned indicators, especially in Hong Kong and Shenzhen.Replacing 80% of the freshwater demand in the FRA scenario with seawater desalination triples the resulting negative environmental impacts, because of the significant share of freshwater in comparison with other scenarios.Apart from the FRA scenario, the environmental impacts of the other four scenarios are relatively similar for all four cities considered, although the impacts are different for freshwater eutrophication in Beijing.However, the scenario of SWTF coupled with the SANI process for wastewater treatment yielded lesser environmental impacts in Qingdao, Hong Kong, and Shenzhen, mainly because of the combined effects of a simple treatment of seawater for toilet flushing and the environmental friendliness of the SANI process.The comparisons of FWA, DSA, DSS, and DNA for Beijing show trends that are dissimilar to those for the other three coastal cities.The application of reclaimed water in Beijing is more environmentally friendly than the use of either the SNWDP for the total freshwater supply or the SWTF systems.The relatively poor environmental impacts of the DSA and DSS options in Beijing are attributed to the long-distance pipes for seawater supply and the discharge of treated saline water into the sea, although these options result in a significant reduction of freshwater eutrophication compared with other options, as shown in Fig. 
3.Therefore, the distance from the coast is an essential parameter in the evaluation of SWTF-associated scenarios.The negative environmental impacts in Beijing and Qingdao are substantially greater than those in Hong Kong and Shenzhen, owing to the former two cities’ more extensive domestic pipeline networks.Most previous studies have also indicated that water transportation is responsible for more than 30% and up to 70% of the contribution of urban water systems to climate change and electricity consumption.In particular, in cities with a population density below 4000 persons·km−2, relatively longer domestic pipelines for each unit volume of water transported are needed.In addition, the lower water consumption per capita in northern China means that the pipeline systems in Beijing and Qingdao contribute substantially to the negative environmental impacts.In summary, the significant environmental impacts of domestic pipeline networks are caused by a low effective population density and a low water transportation efficiency per unit length of pipeline.With rapid urban development, it is expected that per capita water consumption will surge, thereby leading to a decrease in the environmental impacts of the pipe networks per cubic meter of water transported.Clearly, effective population density—rather than per capita water consumption—is another important factor in the evaluation of the environmental impacts.Hence, distance from coast and effective population density are two of the most important parameters associated with the impact of an urban water system on the environment.The development of a simple model to predict environmental impacts as these two parameters are varied could aid decision makers in selecting an optimum water supply alternative, based on the specific conditions in a given city.A sensitivity analysis is, therefore, conducted to assess the effect of the aforementioned parameters for each city.In the next section, the parameters with the most significant effects are presented and the different scenarios are compared.The results of the LCA for the four scenarios of the urban water system showing the largest variations in energy consumption, climate change, land occupation, and human toxicity in the four cities are shown in the corresponding panels of Fig. 3.The results suggest that these indicators are likely to be strongly dependent on the geographical conditions and state of urban development of a city, such as the distance from the coast for seawater abstraction, distance from the importing source of freshwater, and effective population density.The sensitivity analysis was conducted with these city-specific parameters as the input variables for the different scenarios for all four cities.Fig.
4 shows the variations in energy consumption, climate change, land occupation, and human toxicity for the four scenarios as functions of the variations in the importing distance of freshwater, availability of freshwater, effective population density, distance from coast, and ratio of water used for toilet flushing to the total water consumption of a city.In general, the environmental impacts of the seawater-based scenarios are more adverse than those of the freshwater-based scenarios.The worst environmental impact of the two scenarios involving SWTF arises from the choice of parameters for the sensitivity analysis, which represented the critical cases for the application of SWTF.These parameters include a short freshwater importing distance of 70 km, a low effective population density of 3000 persons·km−2, and a long distance from the coast of 300 km.With these assumptions, the negative environmental impacts shown in this study are higher than those found in similar studies.Taking the impact category of most concern, that is, climate change, the values are higher than those in all recent relevant studies .Therefore, the results presented in Fig. 4 should only be used for analyzing the most important parameters.It appears that in the SWTF scenario, the environmental benefits increase as the distance from the coast decreases, which is not the case in the other scenarios.Effective population density is the parameter with the highest impacts on the results of the LCA.There is a power law relationship between effective population density and environmental impacts for all scenarios.However, the impacts of the other parameters tested are much lower.For all scenarios, the environmental impacts increase sharply as the effective population density decreases.This variation is less remarkable at densities above 12 000 persons·km−2—equivalent to just 40% of the maximum environmental impacts—which indicates that there is potential for large cities with population densities above 12 000 persons·km−2 to reduce the environmental impacts per cubic meter of water used in the closed loop of a water cycle system.Therefore, the development of water supply and wastewater treatment in megacities such as New York, Shanghai, Guangzhou, Tokyo, Seoul, and Singapore, all of which have high population densities, is more environmentally sustainable than in cities with lower densities.The second most important parameter is distance from the coast, which shows significant effects on the LCA results only for the scenarios involving SWTF.,It is clear that the environmental impacts worsen as the distance from the coast increases because long-distance canals are needed for transporting seawater and discharging saline wastewater.An increase in distance from the coast from 0 to 300 km led to approximately 20% more energy consumed on average, resulting in 24% and 10% increases in climate change and human toxicity, respectively.Land occupation was the most affected indicator, increasing by approximately 45% as distance from coast rose from 0 to 300 km.Therefore, the potential for applying SWTF in inland cities should be carefully assessed by LCA, particularly for land occupation.For other parameters, namely freshwater transport distance, freshwater availability, and ratio of water used for toilet flushing to total water consumption, the variations used in the sensitivity analysis were 30–300 km, 0–100%, and 20%–40%, respectively.Despite the large variations, these parameters had minimal effects on the environmental impacts.In 
addition, the results indicate that these three parameters with negligible effects are related to the water supply system, thus leading to the conclusion that the water supply system causes low environmental impacts compared with those of domestic pipelines.This finding is consistent with the results of the case study shown in Fig. 3, where domestic pipelines contributed 30%–80% of the total environmental impacts.Hence, reducing the environmental impacts of water and wastewater treatment systems largely depends on optimizing the domestic pipe network.The environmental impacts of the domestic pipe network mainly come from the construction of the pipe network itself and from the energy used for water lifting, which are closely related to the length of pipe and amount of water transported.A higher population density can reduce the total per capita pipe length and improve the energy efficiency of water transportation.This result explains the significant effects of population density on the environment.Based on the results of the sensitivity analysis, effective population density and distance from the coast are confirmed to be the two most important parameters.Hence, they are selected as the major impact factors to evaluate the potential application of the SWTF scenarios under different conditions.In this evaluation, the environmental impact indicators selected are climate change and land occupation, which yielded the highest sensitivities to variations in the above two parameters.Fig. 5 compares the different scenarios on the basis of land occupation.The curved surfaces are plotted as functions of three varying parameters, namely effective population density, distance from the coast, and land occupation.Other parameters were kept constant at their mean values, although this does not favor the application of SWTF.The projected shadow areas in the x-y plane reflect the conditions that would favor the application of SWTF as an alternative for water supply, and the corresponding equation gives the intersection of the two curved surfaces.In general, the shadow areas for DSS versus FWA and DSS versus DNA, shown in Fig. 5, are larger than those for DSA versus FWA and DSA versus DNA, also shown in Fig. 5, because of the additional environmental benefits from the SANI process, which are mainly related to the lower land occupation.If the conventional activated sludge process is used for saline wastewater treatment, the city should be located within 30 km of the seashore, with limited impact from population density.The application of SWTF still benefits the environment up to a 60 km distance from the seashore if the SANI process is applied for wastewater treatment.For cities with an effective population density less than 1100 persons·km−2, the application of SWTF is, in general, not environmentally beneficial even if the city is located along the coast.However, as mentioned before, none of the water management approaches considered in this study are environmentally sustainable if the effective population density is at or below 1100 persons·km−2.The land occupation increases 3.5- to 7-fold, regardless of the scenario investigated (Fig. 5), at population densities below 3000 persons·km−2.The application of SWTF in large coastal cities with high population densities can further reduce humanity’s footprint to a more sustainable level.The application of SWTF is much more environmentally beneficial than the use of a reclaimed water system, as shown in Fig.
5, which can be explained by the fact that the DNA scenario does not show any advantage over the FWA scenario.This is true for all the environmental impact indicators except freshwater saving.Moreover, the potential for cross-contamination between the reclaimed water and freshwater systems represents a health risk.It can be avoided by implementing SWTF, because residents can easily detect misconnections from the salty taste of the seawater.Fig. 6 illustrates the effects on climate change of the different scenarios.SWTF yields lower effects on climate change than on land occupation.This finding is consistent with the results of the sensitivity analyses.In other words, when the application range of the SWTF scenario is determined based on land occupation, it also remains environmentally sustainable in terms of the indicators other than land occupation; in this respect, the SWTF scenario is unlike the other scenarios.Considering the significant reduction in freshwater eutrophication for the SWTF scenario when coupled with marine wastewater discharge, the application of SWTF in inland cities can be a solution to the water scarcity issue.Therefore, the SWTF system and SANI process should be promoted for the development of densely populated modern cities that are within 60 km of the seashore.Examples of such cities include Macau, Tokyo, Singapore, New York, Ningbo, Mumbai, and L’Hospitalet de Llobregat.In this study, different scenarios of water-wastewater closed loops are compared with each other and with a conventional system in order to assess their potential applications in densely populated and water-stressed cities.The scenarios are seawater desalination, SWTF, RWTF, and the conventional freshwater system.By studying the cases of four representative cities, this study implies that the urban water scenarios with SWTF are more environmentally friendly than other options in Hong Kong, Shenzhen, and Qingdao.In addition, the application of SANI technology for wastewater treatment in scenarios with SWTF is more beneficial to the environment.However, SWTF does not perform better than RWTF in Beijing, due to the long seaside distance.Sensitivity analyses of the environmental impacts—derived from the LCA—are also conducted.The results imply that effective population density and seaside distance are the most responsive impact factors for the application of SWTF.The environmental impacts caused by effective population density show similar effect potentials and trends in all the indicators, and these effects become insignificant when the effective population density is above 12 000 persons·km−2.Seaside distance affects land occupation more than it affects other indicators.SWTF, as exemplified by the system that has been practiced in Hong Kong for over 60 years, is a promising water supply alternative for modern cities, and particularly for those that are located within 30 km of the seashore and that have an effective population density higher than 3000 persons·km−2.In addition to its environmentally friendly performance, this approach can help slash freshwater consumption by 20%–30%, which will significantly alleviate water stress problems in most cities.When the SANI process for wastewater treatment is coupled with the SWTF system, the effective population density can be reduced to 1100 persons·km−2 and the distance from the seashore can be doubled to 60 km without negatively impacting the environment; this result is unlike the results obtained for the scenarios using freshwater.Most fast-growing
metropolitan areas around the world are located on a coast.Hence, a freshwater supply coupled with SWTF, and subsequent use of the SANI process for wastewater treatment, could be the next generation of water-wastewater closed loop systems for more sustainable development.Xiaoming Liu, Ji Dai, Di Wu, Feng Jiang, Guanghao Chen, Ho-Kwong Chui, and Mark C. M. van Loosdrecht declare that they have no conflict of interest or financial conflicts to disclose.
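As a compact recap of the uncertainty analysis described earlier in this section (uniform sampling of the inventory parameters within their 5%, 10%, or 15% uncertainty ranges and 10 000 Monte Carlo iterations), the sketch below illustrates the sampling scheme. The nominal values, the uncertainty assignments, and the toy impact function are illustrative placeholders, not the study's actual inventory data or impact model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder nominal values with the relative uncertainty classes used in the study
# (5% for data from a real system, 10% for literature data, 15% for estimated pipe
# lengths and the freshwater importing distance); the numbers are illustrative only.
params = {
    "pipe_length_km":         (50.0, 0.15),
    "import_distance_km":     (70.0, 0.15),
    "electricity_kWh_per_m3": (0.6, 0.10),
    "chemicals_g_per_m3":     (30.0, 0.05),
}

def impact_per_m3(s):
    """Toy stand-in for the life-cycle impact per functional unit (1 m3 of supplied
    water); the coefficients are arbitrary and only keep the example runnable."""
    return (0.01 * s["pipe_length_km"] + 0.002 * s["import_distance_km"]
            + 0.5 * s["electricity_kWh_per_m3"] + 0.001 * s["chemicals_g_per_m3"])

def monte_carlo(n_iter=10_000):
    """Uniform random sampling of each parameter within its +/- relative uncertainty."""
    out = np.empty(n_iter)
    for i in range(n_iter):
        sample = {k: rng.uniform(v * (1 - u), v * (1 + u)) for k, (v, u) in params.items()}
        out[i] = impact_per_m3(sample)
    return out

results = monte_carlo()
print(np.percentile(results, [5, 50, 95]))  # spread of the impact score per m3
```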
Global water security is a severe issue that threatens human health and well-being. Finding sustainable alternative water resources has become a matter of great urgency. For coastal urban areas, desalinated seawater could serve as a freshwater supply. However, since 20%–30% of the water supply is used for flushing waste from the city, seawater with simple treatment could also partly replace the use of freshwater. In this work, the freshwater saving potential and environmental impacts of the urban water system (water-wastewater closed loop) adopting seawater desalination, seawater for toilet flushing (SWTF), or reclaimed water for toilet flushing (RWTF) are compared with those of a conventional freshwater system, through a life-cycle assessment and sensitivity analysis. The potential applications of these processes are also assessed. The results support the environmental sustainability of the SWTF approach, but its potential application depends on the coastal distance and effective population density of a city. Developed coastal cities with an effective population density exceeding 3000 persons.km−2 and located less than 30 km from the seashore (for the main pipe supplying seawater to the city) would benefit from applying SWTF, regardless of other impact parameters. By further applying the sulfate reduction, autotrophic denitrification, and nitrification integrated (SANI) process for wastewater treatment, the maximum distance from the seashore can be extended to 60 km. Considering that most modern urbanized cities fulfill these criteria, the next generation of water supply systems could consist of a freshwater supply coupled with a seawater supply for sustainable urban development.
286
Basophil activation test discriminates between allergy and tolerance in peanut-sensitized children
Peanut-allergic, peanut-sensitized but tolerant, and non–peanut-sensitized nonallergic children were prospectively and consecutively enrolled from our Pediatric Allergy service on the days when the investigator was available to perform BAT.The allergic status to peanut was determined by using OFCs, except for children with a convincing history of systemic reaction to peanut within 1 year of their visit and wheal size of SPT of 8 mm or more8 and/or P-sIgE level of 15 KUA/L or more,8 who were considered peanut allergic; and children who were able to eat 4 g or more of peanut protein twice a week without developing allergic symptoms, who were considered peanut tolerant.Peanut sensitization was defined by a wheal size of SPT of 1 mm or more and/or P-sIgE level of 0.10 KUA/L or more.All children underwent clinical evaluation, SPT, P-sIgE determination, component-resolved diagnosis, and OFC, as appropriate.An additional sample of blood was drawn in lithium heparin for BAT, which was performed within 4 hours of blood collection.The study was approved by the South East London Research Ethics Committee 2, and written informed consent was obtained from parents of all children.SPT was performed using peanut extract, as previously described.18,The level of sIgE was measured using an immunoenzymatic assay."DBPCFC consisted of 6 verum doses and 3 placebo doses randomly interspersed with verum doses up to a cumulative dose of 9.35 g of peanut protein.Children of 1 to 3 years were given 1 placebo and 5 verum doses up to a cumulative dose of 4.35 g of peanut protein.In infants, the OFCs were open up to a cumulative dose of 4.35 g of peanut protein.Nine older children also received an open OFC for logistical reasons.OFCs were considered negative when all doses were tolerated."If an allergic reaction developed at any stage after a verum dose, the OFC was considered positive and the symptoms treated.If a reaction followed a placebo dose, the patient was brought in for 2-day challenge.19,Heparinized whole blood was stimulated for 30 minutes at 37°C with peanut extract diluted in RPMI medium at serial 10-fold dilutions from 10 μg/mL to 0.1 ng/mL."For details about the extract and allergen concentrations, see this article's Online Repository at www.jacionline.org.20",Polyclonal goat antihuman IgE, monoclonal mouse antihuman FcɛRI, formyl-methionyl-leucyl-phenylalanine, or RPMI medium alone were used as controls.Before erythrocyte lysis, cells were stained with CD123-FITC, CD203c-PE, HLA-DR-PerCP, and CD63-APC."Basophils were gated as SSClow/CD203c+/CD123+/HLA-DR−.Basophil expression of CD63 and CD203c was evaluated using FACS CantoII with FACSDiva software.The flow cytometry data were analyzed using FlowJo software by an investigator who was blinded to the clinical features of the participants.Basophil activation was expressed as %CD63+ basophils and as the stimulation index of the mean fluorescence intensity of CD203c.We estimated that a sample of 32 PA and 32 PS children would give us 99% power, at a 2-sided type I error probability of 0.05, to detect a significant difference in the %CD63+ basophils after peanut stimulation between PA and PS on the basis of data from a previous study.21,Qualitative variables were compared between PA and PS children using the Fisher exact test or χ2 tests, and continuous variables were compared using the Mann-Whitney U test or the Kruskal-Wallis test.The performance of allergy tests was examined against the allergic status to peanut using receiver-operating characteristic-curve 
analyses.The cutoffs to predict peanut allergy and peanut tolerance for BAT and the various allergy tests with optimal accuracy were determined and validated.We performed internal validation using repeated random subsampling validation and “leave-one-out” methodologies.22,Both methodologies produced similar results in estimating the optimal cutoff points, and the former methodology is reported.The 95% CI was constructed using bootstrapping methodology with 1000 replications to reflect on the reproducibility.23,An external validation study was also conducted using a new cohort of 65 subjects mainly recruited from the Peanut Allergy Sensitization study, a group of patients from all over the country who were excluded from the Learning Early About Peanut Allergy study,18 and from a private Pediatric Allergy clinic in London.The cutoffs previously determined in the primary study population were applied to this validation study population and sensitivity, specificity, predictive values, likelihood ratios, and accuracy were calculated.Three Pediatric Allergy specialist attending physicians were asked to classify 44 equivocal cases from the primary study population as peanut allergic or tolerant on the basis of history and results of SPT, P-sIgE, and CRD.The agreement between physicians was calculated as percentages and assessed with κ statistics.24,Statistical analyses were performed with SPSS 20.0 and STATA 12.1 for Windows.Significance was determined using a 2-sided α level of 0.05.In the primary study population, after ROC-curve analyses, we compared the performance of BAT with SPT, P-sIgE, and Arah2-sIgE using conventional cutoffs.6,8,12,We further assessed the diagnostic utility of BAT when considered in combination with other allergy tests, that is, considering the results of different tests simultaneously, and when considered as a second or third step in the diagnostic process, that is, performed in selected patients in whom the results of single or of combinations of tests were equivocal.When interpreted individually, the results of standard allergy tests were considered diagnostic of allergy when the positive predictive value cutoff was 95% or more, diagnostic of tolerance when the negative predictive value cutoff was less than 95%, and equivocal when between the positive and the negative cutoffs.For BAT, we used the cutoff for the mean of %CD63+ basophils at 10 and 100 ng/mL of peanut extract and considered BAT equivocal in the case of “nonresponders”.The combination of allergy tests was interpreted as equivocal if one test result was 95% or more PPV cutoff and another test result was less than 95% NPV cutoff or when all tests gave equivocal results or a combination of equivocal results and results less than 95% NPV.In these simulations, OFCs were deemed required when the interpretation of tests was equivocal.The combination of SPT and P-sIgE was the clinical reference point against which the change in the number of OFCs required was determined."One hundred nine children, 76% boys, aged from 5 months to 17 years, participated in the study.Sixty-six OFCs to peanut were performed: 20 positive, 41 negative, and 5 indeterminate.These 5 patients were excluded.Among the study participants, 48 patients underwent DBPCFC and 13 open OFCs."Demographic and clinical features of the study population are represented in Table I and in Figs E4 and E5 in this article's Online Repository at www.jacionline.org.The basophils of 12 children were “nonresponders” and were necessarily excluded from the 
comparison of BAT results between groups and from the ROC-curve analysis; however, they were taken into account when assessing the clinical application of BAT and its effect in the reduction of OFCs.In PA children, basophils showed increased expression of CD63 and CD203c, with increasing concentrations of peanut extract up to 100 ng/mL followed by a plateau."The basophils from PS children did not significantly respond to peanut neither did basophils from NA children. "This difference in basophil response between groups was reflected in other parameters of BAT.The %CD63+ basophils in response to the negative control and non–IgE-mediated positive control was similar across groups.The proportion of nonresponders was higher in peanut-tolerant than in peanut-allergic children.Similar findings were observed for the stimulation index of CD203c.Peanut allergy and tolerance status was the reference point to evaluate the diagnostic performance of BAT on ROC-curve analysis.The best diagnostic cutoff values were obtained for %CD63+ basophils at 100 ng/mL and mean %CD63+ basophils at 10 and 100 ng/mL of peanut extract.These were simultaneously optimal, negative, and positive decision levels, with 98% sensitivity, 96% specificity, 95% PPV, 98% NPV, and 97% accuracy."See Table E4 in this article's Online Repository at www.jacionline.org for optimal cutoffs for other BAT parameters. "The area under the ROC curve for BAT was superior to that for other allergy tests. "Arah2-sIgE performed better than did sIgE to other peanut components. "To externally validate our findings, we prospectively recruited 65 children who underwent the same study procedures as the primary study population.The vast majority underwent OFCs, and all positive OFCs were DBPCFC."Applying the optimal cutoff previously determined for the mean of %CD63+ basophils at 10 and 100 ng/mL of peanut extract, BAT showed 100% specificity, 83.3% sensitivity, 100% PPV, 90.2% NPV, and 93.4% accuracy and was superior to SPT, sIgE, and Arah2-sIgE.The utility of BAT was further assessed in the subgroup of the primary study population with equivocal history and inconclusive results of SPT, P-sIgE, and CRD.Three Pediatric Allergy specialist attending physicians were asked to classify them as peanut allergic or tolerant on the basis of available information.In most of the cases, the physicians could not decide without doing an OFC.They correctly diagnosed 26% to 36% and misclassified 9% to 16% of the cases.Agreement between the 3 pairs of physicians was poor to fair, with κ values of 0.16, 0.29, and 0.36.The 3 specialists agreed in 16 cases: 4 correctly diagnosed, 1 misclassified, and in 11 cases they were unable to decide.In contrast, BAT provided 36 correct diagnoses, 2 false positives, and 1 false negative and required 5 OFCs.Excluding nonresponders, BAT had a diagnostic accuracy of 95%.We evaluated the diagnostic performance of different tests in the primary study population, including BAT nonresponders, in 3 ways: considering each test on its own; considering the results of different diagnostic tests simultaneously; and considering BAT as a second or third sequential step in the diagnostic process, performed in patients in whom the results of single or combinations of standard allergy tests were equivocal.Considering single tests, BAT performed best and allowed a reduction in the number of OFCs by two-thirds, followed by Arah2-sIgE and SPT.P-sIgE on its own performed the poorest, conferring the highest number of OFCs and correctly diagnosing only 55% 
of the patients.Considering combinations of allergy tests, it was best to combine 2 different tests as opposed to 3 or 4 tests.All combinations of tests required an increase between 2- and 3.5-fold in the number of OFCs compared with BAT alone."With a view to apply BAT in clinical practice, we assessed the role of BAT as a second or third step in the diagnostic workup, which would require a smaller number of BATs. "The 2-step strategy significantly reduced the number of OFCs, more than using Arah2-sIgE as a second step to SPT or to P-sIgE, as proposed by Dang et al.12 The 3-step sequential strategy of SPT→Arah2-sIgE→BAT further reduced the number of OFCs to zero at the expense of a slightly higher number of false-negative results.To arrive at a correct diagnosis of peanut allergy or tolerance, a considerable proportion of peanut-sensitized patients seen in allergy clinics need to undergo an OFC.Specialized centers have become overwhelmed with the increasing number of OFC requests, and overdiagnosis of peanut allergy due to overreliance on allergy tests alone is common.There is a large immunological gray area between 95% PPV and 95% NPV cutoffs for SPT, P-sIgE, and Arah2-sIgE.If we apply a single cutoff value based on the ROC-curve point-of-inflexion, the diagnostic accuracy of these tests suffers.In BAT, the ROC-curve optimal cutoff acted simultaneously as positive and negative cutoff with no immunologic gray area, allowing for a significant reduction in the number of OFCs, even among difficult patients with conflicting history and results of SPT, P-sIgE, and CRD.Unlike for these tests, for BAT, we were able to use the ROC-curve point-of-inflexion as a single cutoff value while maintaining a 97% diagnostic accuracy.Our study is the largest study assessing the role of BAT in the diagnosis of peanut allergy.21,25,26,It is the first study to prospectively validate BAT in an independent population and to evaluate its diagnostic performance on its own, in combination and sequentially with other allergy tests, as well as its effect on the number of OFCs.We studied a large population, including not only sensitized but also nonsensitized nonallergic patients.Although peanut-induced basophil activation would not be expected in the absence of P-sIgE, it was important to demonstrate the specificity of BAT in NA patients.BAT maintained its good performance in an independent population prospectively recruited to validate the diagnostic cutoffs.In 44 children with evidence of sensitization and conflicting allergy test results, 3 specialist doctors showed poor agreement and were unable to decide in most of the cases whether they were peanut allergic without doing an OFC, while BAT still performed very well in this subgroup.One of the strengths of our study is that participants were carefully clinically phenotyped, the vast majority by OFCs.In the primary study population, 23 patients were assumed to have peanut allergy on the basis of SPT and/or P-sIgE of 95% or more PPV cutoffs and positive history.This is a potential weakness of the study; however, given the extremely high probability that such patients would react clinically, we decided on clinical and ethical grounds not to challenge them.Most of the patients who were challenged underwent DBPCFC, but 4 children 1 year or younger and another 9 older children underwent open OFCs.This is a limitation of our study."However, most of the older children undergoing open OFCs had negative challenges and the 2 who had a positive OFC had objective unequivocal 
signs of an allergic reaction immediately after peanut ingestion, consistent with the new Practall guidelines' criteria for a positive OFC.27",In 5 patients, the OFCs were inconclusive, which highlights the fact that although DBPCFC is the gold standard, it is not foolproof in the diagnosis of peanut allergy.28,BAT may prove particularly useful in cases in which OFC cannot be performed or is indeterminate.In the external validation population, 94% of the patients were challenged and all positive OFCs were DBPCFC.The main limitation of BAT was the patients with nonresponder basophils, rendering BAT uninterpretable.The proportion of nonresponders we found was similar to that previously described.21,29-31,This is analogous, for example, to situations in which SPT cannot be interpreted because of a negative histamine control or in which P-sIgE cannot be interpreted in the light of a high polyclonal IgE production or indeed when an OFC is inconclusive.Importantly, these are not misdiagnosed patients but cases in which BAT is uninterpretable and the diagnostic workup needs to be taken further, namely, by doing an OFC.The fact that nonresponders were almost exclusively peanut-tolerant patients raises the question whether basophil unresponsiveness through the IgE-mediated pathway could be a mechanism underlying peanut tolerance.Another limitation was that different peanut extracts were used for different tests; however, all extracts contained the major peanut allergens.Furthermore, our study was performed in children recruited in a specialized clinical setting and thus may not reflect the results of BAT to peanut in adults or the general population.Further limitations to consider when applying BAT in clinical practice are the fact that BAT needs to be performed on live cells, soon after blood collection, and requires flow cytometry equipment and appropriately trained staff.Following the evaluation of the diagnostic performance of each test by ROC-curve analysis, we wanted to assess their effect on the reduction of OFCs.The effect of BAT was different in the 3 scenarios considered: single tests, combination of tests, and BAT as a sequential step in the diagnostic process.Very few studies have addressed the utility of combinations of allergy tests, and this deficiency has been highlighted as an unmet clinical need in the National Institute of Allergy and Infectious Diseases–sponsored food allergy guidelines.32,Considering single tests, BAT performed best, followed closely by Ara h2-sIgE and SPT, even when patients with nonresponder basophils were taken into account.P-sIgE performed the poorest and conferred the highest number of OFCs.Surprisingly, the different combinations of tests provided little, if any, advantage compared with BAT alone, with a uniform reduction in the percentage of correct diagnoses and a significant increase in the number of OFCs required.Disappointingly, the combination of tests did not result in a consistent decrease in the number of false-negative outcomes.Performing BAT as a sequential step reduced the number of BATs required and had a major effect in reducing the number of OFCs regardless of the test performed as first line.For instance, performing BAT after SPT or after Arah2-sIgE allowed a 97% reduction in the number of OFCs compared with the combination of SPT and P-sIgE and a 92% reduction compared with BAT alone.However, this was at the expense of 2 or 3 false-negative outcomes.To prevent any false-negative cases from occurring using this sequential test approach, we 
would need to challenge all the BAT-negative patients in addition to the patients with equivocal BAT; even in this more conservative scenario, the total number of OFCs was significantly reduced by 64% or 69% compared with combining SPT and P-sIgE. The decision on whether to increase the number of OFCs or of BATs, both reducing the possibility of false-negative tests, would depend on a cost-benefit analysis. We believe that SPT→BAT is better than Arah2-sIgE→BAT for practical reasons and given regional differences in the patterns of sensitization to peanut allergens.13 The 3-step diagnostic strategy further reduced the number of BATs required and eliminated the need for OFCs, but this was at the expense of a higher false-negative rate, not from BATs but from SPT and Arah2-sIgE. For further discussion, see this article's Online Repository at www.jacionline.org. To conclude, considering SPT, P-sIgE, CRD, and BAT, BAT has the best diagnostic profile. Combinations of tests offer no significant advantage over BAT alone and led to an increase in the number of OFCs. The most accurate and cost-effective approach appears to be a 2-step sequential strategy in which SPT or Arah2-sIgE is followed by BAT in equivocal cases. To maximize safety and decrease false-negative tests to 0%, the 2-step sequential approach can be modified to do OFCs in the cases with equivocal BAT as well as in BAT-negative patients. We should bear in mind the limitations of OFC. Future studies will determine whether BAT can add to the OFC as an in vitro gold standard. The basophil activation test to peanut can be performed in cases in which standard allergy tests have failed to diagnose peanut allergy before considering oral food challenges.
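The 2-step work-up advocated above (SPT or Arah2-sIgE first, BAT reserved for equivocal results, and OFC whenever BAT is uninterpretable) can be written as a short decision routine. The Python sketch below is a hypothetical restatement of that logic rather than the authors' implementation: the 95% PPV and 95% NPV decision levels and the optimal BAT cutoff are study- and test-specific, so they are left as parameters, and all names are illustrative.

```python
# Minimal sketch of the 2-step sequential diagnostic strategy discussed
# above.  Cutoff values are intentionally parameters: the 95% PPV / 95% NPV
# decision levels (for SPT or Ara h 2-sIgE) and the optimal BAT cutoff are
# study-specific and are not reproduced here.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Cutoffs:
    first_test_ppv95: float   # result >= this -> allergic (>=95% PPV level)
    first_test_npv95: float   # result <  this -> tolerant (>=95% NPV level)
    bat_cd63_cutoff: float    # mean %CD63+ basophils at 10 and 100 ng/mL peanut

def two_step_diagnosis(first_test_result: float,
                       cutoffs: Cutoffs,
                       mean_cd63_pct: Optional[float] = None,
                       basophil_responder: bool = True,
                       conservative: bool = False) -> str:
    """Return 'allergic', 'tolerant' or 'OFC required'."""
    # Step 1: SPT wheal size (mm) or Ara h 2-sIgE (kUA/L) against the two
    # predictive-value cutoffs.
    if first_test_result >= cutoffs.first_test_ppv95:
        return "allergic"
    if first_test_result < cutoffs.first_test_npv95:
        return "tolerant"

    # Step 2: equivocal first-line test -> perform BAT.
    if mean_cd63_pct is None or not basophil_responder:
        return "OFC required"   # BAT missing or uninterpretable (non-responder)
    if mean_cd63_pct >= cutoffs.bat_cd63_cutoff:
        return "allergic"
    # The conservative variant also challenges BAT-negative patients,
    # trading extra OFCs for a false-negative rate of zero.
    return "OFC required" if conservative else "tolerant"
```

Setting `conservative=True` reproduces the modified strategy described above, in which OFCs are performed for BAT-negative as well as BAT-equivocal patients.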
BACKGROUND: Most of the peanut-sensitized children do not have clinical peanut allergy. In equivocal cases, oral food challenges (OFCs) are required. However, OFCs are laborious and not without risk; thus, a test that could accurately diagnose peanut allergy and reduce the need for OFCs is desirable. OBJECTIVE: To assess the performance of basophil activation test (BAT) as a diagnostic marker for peanut allergy. METHODS: Peanut-allergic (n = 43), peanut-sensitized but tolerant (n = 36) and non-peanut-sensitized nonallergic (n = 25) children underwent skin prick test (SPT) and specific IgE (sIgE) to peanut and its components. BAT was performed using flow cytometry, and its diagnostic performance was evaluated in relation to allergy versus tolerance to peanut and validated in an independent population (n = 65). RESULTS: BAT in peanut-allergic children showed a peanut dose-dependent upregulation of CD63 and CD203c while there was no significant response to peanut in peanut-sensitized but tolerant (P < .001) and non-peanut-sensitized nonallergic children (P < .001). BAT optimal diagnostic cutoffs showed 97% accuracy, 95% positive predictive value, and 98% negative predictive value. BAT allowed reducing the number of required OFCs by two-thirds. BAT proved particularly useful in cases in which specialists could not accurately diagnose peanut allergy with SPT and sIgE to peanut and to Arah2. Using a 2-step diagnostic approach in which BAT was performed only after equivocal SPT or Arah2-sIgE, BAT had a major effect (97% reduction) on the number of OFCs required. CONCLUSIONS: BAT proved to be superior to other diagnostic tests in discriminating between peanut allergy and tolerance, particularly in difficult cases, and reduced the need for OFCs.
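For readers who want to reproduce the type of cutoff evaluation summarised above (sensitivity, specificity, PPV, NPV and accuracy at a chosen decision level, with bootstrap confidence intervals based on 1000 replications), a generic sketch follows. It assumes challenge-proven allergic status as the reference standard; it is not the study's analysis code, and all names are illustrative.

```python
# Generic evaluation of a diagnostic cutoff against challenge-proven status,
# plus a nonparametric bootstrap CI for accuracy (mirroring the 1000
# replications mentioned in the methods).  Not the study's actual code.

import numpy as np

def diagnostic_metrics(test_values, is_allergic, cutoff):
    values = np.asarray(test_values, dtype=float)
    truth = np.asarray(is_allergic, dtype=bool)
    pred = values >= cutoff
    tp = np.sum(pred & truth)
    tn = np.sum(~pred & ~truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / truth.size,
    }

def bootstrap_accuracy_ci(test_values, is_allergic, cutoff,
                          n_boot=1000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    values = np.asarray(test_values, dtype=float)
    truth = np.asarray(is_allergic, dtype=bool)
    accuracies = []
    for _ in range(n_boot):
        idx = rng.integers(0, values.size, values.size)  # resample with replacement
        accuracies.append(
            diagnostic_metrics(values[idx], truth[idx], cutoff)["accuracy"])
    low, high = np.quantile(accuracies, [alpha / 2, 1 - alpha / 2])
    return low, high
```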
287
A stratigraphic investigation of the Celtic Sea megaridges based on seismic and core data from the Irish-UK sectors
The Celtic Sea contains an extensive assemblage of shelf-crossing linear ridges, covering an area of ∼65,000 km2 across the Irish, UK and French sectors, with their long axes generally orientated north-east to south-west.In the Irish-UK sectors, these are up to 200 km long, 15 km wide, 55 m high and 20 km apart, and represent the largest examples of such features in the world.These ‘megaridges’ are found between depths of −180 m and −100 m.In the French sector of the shelf, the ridges are smaller, existing up to 70 km long, 7.5 km wide, 50 m high and 16 km apart.Early workers argued that the ridges were tidal features, now moribund, formed during lower sea level, and it has subsequently been shown that rising post-glacial sea levels were associated with a mega-tidal regime capable of reworking shelf deposits to form ridges.Alternatively, a possible glacial origin of the ridges was considered by early workers, and has been reconsidered to account for the recovery of glacigenic sediments linked to seismic reflections within the flanks of the megaridges.The Celtic Sea shelf was glaciated by the Irish Sea Ice Stream, the offshore extent of which has been constrained by glacigenic sediments on the Isles of Scilly and in a handful of vibrocores from the Irish and UK sectors.The minimum extent of the ISIS was reconstructed from the distribution of over-consolidated diamict, Melville Till, recovered at the base of cores on the inner to mid-shelf collected by the British Geological Survey in the 1970s and 80s.Below water depths of −135 m, the MT gave way to cores of laminated silty clay, Melville Laminated Clay.Both sedimentary facies, MLC overlying MT, were retrieved in BGS vibrocore 49/-09/44 under 2 m of superficial sediment, acquired on the mid-shelf on the flank of a megaridge, corresponding to Ridge 3.Additional glacigenic sediments were recovered from three vibrocores on a megaridge flank, Ridge 5, near the shelf-edge and have been interpreted to contain both subglacially deformed sediments and laminated proximal glacimarine sediments containing a bivalve shell dated to 24.3 ka BP, suggesting extension of the ISIS to the shelf-edge during the Last Glacial Maximum.This shelf-edge age is consistent with dates from the south coast of Ireland, indicating that the initial ice advance occurred after 25–24 ka BP."This advance reached the Isles of Scilly by 25.4–24 ka BP before extending to the shelf-edge and subsequently retreating into St. 
George's Channel by 24.2 ka BP.This chronology suggests that the advance and subsequent retreat of the ISIS across the shelf was rapid.Tidal models of the Celtic Sea megaridges have been based on observations of their morphology and internal character, and modelling of shelf conditions during lower sea levels.Seismic profiles across the Celtic Sea ridges reveal dipping and truncated internal reflection surfaces, while short sediment cores obtained from the megaridges across the Irish-UK sectors show that the primary unit comprising the ridges, the Melville Formation, mainly consists of medium to coarse sand and gravel.Huthnance proposed a mechanism for ridge growth based upon the interaction between bottom friction over a mound and tidal currents, resulting in ridge growth through deposition on the crest and lateral migration.This is different to the mechanism of Houbolt, who suggested that longitudinal helical vortices either side of a mound can result in axial ridge growth with little lateral migration.Tidal ridges generally consist of medium sand with some bedding planes which transition to an underlying lag deposit at the base of the ridge, similar to observations of the MFm by Pantin and Evans.Tidal modelling investigations support the interpretation that the Celtic Sea ridges are constructional features formed during rising sea level by strong tidal currents following deglaciation ca. 21 ka BP, with the energy required to transport coarse sand.Palaeotidal model results presented in Scourse et al. suggest that the northern limit of the ridge field could represent the boundary where bed stresses weakened ∼10 ka BP, resulting in the features becoming moribund with no additional axial growth.However, a post-glacial tidal formation of the megaridges conflicts with the presence of glacigenic sediments on their flanks, including laminated and/or stiff fine-grained sediment, from the mid- and outer-shelf.Additionally, gravel and bounders have been recovered from the flanks of ridges across the Irish-UK sectors, with the presence of the former being suggested to represent a mantle of ice-rafted debris.The presence of glacigenic sediments overlying the ridges and the recovery of MT and MLC in core 49/-09/44 on a megaridge flank, was interpreted to indicate that the MFm existed prior to deglaciation.The observation that glacigenic sediments appear to drape the megaridge flanks in the Irish-UK sectors has been attributed to either partial glacial overriding of the mid-shelf ridges or to tidal ridges forming syngenetically with deglaciation.Alternatively, the entire internal bulk of the megaridges could represent large glacifluvial features or giant eskers.The large-scale internal cross-bedding and sandy composition of the MFm, as well as the presence of stiff glacigenic sediments, could be consistent with the characteristics of eskers.Eskers may also be hundreds of kilometers long and up to 80 m high, but commonly have widths <150 m that are consistent with a single subglacial meltwater conduits.However, eskerine ridges with widths of kilometres also occur, including features up to 10 km wide.Large ridges have been attributed to deposition from multiple conduits supplying sediment to over- and backlapping outwash fans along a receding ice margin.A time-transgressive origin can account for esker networks >100 km long within a receding ice-marginal zone, producing linear ridge segments with spacings of up to 19 km.Eskers are highly variable in structure and lithology, but generally contain several 
metres of plane- and cross-bedded sand and gravels.Additionally, eskers may contain a core of boulders and cobbles which fine upward and outward from the centre, representing the deposition of finer grained material due to decreasing meltwater pressures in the final stages of development.As eskers develop in ice marginal zones that may also be subaqueous, glaciofluvial sands and gravels may be interlayered with subglacial diamicts and both give way laterally to lacustrine or marine muds.The aim of this paper is to present new information on the Celtic Sea megaridges, based primarily on high-resolution shallow seismic data and sediment cores acquired in 2014 by the BRITICE-CHRONO project.These data improve our understanding of the stratigraphic context of the sedimentary units composing the megaridges and allow us to test hypothesised tidal and glacial formation mechanisms, providing essential stratigraphic context for glacial sediments of the Celtic Sea.The Celtic Sea shelf comprises bedrock outcrops and superficial sediments of Quaternary glacigenic deposits, mud, sand and occasional boulders."In contrast, the Celtic Deep, an elongated basin located south of St. George's Channel, comprises Quaternary sediments up to 375 m thick, interpreted to record deposition during several glaciations.The following section summarises the stratigraphic model of the Celtic Sea shelf and Celtic Deep from systematic litho- and seismic-stratigraphic studies by the BGS.Lithostratigraphic units include Layers A and B which form the superficial sediment cover across the shelf.Layer A consists of superficial sand, gravelly sand and muddy sand with thicknesses of up to 2 m in various locations, occasionally forming migratory bedforms.Below this is a coarser facies which commonly consists of a basal coarse sand to gravel, occasionally containing isolated boulders up to 0.5 m in diameter recovered from the megaridges across the shelf.Sediment cores recovered across the inner-shelf and from the Celtic Deep show the spatial uniformity of Layer B with an erosional lower boundary, observed as a sharp transition between sedimentary units.This succession of upper units extends as far north as the Celtic Deep, where laminated silty clays were interpreted to represent glacial deposition in quiet aqueous conditions, and are overlain by Layers A and B which have been radiocarbon dated to the late stages of post-glacial marine transgression.On the inner-shelf and in the Celtic Deep, Layer B contains ages ranging from 13.9 to 4 ka BP, the oldest age being obtained from the Celtic Deep, and is interpreted as extensively reworked while Layer A has been dated in the Celtic Deep to the last ∼13 ka BP.These two layers are found across the Celtic Sea shelf and overlie glacigenic deposits of the ISIS.Underlying Layers A and B, a single unit, the MFm, was identified across the shelf, corresponding to the bulk of the ridges.Based on seismic data, the MFm is inferred to predominantly consist of sand and is imaged to generally exhibit cross-bedding and other complex internal structures.It has also been noted that the MFm appears to mantle earlier deposits, suggested to have acted as a nucleus for ridge development.In places, the MFm is suggested to contain glacigenic sediments, as recovered in core 49/-09/44 where both MLC and MT facies are found.These studies inferred the MT and MLC to lie at the top of the MFm, while an exact relationship of these glacigenic sediments to the megaridges has not previously been determined.However, the 
correlation of core 49/-09/44 to recently acquired seismic data suggests that the MT corresponds to an identifiable reflection extending across the cored mid-shelf megaridge.On the outer-shelf, correlation of short cores to seismic data suggests that glacigenic sediments lie within the seabed refection and may extend across the surface of the megaridge.The Upper Little Sole Formation was identified from seismic data to exist below the MFm on the outer-shelf, separated by a typically strong acoustic reflection inferred to represent a coarse lag on its upper surface with channelling at the base of the ULSFm.A similar acoustic reflection beneath a ridge in the French sector was inferred to have been produced during transgression.Where this acoustic reflection separating the MFm and ULSFm is not imaged, this is explained by the bounding units having a similar lithology.In the UK sector, the ULSFm was inferred to consist predominantly of sand with some mud and was sampled by a single vibrocore recovered between megaridges on the outer-shelf, the base of which encountered a muddy sand containing an abundant foraminiferal assemblage interpreted to indicate a Late Pliocene or Early Pleistocene age and record marine deposition.Seismic reflection data were acquired in 2014 during BRITICE-CHRONO cruise JC-106 on the RRS James Cook using a Kongsberg SBP-120 chirp system with a swept frequency of 2.5–6.5 kHz and a vertical resolution of up to ∼0.1 ms two-way-time as measured from profiles.Systematic noise was present in most seismic profiles as a continuous ringing of the seafloor and other high-amplitude reflections.Due to the ringing and processing adjustments during acquisition, a direct comparison of acoustic amplitudes between profiles is not possible.Differential GPS positioning and motion correction were achieved through the usage of Applanix POS-MV, Seapath200 and CNAV3050 systems.Additional seismic data were acquired during the 2014 CV14007 cruise of the RV Celtic Voyager using a multi-tip Geo-Source 200–400 sparker, and during the 2009 IPY GLAMAR campaign of the RV OGS Explora using a Benthos CAP6600 chirp sub-bottom profiler and a Geo-Source 800 multi-tip sparker.While all seismic data were consulted, only the highest quality seismic data are presented here.Sediment cores were collected using the BGS 6 m vibrocorer and National Oceanography Centre 12 m piston corer, for which accurate positions on the seafloor were acquired using a Sonardyne Ranger ultra-short baseline system.A handheld shear-vane was used to provide information on the undrained shear strength of the material soon after the cores were split.A sound velocity of 1600 m s−1, the bulk average sound velocity determined from Geotek multi-sensor core logger measurements, was used to convert core depth into two-way-time and produce an indicative core penetration diagram on seismic profiles to aid visual correlation and to plot approximate reflector depths, where resolvable, on core logs.Seismic profiles show insets of core locations, where alternating black and red blocks correspond to 1 m core lengths from the seafloor.Seismic-facies were identified based on bounding reflections of distinct changes in acoustic character.These facies were correlated based on similar acoustic character, geometry and stratigraphic position, and were correlated to regional BGS units based on the original descriptions of seismic and sediment core data.Litho-facies were defined based on observed grain size, bounding surfaces and the relative order of 
similar deposits.We present results from sediment cores correlated to seismic profiles on and adjacent to the megaridges at six sites of interest across the shelf.This allows the identification of nine litho-facies identified from 19 sediment cores and seven seismic-facies identified from seismic profiles which are then integrated within three shelf-wide stratigraphic units, building on the regional framework proposed during BGS mapping based on similar methods.LF9 is the only facies cored below LF8 and consists of medium to coarse sand with shell fragments which displays a fining-upward trend to fine sand.LF8 consists of stiff clay and silt which generally contain fine to medium sand laminations that can in places exhibit deformation, e.g. 33VC.At Ridge 5, LF8 is similar but has a sandier composition.These sediments were recovered from the lower flanks of the megaridges and generally were not penetrated entirely.At Ridge 1, LF8 was penetrated and is up to 80 cm thick.LF7 comprises medium to coarse sand, occasionally with abundant shell fragments which were recovered on the flank of Ridge 4 and not penetrated entirely.At Ridge 3, LF6 is recovered as coarse sand which contains shell fragments and some clasts.LF5 is a medium to coarse sand at its base and sometimes displays subtle fining-upward grain size trends and bedding planes, recovered from Ridges 2, 3 and 5.LF4 is a massive soft clay found in the Celtic Deep underlying LF3.LF3 was cored as a 1.6 m thick soft mud unit with fine sand laminations and was only recovered in the Celtic Deep.LF2 consists of a coarse layer underlying LF1 which consists of medium sand to gravel with shell fragments and some clasts in shelf cores.This layer is generally <40 cm thick across the shelf.LF2 overlies older deposits, observed as the sharp transition between grain sizes, and appears uniform across the shelf apart from the inclusion of some clasts at Ridge 2.LF1a is only recovered in the Celtic Deep where it exists as a distinct shell fragment layer less than 10 cm thick at the base of LF1 muds.Overlying LF1a and LF2, LF1 represents sediments comprising the present day seafloor.These sediments display fining-upward medium sand to clay, or as a mud or medium sand unit with no observable grain size trend.LF1 has thicknesses exceeding 3 m in isolated depressions such as the Celtic Deep, but is generally <1 m thick across the shelf and megaridges.LF1 sediments sometimes display a coarsening upslope trend, as seen in Ridge 2 cores, with the inter-ridge troughs containing finer-grained sediments in comparison to coarser sediment on the upper megaridge surfaces.SF7, imaged at Ridge 4 and Ridge 3, rests upon a sub-horizontal reflection and has an upper surface forming positive features within the megaridges.In the case of Ridge 3, SF7 comprises low to medium amplitude internal reflections which downlap the lower boundary compared to SF7 at Ridge 4, which transitions from having few internal reflections at its base to having a more complex appearance near the top of the unit.SF6, imaged at Ridge 5 and Ridge 2, consists of positive features similar to SF7, but contains sub-horizontal low to medium amplitude reflections which become discontinuous towards the top of the unit and appear truncated against the upper surface.The upper surface of SF6 is irregular towards the midpoint of the megaridges and rises noticeably on the extreme lateral flanks of the unit to form plateaux.SF5, imaged at Ridge 1, has an upper surface which appears sub-horizontal and gently 
undulating and a lower surface which is highly irregular, while the internal character of the unit appears of low amplitude.SF4 comprises the upper unit of the investigated megaridges across the shelf, with an upper surface forming mounds.This unit can contain complex reflection geometries, generally including clinoforms which sometimes cross the unit throughout its thickness, e.g. Ridge 1.In one instance, Ridge 3 Line E, SF4 displays subtle evidence of channelling in its uppermost section.SF3 is a low amplitude unit which is found between layers of SF2 in the Celtic Deep.SF2 consists of beds of high frequency and medium to high amplitude reflections appearing as sub-parallel and wavy parallel which are visibly truncated against the lower boundary of SF1.SF1 comprises the uppermost seafloor and is generally of low amplitude in depressions such as the Celtic Deep.On the shelf, SF1 drapes the megaridges and inter-ridge troughs, varying laterally in thickness, amplitude and continuity.SF1 commonly appears low amplitude in troughs and becomes discontinuous and of medium amplitude upslope, in places filling depressions in the upper surface of SF1.The litho- and seismic-stratigraphic results reveal the presence of several main units comprising the megaridges, which we correlate to the main units identified during BGS mapping based on the original descriptions produced by Pantin and Evans.These units are: 1) a superficial drape, in most areas comprising a fining-upward succession which we correlate to layers A and B and identify for the first time as a seismically resolved unit; 2) the Melville Formation, a sandy unit corresponding to the bulk of the megaridges; and 3) the Upper Little Sole Formation, which we show to be composed of glacigenic sediments and comprise a lower megaridge unit.These results show that the megaridges consist of three stacked units, rather than a single unit as proposed by Pantin and Evans.This seafloor unit drapes the megaridges and older deposits across the shelf, and in most areas is represented by a distinct seismic unit up to several metres thick that corresponds in cores to a fining-upward succession with a coarser basal layer.At the seafloor, the composition of SF1 varies laterally, but generally consists of fine mud and sand in the inter-ridge depressions and the Celtic Deep, and medium to coarse sand upslope from the megaridge flanks.SF1 varies in thickness, in places thinning below the resolution of seismic data.A discontinuous seafloor seismic unit was also noted in places by Pantin and Evans, who were unable to define a regional unit corresponding to Layers A and B in cores.Here we infer LF1 and LF2 identified in our cores to correspond to Layers A and B of Pantin and Evans, and to Units I and II of Furze et al.In the Celtic Deep, LF1a, existing as a basal lag, is additionally correlative to Unit II from Furze et al. 
and may have a similar origin to LF2.Beneath the superficial drape, seismic data show that the megaridges on the mid- and outer-shelf, Ridges 2, 3, 4 and 5, mainly comprise two stacked seismic units, although only a single unit is observed within Ridge 1 on the inner-shelf.The upper unit correlates to SF4 and comprises the bulk of the megaridges.SF4 forms prominent mounds with seismically-imaged clinoforms and the top of the unit is composed of LF5, medium to coarse sand with shell fragments, consistent with the description of the MFm by Pantin and Evans.The base of the unit is in most places a strong sub-horizontal or slightly dipping reflection, as described by Pantin and Evans, which commonly coincides with an increase in seafloor slope relative to the lower flanks of the megaridge.Two possible interpretations exist for the position of the base of the MFm in Ridge 5.Our core sites are coincident with those of Praeg et al., who identified a high amplitude reflection on pinger data, and correlated it to the base MFm reflection resolved at a similar depth ∼8 km away on a BGS sparker profile 1978–55.This implies that the laminated and stiff fine sand and mud recovered from vibrocores VC-64, VC-63 and VC-60, corresponding to LF8 in this study, form a drape of glacial sediments at the top of the MFm.An alternative interpretation is presented in Fig. 5, based on the correlation of acoustic facies observed in other megaridges.Ridge 5 displays a similar internal configuration to Ridge 2, where the base of the MFm forms a sub-horizontal surface overlying SF6 and coincides with breaks in slope.At Ridge 5, a dipping reflection which varies in continuity and amplitude, as also described by Pantin and Evans, is coincident with a seafloor break of slope and can be interpreted to delineate the boundary between SF4 and SF6.In Ridge 5, SF4 and SF6 both consist of medium sand at their interface, accounting for the reduced amplitude reflection.The continuous reflection observed by Praeg et al. is here suggested to be part of the internal acoustic character of the unit below the MFm, forming a bed running parallel with the upper boundary of the unit, similar to layering seen within SF6 on the southern flank of Ridge 2.This interpretation contrasts with that of Praeg et al., in placing the lower boundary of the MFm higher up the flank of Ridge 5.The glacigenic sediments from LF8 and vibrocores VC-64, VC-63 and VC-60 are thus interpreted to come from SF6, which is exposed on the lower megaridge flanks.Seismic data show that the MFm overlies a unit that is often exposed on the lower megaridge flanks and is stratified in places, described as SF5, SF6 and SF7.These seismic facies are separated from the MFm by a sub-horizontal to slightly dipping reflection which in places truncates underlying reflections.The lower boundary of these facies either exists as an angular unconformity, truncating the Cockburn Formation of Oligocene to Miocene age, and in places displays channelling, e.g. Ridge 1 and Ridge 5.The stratigraphic position and internal character of the unit are all consistent with the description of the ULSFm by Pantin and Evans, who interpreted it to be confined to the outer-shelf.At Ridge 5, the lower boundary of the ULSFm identified in this study lies at the same depth as in Praeg et al., and all cores recovered from SF6 contain stiff, laminated and fine-grained material, interpreted as glacigenic by Praeg et al. 
and Scourse et al.BGS core 49/-09/44, recovered on the northern flank of Ridge 3, recovered 2 m of superficial sand and gravel above glaciaqueous muds and subglacial diamict, both of which were originally correlated to the MFm.As core 49/-09/44 cannot be accurately positioned due to the use of the Decca Navigator positioning system, the correlation of MT and MLC with seismic facies is tentative.Our seismic profiles suggest the MT may correlate with SF7 while the MLC appears to correlate to SF4.Neither facies were recovered in three neighbouring cores, 30VC and 31VC up to 15 m away and core 32VC 160 m away, all recovering LF5 instead under superficial sediments.However, the glacigenic MT and MLC from core 49/-09/44 may correlate to SF7, and thus the ULSFm, given their approximate recovery on the northern flank where the ULSFm is exposed near the seafloor as seen in other megaridges.The pebbly and shelly coarse sand of LF6 at the bottom of cores 28VC and 29VC seems to penetrate SF7, yet is distinctly different to those sediments recovered in core 49/-09/44 and from the ULSFm recovered at other megaridges.The results presented here provide new information on sediments of glacial to post-glacial age, sampled within the megaridges across the Irish-UK sectors of the Celtic Sea shelf.This information implies a revision of both the character and the ages of the three main regional stratigraphic units previously identified during mapping by the BGS.In turn, this revised stratigraphic framework allows us to test hypotheses for the formation of the megaridges.Published dates of LF1a and LF2, the basal layer of SF1, from 12 cores recovered across the inner-shelf, extending from the Celtic Deep to the northern megaridges, have yielded a wide spread of radiocarbon ages from 13.9 to 4 ka BP obtained from intertidal molluscs, the oldest of which were obtained from the Celtic Deep.Other published dates show that SF1 in the Celtic Deep has conformable ages from 13 to 3 ka BP.This evidence suggests that SF1 is of post-glacial age on the inner-shelf, and records marine deposition during the late stages of transgression towards the Holocene.On the outer-shelf, the fining-upward character of the unit is consistent with deposition during decreasing energy and rising post-glacial sea level.No published dates are available of this unit from the mid-to outer-shelf, therefore the age estimate of SF1 beyond the inner-shelf is an inference.However, due to the time-transgressive nature of post-glacial marine transgression, the age of SF1 is expected to become older towards the shelf-edge.No dated materials are available from the Melville Formation.However, the possible age of the unit can be constrained by dates from over- and underlying units.The MFm is unconformably overlain by the superficial drape, deposition of which dates from at least ∼14 ka BP on the inner-shelf.The MFm overlies the ULSFm, here shown to contain glacigenic sediments.Across the shelf, these glacigenic sediments have been radiocarbon dated to 27–24.3 ka BP.The reported Late Pliocene or Early Pleistocene age of the ULSFm by Evans and Hughes and Pantin and Evans is inconsistent with Late Pleistocene radiocarbon ages obtained from glacigenic sediments, including LF8, across the shelf.The Late Pliocene to Early Pleistocene age was based on an analysis of foraminifera in muddy sands at the base of a single BGS vibrocore, acquired between ridges on the outer-shelf.The core correlation to the ULSFm was not illustrated by a seismic profile, which were 
noted to be of low seafloor resolution.One possibility is that the one sample used by Evans and Hughes to constrain the age of the entire ULSFm was recovered from older deposits.Another possibility is that the muddy sand at the base of the core may represent glacigenic sediments, containing reworked foraminifera from older deposits.Here we reinterpret the ULSFm to be of Late Pleistocene glacial age.The oldest age of ∼14 ka BP from the superficial drape is consistent with numerical palaeotidal model outputs of the post-glacial marine transgression, showing a time-transgressive landward reduction in tidal bed stress between 16 and 12 ka BP across the Celtic Sea.This modelled reduction in energy suffers from the uncertainties linked to palaeotidal modelling, such as sea level history and ice extent and chronology inaccuracies, but could explain the fining-upward succession from a basal coarse layer observed in the superficial drape.In this context, prior to 16–12 ka BP, energetic tidal conditions during the peak of transgression, commencing at least 21 ka BP, provide the primary mechanism for the erosion of shelf sediment and the formation of the MFm.As tidal currents reduced in intensity, wave action continued to rework the ridges as water depth increased.Water depths shallower than 145 m, encompassing the upper ridge surfaces across the shelf at present, are exposed to wave action, preventing the deposition of fine muds which are generally found in the inter-ridge troughs as observed for LF1.Reduced water depths would have resulted in the wave energy envelope encompassing the megaridges and their neighbouring troughs entirely, resulting in winnowing and erosion, before focusing on the upper megaridge surfaces due to rising sea level.This wave erosion surface may have overprinted earlier erosion surfaces, such as the lower boundary reflection of the MFm where it is exposed on the lower megaridge flanks.We speculate that the superficial drape could be interpreted solely as the product of wave action during rising sea level.In this scenario, the basal lag could represent a wave erosion surface, being the last high-energy event to occur during transgression before sea-level rose toward its present level, recorded by the fining-upward deposits of LF1.The fining-upward sequence recovered from the upper part of the MFm may represent the sedimentary expression of the modelled reduction in tidal energy during transgression.Subsequent wave reworking of the megaridge surfaces could explain the origin of LF2 in 34VC, through wave conditions partially reworking the upper surface of the MFm to produce a coarser cap of comparable shelly medium sand.Wave reworking during lowered sea level could also account for the shelf-wide angular unconformity at the base of the superficial drape, which truncates strata in the Celtic Deep and clinoforms at Ridge 1.Such clinoforms are imaged within the MFm at Ridge 1, suggesting that this megaridge was formed through a single mechanism.Palaeotidal model reconstructions show tidal currents had maximum bed stresses generally aligned with the megaridge axes, providing such a mechanism for ridge growth.The model outputs suggest that energetic tidal conditions persisted as late as 12 ka BP following deglaciation and had sufficient energy to erode coarse sand, which could explain the significant quantity of coarse material comprising the MFm across the shelf.Cores show that the MFm consists of uniform massive shelly medium to coarse sand, similar to sediments associated with 
tidal bedforms.Below the lower boundary of the MFm is a reflection, interpreted to represent a coarse layer and one or more Pleistocene erosion phases, that in places truncates glacigenic strata of the ULSFm, e.g. Ridge 2.This surface may represent the initial regional erosion surface produced during the onset of energetic tidal conditions, as suggested for another Celtic Sea ridge.The initial erosion surface, preserved under the MFm, would originally have been regionally extensive before being overprinted by subsequent wave reworking during the formation of SF1, possibly merging both erosion surfaces on the lower flanks and inter-ridge troughs into one coarse layer identified as part of Layer B by Pantin and Evans.This can provide an explanation for the gravel to boulder size sediment reported in Layer B by Pantin and Evans, in that boulders were only found on the lower flanks and in the inter-ridge troughs of the megaridges where the original tidal erosion surface erodes into glacial deposits below.Therefore, LF2 in the inter-ridge troughs would represent a polygenetic erosion surface.The MFm generally overlies positive precursor features of the ULSFm on the mid- and outer-shelf, as suggested by previous observations.The glacigenic sediments of the ULSFm form a base on which the MFm rests, and represent an extension of the ULSFm further north than suggested by Pantin and Evans.The ULSFm is laterally discontinuous between megaridges, forming isolated mounds which are separated from the MFm by a distinct upper boundary reflection, consistent with the onset of energetic tidal conditions in the post-glacial tidal ridge model.This supports the suggestion that the topography of the partially eroded ULSFm may have influenced the orientation and formation of the MFm.However, it is also possible that deposition of the MFm and erosion of the underlying ULSFm occurred simultaneously, with erosion being more pronounced in the inter-ridge troughs.The sedimentary composition of eskers varies both vertically and laterally, commonly containing a core of boulders and cobbles which fines upward and outward towards bedded sand, representing decreasing meltwater pressure in the later stages of development.Meltwater drainage can result in highly variable internal structures, varying from plane- to cross-bedded.This is consistent with seismic observations of cross-bedding within the MFm, and with cores from the upper MFm that contain medium to coarse sand with abundant shell fragments, some displaying a fining upward trend.In addition, eskers can overlie, underlie or contain layers or lenses of subglacial deposits.Thus in the alternative interpretation of Ridge 5 presented by Praeg et al., subglacial and glacimarine sediments were interpreted to represent an eroded carapace at the top of a MFm composed mainly of sand.This was similarly interpreted for Ridge 3 by Praeg et al. 
where a strong reflection at the same level as subglacial till at the base of core 49/-09/44 was suggested to also record a glacigenic carapace over the MFm.As noted previously, the age of the MFm is constrained by other units to lie between 24.3 and 14 ka BP.The erosion event associated with the base of SF1 has an unknown duration, although it occurred at or before ∼14 ka BP on the inner-shelf.Therefore, the unconformity could represent the product of energetic tidal conditions during the early stages of the post-glacial marine transgression, as was suggested for ridges in the French sector, which palaeotidal reconstructions suggest had commenced by 21 ka BP.This scenario, which suggests that SF1 is older than ∼14 ka BP across the shelf, can most simply facilitate the esker model where the features have survived transgression which only produced SF1 and its underlying unconformity.Therefore, if LF2 is dated to the onset of energetic tidal conditions during transgression, or deglaciation, assuming aqueous conditions allowed for deposition, then this scenario would be consistent with a glacial origin of the MFm.If the MFm is of glacifluvial origin, it implies that the megaridges largely survived transgression and/or are eroded remnants of what were initially much larger features.Erosion of such pronounced features is likely, as palaeotidal model outputs suggest that transgression lasted for several thousands of years and was capable of entraining coarse sand throughout.The survival of the megaridges is thus surprising, unless they were armoured by the development of a coarse lag which could be represented by LF2 on the upper megaridge surfaces.Additionally, in the glacifluvial scenario of the MFm, the MFm-ULSFm boundary reflection, representing an erosion surface, requires a glacial explanation.The initial advance of the ISIS into the Celtic Sea occurred after 25–24 ka BP and ice retreat started from the shelf-edge by at least 24.3 ka BP."The ice margin had reached St. 
George's Channel by 24.2 ka BP, indicating retreat was rapid.If the megaridges are of glacifluvial origin, this timing implies the large quantity of sediment comprising the MFm to have been deposited during a short residence time of a few hundred years of the ISIS on the shelf.In contrast, eskers are typically observed to be absent in areas of higher ice flow velocities and large and continuous esker generation is favoured during a regime of gradual and stable ice retreat.We showed that the MFm is chronologically constrained by over- and underlying units to have formed between 24.3 and 14 ka BP.This coincides with deglaciation of the shelf, and the ensuing main phase of marine transgression which palaeotidal model outputs suggest was characterised by large tidal amplitudes.Therefore, the constraints on the age of the MFm do not unequivocally allow the differentiation between a tidal or glacifluvial origin for the MFm.In addition, the available geophysical and sample data on the internal character of the MFm can be accommodated by both models.The preservation of the MFm as large glacifluvial ridges surviving a high-energy post-glacial transgression is difficult to explain in relation to palaeotidal models which suggest that energy was sufficient enough to continuously entrain coarse sediment for several thousands of years.Additionally, if the MFm formed in the final stages of ice withdrawal from the Celtic Sea, this would represent a significant quantity of glacifluvial sediment being deposited as eskers within a few hundred years.Eskers are generally formed subglacially, yet the MFm is found in the French sector of the Celtic Sea, well outside defined lateral ice limits on the Isles of Scilly.Therefore, we suggest that the megaridges are less likely to be preserved eskers, and it is more likely that the MFm represents post-glacial tidal deposits mantling a partially eroded glacial topography comprising the ULSFm.Caston suggested that offshore tidal ridges may owe their morphology and orientation to either excess sediment availability in an energetic environment, the remnants of a sheet deposit being preferentially eroded into by high-energy conditions, or an equilibrium state with sediment transport paths in addition to possibly being anchored to an underlying feature.The discontinuous nature of the ULSFm is possibly a result of the high-energy environment modelled to have occurred after the onset of ISIS deglaciation, resulting in the truncation of laterally continuous strata, e.g. 
SF6 at Ridge 2.These laminations suggest that the ULSFm was originally a continuous sheet.Therefore, the recovery of stiff glacigenic sediments from the remaining ULSFm suggests that such sediments were more readily preserved during high-energy conditions while others were eroded.The MFm may have played a protective role, preserving the remains of the underlying ULSFm from further erosion after the onset of MFm deposition, or stiff sediments produced mounds as erosion commenced, to which the MFm anchored during its formation, or both.In such scenarios, the anchoring of a sand body to an underlying feature can allow ridge growth through the Huthnance mechanism as tidal currents interact with the raised mound.Therefore, the megaridges may owe their orientation and location to inherited glacial properties reflected by the high undrained shear strength of the sediments, resulting in their preservation as mounds.The occurrence of glacigenic material contributing to the bathymetric expression of the megaridges may explain the contrasting morphology of the smaller ridges on the eastern shelf in comparison to the megaridges displayed here on the western shelf, and provide insight into the extension of the ISIS.If the revised stratigraphic model presented here is applicable to similar megaridges on the western shelf, then the underlying glacigenic ULSFm, responsible for the large megaridge sizes, may record the extension of the ISIS to the shelf-edge adjacent to the Goban Spur and merge with the shelf-edge limit suggested by Praeg et al. and the lateral limit on the Isles of Scilly proposed by Scourse et al.Correlation of sediment cores with decimetric-resolution seismic data has provided new insight into the glacial to post-glacial stratigraphy of the Celtic Sea shelf and the link between the linear sediment megaridges and glacigenic sediments.Several key findings are revealed:Across the shelf, cores recovered glacigenic sediments, consisting of massive or laminated stiff muds, which correlate to the ULSFm where it is exposed on the lower megaridge flanks.The ULSFm is thus a Late Pleistocene unit, much younger and more extensive than previously suggested, which forms a precursor glacial topography beneath the investigated megaridges on the mid- to outer-shelf, contributing to their bathymetric expression.The overlying MFm forms the bulk of the megaridges, displays internal bedding and comprises massive medium to coarse sand and shell fragments, in places fining-upward, consistent with either a tidal or glacifluvial origin.The age of the MFm is constrained by published dates from under- and overlying units to between 24.3 and 14 ka BP, encompassing ice withdrawal from the shelf-edge and the period of strong tidal currents modelled during marine transgression.The megaridges and inter-ridge areas are unconformably overlain by a superficial drape consisting of fining-upward deposits of laterally varying character, recording marine deposition over at least the last 14 ka.It can thus be hypothesised that:The undulating topography of the ULSFm in the western Celtic Sea influenced the development, location and orientation of the overlying MFm, and thus the megaridges.The MFm is more likely to be of post-glacial tidal origin as it is unclear how glacifluvial landforms could have been deposited beyond currently accepted ice limits and during rapid deglaciation of the shelf, or have survived the post-glacial marine transgression if it achieved the modelled duration and intensity.The unconformity separating the
ULSFm and overlying MFm represents the erosion surface produced during the onset of strong tidal currents associated with the early stages of transgression.LF2 represents the product of wave reworking during lowered sea level and diminishing tidal current conditions, overprinting the earlier tidal erosion surface in the inter-ridge areas, while LF1 represents transgressive deposits being reworked by present day conditions on the upper surface of the megaridges.As the presence of the ULSFm influences the size of the megaridges, similar megaridges across the western shelf may also contain a core of glacigenic material, with implications for the extension of the ISIS to the western shelf-edge.This glacial legacy can explain the morphological differences between the megaridges of the western glaciated sector and smaller ridges of the eastern non-glaciated sector of the Celtic Sea.These hypotheses can only be further investigated through similar stratigraphic investigations utilising the integration of high-resolution geophysical data and longer sediment cores of the western megaridges.Further palaeotidal modelling of the Celtic Sea is recommended to include the effect of glacially-influenced bed topography and its evolution in response to energetic conditions during subsequent transgression.
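The recurring argument above that modelled transgressive tidal currents could continuously entrain coarse sand can be made concrete with a back-of-the-envelope threshold check. The sketch below is illustrative only and is not part of the original study: the grain sizes, drag coefficient and critical Shields number are assumed, generic textbook values rather than parameters taken from the palaeotidal models cited. It estimates the depth-averaged current speed needed to mobilise medium to coarse sand using the Shields criterion and a quadratic drag law.

```python
# Illustrative only: estimate the depth-averaged current speed needed to
# entrain sand grains, using the Shields criterion and a quadratic drag law.
# All parameter values below are assumed, generic textbook figures, not
# values taken from the study or its palaeotidal models.

RHO_W = 1027.0    # seawater density (kg/m^3), assumed
RHO_S = 2650.0    # quartz grain density (kg/m^3), assumed
G = 9.81          # gravitational acceleration (m/s^2)
C_D = 0.0025      # depth-averaged drag coefficient, assumed
THETA_CR = 0.045  # critical Shields parameter for sand, assumed

def critical_current_speed(d_grain_m: float) -> float:
    """Depth-averaged current speed (m/s) at which grains of diameter
    d_grain_m begin to move under the assumptions above."""
    # Critical bed shear stress from the Shields criterion (Pa)
    tau_cr = THETA_CR * (RHO_S - RHO_W) * G * d_grain_m
    # Invert the quadratic drag law tau = RHO_W * C_D * U**2
    return (tau_cr / (RHO_W * C_D)) ** 0.5

if __name__ == "__main__":
    for d_mm in (0.5, 1.0, 2.0):  # medium to very coarse sand
        u_cr = critical_current_speed(d_mm / 1000.0)
        print(f"d = {d_mm:.1f} mm -> U_cr = {u_cr:.2f} m/s")
    # Yields roughly 0.4-0.8 m/s, speeds readily exceeded by strong
    # transgressive tidal currents, consistent with sustained
    # entrainment of coarse sand.
```

Under these assumptions, sustained currents of well under 1 m/s keep medium to coarse sand mobile, which is why the survival of unarmoured sandy ridges through a multi-millennial, megatidal transgression is treated above as a difficulty for the esker interpretation.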
The Celtic Sea contains the world's largest continental shelf sediment ridges. These megaridges were initially interpreted as tidal features formed during post-glacial marine transgression, but glacigenic sediments have been recovered from their flanks. We examine the stratigraphy of the megaridges using new decimetric-resolution geophysical data correlated to sediment cores to test hypothetical tidal vs glacial modes of formation. The megaridges comprise three main units: 1) a superficial fining-upward drape that extends across the shelf above an unconformity. Underlying this drape is 2) the Melville Formation (MFm), which comprises the upper bulk of the megaridges, sometimes displaying dipping internal acoustic reflections and consisting of medium to coarse sand and shell fragments; characteristics consistent with either a tidal or glacifluvial origin. The MFm unconformably overlies 3) the Upper Little Sole Formation (ULSFm), previously interpreted to be of late Pliocene to early Pleistocene age, but here shown to correlate to Late Pleistocene glacigenic sediments forming a precursor topography. The superficial drape is interpreted as a product of prolonged wave energy as tidal currents diminished during the final stages of post-glacial marine transgression. We argue that the stratigraphy constrains the age of the MFm to between 24.3 and 14 ka BP, based on published dates, coeval with deglaciation and a modelled period of megatidal conditions during post-glacial marine transgression. Stratigraphically and sedimentologically, the megaridges could represent preserved glacifluvial features, but we suggest that they comprise post-glacial tidal deposits (MFm) mantling a partially-eroded glacial topography (ULSFm). The observed stratigraphy suggests that ice extended to the continental shelf-edge.
288
REDD+, hype, hope and disappointment: The dynamics of expectations in conservation and development pilot projects
Large-scale, internationally-led programs have repeatedly been framed as necessary solutions to forest governance challenges in the tropical Global South.Programs have included integrated conservation and development programs, participatory forest management, payments for ecosystem services and, most recently, reducing emissions from deforestation and forest degradation, and enhancing forest stocks through improved forest conservation and management.These programs are designed to tackle environmental problems such as biodiversity loss and climate change, as well as to address community development challenges.These problems and challenges are defined in such a way that technical, multiple-win and increasingly market-based solutions are required to solve them.The early stages of these new programs, often involving village level pilot projects, are characterized by large amounts of money, resources, attention and high expectations.The reality of these initiatives rarely lives up to the high early-stage expectations, and so subsequent solutions that require new policy models and technical programs are sought.As such, a number of academics have conceptualized these programs as ‘conservation fads’, defined by Redford et al. as ‘approaches that are embraced enthusiastically and then abandoned’.Discussion about the relationship between these international programs and expectations is increasing amongst both practitioners and academics, with expectations defined as imagined ideas about the future that circulate through social interaction.Most recently, the focus of this discussion has been on REDD+, which has stalled at the end of the pilot phase leaving many early expectations unfulfilled.It is argued that the early stages of REDD+ have led to the development of an ‘economy of expectations’, whereby local level processes, realities and visions for the future are altered through involvement with global, market-based conservation programs.As a result, expectations and the management of expectations have been highlighted as one of the biggest challenges of REDD+ pilot project implementation, with practitioners now required to ‘develop strategies to deal with the backlash’.Despite this interest in expectations, there has been little detailed exploration of how they are produced, how they circulate and the impact they have throughout the different stages of conservation and development projects.Thus more understanding is needed about what the science and technology studies (STS) literature calls the ‘sociology’, or ‘dynamics’, of expectations in this context.Exploring the dynamics of expectations is necessary for a full understanding of social change.We posit that this is particularly relevant in relation to pilot projects, which are often used to test new international conservation and development programs at the local level in order to generate quick and tangible results, to influence policy and to generate further donor funds.As such, pilot projects require buy-in, engagement and action from all of the actors involved and so drive social change.Yet they rarely come with a guarantee of continued funding and activity post-pilot.We address this knowledge gap first by reviewing sociology of expectations literature, which is primarily drawn from the field of STS.We identify core themes and characteristics that are relevant to international forest conservation and development programs.We then use this to investigate a case study of REDD+ pilot projects in Tanzania, drawing primarily on in-depth narrative interviews
conducted with actors from global to village levels and collected after the pilot projects were phased out.This includes detailed investigation of two pilot project case studies, which provide empirical evidence of contrasting approaches to pilot projects and expectations.Through this analysis, we contribute to a better understanding of the dynamics of expectations in conservation and development practice, as well as the dynamics of expectations more broadly.In doing so, we also contribute to a deeper understanding of pilot projects, and other interventions, as agents and outcomes of social change.This approach contrasts with common, instrumental forms of project evaluations that focus on the impacts of interventions on forest governance or performance against project objectives.As such we also provide new and useful insights to policy-makers and practitioners involved in international forest conservation and development projects.Expectations can be defined as imagined ideas about the future that are produced, circulated and mediated through social interaction, resulting in social change.Actors’ actions and decisions are always made in relation to expected outcomes and consequences.Expectations can be both positive and negative, and both individual and collective, and are therefore both context-specific and related to broader shared or collective visions.Collective expectations develop in relation to shared ‘imaginaries’, which are defined as ‘imagined forms of social life and social order that centre on the development or fulfilment of innovative scientific and/or technical projects’.New conservation and development programs are often framed within a multiple-win rhetoric, towards imaginaries of international forest governance for the benefit of all.And it is argued that market-based solutions such as REDD+ heighten these dynamics due to their emphasis on future speculation and their transnational, abstract nature, which is less aligned with local contexts than previous programs.Early stages of new innovation or technological development both drive and are driven by hyper expectations, or hype, which can be defined as unreasonable and unachievable expectations of what the new innovation can deliver.As such, a sense of urgency ensues, driven by both fear of environmental harm and the imaginaries of future conservation.Newness is fetishized and as such ideas that are framed as being new, different and distinct are favored over the advancement of existing solutions, not least because shortfalls associated with past solutions are erased.As such, Mosse argues that in development ‘the intense focus on the future, on new beginnings, is rarely moderated by an analysis of the past’.These new beginnings often require the use of show or pilot projects to bring new policy to life.Hype and expectations can therefore focus energy and attention on one new solution, becoming a barrier to critical thinking, to alternative solutions and to approaches that favor incremental change.Expectations can be described as being performative in that their existence mobilizes both actors and resources, and as such they provide an important function in the early stages of innovation.They can coordinate and broker relationships between a wide range of actors – both horizontally and vertically across different scales from the global to the local.As collective expectations develop, ‘communities of promise’ build up around them and actors join these discursive communities despite individual uncertainties and reservations, 
often to ensure that they do not get left behind.In this sense, economies of expectation can develop in which new realities are created.Expectations have thus been defined as ‘forceful presence’.This forceful presence can be seen in the context of global, market-based conservation mechanisms that create new social structures, nature valuations and imaginaries that in turn encourage more activity and higher expectations.There is much discussion in both the STS literature and critical conservation and development literature about the extent to which expectations are raised intentionally, given their performative role.On the one hand, it can be framed as an inevitable and unavoidable outcome of social interaction and innovation.Once something is enacted, it becomes part of a reality that is linked to the actor’s original intentions but also combines with other actors and contexts to take on a life of its own, often resulting in unintended consequences.However, others argue that innovators and policy-makers deliberately raise expectations in order to mobilize resources and enrol actors into communities of promise, particularly in conservation and development where actors such as NGOs and government agencies have to compete for scarce resources, such as donor funds, legitimacy and reputation.Elevated expectations, created by hype associated with early stages of innovation, result in hype and disappointment cycles.Actors’ efforts to sustain expectations are overwhelmed by the reality of underlying issues and so communities of promise collapse.This results in what Mosse refers to as an unintended but inevitable gap between international development policy and the realities of implementation.Repeated cycles of new international conservation and development programs, or ‘fads’, therefore result in repeated hype and disappointment cycles.Disappointment can then lead to outcomes including apportioning blame, disillusionment, damaged credibility of innovators and policy-makers, and adverse effects on future innovations.Such outcomes could include conservation and development NGOs losing their legitimacy, resistance to future projects at the local level and environmental destruction by villagers whose expectations of project involvement have not been met.However, in some cases new cycles of hype provide a protected space for new innovation and past disappointment is forgotten.Although cycles of expectation and disappointment can be conceptualized as inevitable, their impacts and implications are highly contextual, related to the social dynamics of expectations.The initial framing of the innovation by those developing and/or selling it impacts the development of collective and individual expectations.However, expectations are continually mediated by actors’ past experiences, social interactions, networks and activities, and social framings.For example, West finds that villagers engage with projects with the understanding that they are entering into long-term, reciprocal, social relationships with practitioners towards imaginaries of development and progress.This results in disappointment once projects end and these expectations are not met.Expectations, uncertainty and disappointment can be conceptualized as dynamic, continually influencing and being influenced by social discourse and interactions.As such, attempts by practitioners to manage expectations once they have been raised are likely to be unsuccessful.A relationship between actors’ proximity to the production of knowledge, and their levels
of uncertainty and expectations, can also be identified.Brown and Michael find that actors closest to the production of knowledge have high levels of uncertainty about the success of the new idea or solution and, as a result, low expectations.Actors furthest away from knowledge production tend to have low uncertainty and therefore the highest levels of expectations.Those closest to knowledge are the source of raised expectations, yet disappointment affects the user groups furthest away from knowledge, highlighting the asymmetrical nature of negative impacts of unrealistic expectations.In Tanzania, evidence suggests that the REDD+ pilot phase fell well short of initial expectations and promises of change.As such, Brown argues that a reworking of economies of expectation is required in order that the uncertainties of those closest to knowledge become more transparent; particularly to those most negatively impacted by hype and disappointment cycles.REDD+ pilot projects in Tanzania are used as an instrumental case study or a bounded case that is explored in detail in order to illustrate an issue of concern.REDD+ is a mechanism developed by the United Nations Framework Convention on Climate Change based on the principle that through carbon markets, or international donor funding, developing countries are financially rewarded for preventing deforestation, protecting forests and hence increasing global carbon stocks.When the initial pilot stages commenced in many countries following the Bali Action Plan in 2007, there were high hopes internationally that REDD+ would achieve multiple wins by contributing to global climate change mitigation targets, biodiversity protection and local forest conservation and development objectives.Large amounts of funding and resources were employed to get countries ‘REDD+ ready’ and pilot projects implemented in order to test REDD+ mechanisms.Critical voices also emerged during the early stages, with actors including academics, indigenous groups and practitioners warning of potential human rights, land tenure and justice issues.As the REDD+ readiness phase and associated pilot projects continued, it became evident that the mechanism was harder to implement than expected and that global REDD+ funding mechanisms were not yet in place.As such, many projects have stalled, been abandoned, or have evolved into more traditional conservation and development projects that no longer focus on monetary incentives for carbon storage and sequestration.REDD+ therefore makes a timely and relevant case study through which to explore expectations in the early stages of forest conservation and development initiatives.In Tanzania, the REDD+ readiness phase was active between 2009 and 2014.It was supported by US$80 million of bilateral funding from Norway’s International Climate and Forest Initiative and managed by the Norwegian Embassy in Tanzania.Additional national-level strategic support was given by the UN and World Bank Forest Carbon Partnership Facility.The Norwegian funding supported a national REDD+ Task Force made up primarily of government actors, thematic working groups consisting of government, civil society and private sector actors, and a REDD+ Secretariat based at the University of Dar es Salaam.These institutions supported the Vice President’s office in the development of the Tanzanian REDD+ strategy document.Funding was also used to support a large number of research projects and to enable the implementation of nine pilot projects, seven of which reached completion.A mix of
international NGOs and well-established national NGOs were chosen to implement the projects, which lasted between four and five years.The objectives of the pilot projects included testing REDD+ mechanisms with communities in a wide range of contexts, getting communities ready for REDD+, delivering widespread stakeholder awareness and involvement in REDD+, delivering REDD+ results such as emission reduction, and supporting national policy-making.The NGOs took very different approaches to piloting REDD+, with some aiming to meet all objectives and fully test the mechanism and others choosing to focus on only a few elements.Two individual case studies were chosen to explore expectations in REDD+ pilot projects in more detail and to reflect two contrasting approaches to piloting.Tables 1 and 2 present the key facts related to these two individual pilot project case studies.It is argued that the pilot projects achieved some objectives, generated useful insights and demonstrated that REDD+ at the village scale in Tanzania is feasible.However, aside from investment into the National Carbon Monitoring Centre, continued funding via donor support or carbon finance was not in place when the pilot projects were completed and national plans for a phase two of REDD+ were not clear.As such, most of the pilot projects ended with little scope for continuation through REDD+, although many of the NGOs involved continued working with the communities through other projects and funding.An interpretive, actor-orientated approach to case study research was taken.Methodologically this involves using ethnographic methods to unpack lived experiences from the perspectives of individual actors and actor groups, and emphasizes the interplay between outside influences such as internationally-led interventions, and the different realities, perceptions, social interests, and relationships of actors involved.We agree with Cooper and Pratten that ethnography should draw heavily on individual narrative in order to do justice to participants’ lived experiences.This article is based primarily on 70 in-depth, narrative interviews conducted with a wide range of different actors involved in the pilot projects.These narratives were collected between September 2015 and May 2016 and were selected to reflect a broad range of respondent demographics, characteristics and viewpoints and included international, national, regional, district and village-level actors.Additional ethnographic data was collected during this period, as well as during additional visits to Tanzania between September and October 2014 and March and August 2015, in order to support the narrative interviews and ensure credibility of the research.The two individual pilot project case studies outlined in Tables 1 and 2 were chosen to provide rich data or ‘thick’ description of project-level and village actor experiences, and in order to reflect two very different approaches to piloting.Data was collected in two villages involved in each pilot project.They are referred to as K1 and K2 in Kilosa and R1 and R2 in Rungwe for confidentiality reasons.These villages were selected following key informant interviews with national and district NGO representatives, regional and district government representatives and academics, and consulting NGO project documents.They were selected as villages that had been most fully involved in the pilot projects.Table 3 summarizes the data collected.Most of the data, including the narratives, was collected after the pilot projects had
ended, with the objective of gathering reflections of the whole pilot process, as well as providing insights into the impact of pilot projects beyond their completion date.It is noted that the expectation narratives represent the actor framings at that moment in time, which are likely to be mediated by what actually happened during and since the pilot projects.It was also noted that at the start of data collection, the lead researcher and research assistants were automatically linked to REDD+ by villagers, particularly in Kilosa.We tried to overcome this by spending time prior to each interview explaining our position as independent academic researchers; however, we note that a belief that we may have been able to directly influence REDD+ may have influenced some of the responses.The data was analyzed in two phases: inductively then deductively.Firstly, narrative content analysis was used in order to understand the experiences of actors and the meanings they attributed to these experiences.Storylines around expectations were identified within each narrative, with storylines defined as part of a narrative that allows actors to ‘give meaning to specific physical or social phenomena’.The actor storylines were then compared and analyzed inductively to find patterns.During the second phase, these narratives were analyzed using the aforementioned dynamics of expectations concepts.When reflecting back on the early stages of the REDD+ pilot projects in Tanzania, national and international actors identified high levels of hype or hyper expectations, which are highlighted by Brown as being a core characteristic of early innovation.International actors saw REDD+ as an opportunity for Tanzania to establish its position internationally as a leader in REDD+ knowledge and practice.Expectations of continuation post-pilot were identified among national actors, including government officials and NGOs.These expectations of continuation included more funding from donors, a national level REDD+ program spearheaded by the government, and continued funding for communities via carbon markets, and were largely related to the ‘opportunity for communities to benefit, to take carbon as one of the products of the forest’.1,There were also expectations that REDD+ would provide a solution to forest conservation in Tanzania and become a source of much-needed, ongoing, financial support for the forestry department.One government official reflected that ‘people thought “ah okay, the forests are now safe because of REDD”… even my director had that notion.’2,REDD+ was framed as a multiple-win solution to forest governance issues, producing collective expectations of a market-led solution towards imaginaries of abundant resources and well-protected forests.This was despite the fact that funding was secured for a pilot phase only, demonstrating the over-inflated expectations inherent in the hype during the early stages of new innovation.A sense of urgency was also identified by national actors when reflecting on the early stages of the projects, which was cited by Embassy employees as influencing the decision to have NGOs lead the pilot projects.One of the actors involved in the development of the pilot projects reflected that ‘the whole international thing’3 driving REDD+ meant that the pilot projects began before key actors such as NGOs and government officials were fully aware of what REDD+ involved.Part of the original donor strategy was to build on the existing PFM tradition in Tanzania but despite this, much of the emphasis
was put on the new elements of REDD+, particularly in relation to carbon payments.The NGOs ‘were encouraged to include front-loaded payments within their budgets to test payment and benefit sharing arrangements in the expectation of making longer-term carbon sales’.This aligns with Brown and Michael’s argument that by emphasising newness, innovation gains more traction, funds and attention, and that this in addition drives higher expectations.This also allowed the REDD+ project to be seen as a new beginning and so failures of past programs could be overlooked.This early-stage hype and associated expectation influenced and informed the activity of many of the national actors during the early stages of the REDD+ pilot projects.This included development of a national media campaign to raise awareness of REDD+, and the framing of the pilot projects by the NGOs: ‘this forest will pay you.Not only for one year, but will pay you continuously!I mean this is a bank account.You are saving money and getting interest.So I think an expectation that OK by the time the project is ending, we will have our project document, we will have our process verified, and we've qualified to be paid.’4,In Kilosa, a range of actors including villagers, local leaders, district government and local NGO staff reflected on high expectations at the start of the process, during which they were visited by the Task Force, the Embassy and the NGO, and were involved in the FPIC process.These expectations were both in relation to outcomes of the pilot project itself, such as village education and development, and assumptions of future benefits of REDD+, such as ongoing carbon payments and improved local climate conditions.During the early stages, negative expectations related to the project were also identified among villagers in both K1 and K2.These included worries that ‘these Europeans have come from their home countries to come and steal our land’,5 that people would be moved from their farms and that wild animals would be introduced to the area.As such, strong positive and negative expectations existed alongside one another during the initial stages of the Kilosa pilot projects.Factors such as negative past experiences of government conservation programs and proximity to strict national wildlife parks drove these negative expectations, or fears, illustrating the mediation of expectations by actors and contexts.Narratives of actors involved in the REDD+ pilot project in Rungwe reflect very different expectation dynamics at the start of the project.At this time expectations in relation to carbon payments began to develop among regional, district and village government actors as a result of national media campaigns, attendance of meetings on REDD+ and visits from the Task Force.However, the implementing NGO decided not to focus on REDD+ specific mechanisms such as trial carbon payments, preferring to focus on research and livelihood activities.This was largely as a result of concerns about ‘making promises to communities that you can’t deliver’6 and maintaining their legitimacy among communities with whom they have a longstanding relationship.The NGO did not go through a village-wide consultation process at the start of the project, instead gaining consent from village leaders, and did not focus on widespread participation in the livelihood projects.Broader village participation was encouraged in the education programs, which framed the project and forest conservation as being about reducing the risk of drought, floods and rising
temperatures,7 without directly focusing on carbon.As a result, awareness of and participation in the REDD+ pilot project among actors outside of village government and committees was low, and village level actors spoke of very few expectations of the pilot project.Expectations can therefore be conceptualised as a product of the framing of those who are ‘selling’ the new idea; in this case the NGOs in Kilosa and Rungwe, who took very different approaches to the pilot projects.The performative function of expectations can be identified through national level and Kilosa actor narratives, driving and being driven by the aforementioned international and national level hype, sense of urgency, and assumptions of a REDD+ future.Actors were enrolled into ‘communities of promise’, both horizontally at the national level, and also vertically across regional, district and village levels in Kilosa.At the international and national level, these discursive communities of promise developed despite personal uncertainties of a number of actors.These uncertainties were largely related to the unknown nature of carbon financing mechanisms central to REDD+.Differences between lower personal levels of expectations and much higher collective expectations at the time were identified.Reflections of a number of national level actors suggest that collective expectations were performative in that they enrolled actors into communities of promise and project engagement despite their personal concerns.There were also suggestions that they resulted in an uncritical approach to piloting, as identified by Brown: ‘I sometimes feel guilty that I was part of it.You know, some people just preaching like priests, the way they preach about God and Jesus Christ and all those kind of things, but without having a critical analysis about what it really means.'8,At the international and national level uncertainty was in fact performative, acting as one of the driving forces behind the choice to pilot, in order to avoid ‘policy-making in a void’.9,The framing of REDD+ as new, unknown and filled with future possibilities mobilized a large amount of funding and activity and led to the aforementioned perceived need to pilot to bring this new policy to life.In Kilosa, expectations were also performative and can be seen to be both the cause and outcome of change within the district and the villages.Communities of promise built up around the expectations of village development, ongoing carbon payments, and improvements to local ecosystem services.Actors within these communities of promise, who included the Task Force, district officials and local leaders, reassured villagers that they would not be moved from their farms, that wild animals would not be brought into the area, and that the ‘future is bright and REDD’.10,Expectations were then in turn influenced by the early stages of project activity, which included villagers receiving their first trial payment, the building of the office and the establishment of some of the livelihood activities:‘What made me change is that I received education that they will not keep animals again, we will just conserve forest and water sources only… also another thing is after seeing that they were supporting the construction of this office and also they promised us they will sell the carbon dioxide and we can get money that will help to conserve our forest and do village developments.’11,Although village-level actors had been made aware of the project timescales during the FPIC process, longer term
expectations related to village development and carbon payments began to rise.This in turn led to expectations becoming what Van Lente calls ‘forceful presence’.An‘economy of expectation’ developed, with new social structures, discourses and activity emerging, further driving collective expectations of considerable change.As part of the CBFM land planning process, Village Land Forest Reserves were established, the size of which was decided by village committees.However, due to the requirements and promises of REDD+, the communities were encouraged to ‘take on larger areas of forest under reservation than they would otherwise have done’.12,Once the VLFR had been gazetted, committees and leaders in both villages asked people with farms in the reserve areas to leave.This illustrates the performative role of expectations in the displacement of villagers that has been highlighted in relation to other conservation and development programs.In K1, where the relocations affected more people than in K2, this has resulted in conflict between village leaders and people refusing to move from their farms in the VLFR.This conflict was a central part of many actor narratives in K1, with villagers split between those who support the moves and those who feel it was unfair.This conflict was continuing at the time of data collection, with threats of violence reported by both parties and farmers being taken to court.13,In contrast to the experience of Kilosa, very little changed for villagers as a result of the REDD+ pilot projects in Rungwe.Among policy-makers, project implementers and other national-level actors, different framings of intentionality in relation to expectations can be identified, which aligns with different framings in the STS literature.Some non-NGO actors, who were not directly involved with implementation of pilot projects, reflect Brown and Michael and Sung and Hopkins in suggesting that expectations were raised intentionally in order to change behavior among local actors, with one national actor claiming that:‘Some took the whole concept of carbon credit… as a way to encourage communities to engage in forest management, and to me that was a false promise’.14,Conversely, other actors, including the implementing NGO in Kilosa, framed expectations as being an unintended but unavoidable consequence of piloting REDD+.For example, NGO practitioners reflected that it was hard to communicate the complex concept of REDD+ to communities in a way that ensured their full understanding but didn’t raise expectations.This aligns with the view of Konrad that expectations are an inevitable product of social interactions and processes.FPIC is one such process, which is intended to deliver full disclosure of all aspects of the project including benefits, challenges and information about carbon and the carbon markets, in order that communities are equipped and empowered to accept or reject the project.The final report commissioned by the donor claimed that FPIC ‘generated many advantages, among which managing expectations and mitigating future risks were the most important’.It is argued that the fact that some villages in Kilosa rejected the project demonstrates the effectiveness of FPIC.However, a number of actors reflected that in reality despite its good intentions, the FPIC process actually increased expectations among villagers:‘We studied one criteria called prior informed consent.One is to be willing without being influenced based on what he sees in the village.But actually they were influenced by 
being told that REDD will bring you this money.’15,Some NGO practitioners raised concerns with FPIC, challenging its ability to be effective in communicating the complexity of REDD+, echoing broader discussions about the limitations of the instrument.These reflections on FPIC, along with other project elements implemented in good faith such as trial carbon payments, expose an inevitable link or trade-off inherent in piloting.This is a trade-off between fully piloting new initiatives, which involves securing high levels of awareness, engagement and participation, and raising expectations.The comparison between the Rungwe project where the NGO did not achieve high levels of awareness and engagement but experienced few of the negative impacts of expectations, and the Kilosa project in which the NGO achieved high levels of awareness, engagement and expectations, emphasizes the need for recognition of this trade-off and its potential consequences for villagers.This trade-off can be positioned as a product of the broader dynamics of conservation and development.Actors such as NGOs are required to compete for scarce resources, which requires them to sell future success to donors and recipients alike, or as one NGO representative put it ‘this is the way the system works – we always write overoptimistic proposals because you demand it from us!’16,Innovative projects that showcase the new mechanism fully and achieve high levels of awareness and involvement are judged to be a success, with the final REDD+ pilot project evaluation reports judging the Kilosa project to have been much more of a success than the Rungwe project.Raising expectations among villagers may not have been intentional, but it is nonetheless an inevitable consequence of fully piloting new programs, particularly in relation to market-based mechanisms that are built around speculative future benefits.The actor narratives at the national level reflect a general pattern of rising expectations that then fell significantly over time as the reality of issues and challenges became clear.Lack of political will among government officials, lack of donor support post-pilot and low carbon prices were identified by national and international actors as the main causes of the decline in expectations.This pattern follows the hype and disappointment cycles identified in the STS literature.A number of national actors spoke about their disappointment that REDD+ had not lived up to its high expectations.Most of the disappointment however was expressed in relation to villagers.When data was collected between six and 18 months after the end of the pilot projects, only a few national actors spoke about continued expectations of REDD+, and none spoke about experiencing ongoing negative personal impacts.National and international actors had moved on to other projects and programs, and many commented that they had not engaged with REDD+ for some time.At the time of data collection in Kilosa, the pilot projects had been completed over a year previously and there were no plans for continuation of the REDD+ mechanism.The villagers received only one trial payment of the expected two, largely as a result of issues with measurement, and the project had not got to a stage where it could be verified.One NGO employee explained that in regards to REDD+ ‘Kilosa’s luck has faded out.’17,Despite this, village level actor narratives did not wholly reflect a hype and disappointment cycle and different narratives could be identified.Firstly, some actors did not feel any 
disappointment due to their perception that the project had brought many benefits to them and the village as a whole.These actors were predominantly those who had been heavily involved in the project, whether as village leaders, committee members or livelihood project participants.To them the project had ‘woken up’18 the villagers and had brought much-needed development and education, and improved forest conditions.A second group of actors, comprised of those less involved in the project and those affected by the farm relocations in K1, expressed a strong sense of disappointment in the project:‘I personally don't feel good… before I thought well of them, that maybe our village is going to benefit, but for now I see this MKUHUMI issue hasn’t any benefit to me’.19,This disappointment was largely in relation to the lack of continued carbon payments, a feeling of injustice that the project only benefitted a few people, and the continuing conflict over farm relocations.For some villagers these land issues were framed as a core part of the legacy of the project, especially when reflecting that ‘if they told us they were taking farms away we would have said no’.20,The contrasting ways in which the project was framed within actor narratives reflects their contrasting experience, but may also be influenced by the way in which they perceived themselves in relation to the project, expectations and the researcher.By framing the project as highly successful, those who benefitted most from it were able to legitimise their roles within the economy of expectations, shoring up their position in relation to future projects.Similarly, the narratives of those who felt they had not benefitted from the project reflect their experiences, including their struggles within the economy of expectations and the desire to benefit more in the future.These narratives were told in light of ongoing expectations with regard to future carbon payments, which had continued despite the pilot project ending:‘…they haven't given up, but you find that when we go to the public meetings they normally discuss that we were told that we'd be paid every year.
[They ask] what's going on?Therefore we normally answer them that after it is being measured it's taken to the world market there and they have their process of discussing it so that the money can be paid… it takes time…’‘So are you still expecting the second payment?’‘Yes, that is our hope, because that is what they had promised us’21,West states that ‘people make claims when something is at stake’.Through the process of narrative interviewing, village actors may have been making claims over any future carbon payments, thus positioning themselves in relation to the economy of expectations.The negative impacts associated with hype and disappointment cycles can therefore be seen to be asymmetrical, as one national NGO representative reflected:‘For NGOs it's annoying when you lose money and maybe have to lay off some staff but they're professional and they'll go off and get another job somewhere else.This is the way the world works.But those communities that we went out to and said 'hey this is a new opportunity – and now we can’t make it happen for you.'That I think is really bad.’22,Although some Kilosa villagers felt they benefitted throughout the project through things such as trial payments, per diems and training, sacrifices were made in anticipation of future benefits via carbon payments.These sacrifices included people being relocated from farms and being prevented from continuing with certain livelihood practices in the VLFRs.It is also worth noting at this point that by not piloting the carbon payments, the NGO in Rungwe were able to avoid many of the negative impacts of the hype and disappointment cycle, with one village leader reflecting that ‘if everybody would have known about it would have been a problem.It's a good thing they didn't know this’.23,This is not to say that the approach taken by the Rungwe NGO was without issue; in fact a number of concerns were identified by the villagers, including in relation to low levels of project participation.Nonetheless, within this analysis of expectations, the case of Rungwe provides an interesting contrast in which donor funds were used largely to expand existing activity.This comparison brings up issues of responsibility and accountability for expectations and disappointment, which will be discussed in more detail in Section 5.6.However, hype and disappointment cycles can have further impacts, including the apportioning of blame to certain actors, damaged credibility and resistance to future innovations.At the national level actors directed their disappointment in a number of ways.Some blamed the fact that the donors ‘walked away’.24,Other actors blamed the ‘top-down’ approach of REDD+ ‘convincing people to do what they want them to do’,25 which for some actors included criticism of a lack of resources allocated to district and local government.A number of national actors, including national government officials, NGOs and academics, were critical of the use of pilots in the future and some reflected that maybe the NGO implementing the Rungwe project took the right approach, avoiding expectations and using the money to continue existing work.However, Borup et al.
argue that criticism and disillusionment following hype and disappointment can quickly be pushed aside in the face of a new innovation and new hype.In the future-oriented world of conservation and development where actors have to compete for scarce resources, the hype of new programs that come with promises of multiple-win solutions and donor support may override the critical learning from the REDD+ pilot process.In Kilosa, the longer-term impacts of the hype and disappointment cycle of the REDD+ pilot projects were still not fully evident at the time of data collection.However, actor narratives indicated that the experience of the REDD+ pilots had not led to resistance to future projects, although there was a desire for future projects to be done differently among those who felt disappointed.As one K1 farmer still in dispute with village leaders over farming in the VLFR explained:‘It's not that we refuse the projects, we accept the projects to come.They should come but we must make sure we've sat down and plan for that project, together with the village government.’26,This perspective may to some extent be a product of the fact that this was the first large international forest conservation project implemented with these villages.In situations where multiple previous projects have come and gone, and more hype and disappointment has been experienced, more evidence of resistance can be found.In relation to electronic technology, Konrad finds that hype and disappointment can lead to damage to the credibility and legitimacy of innovators.In Kilosa, those most disappointed with the project largely blamed local leaders for project failures, as opposed to the implementing NGO.As such it appears that the credibility of the NGO remains intact, which may be due to the fact that they have maintained a presence in the villages and have introduced a new sustainable charcoal project.NGOs face a significant challenge in situations such as this, maintaining their credibility and legitimacy among village level actors while engaging with ever more uncertain global mechanisms such as REDD+.The way that different actors and actor groups framed and understood expectations in relation to the REDD+ pilot projects depended on their own individual circumstances, and factors such as their past experiences, social context, personal values and the different ways in which they view or know the world.In Kilosa the villager narratives suggested that the fact that the REDD+ pilot project was the first major donor-funded project, and as such an unknown entity, influenced perceptions.This can be evidenced through the high expectations that surfaced for actors during the early project stages, as well as the fear that the ‘country had been sold’.27,It could also be the case that in Rungwe the long history of conservation and development interventions and the longstanding relationship between the villages and the NGO contributed to the low impact and expectations there.Actors in Kilosa also framed project concepts with their own ways of knowing, for example the framing of the process of ‘harvesting of carbon air’.28,This framing of carbon as tangible and sellable subsequently influenced ongoing expectations in relation to payments.The experience that people had during the project itself also influenced the way they framed expectations and disappointment.As we have previously identified, those most closely involved with the project put less emphasis on the lack of continued trial payments and as such experienced less
disappointment.Conversely, those less involved in the livelihood projects or those who had experienced negative personal impacts focused more on the unfulfilled promises of the project.In K2, where the NGO had brought in a new sustainable charcoal project, MKUHUMI was framed by some as continuing under a different guise.This in turn impacted expectations, with one villager explaining that in the future he expected that ‘this MKUHUMI will just be changing its name’29 but would keep going.We can therefore see that expectations are continually influencing and being influenced by social interactions and experiences, as a product of the economy of expectations.This evidence also shows that when reflecting on expectations and disappointment, actors re-frame their experience in light of what actually happened and in light of their personal experience.Practitioners need, therefore, to be mindful that no matter how they frame pilot projects to communities, the social dynamics and economy of expectations will be unpredictable, making expectations unmanageable once raised.NGO practitioners involved in the Kilosa pilot project described how they tried to manage expectations around REDD+ and carbon payments as the project developed.One NGO practitioner described how they tried to focus on ‘the conservation parts and other co-benefits that they received’, but noted that despite these efforts ‘…we really could not control that – there were a few members who… really had high expectations.’30,This further emphasizes the aforementioned trade-off between raising awareness and raising expectations.Brown and Michael argue that actors with close proximity to knowledge production have higher levels of uncertainty and lower expectations, while actors further away from knowledge production have low uncertainty and high expectations.Those closest to the production of REDD+ and pilot project knowledge reflected on low expectations and high uncertainty, which as we have discussed was cited by some as a rationale for piloting.This included actors with international links, including from the UN, the donor, international NGOs, universities and consultancies.‘I guess like everybody I still really am not sure that I think is going to work at the national level.I think that it’s a rather distant pipe dream and I very much thought so at that point.’31,Among government officials at national and district scales, who can be seen to be further from the production of knowledge around REDD+, there is some evidence of higher expectations and lower uncertainty, particularly in relation to continuation post-pilot:‘You can pilot and you can forget.But our idea was to do something and then… repeat from there… After knowing what really works you do something afterwards’32,However, despite this, members of the Task Force were concerned about the speed at which the pilot projects were unfolding, challenging the testing of benefits at the village level when there was uncertainty as to whether there would be REDD+ benefits long-term.Local NGO project implementers reflected on their personal uncertainties and described how they tried to communicate them to villagers:‘Even us ourselves we were not sure about this carbon credit.We were explaining to that this is something new so we are not sure.Even myself, I have been asking that hmmm where will this lot of money come from.Where?’33,However, as we have discussed, expectations rose quickly among villagers, despite attempts to manage them.As such, the Kilosa villagers,
who were furthest from the production of knowledge, had the highest expectations and the lowest levels of uncertainty, thus aligning with the pattern identified by Brown.West finds that villagers engage with all conservation and development projects on the understanding that they are entering into long-term, reciprocal social relationships with practitioners.Even though the villagers in Kilosa were told that the project was time-limited, it appears that they did indeed perceive their involvement as longer term, a perception that is likely also influenced by the nature of the REDD+ mechanism and its emphasis on future benefits.In light of the analysis in this paper, issues of accountability in relation to expectations in conservation and development policy and practice are raised.This is particularly salient in relation to transparency, responsiveness and liability, which are defined as three of the five dimensions of accountability.In this context, transparency can be seen to be concerned with how project uncertainties can be better communicated to those furthest away from knowledge production, such as Kilosa villagers.This would require a significantly increased level of caution at the start of projects to reduce hype.The challenge for this, however, is that the ‘success’ of projects relies on hype, raising expectations and the enrolment of actors into communities of promise.This again speaks to the need for the trade-off between piloting and raising expectations to be seriously considered by conservation and development policy-makers and practitioners, which includes challenging the discourse of ‘needing’ to pilot new program ideas.Responsiveness refers to whether stakeholder expectations have been met, and liability is concerned with whether consequences were faced by implementing organisations for any shortfalls.The NGO implementing the Kilosa pilot project were responsive to the expectations of the donor by fully testing the REDD+ mechanism and delivering on project objectives.It can also be argued that the NGOs were liable in relation to their donor accountability, which included analysis of their performance in relation to project objectives.As such, the NGO in Rungwe were criticised for not fully testing the REDD+ mechanism and were challenged by the donors for their choice to use the money to continue with their ‘core business’34 instead of pushing the REDD+ agenda; a choice the NGO took partly due to fears around village-level expectations.The fact that the NGOs have continued to work with communities after the REDD+ pilots through new funding and projects demonstrates their responsiveness to the needs and expectations of the villagers.However, broader accountability for the fact that the REDD+ pilot projects did not meet villager expectations has not been taken, or formally discussed, by the donors and policy-makers who have driven the REDD+ agenda internationally and in Tanzania.Similarly, liability for the disappointment arising from unfulfilled expectations at the village level has not been taken.This therefore highlights the need for more accountability to be taken by those closest to the production of knowledge for the expectations of those furthest away from knowledge production, such as villagers.This echoes wider calls for a shift in how accountability is dealt with in conservation and development policy and practice.By applying concepts from the sociology of expectations to the case study of REDD+ pilot projects in Tanzania, we have contributed new insights into the dynamics of
expectations in the context of conservation and development pilot projects.By exploring expectations in this way we have also contributed new insights into the understanding of pilot projects, and interventions more broadly, as agents and outcomes of social change.The case of REDD+ in Tanzania demonstrates the important role of hyper expectations in new international conservation and development programs, driving and being driven by a desire for new multiple-win approaches to forest governance, a perceived need for speed, and high estimations of future success.These expectations can be seen as being highly performative, mobilising resources and driving communities of promise among conservation and development professionals.We therefore add insights into the growing critical discussion of conservation fads, by unpacking the performative role of expectations in this process.High levels of uncertainty existed among those closest to the production of knowledge, yet instead of promoting caution, this uncertainty contributed to a perceived urgency to test the mechanism and drove the ‘need’ to implement pilot projects.This process can be seen to be a product of what Lund et al. refer to as conservation and development ‘logic’ that ‘continuously produces and feeds off the development and testing of new policy models.’Through exploration of two very different REDD+ pilot projects, we have identified a trade-off between fully testing pilot projects and raising awareness on the one hand, and raising expectations at the village level on the other.Comparing these two NGOs is not done with the intention of judging or evaluating the NGO approaches or the projects themselves; rather, it provides an interesting comparison in relation to expectations.In Kilosa, where the NGO achieved high awareness and high participation in the pilot project, we have shown how an economy of expectations developed.Expectations were raised through project activity, including through well-intentioned activities such as FPIC and testing benefit-sharing mechanisms.Expectations then became a forceful presence, leading to significant social change, including people being relocated from farms.Expectations interact with and are mediated by local realities, and so are difficult to manage once raised.A hype and disappointment cycle was identified in Kilosa and expectations have continued to impact villagers after the pilot project finished and the international and national actors have moved on.Conversely in Rungwe, where the market-based aspects of REDD+ were not tested, there were few expectations and so little evidence of disappointment.Perhaps these two cases reflect the different approaches that the two NGOs take in relation to the challenge of maintaining legitimacy with village level actors while engaging with ever more uncertain international programs in the competition for funding.Our findings therefore highlight some core issues for conservation and development and support calls for more critical reflection on how conservation is pursued, particularly in relation to how new international programs such as REDD+ are managed.Expectation and disappointment cycles can be conceptualized as an unintended consequence of piloting new international conservation and development programs, particularly in relation to future-oriented, market-based programs such as REDD+.However, although unintended, expectations are inevitable, which the trade-off identified in this research demonstrates.The negative outcomes of hype and disappointment cycles are asymmetric; produced by
those closest to the production of knowledge and yet impacting those furthest away from knowledge production the most.This is particularly salient in relation to pilot projects, which are framed as a short-term test by international actors but seen by local actors as being the start of a longer-term, reciprocal relationship.Accountability for expectations is therefore needed in conservation policy and practice, particularly on the part of those closest to the production of knowledge, such as policy-makers and donors.This includes the need for more transparency around uncertainty from the start, more responsiveness to villager expectations and liability being taken for unfulfilled expectations.To this end, we challenge the discourse of ‘needing’ to pilot, which prioritizes awareness, impact and innovation without fully considering the potential negative impact of unfulfilled expectations.
We explore the dynamics of expectations in international forest conservation and development programs, and the impacts and implications of (unfulfilled) expectations for actors involved. Early stages of new international conservation and development programs, often involving pilot projects designed to test intervention concepts at village level, are characterized by large amounts of resources and attention, along with high expectations of success. However, evidence shows that these early expectations are rarely fulfilled. Despite this repeated pattern and growing engagement with expectations in critical conservation and development literature, little is known about the dynamics of expectations in conservation and development pilot projects. We address this knowledge gap first by exploring concepts from the sociology of expectations. We then unpack expectations in a case study of REDD+ pilot projects in Tanzania, using extensive qualitative data reflecting the perspectives and experiences of a wide range of actors involved. Our study finds that expectations play a performative role, mobilizing actors and resources, despite uncertainty identified among policy-makers and practitioners. We also find that once raised, expectations are dynamic and continually mediated by actors and social contexts, which conflicts with attempts to ‘manage’ them. We argue therefore that a trade-off exists between fully piloting new initiatives and raising expectations. We also argue that failure to address this trade-off has implications beyond pilot project objectives and timelines, which are experienced most acutely by village communities. We argue for more critical engagement with expectations and the embedding of accountability for expectations in conservation and development practice. Our findings also challenge the discourse of ‘needing’ to pilot, which prioritizes awareness, impact and innovation without fully considering the potential negative impact of unfulfilled expectations.
289
Morphological variations in southern African populations of Myriophyllum spicatum: Phenotypic plasticity or local adaptation?
It is widely accepted that aquatic plants are plastic in their responses to environmental variables, and their morphology can be extremely variable between populations and/or between seasons.Changes in plant morphology and physiology between populations of the same species are often linked to both physiological stresses, such as limited resources, and to physical/mechanical stresses such as wave action or current.These species responses are usually driven by adaptive mechanisms, such as phenotypic plasticity or local adaptations that allow them to adapt to the different climatic and environmental stresses to which they are exposed.Local adaptation is a genetic change, primarily driven by natural selection on a local scale, where specific characters of a plant that enhance its fitness are selected for in a novel environment, while phenotypic plasticity is the ability of a single genotype to respond with changes in phenotypic characters that will better suit the population to the prevailing habitat conditions.The wide distributional range of many aquatic species is often coupled with relatively low genetic variation of individuals within populations, but high variation between populations, probably linked to clonal or vegetative reproduction.In many cases, aquatic plants are thought to have a “general purpose genotype” usually characterised by low levels of genetic variability but capable of adapting to a diverse range of environmental conditions through phenotypic plasticity.There are two forms of phenotypic plasticity that can be classed as either physiological plasticity, where the responses have a physiological end point, such as changes in photosynthetic capabilities; or as morphological plasticity where the responses are manifested as a change in morphology.These plastic responses, both physiological and morphological, are important for the survival of a species in a multitude of different environments over the wide geographical ranges in which they are found.Understanding the mechanisms that drive changes in the phenotype of aquatic plants can prove useful in gaining insights into the genetic diversity and evolutionary history of the species in a region.Morphological differences in introduced populations of aquatic plants are thought to be primarily driven by phenotypic plasticity because of the relatively low levels of genetic diversity and short time spent in the region.In a study of three invasive submerged macrophytes in New Zealand, Riis et al. 
concluded that the primary adaptive strategy of all three species was phenotypic plasticity due to the low levels of genetic diversity, coupled with the relatively short time period since the first introduction of any of the species.The oldest introduction, Elodea canadensis Mitch., at just over 100 years, is considered too young for the development of local adaptations, especially given the lack of genetic diversity within and between populations in New Zealand.Local adaptations are driven by the process of natural selection, which results in different genotypes adapted to local conditions, and are likely to express differences in morphological characters over much longer time scales.A prerequisite for local adaptations to take place between populations is a relatively diverse gene pool within populations for natural selection to act upon.In the case of introduced species, this can be achieved through multiple introductions from different source populations, and local adaptation can be considered an important adaptive mechanism for the successful invasion of a species.Myriophyllum spicatum L. is considered an invasive species in southern Africa, however, there are questions as to how long this species has been present in the region.Understanding the drivers of the morphological differences between populations can help to indicate how long this species has been present.There are three distinct varieties or growth forms of M. spicatum which are found in different regions that have very different climatic conditions.The differences in the morphology are so great that it was initially thought that there were at least two species of Myriophyllum in southern Africa, however, pollen analysis confirmed a single species.The first variety is characterised as large and robust with large leaves and relatively thick stems and is found in the Vaal River, Northern Cape, South Africa.This is the only population to be recorded as problematic in South Africa.The second variety is characterised as delicate, small plants, with small leaves and highly branched, thin stems.It is found growing in the subtropical environment in Lake Sibaya, KwaZulu-Natal, South Africa.The third variety comprises large plants, similar to the first variety in growth form, but the internode length is very short so the leaves become tightly packed, leading to a bottlebrush type appearance; it is found in the high altitude regions, including the Amathola Mountains, Eastern Cape and the KwaZulu-Natal Midlands, South Africa.These varieties in southern Africa can be identified in the earliest herbarium specimens from the regions where they originate, for example, the Vaal River variety was first collected in 1897, the Lake Sibaya variety in 1966 and the high altitude variety collected in the Mooi River in 1894.These morphological characteristics are still present in the populations found in these biogeographic regions today.The aim of this study was to determine whether the morphological differences between three populations of M.
spicatum in southern Africa are driven by phenotypic plasticity or local adaptations through underlying genetic variation.If the morphological differences between the populations are driven primarily by plastic responses to environmental conditions, then plants grown under the same conditions would respond in a similar way and their morphologies would converge.However, if the differences are local adaptations within the species, then the morphology of the plants from the different populations would not converge and the varieties would remain distinct from each other.If the driver of these morphological differences is local adaptations, then it would suggest that the populations have been isolated with limited gene flow for a considerable time within the different biogeographic regions.The initial stock plants from the three populations of M. spicatum used in this experiment were collected from wild populations within a two week period during the beginning of April 2013.The three populations collected were 1) ‘Vaal’, collected from Vaalharts Weir, Vaal River, Northern Cape; 2) ‘Sibaya’, collected from Lake Sibaya, KwaZulu-Natal and 3) ‘Hogsback’, collected from the Klipplaat River, Eastern Cape.Specimens were collected from all three localities on previous surveys and lodged in the Selmar Schonland Herbarium.The plants were returned to a greenhouse tunnel at Rhodes University, where they were acclimated to the greenhouse conditions by floating sprigs freely in borehole water for a period of at least four days prior to the commencement of the experiment.A total of 40 sprigs from each population were cut to 10 cm growth tips with no branches, and initial morphological measurements were taken from each sprig.The morphological measurements that were taken included the stem diameter, internode length, leaf length and number of leaflet pairs on each leaf.All measurements were taken at the 5th internode to standardise the position from where the measurements were taken on each individual and between varieties.In comparison to the experiments of Santamaría et al. and Riis et al., the environmental conditions that may have played a role in determining the morphological characters were all kept constant viz. sediment type, temperature, light and photoperiod.The sprigs were then planted in two growth conditions; a low nutrient, pond sediment only treatment, and a high nutrient pond sediment treatment fertilised with 30 mg N/kg from Multicote® 15-8-12 N:P:K 8–9 month formulation fertiliser.The sprigs were randomly planted into seedling trays which held approximately 100 ml of treatment sediment in each pot, and then placed into a plastic pond which contained borehole water to a depth of 50 cm.The seedling trays were arranged in a random block design to rule out any possible location effects of light and temperature on the plants in the pond.The same morphological measurements were taken again after an eight week growth period which is sufficient time to observe differences in growth responses, and were compared between populations at both nutrient levels.The morphological measurements, including stem diameter, internode length, leaf length and number of leaflet pairs, between the three populations of M. spicatum in each nutrient treatment at the beginning and at the end of the experiment were compared using a GLM Repeated-Measures ANOVA followed by a Tukey Post-Hoc test to identify homogenous groups, in Statistica V.12.To determine the similarity between the three populations of M. 
spicatum at the start of the experiment and 8 weeks later grown under the same conditions, a Principal Component Analysis in Primer V.6 was performed using the morphological characters and plotted to visualise the results (a minimal code sketch of a comparable analysis is given at the end of the article text).By the end of the experiment, the plants from each population showed a response to the different growing conditions in their morphology when compared to the starting measurements within populations.The stem diameter for all three populations remained unchanged under low nutrients but was significantly larger under high nutrient conditions (F(5,86) = 18.435, P < 0.001).The internode length showed a similar trend with no change at low nutrient conditions but significantly longer for the Vaal and Sibaya population under high nutrient conditions, while the Hogsback population significantly decreased in internode length at high nutrient conditions (F(5,86) = 5.0747, P < 0.001).Under low nutrient conditions the leaf length was significantly smaller for all three populations, while under high nutrient conditions it remained unchanged (F(5,86) = 19.692, P < 0.001).The number of leaflets remained unchanged for all three populations irrespective of nutrient level (F(5,86) = 0.4126, P = 0.838).The growth pattern of each population relative to the other populations, however, did not change based on nutrient condition despite differences between nutrient treatments.The stem diameter under both nutrient treatments was always larger for the Vaal and Hogsback populations compared to the Sibaya population (F(5,86) = 18.435, P < 0.001).But there was a significant increase in the stem diameter for the Vaal and Hogsback population at high nutrient conditions, while the Sibaya population remained unchanged (F(5,86) = 18.435, P < 0.001).While there was no difference between the internode length for both the Vaal and Sibaya population under both nutrient levels, the Hogsback population was always significantly smaller ranging between 0.5 and 0.57 cm depending on nutrient treatment (F(5,86) = 5.0747, P < 0.001).Only the Vaal and Sibaya populations showed an increase in internode length under high nutrient conditions (F(5,86) = 5.0747, P < 0.001).Irrespective of nutrient level, the Sibaya population had significantly smaller leaf lengths than both the Vaal and Hogsback populations which ranged between 2.22 cm and 2.76 cm depending on nutrient level (F(5,86) = 19.692, P < 0.001).There was no difference between the leaf lengths of each variety, except for the Sibaya population which had a significantly longer leaf under high nutrient conditions.The Vaal population always had significantly fewer leaflet pairs than both the Sibaya and Hogsback populations which ranged between 12.47 and 13.71 leaflet pairs (F(5,86) = 0.4126, P = 0.838).The leaflet pairs remained unchanged between nutrient treatments (F(5,86) = 0.4126, P = 0.838).The PCA shows no overlap in the groupings between the populations at both the start of the experiment when the plants were first collected from the field populations and after the eight week experimental period, grown under the two nutrient treatments.The PC1 accounts for 59.7% of the variation and the PC2 accounts for 27.3% of the variation.Despite the same growth conditions, nutrient treatments, sediment type, water depth and light, the three different populations of M. spicatum did not converge in their morphologies.This suggests that the differences in the morphological varieties are driven by local adaptations and southern Africa has different genotypes of M.
spicatum that have adapted to their current environmental conditions over a long period of time.This is fairly common in terrestrial plants with wide distributions, which are often characterised by large genetic variation and often specialised to local environmental conditions which allows them to cover such wide geographic and climatic regions.On the other hand, aquatic plants are often characterised by low levels of genetic variation which makes phenotypic plasticity extremely important, and while the development of a “general purpose genotype” capable of surviving in a wide range of environments is common, this is not always the case.Several studies on aquatic plants have shown that although phenotypic plasticity is important, local adaptations are possible and do play a role in the survival and fitness of some species across a multitude of environmental conditions.The different M. spicatum populations that were grown under the two nutrient treatments did show some degree of plasticity and a response to the growing conditions.All three genotypes that were grown under the lower nutrient condition adopted a significantly smaller size in most of the morphological characters measured.This was not surprising as several aquatic plant species respond to limiting nutrient conditions.For example, Riis et al. reported that Lagarosiphon major (Ridl.) Moss and E. canadensis had reduced shoot diameter and leaf width under low nitrogen and phosphorus conditions, while individuals of Ranunculus peltatus Schrank were smaller when grown under low nutrient conditions than individuals grown under high nutrient conditions, the latter tending to have long branching shoots.Barko suggested that not only nutrient composition, but also the interaction with nutrients and sediment type, play a role in the growth form of M. spicatum, while Barko and Smart showed that light plays a role in the morphology of aquatic macrophytes.Strand and Weisner also indicated that light and water depth play a role in the morphological characteristics of M. spicatum where plants that are light limited are usually longer and more branched.The drivers of the subtle morphological changes in the current study indicate that nutrient level was the most important, as this varied across treatments, but the potential effect of light and water depth on the morphology of M. spicatum was not tested here.The findings from the present study are in contrast to the situation in North America where M. spicatum was introduced in the 1940s.Plants from different regions in Canada and the USA grown under the same conditions, side by side in a greenhouse, responded by converging in their morphologies, which Aiken et al. attributed to a common clonal origin of the North American material.This is despite the recent discovery of two genotypes of M. spicatum in North America.It is possible that during the study by Aiken et al., plants from the same genotype were inadvertently selected.Riis et al. had similar findings when three introduced submerged aquatic plants, L. major, Egeria densa Planch. and E.
canadensis were grown under similar conditions.Initially the plants had different growth forms or morphologies, and physiologies that were presumably adapted to the conditions to which they were exposed in the wild populations.The morphologies and photosynthetic rates of all the species reverted to a similar point or growth form after a seven-week growing period.In addition to this, they tested the genetic diversity between the populations using amplified fragment length polymorphisms, which resulted in very little difference between the populations, suggesting that each species originated from a single introduction.This suggests that for introduced species that lack genetic diversity, phenotypic plasticity may be the most important factor driving the differences between populations of the same species growing in different climatic or environmental conditions.The three genotypes of M. spicatum identified in southern Africa are similar to populations from the European range, where in a study by Aiken et al., populations from England and the Netherlands showed slightly different morphologies when grown under the same conditions.This suggests that these populations also exhibit subtle local adaptations.The different populations of M. spicatum in southern Africa are presumably locally adapted to the conditions where they are found; however, this does not rule out the relative importance of phenotypic plasticity for these populations to adapt to changing conditions.In a transplanting experiment, Santamaría et al. transplanted Stuckenia pectinata populations from different regions of Europe.Their results suggest that there were strong local adaptations and the performance of transplanted individuals was much lower in the novel environment than when grown at the source location.However, despite the local adaptations, the different populations of S. pectinata also showed a certain degree of phenotypic plasticity, suggesting that local adaptation and phenotypic plasticity may work synergistically.The study by Santamaría et al. was within the native range of S. pectinata which suggests a long evolutionary history in the region and local adaptations are not surprising due to the relatively high genetic diversity in native populations compared to introduced populations.In many introduced aquatic species, including Eichhornia crassipes (Mart.) Solms-Laub., E. densa and Alternanthera philoxeroides (Mart.) Griseb., genetic variation is low between populations, likely linked to their clonal reproduction, and the adaptive mechanisms are probably linked to phenotypic plasticity rather than local adaptations.The evolution of locally adapted populations requires an interaction between divergent selection and other evolutionary forces such as natural selection and gene flow.The development of locally adapted populations of M. spicatum in southern Africa suggests that the populations are sufficiently isolated that there is little or no gene flow between them.This isolation could be geographic as there are significant distances, over major catchments, between the populations, or it could be reproductive, as sexual reproduction is not considered very important for M. spicatum, or a combination of both, which would further isolate the populations.This compares to North America, where M.
spicatum is characterised by two genotypes with overlapping distributions, however, there is little evidence of sexual reproduction between them in the field populations.The development of these locally adapted genotypes also suggests that there could be a relatively high genetic diversity of the populations in southern Africa.What is unclear is whether this diversity has resulted from multiple introductions or a long enough evolutionary history in the region for genetic mutations to occur.It is possible for genetic differentiation to occur quite rapidly, for example, despite the low levels of genetic variation, E. densa in New Zealand is showing signs of genetic differentiation between populations in less than 100 years since it was first introduced.This could suggest that the evolution of locally adapted populations in E. densa could occur quite rapidly in New Zealand, under the right conditions despite the low genetic variation inherent in so many species.The results from the present study point to local adaptation and not phenotypic plasticity as the more likely driver of the different morphological variations of the M. spicatum populations from southern Africa that were tested.This does not rule out the importance of phenotypic plasticity in shaping the morphology of these species in the environments where they occur, and probably explains why they exist in a wide variety of habitats in southern Africa.The presence of local adaptations in southern Africa suggests that the populations have been and still are isolated and it is unlikely that there is much genetic mixing between the systems where these populations are found.This is not surprising as the populations are separated by major catchments, geographical barriers such as mountains and climatic zones, all of which make dispersal between the populations extremely difficult.Future in depth genetic studies of the populations of M. spicatum within southern Africa could shed light on this.
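As an illustration of the analysis workflow described in the Materials and Methods (trait comparisons between populations and nutrient treatments, followed by an ordination of the morphological characters), a minimal Python sketch of a comparable analysis is given below. It is not the authors' Statistica/Primer workflow: the data frame layout, sample sizes and trait values are hypothetical, the repeated-measures structure (start vs. week 8) is ignored for brevity, and a simple two-way ANOVA per trait plus a PCA on standardised traits stand in for the GLM repeated-measures ANOVA, Tukey tests and Primer PCA reported in the paper.

# Minimal sketch (not the authors' Statistica/Primer workflow): a two-way ANOVA per trait
# (population x nutrient) on the week-8 measurements and a PCA on the standardised traits.
# The data frame layout, sample sizes and values below are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
traits = ["stem_diameter", "internode_length", "leaf_length", "leaflet_pairs"]
records = []
for population in ["Vaal", "Sibaya", "Hogsback"]:
    for nutrient in ["low", "high"]:
        for _ in range(20):  # 20 sprigs per population x treatment (illustrative only)
            records.append({
                "population": population,
                "nutrient": nutrient,
                "stem_diameter": rng.normal(1.5, 0.2),
                "internode_length": rng.normal(1.0, 0.2),
                "leaf_length": rng.normal(2.5, 0.3),
                "leaflet_pairs": rng.normal(13.0, 1.5),
            })
df = pd.DataFrame(records)

# Two-way ANOVA for each trait: population, nutrient and their interaction.
for trait in traits:
    fit = smf.ols(f"{trait} ~ C(population) * C(nutrient)", data=df).fit()
    print(trait)
    print(anova_lm(fit, typ=2))

# PCA on the standardised traits; population/treatment groups can then be plotted
# on PC1 and PC2 to see whether the morphologies converge or remain distinct.
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(df[traits]))
df[["PC1", "PC2"]] = scores
print(df.groupby(["population", "nutrient"])[["PC1", "PC2"]].mean())

In the study itself the start and week-8 measurements were analysed together with a GLM repeated-measures ANOVA and Tukey post-hoc tests; the sketch above drops the repeated factor purely to keep the example short.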
Variability in aquatic plant morphology is usually driven by phenotypic plasticity and local adaptations to environmental conditions experienced. This study aimed to elucidate which of these drivers is responsible for the morphological variation exhibited by three populations of Myriophyllum spicatum L. (Haloragaceae), a submerged aquatic plant whose status as native or exotic within southern Africa is uncertain. Individuals from three populations on the Vaal River (Northern Cape), Klipplaat River (Eastern Cape) and Lake Sibaya (KwaZulu-Natal) were grown under two nutrient treatments (high: 30 mg N/kg sediment and low: sediment only), while all other variables were kept the same. Morphological characteristics were measured at the start of the experiment to obtain a baseline morphology, and again eight weeks later. By the end of the experiment, the individuals from each population had responded to the different growing conditions. In most cases, the individuals from each population were significantly larger under the high nutrient treatment (Stem diameter: F(5,86) = 18.435, P < 0.001, Internode length: F(5,86) = 5.0747, P < 0.001, Leaf length: F(5,86) = 19.692, P < 0.001). Despite these differences in nutrient treatments, the growth pattern of each population remained true to the original starting point indicated by the lack of overlap between populations in the PCA groupings. This suggests that local adaptations are responsible for the differences in morphology between populations of M. spicatum, but shows that phenotypic plasticity does play a role as evidenced by individual responses to the different nutrient conditions. The development of these local adaptations within southern Africa suggests that the populations have had a long evolutionary history in the region and are relatively isolated with little reproductive mixing.
290
The impact of urbanisation on nature dose and the implications for human health
The global urban population has risen dramatically over the last 100 years, with rural-to-urban migration responsible for the majority of this growth.This shift is predicted to continue, with 60% of people estimated to be residing in cities by 2030.The move from rural to urban environments affects people’s lives in many ways.Some of these effects are positive, with urbanisation supporting, for example, economic growth and development along with a range of beneficial social outcomes.At the same time, cities are crowded, polluted and more stressful than rural areas, and the competition for space means there is little room for nature.This, in combination with increasingly busy modern lifestyles, may be leading to a decline in experiences of the natural world.Any decline in experiences of nature associated with an urbanising population could lead to a reduced knowledge of, and support for, environmental issues.Arguably more pressingly, urbanisation is now considered one of the most important health challenges of the 21st century, being associated with an increase in chronic and non-communicable conditions such as obesity, stress, poor mental health and a decline in physical activity.The decline in experiences of nature could be a direct contributor to these issues given the breadth of health and wellbeing outcomes that have been associated with nature exposure.This includes reduced all-cause mortality and mortality from cardiovascular diseases, improved healing times, reduced respiratory illness and allergies, improved self-reported well-being and a reduced risk of poor mental health, and improved cognitive ability.Understanding how experiences of nature change with urbanisation, and how this affects health and well-being, is a critical knowledge gap that will assist in planning for an increasingly urban future.A widely held assumption is that population movement from rural to urban landscapes will inevitably result in a decline in experiences of nature.However, this may not necessarily be the case.In the UK, for example, 87% of households have access to a domestic garden, and policy recommends that every home should be within 300 m of an accessible natural green space.This might suggest that most people should be able to maintain some exposure to nature.However, the social landscape of urban environments is infinitely complex.Exposure to nature has an important behavioural component with people choosing how often and how long they interact with the natural world.A number of studies have now demonstrated that access to nature alone is insufficient to determine or predict its use - instead, factors such as feelings of connectedness to nature or socio-demographics are much stronger indicators.Understanding the differences in nature experiences between rural and urban populations will be a key step in unpicking exactly how urbanisation affects experiences of nature.A nature-dose framework distinguishes three dimensions of nature exposure, namely its frequency, duration and intensity.Each element of exposure is likely to be mechanistically tied to different types of health and wellbeing outcomes.For example, spending time in your garden just once per week is associated with reduced levels of depression, and similarly visiting a public park for just 30 min a week is linked with reduced levels of depression and of high blood pressure.The mechanistic pathway to these outcomes may be associated with attention restoration, where mental fatigue is relieved by undemanding nature experiences.Higher levels of
vegetation around the home have also been associated with better mental health; this may be driven by greater incidental exposure on a day-to-day basis.Intuitively, incidental nature exposure will be greater in rural areas because of the greater variability of and access to greenspaces.While this has been put forward, in part at least, as an explanation for rural-urban differences in health, this premise has not been tested.One possibility for a difference in nature-associated health outcomes between rural and urban areas is not the time that people intentionally interact with nature, but that incidental nature exposure is higher in rural areas, and this might influence the health gains across different dimensions of dose.In this study, we first explore the differences in exposure to nature between populations of people living in rural to increasingly urbanised environments.Is urbanisation really associated with decreasing nature dose?Second, we examine the differences in health outcomes for these populations, considering exposure to nature as a key potential predictor.Do any apparent differences in exposure to nature associated with urbanisation impact on population health?To address these questions, we used data from c.3000 survey respondents across the UK, measuring the frequency and duration of nature dose as time spent in their garden or public green spaces, and intensity of dose as the quantity of vegetation around the home.We focused on the association of nature with four domains of health for which there are plausible mechanistic pathways linking nature exposure to health, including mental, physical and social (Weinstein et al., 2015) wellbeing, and physical activity.We surveyed 3000 people across the UK, aged 18–70 years, to obtain information on health and experiences of nature.This survey was delivered online through a market research company to their existing market research database of potential respondents.The survey was administered over a two-week period in May 2016 as this is a time of reasonably mild weather when respondents are more likely to engage with nature.For a full copy of the survey see Shanahan et al.Participants provided written consent at the beginning of the online survey, and were compensated with a nominal fee.The survey was stratified to ensure equal numbers of respondents living in a rural and an urban setting, by first asking ‘do you consider your home to be in a rural or urban setting?’.Once 1,500 surveys were completed in each category, any further respondents in the same category were unable to continue with the survey.The survey took approximately 20 min to complete; nature dose questions were asked before health questions to avoid any potential priming effects of a person’s stated health status on self-reported nature dose.The survey assessed: 1) respondents’ weekly doses of nature, 2) their orientation towards nature, 3) measures of health across multiple domains, and 4) socio-demographic information.Respondents were requested to provide a full UK postcode so that their neighbourhood could be characterised.Each respondent generated three measures of nature dose: frequency, duration and intensity.Respondents were told that public green spaces included ‘for example, parks, countryside, playgrounds, picnic areas or golf courses’.Frequency of nature dose was estimated based on the respondents’ self-reported frequency of more than ten minutes spent within their garden in the last week, and how frequently they passed through public green spaces.Survey
respondents selected the usual frequency of garden use or public green space visitation from: never, less than once a week, once a week, 2–3 days a week, 4–5 days per week, 6–7 days per week.To estimate the number of visits a week, the mid-points of selected categories were chosen, with < once per week and never being assigned a score of 0.To estimate the frequency of garden and public green space visits we summed these two scores, to give a numeric scale of 0 to 13 visits a week.Duration of nature dose was estimated based on self-reported total time spent within their garden and public green spaces within the last week.Survey respondents selected the total time spent in their garden in the last week from the categories of: no time, 1–30 min, 31 min to 1 h, >1–3 h, >3–5 h, >5–7 h, >7–9 h, >9 h. To calculate the duration of public green space visits, respondents were asked to name up to seven places they had visited for the longest period in the last week, and to select from the following categories how long they spent there: no time, 1–30 min, 31 min to 1 h, >1–2 h, >2–3 h, >3–4 h, >4 h. To estimate the total duration that respondents intentionally visited green space in the previous week, the mid-points of selected categories were chosen before summing scores to give a numeric scale of 0–41.5 h per week.Intensity of nature dose was measured as neighbourhood green cover within a 250 m buffer around the centroid of each respondent’s postcode.This is the distance that was considered to influence what can be seen or experienced from a person’s home on a day-to-day basis.Only those respondents who provided a full UK postcode were included in analyses involving this variable.We utilised the Landsat 8 land cover maps; this dataset includes the Normalised Difference Vegetation Index at a resolution of 30 m from across the UK.The NDVI value for each pixel was examined, and a threshold of 0.2 separated vegetated from non-vegetated pixels.We then calculated nature dose intensity as the percentage of vegetated pixels within the 250 m buffer (a minimal code sketch of this calculation is given at the end of the article text).Survey participants also completed the Nature Relatedness Scale (Nisbet, Zelenski, & Murphy, 2009), which assesses individual differences in connections to nature.This scale requires participants to complete a series of questions that assess the affective, cognitive, and experiential relationship individuals have with the natural world.Participants rate 21 statements using a five-point Likert scale ranging from one to five.Responses to each of the 21 questions were scored and then the average was calculated according to the system outlined by Nisbet et al.A higher average score indicates a stronger connection with nature.The scale has been demonstrated to differentiate between known groups of nature enthusiasts and those not active in nature activities, as well as those who do and do not self-identify as environmentalists.It also correlates with environmental attitudes and self-reported behaviour and appears to be relatively stable over time and across situations.Respondents provided self-reported information on four health domains.Mental health: A measure of depression was generated based on the depression component of the short version of the Depression, Anxiety and Stress Scale.On a four-point scale respondents rated the extent to which seven statements applied to them over the previous week.To calculate the degree of severity relative to the wider population, these scores were summed, before banding as normal, mild, moderate, severe, or extremely severe.Physical
health: Respondents scored their own general health on a five-point scale from very poor to very good.This scale is related to morbidity and mortality rates and is a strong predictor of health status and outcomes.Social cohesion: Respondents’ perceptions of social cohesion were estimated based on three previously developed scales that measure trust, reciprocal exchange within communities and general community cohesion.The average score across questions for each scale was calculated, resulting in a continuous score from the highest to lowest perceived social cohesion.The average scores from each scale were then summed to provide a scale from highest to lowest.Positive physical behaviour: Respondents provided a self-reported indication of physical activity, specifically the number of days they exercised for a minimum of 30 min during the survey week.We collected information about socio-demographic variables that could influence decisions around green space use, including participant’s age, gender, personal annual income, their highest qualification, the number of hours worked a week, and the primary language spoken at home.As a potential confounder of recent nature exposure, we asked respondents relatively how much time they spent outdoors in the previous week.We obtained an estimate of the socio-economic disadvantage of the neighbourhood in which each respondent lived using the Index of Multiple Deprivation.The IMD is an average of indices for separate domains of deprivation, and is provided at the postcode scale.Two approaches were employed to measure the level of urbanisation surrounding a respondent’s home.Actual rurality-urbanity: We used a vector layer of Edina Digimap, the Ordnance Survey MasterMap Topography Layer, to calculate the number of building polygons within a 1 km buffer surrounding the centroid of a respondent’s postcode.We then summed the area of these polygons, to calculate the percentage building cover within the buffer.Perceived rurality-urbanity: To unpick the perceived rurality or urbanity of the home, beyond the survey stratification of half the respondents living in rural and half in urban areas, respondents were asked ‘on a rural to urban scale of 1 to 10, where do you place where you live?,’.All data extraction and analyses outlined here were performed in QGIS v2.14 and in R v3.3.First, we explored how each dimension of nature dose varied across the two measures of urbanisation.Of the dependent variables, nature dose frequency was approximately normally distributed, whilst nature dose duration was log-transformed and a logit function was applied to the proportion of nature dose intensity so that they were approximately normally distributed.We built Linear Models to examine the relationship between each element of nature dose and potential predictors, including a measure of actual and perceived urbanisation, socio-demographic and life circumstance variables.We fitted a quadratic function to the actual rurality-urbanity.We used the ‘MuMIn’ package to produce all subsets of models based on the global model and rank them based on AICc.Following Richards and to be 95% sure that the most parsimonious models were contained within the best supported set, we retained all models where ΔAICc <6.We then calculated averaged parameter estimates and standard errors using model averaging among the retained models.Second, we examined relationships between each health outcome as a response variable and potential predictors, including measures of rurality-urbanity of the home, 
socio-demographic variables, self-assessment of health, social cohesion and physical activity.We used cumulative link models for depression and self-assessment of health, linear regression for social cohesion and for physical activity.The frequency and duration of nature dose were correlated, so to avoid issues associated with multicollinearity we generated four predictor model sets for each health response: i) rurality-urbanity and socio-economic variables; ii) rurality-urbanity and socio-demographic variables plus frequency of nature dose; iii) rurality-urbanity and socio-demographic variables plus duration of nature dose; iv) rurality-urbanity and socio-demographic variables plus intensity of nature dose.In models ii-iv for each health response we tested for an interaction between each measure of rurality-urbanity and nature dose.If the interaction was not significant it was dropped from the model.We then model averaged as above.The proportion of respondents living in each country within the U.K. was comparable with the wider population.There was an overrepresentation of female respondents, of respondents earning <£10,399 per year, and of respondents who worked no hours a week.Relative to the wider U.K. population there was an under-representation of respondents >70 years and who considered themselves to be in very good health.Across the neighbourhoods of all 3000 survey respondents there was an average vegetation cover of 65.5%, and built cover of 13.2%, with most respondents having access to a private garden.Quadratic regression outperformed higher order polynomial regression in describing the relationship between the three measures of nature dose and actual rural-urbanity.Nature dose frequency and duration were highest in rural areas, before steadily decreasing until urbanisation attained levels typically associated with the suburbs.Further increases in urbanity produced little or no change in nature dose.Nature dose intensity also decreased with increasing urbanisation, but with the relative decrease in dose slowing at higher levels of urbanisation.All three dimensions of nature dose increased with a respondent’s age, but decreased with their social deprivation.Finally, the frequency and duration of nature dose increased with nature orientation, in people who were retired, and with people who spoke a European language in the home, while dose duration and intensity increased with respondent’s income.We found that population levels of depression increased with urbanisation, but so did physical health, while urbanisation did not influence social cohesion or physical behaviour.There was a positive relationship between all four health outcomes and frequency and duration of nature dose.Frequent visits to green spaces in the more urbanised population were associated with further improvements to mental health, while the same respondents who spent longer in green spaces saw greater gains to their positive perceptions of social cohesion and positive physical behaviour.Finally, dose intensity was associated with increased positive perceptions of social cohesion, and this effect was more pronounced as urbanisation increased.We demonstrate that the environment around the home is an important predictor of nature dose, with people living in more rural areas tending to have more frequent weekly exposure to nature.Critically, once a certain level of urbanisation is met, there is no further change in nature dose across the population with increased urbanisation.Instead, a person’s orientation towards 
nature was a key driver of the frequency and duration of nature dose, and improvements across three health domains.Second, we present differences in the health gains from dose dependent on the rurality to urbanity of the home.People in heavily built up neighbourhoods with a low nature dose tended to have worse mental health and lower perceptions of social cohesion, while also being less likely to engage in positive physical behaviour.However, these people also had the potential for the greatest gains from either more frequent visits to, or spending longer in, nature respectively.Heavily urbanised areas tend to have reduced levels of vegetation, therefore greening of these neighbourhoods is likely to produce the greatest improvements in people’s perceptions of social cohesion.Here we reveal that across all three dimensions of nature dose, namely frequency, duration and intensity, dose is greatest in rural areas.Dose then decreases with increasing housing density and perceptions of urbanisation, until people live in the equivalent of the suburbs of a medium sized town.Beyond this level of urbanisation, dose intensity continues to decline, albeit at a slower rate.This did not translate to a parallel decline in the frequency or duration of dose.This downward trend of dose intensity is consistent with that found in other studies, though the nature of the relationship has varied.For example, Shanahan et al. show a similar curve between green space visitation and measures of tree cover, while Coldwell and Evans indicate a linear relationship between visitation and urbanisation; such differences could be caused by the use of different measures of urbanisation, and differences in city design.These relationships all suggest a strong behavioural component to engagement with nature.Indeed, an orientation towards nature was the strongest predictor of the frequency and duration of dose, accounting for almost two-thirds of the explained variance in the model.In urban populations people with an increased nature orientation typically visit public green spaces and their gardens more regularly, travel further to do so and are more likely to engage in resource provisioning for garden wildlife.Further, in the second analysis we also found some evidence that an orientation towards nature was a predictor of better mental health, social cohesion and positive physical behaviour.This result held even after accounting for the potentially confounding effects of nature dose.This may be an indication of the broader health benefits gained from a deeper connection to the natural world, with nature connectedness being positively associated with life satisfaction and happiness.Our study provides further evidence of the health inequities between rural and urban environments in the UK.We found that people in more built up areas were more likely to perceive that they had better physical health, but were increasingly likely to suffer from depression compared with their rural counterparts.People in urban areas generally have better access to health care, but are exposed to increased levels of pollution, overcrowding and stress which are known to impact negatively on mental health.We did not find associations between urbanisation and perceptions of social cohesion, or physical behaviour.We found that people who choose to spend time in nature more often, and for longer are healthier across multiple dimensions of health.Our results add support to previous studies conducted on urban populations, that explored the relationships 
between nature dose and health, but importantly we show that it is also possible to detect these positive associations with health in more rural populations.We found that the benefits to physical health, social cohesion and improved physical behaviour from frequent visits to greenspaces occurred independent of the environment around one’s home.This is important for population health, because it indicates that even people with less access to greenspaces can gain similar benefits from regularly spending time in nature, should they choose to do so.As an increasingly urbanised population wrestles with multiple demands on their time, behavioural health interventions are likely to be more successful in promoting short frequent visits to green spaces than longer ones.Importantly, on average respondents in more urbanised areas had poorer mental health than their rural counterparts, while those who visited green spaces more regularly had better mental health.It is therefore conceivable that an increased frequency of dose provides a protective factor against the increased stress and mental fatigue associated with urban living.We demonstrate that although the duration of dose was positively associated with all four health outcomes, across two health domains the benefits from spending longer in nature were greater in the urban population.On average respondents in more urbanised areas who spent no time outdoors had the lowest perceptions of social cohesion, while those who spent nine or more hours in greenspaces had the most favourable perceptions of their community.A potential explanation is that the increased density of people in urban areas means that there is greater potential for positive interactions between neighbours, with greenspaces being locations that facilitate these interactions.Finally, respondents in more heavily urbanised areas who exercised more regularly were more likely to do so in greenspaces than those who engaged in similar amounts of exercise in less urbanised areas.This could be due to a possible higher use of other types of exercise location by respondents in less urbanised locations, or through higher levels of exercise associated with work activities.Finally, we found that people in towns and cities had a better sense of community when there was more greenspace around the home.This may be because of the greater availability of places to socialise, so facilitating community life.We did not find any relationships between dose intensity and the other three health domains, suggesting these health metrics are less related to available nature around the home based on the method of measurement used here.The coarse area-based measure of nature is likely to be at best a limited surrogate for the complex experiences that people have with individual components of nature.Species richness and abundance will vary significantly between greenspaces and nature experiences, and this is likely to influence any associated health outcomes.For example, there is evidence that visiting greenspaces in the countryside provides different wellbeing benefits from those gained from spending time in urban greenspaces.Instead a better measure of intensity, but one that would require a completely different methodological approach to the one taken here, would be to measure dose intensity experienced throughout a participant’s daily life.Ideally, this approach would account for indirect, incidental and intentional experiences not only around the home, but also when people are moving around the landscape, such 
as walking to the shops or visiting the countryside.As emerging personalised activity-monitoring technologies, such as GPS trackers, eye-tracking glasses and electroencephalography, continue to advance, these exposures will become increasingly understood.This study uses a cross-sectional design, which inevitably has both advantages and limitations.The main advantage is that it allows the simultaneous analysis of multiple risk factors.The limitation is that the design cannot definitively establish a cause-effect relationship.However, these pathways are becoming increasingly well-developed in other studies.This study also relied on self-reported data, which may lead to common method bias.Thus, additional studies using more objective health indicators, such as stress cortisol and heart rate, could provide more in-depth understanding.The improvements in model quality with the addition of nature dose variables were low, particularly for the mental and physical health responses.This may be because either the influence of doses of nature on health is small, or because health is a complex issue with multiple drivers, and although we controlled for key socio-demographic covariates known to influence health, the impacts of life events are difficult to control for.Further, the benefits of contact with nature may vary across socio-economic groups, cultures and environments.Indeed, because there was an overrepresentation of respondents on low incomes, and of those who work no hours per week, caution must be applied when drawing conclusions applicable to broader populations.The improvement in model quality with the addition of nature variables found here was comparable to, or less than, that of similar studies.However, given the numerous contributing factors towards health and the economic and social cost of poor health, any detectable effect of nature dose has the potential to lead to significant savings towards the prevention and treatment of ill health.We show that people in urban areas had a reduced exposure to nature across three dimensions of nature dose compared to their rural counterparts.However, regardless of opportunity to access greenspaces around the home, people with an increased orientation towards nature typically choose to visit greenspaces more often and for longer.There was also some evidence that those with a greater orientation to nature have better mental health, social cohesion, and physical behaviour, even after accounting for nature dose.This result highlights the importance of supporting the development of a connection to nature across a person's life-course.This study paves the way for future research to establish how behavioural interventions can promote engagement with everyday nature.We have no competing interests to declare.This research was conducted with approval from the Bioscience ethics committee of the University of Exeter.Participants provided written consent at the beginning of the online survey.
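The dose-intensity measure described in the methods (the percentage of pixels with NDVI above 0.2 within a 250 m buffer of the postcode centroid, from 30 m Landsat 8 data) can be sketched as follows. This is an illustration only: the NDVI grid and the centre location are synthetic stand-ins, and a real analysis would read the Landsat raster and use projected postcode coordinates.

# Illustrative dose-intensity calculation: percentage of pixels with NDVI > 0.2
# inside a 250 m radius of a point on a 30 m grid. The NDVI values are synthetic.
import numpy as np

PIXEL_SIZE = 30.0      # Landsat 8 NDVI resolution (m)
NDVI_THRESHOLD = 0.2   # vegetated / non-vegetated cut-off used in the study
BUFFER_RADIUS = 250.0  # buffer around the postcode centroid (m)

rng = np.random.default_rng(0)
ndvi = rng.uniform(-0.1, 0.8, size=(200, 200))  # synthetic NDVI grid

def dose_intensity(ndvi, centre_row, centre_col):
    """Percentage green cover within the buffer around a pixel location."""
    rows, cols = np.indices(ndvi.shape)
    # Distance (m) from every pixel centre to the buffer centre.
    dist_m = np.hypot(rows - centre_row, cols - centre_col) * PIXEL_SIZE
    in_buffer = dist_m <= BUFFER_RADIUS
    vegetated = ndvi > NDVI_THRESHOLD
    return 100.0 * vegetated[in_buffer].mean()

print(f"Dose intensity: {dose_intensity(ndvi, 100, 100):.1f}% green cover")

In practice this calculation would be repeated for every respondent postcode, with the resulting percentages used as the intensity dimension of nature dose.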
The last 100 years have seen a huge change in the global structure of the human population, with the majority of people now living in urban rather than rural environments. An assumed consequence is that people will have fewer experiences of nature, and this could have important consequences given the myriad health benefits that they can gain from such experiences. Alternatively, as experiences of nature become rarer, people might be more likely actively to seek them out, mitigating the negative effects of urbanisation. In this study, we used data for 3000 survey respondents from across the UK, and a nature-dose framework, to determine whether (a) increasing urbanisation is associated with a decrease in the frequency, duration and intensity of nature dose; and (b) differences in nature exposure associated with urbanisation impact on four population health outcomes (depression, self-reported health, social cohesion and physical activity). We found negative exponential relationships between nature dose and the degree of urbanisation. The frequency and duration of dose decreased from rural to suburban environments, followed by little change with further increases in urbanisation. There were weak but positive associations between frequency and duration of dose across all four health domains, while different dimensions of dose showed more positive associations with specific health domains in towns and cities. We show that people in urban areas with a low nature dose tend to have worse health across multiple domains, but have the potential for the greatest gains from spending longer in nature, or living in green areas.
291
Fine particle retention and deposition in regions of cyclonic tidal current rotation
Coastal and shelf seas cover a small fraction of the ocean but are of utmost importance and value.Sediments in these regions act as valuable resources and support the majority of global benthic biogeochemical cycling of organic matter.Sediment composition influences a range of biogeochemical and physical parameters.Biogeochemical processes depend on sediment type, varying between advective sediments with low organic content and cohesive sediments with high organic content.Sediment type influences physical processes in shelf seas through modification of bed friction, thus impacting dissipation of energy, and sediment mobility.It also influences benthic habitats and community structure.Understanding the overall structure and functioning of shelf seas, including their response to human and climate pressures, thus requires an understanding of sediment composition, transport, and deposition mechanisms.While sand and gravel benthic sea floor composition in shelf seas is relatively predictable with bed shear stress controlling their distribution, mechanisms of mud dispersal and retention are still not fully understood.Recent work has illuminated the influence of high energy episodic events to mud deposit shape and location, and to the movement of mud on and off of the continental shelf.Zhang et al. showed storm waves on the Iberian shelf resuspended fine sediment that was redistributed by a transient oceanic frontal current.Cheriton et al. observed internal waves on the California coast suspended fine sediment from the shelf slope which traveled in nephloid layers to feed a mud deposit on the Monterey Bay shelf.Internal waves and tides are likely an important mechanism for sediment transport on all continental slopes.Anthropogenic influences on mud deposits also exist.Trawling is capable of inducing gravity flows near steep topography to move mud from the shelf edge to deeper regions.Episodic events have been shown to dominate mud transport on narrow shelves and across longer timescales, repeated episodic events cause transport of fine sediment across a shelf.For broad shelves, ocean tides can also generate large currents and tidal processes are important.For example, tidal resuspension is frequent in the Celtic Sea.Low bed shear stress and sediment-transporting residual flows are typically considered to be the hydrodynamic processes required for fine sediment deposition and retention in such systems.Shelf sea circulation provides pathways for fine sediment movement, and convergence of these residual currents can create regions of high fine sediment concentration, while tidal resuspension can be frequent.Despite the study of mud deposits on many shelves, the capability to predict mud deposit location and spatial extent is limited.Ward et al. 
successfully predicted coarse sediment composition in the Irish Sea and Celtic Sea using numerically modeled bed shear stresses and bed samples.However, they under predicted sediment grain size in a Celtic Sea region of low bed shear stress and over predicted it in the eastern Irish Sea where bed shear stress is not very low but a mud deposit is present.Other authors have turned to machine learning and spatial statistics to predict benthic sediment composition.In the Northwest European shelf seas, Stephens and Diesing found mud was present where the shelf seas were more than 50 m deep.Wave orbital velocities become smaller with depth, so wave-generated bed shear stresses increase with shallower water.The implication is a spatial gradient in mud resuspension, whereby mud can be resuspended at shallower depths and moved to deeper depths where it is less likely to be resuspended.Moriarty et al. observed this trend on the Waipaoa Shelf of New Zealand.Sediment transport in shelf seas is closely linked to circulation and depends on erosion and deposition, processes which are all dependent on boundary layer dynamics.The water column in a shelf sea has a surface and benthic boundary layer.The surface boundary layer is generated by wind and waves, while the benthic boundary layer is generated by the oscillatory flow due to tides over a rough bed.Differences in these controls lead to differences in benthic boundary layer thickness.Wave boundary layers are typically limited in height to a few centimeters but are important to sediment transport due to their relatively high sediment concentration, sometimes resulting in sediment gravity flows.In comparison, tidal benthic boundary layers reach tens of meters and can also drive large sediment flows.Boundary layers are regions of enhanced turbulence and are important in a range of bio-physical processes - including controlling scalar fluxes into sediments or resuspension via periodic turbulence and influencing phytoplankton transport to benthic organisms.In shelf seas where tidal currents are elliptical, the direction of current rotation also influences the benthic boundary layer thickness.Prandle showed with an analytical solution that depending on latitude, tidal benthic boundary layers could not fully develop when rotating counter to the Coriolis force because the timescale to fully develop the flow is longer than the tidal period."Simpson and Tinker made measurements at two locations in the Celtic Sea with opposite rotation to confirm Prandle's prediction.This thinner boundary layer has been suggested to influence retention of cohesive muds in the Nephrops norvegicus fishing grounds in the Celtic Sea.If this is the case, retention of pollutants such as microplastics west of Ireland and radioactive sediments in the eastern Irish Sea would also be influenced by the rotational direction of tidal currents.We present the hypothesis that the suppressed boundary layer in cyclonic tidal currents aids the deposition and retention of fine sediment, and is an important mechanism to consider in shelf sediment dynamics, and therefore of pollutant, carbon, or nutrient retention.Using model data we examine the relationship between tidal current polarity and muddy benthic sediment, demonstrating that high mud concentration sediment on the Northwest European shelf are found only where currents are cyclonic.We demonstrate that this pattern cannot be replicated considering only bed shear stress, depth, and a sediment pathway.We then explain the physical processes 
responsible for the relationship between fine sediment and cyclonic tidal currents.By applying a boundary layer predictor which accounts for ellipticity and scaling it by depth we create a metric to show where rotational effects will influence boundary layer dynamics.Then, by reversing the ellipticity in the predictor, we observe which mud deposits might not exist in their current form if not for the direction of tidal currents, and which are influenced by rotational effects in the presence of low bed shear stress and/or deep water.This manuscript presents a background to continental shelf sediments and hydrodynamics, including boundary layer effects of cyclonic tidal currents.The relationship between ellipticity and muddy sediment on the shelf is presented, focusing on four regions of the Northwest European shelf and the shelf in general.We show that depth and bed shear stress alone cannot account for the distribution of muds.The physical controls on the ellipticity - mud relationship are explored through the boundary layer effects, and then the relevance is depicted with a parameterization of the boundary layer thickness normalized by depth."Currents on continental shelf seas are primarily driven by tides and the effect of Earth's rotation.Prandle analytically derived a tidal current profile in the presence of the Coriolis force, showing that the prevalence of tidal rotation with Coriolis or against Coriolis influences the height of the tidal benthic boundary layer.This benthic boundary layer is on the order of tens of meters, and regardless of the tidal current rotation is much larger than the wave boundary layer that extends tens of centimeters, if not less.In the Northern Hemisphere where ω > f and f is positive, δ+ is small compared to δ−.In the Southern Hemisphere the opposite is the case.To estimate boundary layer thickness on the Northwest European shelf, Soulsby used a depth-averaged tidal model and found c = 0.075 based on measurements by Pingree and Griffiths.Using these values, and for Urms = 0.75 ms−1 and H = 75 m, and CD = 0.0025, the structure of the boundary layer as modified by cyclonic tidal current rotation is clear.Values of u*, c, and CD given in Soulsby show that the height of the benthic boundary layer in a cyclonic tidal current is reduced compared to a rectilinear boundary layer, and in the anticyclonic case the limit on boundary layer height is controlled by the water depth or stratification, not rotational effects.Observations by Simpson and Tinker in the Celtic Sea showed that where e = 0.6 the benthic boundary layer was limited to 20 m above the bed while at e = −0.6 the boundary layer extended to 70 m above the bed, the height of the pycnocline.The Northwest European shelf seas consist of the North Sea, Irish Sea, Celtic Sea, English Channel, and the shelf west of Ireland and Great Britain.The shelf seas have an M2 dominant tide and are generally less than 200 m deep, with much of the shelf only submerged after the 120–135 m eustatic sea level rise of the last deglaciation.Sand and gravel dominate benthic sediment composition, but mud deposits of varying geographic extent are found in the Irish Sea, Celtic Sea, west of Ireland, and in the North Sea.Many of these mud deposits are commercially important fishing grounds for Nephrops norvegicus.Mud deposits in the northern North Sea are of early Holocene origin, perhaps forming during different hydrodynamic conditions of a lower sea level or as a deglaciation effect.The western Irish Sea mud belt is present under 
a seasonal baroclinic gyre.In the eastern Irish Sea, the mud patch remains depositional as evidenced by radioactive sediments from nearby Sellafield, a nuclear decommissioning site on the west coast of Northern England whose nuclear materials history dates to the 1950s.We obtained the distribution pattern of benthic sediments around the United Kingdom from the British Geological Survey DIGSBS250 dataset.These data are given as polygons of sediments classified by a Folk 15 triangle plus bedrock, diamicton, and two mixed sediment types.A Marine Institute of Ireland dataset uses a Folk 7 classification of six sediment types plus bedrock to collate and standardize data from various sources, including those which have been ground-truthed and those relying on VMS data from fishing vessels, and an assumption of the relationship between N. norvegicus habitat and mud content.For analysis, we consider here gravels to be sediment with composition >30% gravel, sands to be <30% gravel and with a sand to mud ratio greater than 1:1 (mS, S, gmS, and gS in the Folk 15 triangle), and muds to be <30% gravel and with a sand to mud ratio less than 1:1 (M, sM, and gM in the Folk 15 triangle).High mud percentage sediment is considered here to have a <1:9 sand to mud ratio and be <5% gravel, corresponding to mud (M) and slightly gravelly mud ((g)M) in the Folk 15 triangle, which are both classified as mud in the Folk 7 triangle.Marine Institute Folk 7 data are included here in maps, but not in the comparison of ellipticity to bed sediment type because the data are a compilation with varying levels of confidence and some patchy spatial coverage.To examine the physical controls on benthic sediment composition at the shelf scale, hydrodynamic characteristics, such as bed shear stress and ellipticity, are obtained from ocean model outputs.We use the Proudman Oceanographic Laboratory Coastal Ocean Modelling System, which was developed to model the dynamics of the Northwest European shelf and has been extensively validated for that purpose.The three-dimensional baroclinic hydrodynamic model is coupled to the General Ocean Turbulence Model to model ocean turbulence and to the shallow water version of the WAve Model.The overall modeling system is applied to the whole Northwest European shelf at high resolution and simulations were conducted for a full calendar year to integrate over seasonal timescales.One-way nesting within an Atlantic Margin Model provided offshore boundary conditions for water elevation, currents, temperature and salinity.The Atlantic Margin Model is in turn forced from the Met Office Forecast Ocean Assimilation Model and tidal forcing consists of 9 constituents.Atmospheric forcing for the high-resolution shelf model provided hourly wind velocity and atmospheric pressure, along with three-hourly cloud cover, relative humidity and air temperature.The model bathymetry was taken from the Northwest European shelf Operational Oceanographic System with a minimum depth of 10 m applied to prevent stability problems caused by wetting and drying on the coast.Residual currents, bed shear stresses, and values of turbulence parameters are calculated from a baroclinic simulation coupled to the wave model.Bed shear stresses are obtained from the near-bed velocity assuming a near-bed logarithmic layer.Analysis of model data for bed shear stress gives 90% exceedance values.These values are computed at each spatial point where they are the 90% intercept of the cumulative distribution of time-varying stress over the full
year.Ellipticity is calculated from a tide-only simulation, which was found to agree with results from the baroclinic simulation with waves and therefore used to focus on tidal processes.Values show good agreement with ADCP measurements made in the Celtic Sea for a different year.To maintain consistency with Soulsby, ellipticity is calculated from the depth-averaged M2 tidal current component using tidal harmonic analysis.To calculate Ua in Eq., depth-averaged currents were rotated into principle flow direction and the largest rotated current was defined as Ua.In this way the boundary layer height was determined by all tidal constituent currents, not just the M2 currents, even though they dominate on the shelf and determine here the rotational direction.To match sediment spatial polygon data and gridded hydrodynamic model data, the grid points located within each sediment polygon type were selected to compare sediment, stress, ellipticity, and bathymetry data.The domain where sediment and model data are compared is shown with dotted lines on the maps in Figs. 2 and 3.Numerical model results for the Northwest European shelf seas show that the M2 ellipticity across the shelf is often positive at locations with benthic mud deposits.West of Ireland, in the Celtic Sea, and in the northern Irish Sea, regions where ellipticity is highly positive are present, and in the northern North Sea M2 ellipticity is slightly positive where a large mud deposit is present.Bed shear stress varies across the shelf.High bed shear stress regions have been shown to correspond to coarse sediments.High bed shear stresses are primarily due to tidal velocities, though wave stresses are high in some regions, e.g. on the southeast English coast.Some regional lows match the locations of mud deposits, but low bed shear stress and mud distribution do not generally have the same spatial pattern.The M2 ellipticity at each grid point within a BGS sediment classification reveals muds are rarely found where ellipticity is negative.Looking at all the sediment types shows the tidal ellipticity in the shelf seas is more likely to be positive than negative, as shown by the histogram of all data points.Gravels are found where ellipticity is positive and negative.Sands are similarly found where ellipticity is both positive and negative.The distribution of muddy sediment, however, is skewed toward positive ellipticity, with nearly the entire distribution of high mud concentration data points located in shelf locations where ellipticity is positive.The histograms normalized by all sediment types show that the sand fraction dominates the Northwest European shelf.The mud percentage of the shelf sediments is small compared to sand, but with a clear bias toward positive ellipticity.Near e = 0 a dip in the sand fraction exists with a rise in the gravel fraction.Rectilinear flow has e = 0, so these correspond to areas of high bed shear stress in narrow channels and inlets.Much of the Northwest European shelf seas have positive ellipticity, so we investigate other processes relevant to fine sediment transport and deposition to question whether the observed relationship between ellipticity and mud is important.Here we focus on bed shear stress and on residual flows.Fig. 6 shows bed shear stress in four regions overlain with the direction of the residual surface currents and outlines of fine sediment deposits.In the Atlantic Ocean west of Ireland, the Aran Grounds N. 
norvegicus fishery is located in a large mud patch.Bed shear stresses are low across the entire area, not only where muds are present.Surface residual currents show northward flow of the Irish coastal current.Fine particles carried in the residual current are likely sourced from the River Shannon, which drains the largest watershed in Ireland.No convergence of a surface residual exists and there is little spatial variability of bed shear stress to explain the fine sediment spatial heterogeneity.In the northern Irish Sea, two mud deposits are present.Spatial variability of bed shear stress here agrees with the presence of both the western and eastern mud deposits.In the eastern Irish Sea, the spatial distribution of low bed shear stress matches that of muddy sediment such that bed shear stresses are lowest where muds are found.Fine particles from estuaries are transported northward by surface residual currents as demonstrated by a particle tracking modeling study.Here, the residual transport and low bed shear stress may qualitatively explain the presence of finer sediment without needing to consider the rotation of tidal currents.However, Ward et al. over-predicted the sediment grain size in this region, suggesting that the magnitude of bed shear stress, though locally low, may not be small enough to quantitatively explain the presence of muds.In the western part of the northern Irish Sea, modeled bed shear stresses show low values exist where muds are present in the Western Irish Sea mud belt.Spatial agreement exists between our numerical model and that of Ward et al., and in this region Ward et al. was more successful here than in the eastern part of the northern Irish Sea in reproducing the spatial distribution of the fine sediment deposit.The residual flow directions are highly varied, with evidence of surface currents from the north and from the Irish coast, with some circulation apparent over the deposit.Here, a seasonal baroclinic gyre is present, and has been identified as a retention mechanism over this mud deposit.In the Celtic Sea, mud is present in a patch centered around 6.25° W, 51.25° N.The Marine Institute dataset shows mud farther out on the shelf, but the BGS dataset only gives a few small mud patches there, so the focus here is the more northerly mud deposit.Bed shear stresses are low across a large region of the Celtic Sea extending from the mud patch to the coast of Ireland, and hydrodynamic modeling efforts erroneously predict dominance of fine particles across this entire region.The River Severn feeds into the Bristol Channel and drains a large watershed through a muddy estuary, making it a potential source of fine sediment to the Celtic Sea mud deposit.Residual currents exhibit complex spatial structure.Nevertheless, mud pathways inferred here by residual surface currents can be distinguished not only between the Bristol Channel and the mud patch, but also to and from the southeast coast of Ireland.The surface residual velocity arrows show some indication of a retentive gyre around the mud patch in the Celtic Sea here and in previous measurements, which may influence sediment retention.Overall, this suggests that additional processes help constrain the mud patch to its confined location.A large mud deposit is located in the northern North Sea.Similar to west of Ireland, low bed shear stress regions extend much beyond the mud deposit.The early Holocene nature of these mud deposits suggests that locating a sediment source and pathway may not be relevant here if this 
mud deposit is no longer active, though the Dooley current is visible in the residual flow over the mud deposit.Slightly north of the mud and sandy mud, some convergence of surface residuals occurs, but not in the region of the finest benthic sediments.The known early Holocene origin of this mud deposit poses the question of why mud has remained in distributed patches within this region.The regional focus demonstrated the spatial variability of bed shear stress in locations with mud deposits.Here we present a comparison of depth and bed shear stress with ellipticity for all data points within our domain.Depth and bed shear stress are not independent variables as high stresses are more likely to be found at shallow depths and low stresses in deep waters, but we examine both variables across sediment type here to compare to benthic sediment predictions.Comparing the high mud percentage sediments (mud and slightly gravelly mud) to all sediments shows that muds are found across a range of depths on the Northwest European shelf, though they are largely absent shallower than 50 m, in general agreement with the depth limit for muds found by Stephens and Diesing for the Northwest European shelf seas.Data points near the 10 m limit are found in the Bristol Channel where high sediment supply and estuarine processes coexist, along the Belgian Coast, and in shallow areas of the Western Scottish Islands.The cluster of points between 30 and 40 m depth and e between 0.54 and 0.64 is found in the eastern Irish Sea mud deposit.Other values shallower than 50 m are found on the edge of the western Irish Sea mud patch, and in coastal areas within the islands of Scotland.Bed shear stress values show considerably less agreement with predictions for muddy sediment.Muddy sediment is not found at very high bed shear stress, but is found above what Thompson et al. predicted as the critical erosion threshold for shelf muddy sediment.Points near e = 0 at the highest bed shear stress are those shallow locations described in the preceding paragraph.The points within the eastern Irish Sea mud deposit are visible above other bed shear stress values between e = 0.54 and 0.64.The shelf-wide data show that bed shear stress and depth dependencies are not sufficient to explain fine sediment distribution on the continental shelf since bed shear stress is in most locations above the critical erosion threshold.The cyclonic location in the model corresponds to the location of site A in a Celtic Sea study, and the anticyclonic location corresponds to site I in the same study, with locations shown on Fig.
3b.In this study, the benthic sediment at site A was characterized as sandy mud and at site I was characterized as muddy sand.The strength of the tidal currents at the two locations was similar.The benthic boundary layer of limited thickness will influence the presence of fine particles in two ways: by promoting deposition and aiding retention.Particles are maintained in suspension by the balance of vertical turbulence and particle settling.Given the same water column height and surface forcing, a larger portion of the water column with cyclonic tidal current rotation has low turbulence, upsetting any equilibrium between settling and turbulence, and thus favoring deposition.The second mechanism is the limit on vertical excursion of resuspended material.Particles eroded and resuspended are not likely to move vertically above the benthic boundary layer because above the boundary layer they will find insufficient turbulence to remain in suspension, thus trapping fine particles in the benthic boundary layer.Conversely, if the benthic boundary layer is large, particles can move farther up into the water column where currents are larger and more likely to transport fine particles across or off the continental shelf, e.g., to 60 m above the bed versus 20 m above the bed in the water column shown in Fig. 8.The cyclonic e = 0.86 virtual mooring is located within the Celtic Sea mud patch described in Section 4.1.3, and corresponds to a site investigated as part of a seasonal and spatial study of benthic biogeochemistry.In situ erosion experiments and short-term velocity measurements showed that the muddy bed at this location is highly erodible across seasons, and bed shear stresses from tidal currents are often above the critical erosion threshold.Furthermore, trawling of the N. norvegicus grounds disturbs the bed, preventing consolidation of the mud deposit.Similar trawling impacts have also been documented in the Irish Sea mud deposits.The limited boundary layer here acts to trap these resuspended muds – whether resuspended by currents, waves, or anthropogenic means.Farther west in the Celtic Sea, where Ward et al. predicted the presence of fine sediment in the lower bed shear stress environment, the tidal current ellipticity becomes slightly negative.Without the limiting rotational influence, the benthic boundary layer here occupies a larger fraction of the water column suggesting that fine particles are less likely to settle and those on the bed if resuspended may move higher in the water column where the possibility of transport is more likely.To look at the shelf-wide benthic boundary layer reduction and its relationship to mud deposits, we plot the normalized boundary layer thickness, δ* given by Eq. 
for the entire shelf.This formulation, developed from the analytical model of Prandle, includes the effects of ellipticity, currents, and depth.The benthic boundary layer thickness predictor does not give all of the dynamical information provided by numerical modeling of Kz over the water column, but allows us to focus specifically on the combined effects of currents, depth, and ellipticity.Values of δ* > 1 have been set to 1, and in these regions tidal currents are sufficient to create a benthic boundary layer that covers the entire water column.Where δ* < 1, a combination of u, H, and e limits the boundary layer thickness.Small δ* is seen in the Aran Grounds, Celtic Sea, northern Irish Sea, and northern North Sea, as well as near the Scottish coast and in the Norwegian trench.The spatial structure of δ* agrees well with the spatial distribution of mud deposits on the shelf, highlighting that mud deposits exist at locations with thin benthic boundary layers.Based on the approximations of c and CD, the Aran Grounds mud deposit exists where the benthic boundary layer is ≤10% of the water column.In the Aran Grounds, muds as well as biofouled microplastics are retained on the sea floor.The deposition and retention mechanism for negatively buoyant biofouled microplastics will be similar to that of sediment, suggesting the influence of the limited boundary layer may extend beyond trapping of muds.The spatial distribution of the eastern Irish Sea mud matches the δ* contours nearly perfectly, and good agreement is seen in the western Irish Sea.In the eastern Irish Sea, Ward et al. over-predicted sediment sizes, but adding the boundary layer effects of cyclonic tidal current rotation could explain this discrepancy through an additional physical mechanism limiting transport of fine particles.Radioactive sediments from nuclear facilities at Sellafield confirm that the region is depositional for locally sourced material.In the western Irish Sea, Fig.
10b shows a reduced boundary layer from the combined influence of depth-averaged tidal currents, ellipticity, and depth.The importance of the seasonal stratified gyre here alongside the other influencing factors is difficult to quantify.In the Celtic Sea, the tidal boundary layer is limited to 10–20% of the water column.Similar to the western Irish Sea, a stratified gyre there may also be of secondary importance to mud retention.In the northern North Sea, δ* is also smaller where muds are present.Recent work has shown that episodic events are capable of transporting large quantities of fine sediment.These events include storm-induced wave-enhanced sediment-gravity flows, resuspension by internal waves, and resuspension by trawling, all coupled with a transport mechanism for these resuspended sediments.Storm effects to redistribute muddy sediment on the Iberian shelf have been observed and modeled as a combination of WESGF with storm-induced currents, providing a high concentration region and a residual flow to create a large sediment flux.These episodic WESGF are seen to be persistent in sediment records.Internal waves have also been seen to suspend muddy sediment on the Monterey Bay shelf edge in the US state of California, providing a mechanism for muds transported off the shelf to move landward through suspended nepheloid layers.On the Spanish and French shelves of the Mediterranean Sea, trawling suspends sediment on the shelf edge, and where this occurs proximate to steep canyons, a sediment-gravity flow can be induced to create a large offshore flux of fine sediment.These mechanisms are varied, but all exhibit an episodic nature.The contribution of the fine sediment deposition and retention mechanism described in this paper is likely to be small over short timescales compared to these other episodic events shown to redistribute fine sediment.However, the process described is persistent, so if a large redistribution of sediment by storms occurs only infrequently, a smaller but continuous background of enhanced sediment deposition where the benthic boundary layer is thin may still have a similar impact on a shelf deposit.Measurements of suspended sediment concentrations, along with settling velocities and residual currents, would be needed over the full tidal boundary layer to quantify the sediment flux in regions of limited benthic boundary layer, whether the process of boundary layer suppression is by ellipticity or another factor.Conversely, interaction between storm conditions and thin benthic boundary layers may be the mechanism that releases fine sediment from these regions.Storm winds can cause a surface boundary layer that reaches the benthic boundary layer.In these conditions the retention mechanism in regions of cyclonic tidal currents would no longer operate - potentially providing an escape path for materials trapped under calm conditions.Spatially, the episodic processes that distribute muds all occur near the shelf edge.There, high energy from internal waves or surface waves is likely to be greater than in the middle of a large shelf.Transport of trawled sediment in the Mediterranean relied on canyons to act as a conduit to move fine sediment from the shelf edge to deeper regions, and internal waves on the Monterey Bay shelf were resuspending fine sediment that had already been transported over the shelf edge.The Northwest European shelf seas are a low energy environment compared to these shelf edges and others with frequently studied mud deposits.Away from the shelf edge, high energy
events are less likely, and the importance of limited tidal benthic boundary layer mechanisms on fine sediment deposition and retention may be of greater importance.If this is the case the mechanism described here may be most important in other large shelf seas where mud deposits are found, such as the Yellow and Bohai Seas and the Patagonian shelf.Comparing sediment composition maps and a hydrodynamic numerical model, we have shown here that in the Northwest European shelf seas, fine benthic sediments occur in locations with cyclonic tidal ellipticity.We have suggested that the physical control on this relationship is the influence tidal current rotation has on limiting the thickness of the tidal benthic boundary layer.Using a boundary layer thickness predictor, spatial agreement between mud deposits and limited tidal benthic boundary layer thickness was shown to exist in the Northwest European shelf seas.This work has shown that a relationship exists between muddy benthic sediment and cyclonic tidal currents in the Northwest European shelf seas.Cyclonic tidal currents, rotating opposite the direction of the Coriolis force, form a smaller tidal benthic boundary layer than anticyclonic currents.This creates a mechanism for enhanced deposition of fine sediment as a greater fraction of the water column has low turbulence above the thin benthic boundary layer and fine material can settle.Once on the sea floor, the thin benthic boundary layer can also limit the movement of resuspended sediment which should be vertically limited by the boundary layer thickness and unable to reach larger residual currents higher in the water column.This mechanism is persistent, though future work is necessary to quantify the resulting sediment fluxes and relate it to other mechanisms of fine sediment dispersion on continental shelf seas.Sediment data are available through the Marine Institute and British Geological Survey.Model data are available at channelcoast.org/iCOASST.
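For readers who wish to reproduce the core quantities discussed above, the sketch below (Python) is an illustration under stated assumptions rather than the study's code: it computes a signed tidal-ellipse ellipticity from harmonic-analysis amplitudes and phases, and a depth-normalised boundary-layer thickness δ* of the general form described in the text (δ ∝ c·u*/(ω + |f|) for cyclonic rotation, c·u*/|ω − |f|| for anticyclonic rotation, capped at the water depth), using the constants quoted (c = 0.075, CD = 0.0025). The exact predictor used in the paper may differ in detail.

```python
# Sketch of signed ellipticity and a depth-normalised tidal boundary-layer
# thickness; constants follow the values quoted in the text, the exact
# formulation in the paper may differ.
import numpy as np

OMEGA_M2 = 2.0 * np.pi / (12.4206 * 3600.0)  # M2 radian frequency [s^-1]
C_BL = 0.075   # boundary-layer coefficient (Soulsby value quoted in the text)
CD = 0.0025    # drag coefficient (value quoted in the text)


def coriolis(lat_deg):
    """Coriolis parameter f [s^-1] at latitude lat_deg."""
    return 2.0 * 7.2921e-5 * np.sin(np.radians(lat_deg))


def ellipticity(Au, gu, Av, gv):
    """Signed ellipticity (semi-minor/semi-major) of one tidal constituent.

    Au, Av are amplitudes [m/s] and gu, gv phases [deg] of the depth-averaged
    u and v components (e.g. M2). Positive values indicate anticlockwise
    (cyclonic in the Northern Hemisphere) rotation.
    """
    U = Au * np.exp(-1j * np.radians(gu))   # complex amplitude of u
    V = Av * np.exp(-1j * np.radians(gv))   # complex amplitude of v
    r_ccw = 0.5 * np.abs(U + 1j * V)        # anticlockwise rotary amplitude
    r_cw = 0.5 * np.abs(U - 1j * V)         # clockwise rotary amplitude
    return (r_ccw - r_cw) / (r_ccw + r_cw)


def delta_star(Ua, H, e, lat_deg):
    """Tidal boundary-layer thickness as a fraction of the water depth.

    Ua: depth-averaged tidal current amplitude [m/s]; H: depth [m];
    e: signed ellipticity; lat_deg: latitude. Returns min(delta/H, 1).
    Rectilinear flow (e = 0) is handled with the cyclonic branch for
    simplicity.
    """
    u_star = np.sqrt(CD) * Ua                  # friction velocity scale
    f = abs(coriolis(lat_deg))
    if e >= 0.0:                               # cyclonic: suppressed layer
        delta = C_BL * u_star / (OMEGA_M2 + f)
    else:                                      # anticyclonic: thick layer
        delta = C_BL * u_star / abs(OMEGA_M2 - f)
    return min(delta / H, 1.0)


if __name__ == "__main__":
    # Example values quoted in the text: U ~ 0.75 m/s, H = 75 m, ~51 deg N
    for e in (+0.6, -0.6):
        print(e, round(delta_star(0.75, 75.0, e, 51.0) * 75.0, 1), "m")
```

With the example values quoted in the text (U ≈ 0.75 m s−1, H = 75 m, ~51° N), the cyclonic case gives a boundary-layer scale of order 10 m while the anticyclonic case is limited by the water depth, broadly consistent with the contrast observed by Simpson and Tinker in the Celtic Sea.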
Benthic sediments in continental shelf seas control a variety of biogeochemical processes, yet their composition, especially that of fine sediment, remains difficult to predict. Mechanisms for mud or fine sediment deposition and retention are not fully understood. Using sediment data and a hydrodynamic model of the Northwest European shelf seas, a relationship is shown to exist between fine benthic sediment composition and regions of cyclonic tidal current rotation. The reduced thickness of cyclonic tidal benthic boundary layers compared with the anticyclonic case promotes deposition of fine sediment and trapping of resuspended material. Adding the effects of the benthic boundary layer thickness, as influenced by ellipticity or not, sheds some light on the limitations of approaches only focusing on bed shear stress and sediment pathways to predict the location of mud deposits. A tidal boundary layer predictor that includes ellipticity alongside tidal current magnitude and depth was shown to spatially agree with maps of mud deposits.
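The sediment groupings used in the analysis above reduce to a few threshold rules. The sketch below (Python) is an illustration of those stated thresholds with hypothetical inputs, not the BGS or Marine Institute classification code; it assigns a sample with known gravel, sand, and mud percentages to the gravel, sand, mud, or high-mud-percentage groups.

```python
# Minimal sketch of the sediment grouping thresholds described in the data
# section: >30% gravel = gravel; sand:mud > 1:1 = sand; otherwise mud, with
# a high-mud subset at <5% gravel and sand:mud < 1:9.
def classify_sediment(gravel_pct, sand_pct, mud_pct):
    """Return 'gravel', 'sand', 'mud', or 'high-mud' for one sample."""
    if abs(gravel_pct + sand_pct + mud_pct - 100.0) > 1e-6:
        raise ValueError("fractions must sum to 100%")
    if gravel_pct > 30.0:
        return "gravel"
    sand_to_mud = sand_pct / mud_pct if mud_pct > 0 else float("inf")
    if sand_to_mud > 1.0:
        return "sand"
    if gravel_pct < 5.0 and sand_to_mud < 1.0 / 9.0:
        return "high-mud"   # mud and slightly gravelly mud classes
    return "mud"


print(classify_sediment(2.0, 5.0, 93.0))    # high-mud
print(classify_sediment(10.0, 30.0, 60.0))  # mud
print(classify_sediment(5.0, 70.0, 25.0))   # sand
print(classify_sediment(40.0, 40.0, 20.0))  # gravel
```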
292
Quantifying the behavioral and economic effects of regulatory change in a recreational cobia fishery
Andrew M. Scheld: Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Writing - original draft, Writing - review & editing.William M. Goldsmith: Conceptualization, Investigation, Writing - original draft, Writing - review & editing.Shelby White: Investigation, Writing - review & editing.Hamish J. Small: Conceptualization, Investigation, Writing - review & editing, Funding acquisition.Susanna Musick: Funding acquisition, Investigation, Writing - review & editing.Cobia are a widely distributed coastal pelagic fish species found throughout tropical and subtropical Atlantic, Indian, and western Pacific oceans.They are a large, long-bodied fish, growing to over five feet in length and having a maximum weight of well over 100 pounds.The species is a popular recreational target throughout the U.S. South Atlantic and Gulf of Mexico.Commercial exploitation remains limited however, as cobia are typically solitary other than during spawning aggregations.In U.S. waters, cobia are managed as two separate stocks, distinguishing between Atlantic and Gulf migratory groups, with a boundary set at the Florida-Georgia state line.A stock assessment completed in 2013 indicated that the Atlantic migratory group was not overfished and that overfishing was not occurring, though a decline in spawning stock biomass since the early 2000s was noted.In 2015 and 2016, recreational harvests far exceeded annual catch limits for the Atlantic group, triggering accountability measures that closed the fishery early in federal waters.The majority of Atlantic cobia are caught in state waters however, limiting the effectiveness of federal regulations.In 2017, the Atlantic States Marine Fisheries Commission approved an interstate fishery management plan for the Atlantic migratory group.In addition to setting state-specific annual soft targets for harvests in Virginia, North Carolina, South Carolina, and Georgia, the FMP established a one fish per person bag limit and a 36″ fork length minimum size.States were allowed to implement alternative management measures provided they were deemed to have equivalent conservation value.In March of 2019, cobia was removed from the federal Coastal Migratory Pelagic Resources FMP and management authority for the Atlantic migratory group transitioned from the South Atlantic Fishery Management Council to the ASMFC.From 2013–2017, recreational anglers in Virginia took on average 225,600 trips per year targeting cobia.Harvest associated with these recreational trips accounted for 39 % of all landings of the Atlantic migratory group during this period.Cobia are a popular recreational target during the summer months when they congregate in the Chesapeake Bay to spawn.They are caught by anglers using a variety of methods, though bottom fishing and sight casting are thought to be the most common.The Virginia Marine Resources Commission restricted cobia harvests through a one fish per person bag limit and a 40″ total length minimum size limit during the 2017 Virginia cobia season, which ran from June 1st through September 15th.It is recognized that current recreational regulations implemented in both state and federal waters are subject to change as managers seek to balance conservation and use of the resource.While cobia is a popular target species throughout its domestic range, there has been little research investigating the motivations, preferences, and values associated with recreational cobia angling.Multiple studies have considered cobia as part of 
gamefish species aggregates during comprehensive investigations of recreational value or when focusing on a particular species of management relevance.Results from these studies are of only limited use in management and regulation of cobia however, because value estimates or behavioral predictions unique to cobia angling cannot be identified.Still, prior research has indicated that gamefish species aggregates generate larger net benefits when compared to other species groups, suggesting that estimates of value and preferences with respect to individual species within the broad group may be important to fishery managers and recreational stakeholders.Indeed, several economically important species targeted in the South Atlantic and Gulf of Mexico have been the focus of studies quantifying angler preferences and value, such as dolphinfish Coryphaena hippurus and king mackerel Scomberomorus cavalla, red snapper Lutjanus campechanus, and Atlantic bluefin tuna Thunnus thynnus.A better understanding of the preferences and decision-making by anglers in the recreational cobia fishery would help facilitate consideration of angler benefits and satisfaction in resource management decisions, while also enhancing the ability to predict behavioral responses to potential changes in fishery or regulatory conditions.Regulatory changes can elicit behavioral responses by anglers that are difficult to forecast, modifying trip expectations and outcomes that affect the desirability or utility associated with recreational fishing and, by extension, angler well-being.Several revealed and stated preference approaches can be used to estimate angler preferences and analyze behavioral response in the context of regulatory change.Challenges in defining anglers’ choice sets, as well as the ability to evaluate preferences across a broad suite of attributes and attribute levels—including novel regulatory combinations—have led many researchers to utilize stated preference methods in investigations of anglers’ regulatory preferences.Discrete choice experiments, where respondents are presented multiple hypothetical choice alternatives and asked to select those they most prefer, are a common approach.In these applications, regulations are typically included as attributes of hypothetical alternatives along with catch-related aspects of a fishing trip.Respondents thus make decisions by comparing regulatory and non-regulatory aspects of each potential trip.This approach may present anglers with unfamiliar or confusing choice scenarios as preferences for regulations are most likely tied to their resulting impacts on allowable harvest and may, or may not, be independent of this harvest impact.Indeed, researchers have occasionally noted counterintuitive results with respect to regulatory preferences, possibly arising from respondent misinterpretation.Including regulations directly in choice scenarios as trip attributes may therefore confound estimation of angler preferences, suggesting that a more nuanced approach is necessary to understand regulatory response.Changes in regulations and fishery conditions may lead to shifts in trip-taking, directed trip-level fishing effort, as well as species targeted.Target species substitution in response to fisheries management decisions can undermine policy objectives as anglers reallocate effort across an array of available target species, possibly resulting in unintended and unforeseen outcomes with broad ecosystem effects.This behavior also influences the resulting economic effects of a 
policy change realized by individual anglers and local businesses that depend on the recreational sector.Anglers target a wide variety of seasonally-available recreational species in the Chesapeake Bay.It is not known whether anglers currently targeting cobia would switch to target another species were regulatory or fishery conditions to change, though such behavior is plausible and could be consequential in terms of both the management of alternative target species as well as its effects on anglers and fishing communities.This study sought to improve our understanding of angler preferences, values, and behavior in the recreational fishery for Atlantic cobia within the context of regulatory change.Changes in cobia regulations were hypothesized to affect trip-level utility, possibly leading to changes in fishing behavior and angler net benefits.To investigate angler response to changes in trip attributes and regulatory context, a survey instrument was developed that included a series of hypothetical choice scenarios.Rather than incorporating regulations directly into choice alternatives, the survey included a variety of regulatory treatments that modified species targeting tradeoffs across individuals in the sample.Following estimation of angler preferences using a mixed logit model, changes in angler welfare resulting from changes in regulations were explored under a variety of available target species scenarios.In what follows, we first describe survey development and implementation before discussing our modeling approach and main findings of the research.An online survey containing questions related to recreational fishing behavior, expenditures, and preferences, with a focus on cobia, was developed in collaboration with Virginia recreational anglers and managers at the VMRC during the spring and summer 2017.Two focus groups were held during survey development.The first focus group took place in May 2017 and was used to review an initial paper draft of the survey and discuss question wording, structure, layout, and also assess angler comprehension of questions and survey material.Once the online survey instrument was developed, a second focus group was held in August 2017 to evaluate survey performance across multiple platforms and further review material.In total, eight anglers participated during survey development focus groups.Following the second focus group, the online survey instrument was further refined before being finalized in October of 2017.The final survey included 18–28 questions, depending on within-survey responses, in addition to four choice scenarios, where hypothetical fishing trips were described and respondents were asked to select the alternative they most preferred.The survey was approved by William & Mary’s Protection of Human Subjects Committee.Choice scenarios for the survey were developed and organized primarily to enable estimation of preferences associated with cobia angling and target species substitution.Each hypothetical choice occasion included two fishing trips, with each trip targeting one of three species: cobia, red drum, or summer flounder; the latter two species being common targets of recreational anglers in the lower Chesapeake Bay during the summer.Four trip-related attributes—target species, catch, average weight of catch, and cost, each with three levels—were included when generating trip alternatives.Values of catch and average weight of catch represented species-specific low, medium, and high estimates that were determined through 
conversations with recreational anglers and by evaluation of recreational catch data.For red drum, the largest average weight corresponded to an adult red drum while low and medium values were sizes typical of juveniles.An efficient experimental design was developed using macros in SAS software that combined candidate trip alternatives into choice scenarios, maximizing design balance and orthogonality subject to user-specified constraints.Restrictions were added when generating choice scenarios such that trip alternatives that either both targeted red drum or both targeted summer flounder were not compared to one other, nor were cobia trip alternatives for which one trip clearly dominated the other.Twenty choice scenarios were generated from the full factorial design and grouped into one of five blocks containing four choice scenarios each, a number suggested to not be cognitively burdensome in previous surveys of recreational anglers.Each of the five DCE blocks was combined with each of seven cobia regulatory scenarios, developed in conjunction with VMRC to reflect a realistic range of regulations, for a total of 35 different survey versions.Individual respondents thus only saw one set of potential cobia regulations within and across all four choice scenarios contained in a single survey.Summer flounder regulations and red drum regulations were held constant within and across all choice scenarios and survey versions to reflect regulations used in Virginia in 2017.Regulations for all three species were included as text presented above each choice scenario, compelling the respondent to consider trip outcomes as opposed to regulatory environment.Legal harvest was included as a fifth derived attribute.Trip alternatives presented the average weight of catch in pounds while regulations specified minimum and maximum lengths in inches.This was done to disassociate trip attributes from regulatory context, though it necessitated the conversion of weights to lengths to determine legal harvest.Species-specific length-weight relationships from the literature were used for conversions.To reflect variation in the length-weight relationship, as well as in the size of individual fish caught on a particular trip, it was assumed that catch lengths followed a normal distribution with a standard deviation equal to 15 % of the average length of catch.Legal harvest was calculated by assessing the distribution of catch lengths for a particular trip, given average weight, and determining the percent within legal size limits.This percentage was then multiplied by catch and rounded to the nearest whole number.Legal harvest was equal to the number of fish within legal size limits that was less than or equal to the bag limit.The survey frame included all individuals who held a 2017 Virginia cobia permit and had provided email and valid mailing addresses.Managers acknowledged that there were likely some anglers who fished for cobia in 2017 without obtaining the required permit, given that the program was in its first year.It is also possible that there were anglers who had previously targeted cobia, when regulations were more liberal, but did not fish for the species or obtain a cobia permit in 2017.As our study aimed to accurately capture the preferences and behavior of anglers who had targeted cobia, it was decided to also include a stratified random sample of individuals with email and valid mailing addresses who held a Virginia saltwater recreational fishing license but not a cobia permit, stratified by state of 
residency.The final survey frame included residents of 43 states, the District of Columbia, and the US Virgin Islands; however, the majority of individuals were residents of Virginia and most non-residents were from close neighboring states.Email and mailing addresses for cobia permit and saltwater recreational fishing license holders were obtained from the VMRC.The survey was implemented online using the survey platform Qualtrics.An initial email invitation containing a link to the online survey was sent on October 27, 2017.This was followed by a postcard approximately two weeks later that contained the survey web address and a Quick Response code that could be scanned to access the online survey.A final email reminder was sent on December 11, 2017.Due to the relatively large survey frame and mixed-mode invitation, it was determined that providing a unique survey link or code for each individual would not be practical.A restriction was created such that each unique Internet Protocol address could only respond to the survey once, reducing the possibility that one individual could respond multiple times.The survey closed on December 20, 2017; individuals who had begun the survey before this time were allowed up to one additional month to finish.The approved research protocol did not allow collection of individually identifiable information as questions regarding an individual’s fishing behavior could be viewed as sensitive if linked to their recreational permit holdings.Respondents were asked how they learned of the survey however, and those who indicated channels other than the invitation email or postcard were removed from subsequent analyses.Several steps were taken to ensure the survey collected data from a representative sample of cobia anglers.Average responses to questions on angler trip-taking and demographics were compared with data collected through a recent large national survey as well as data collected from cobia anglers by state managers.Additionally, previous research has noted that more avid anglers may be more likely to respond to recreational fishing surveys, which can affect estimation of angler preferences and willingness-to-pay.We analyzed responses to questions on trip-taking and cobia trip expenditures in relation to survey response date, hypothesizing that more avid anglers would be more likely to respond earlier.Specifications of the preference model were estimated including response day and state of residence as interaction terms with hypothetical trip costs and cobia targeting to evaluate whether early responding individuals or those living outside Virginia held different preferences.Finally, responses to several questions were tested for significant differences across versions of the survey containing different hypothetical cobia regulations.Categorical responses were evaluated using chi-squared tests while continuous responses were tested for differences across survey regulatory treatments using one-way analysis of variance tests.In each choice scenario, respondents were asked to select their most preferred option from the following four alternatives: “TRIP A”, “TRIP B”, “Target a different saltwater species”, or “Do not go saltwater fishing”.Trips A and B potentially differed across five dimensions: species targeted, catch, average weight of catch, legal harvest, and trip cost.No specific attributes were associated with the options “Target a different saltwater species” and “Do not go saltwater fishing”.Species-specific regulations were provided above presented 
choice scenarios and remained constant across the four scenarios shown to an individual.In, the probability that individual n chooses alternative i is a function of observable attributes of alternative i, attributes of all alternatives included in j, and preference parameters.The mixed logit is a flexible functional form that enables modeling of heterogeneous individual preferences through selection of a mixing distribution f, which is used to characterize the distribution of preferences across a population.Choice probabilities integrate over this preference distribution.The log-likelihood in sums the natural logarithm of choice probabilities over N individuals, T choice occasions for each individual, and J alternatives for each choice occasion.The binary variable dntj was equal to one when individual n on choice occasion t chose alternative j, and zero otherwise.The utility associated with choice alternatives was thought to depend on trip characteristics and the regulatory environment.Four dummy variables were constructed to evaluate species targeting preferences: cobia, juvenile red drum, adult red drum, and summer flounder, each of which equaled one for choice alternatives targeting these species and zero otherwise.Adult red drum was considered separately as it is a catch-and-release only fishery that is frequently targeted in geographically distinct areas.The alternative “Target a different saltwater species” was treated as the reference level and an alternative specific constant was therefore included for the option “Do not go saltwater fishing”.The impacts of changes in cobia regulations were evaluated through a series of regulatory dummy variables interacted with the cobia dummy variable.These interaction terms equaled one for cobia choice alternatives in survey versions under a particular regulatory change compared to status quo management and zero otherwise.This model structure allowed for shifts in average cobia targeting preferences in response to regulatory change.Preferences associated with catch, average weight of catch, and legal harvest were estimated separately for trips targeting cobia and non-cobia species by interacting cobia and non-cobia trip dummy variables with these trip attributes.Red drum and summer flounder trip attributes were considered jointly as attributes characterizing non-cobia trips given our research focus on cobia as well as an experimental design that did not present anglers with choice scenarios where both trips targeted the same non-cobia species.Before constructing cobia and non-cobia trip attribute interaction terms, catch, average weight of catch, and legal harvest variables were standardized by species using z-score transformations.This was done to control for differences in scale across attributes and target species.Coefficients for species dummy variables therefore captured the utility associated with an average trip, while coefficients for catch, average weight of catch, and legal harvest attribute interaction terms measured the utility associated with a one standard deviation increase in these variables.An additional dummy variable was added to capture potential non-linear effects associated with legal harvest of cobia.This term was equal to one when a trip targeting cobia had zero legal harvest and zero otherwise.Note that choice alternatives for trips targeting juvenile red drum or summer flounder had non-zero legal harvest in all instances.An additional model was also estimated for comparison, removing catch, average weight of catch, and 
Species targeting dummy variables, the no-trip ASC, and attribute interaction terms were included in the mixed logit model as normally distributed random parameters.Trip cost was included as a non-random variable, which served to increase model stability and facilitate straightforward calculations of willingness-to-pay.An additional ASC for "TRIP A" was included to control for factors related to the presentation of choice alternatives but unrelated to attributes of the alternative itself.Due to its construction as a derived attribute, legal harvest was correlated with catch, average weight of catch, and, for cobia, cobia regulations.The mixed logit model returns parameter estimates identifying independent effects that measure changes in trip utility corresponding to changes in an individual trip attribute, holding all other attributes fixed.This means that angler preferences estimated for catch, average weight of catch, and regulatory terms should be interpreted as being independent of changes in legal harvest, and vice versa.For example, the coefficient on cobia catch measures the utility associated with a one standard deviation increase in trip catch of cobia, holding legal harvest constant.Conversely, the coefficient on cobia legal harvest measures the utility associated with a one standard deviation increase in legal harvest of cobia, holding catch constant.In this context, cobia and non-cobia catch parameters measure preferences for additional catch that cannot be legally retained, while legal harvest parameters measure preferences associated with relaxation of binding regulations.Similarly, preference parameter estimates for cobia regulations measure the shift in average cobia targeting preferences under a particular regulatory change, independent of the effect of that regulation on legal harvest.Two additional models, one excluding legal harvest and another excluding cobia regulatory interaction terms, were estimated to better understand the effects of this correlation on preference parameters.Results from these models are included in the Supplementary Material.Two sets of random draws of model parameters were constructed subsequent to estimation of the mixed logit model.First, 10,000 random draws were taken from a multivariate normal distribution with mean and variance-covariance matrix set to the model estimates, following the procedure originally proposed by Krinsky and Robb.Each of the 10,000 draws was then used to make an additional 1000 draws from a multivariate normal distribution with a mean set to the model coefficients of fixed parameters together with the means of random parameters and a variance-covariance matrix that captured the estimated preference heterogeneity.This resulted in a sample of 10,000,000 preference parameters incorporating both statistical, or sample, uncertainty as well as preference heterogeneity.Expected mean values and 95 % confidence intervals associated with WTPs and changes in angler welfare resulting from changes in cobia regulations were calculated using the 10,000 Krinsky-Robb draws.The set of 10,000,000 parameter vector draws, which captured the full extent of variation in angler preferences, was used in characterizing WTP heterogeneity.Trip WTP was calculated by summing the partial utilities associated with trip attributes a, subtracting the ASC for the no trip alternative, and dividing the resulting value by the negative marginal utility associated with trip cost.This measure corresponded to the maximum amount that would be paid for fishing trip j.
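A minimal R sketch of the two-stage draw procedure and the trip WTP calculation described above is given below. The object and variable names (fit, mean_names, sd_names, and the attribute vector) are illustrative assumptions rather than the authors' code; only the use of MASS::mvrnorm() mirrors what is reported in the text, and independence of the random parameters is assumed for simplicity.

library(MASS)   # provides mvrnorm()

est <- coef(fit)   # fixed coefficients plus means and standard deviations of random parameters
V   <- vcov(fit)   # estimated variance-covariance matrix of the estimator

# Stage 1: 10,000 Krinsky-Robb draws capturing sampling (statistical) uncertainty
set.seed(1)
kr_draws <- mvrnorm(n = 10000, mu = est, Sigma = V)

# Stage 2: for a single Krinsky-Robb draw, take 1000 draws over the normal preference
# distribution to capture heterogeneity; applied to every row of kr_draws this yields
# the 10,000,000 parameter vectors described above
draw_heterogeneity <- function(theta, n = 1000) {
  mvrnorm(n = n,
          mu    = theta[mean_names],           # means of the random parameters (assumed names)
          Sigma = diag(theta[sd_names]^2))     # heterogeneity; independence assumed here
}

# Trip WTP: partial utilities of the trip attributes minus the no-trip ASC,
# divided by the negative marginal utility of trip cost
wtp_trip <- function(beta_attr, x_attr, asc_notrip, beta_cost) {
  (sum(beta_attr * x_attr) - asc_notrip) / (-beta_cost)
}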
WTP values were calculated for average 2017 trips targeting each species as well as for cobia trips under hypothetical regulatory changes.The Marine Recreational Information Program collects data on angler effort, catch, and harvest by species.These data were used to calculate average values for catch, harvest, and average weight of harvest associated with private boat trips in Virginia during 2017, where cobia, red drum, or summer flounder were the primary species targeted.As weight information is only collected for landed fish, average weight of harvest was used as a proxy for average weight of catch.Cobia legal harvest attribute values were adjusted for hypothetical regulatory scenarios by assuming that regulation-driven changes in legal harvest within our experimental design would be proportionately similar to expected changes in legal harvest for hypothetical average 2017 trips.Under status quo regulations, average legal harvest was 0.59 cobia/trip within choice scenarios presented in the survey.This value increased to 0.77 cobia/trip and 0.82 cobia/trip when bag limits were increased or the minimum size limit was decreased, respectively.Average legal harvest decreased to 0.50 cobia/trip when the minimum size limit was increased, however, and to zero cobia/trip under catch and release regulations.Hypothetical cobia regulatory scenarios were therefore assumed to increase legal harvest per trip by 31 % for a bag limit increase and 38 % for a minimum size limit decrease, but to decrease legal harvest by 15 % for a minimum size limit increase and 100 % in a catch and release only fishery.Since average legal harvest values indicated many cobia trips with zero harvest, the zero legal harvest dummy variable was adjusted to reflect the assumed proportion of zero harvest trips.Mean WTPs for marginal increases in cobia catch, average weight of catch, and legal harvest were also calculated.These values were constructed by dividing preference parameters for these attributes by the negative coefficient on trip cost.Trip attribute WTPs, which measured WTP for a one standard deviation increase in each attribute, were then re-scaled to their respective units by dividing by attribute standard deviations.WTP for the first cobia legally harvested was calculated by adding the WTP associated with non-zero legal harvest to this marginal value.Changes in angler welfare in response to changes in cobia regulations were assessed under four available target species scenarios: 1) cobia, red drum, summer flounder; 2) cobia, summer flounder; 3) cobia, red drum; and 4) cobia only.This was done to evaluate regulatory impacts across a broad suite of recreational fishery conditions that may affect substitution behavior.To simplify analyses, target species scenarios including red drum considered targeting of juvenile red drum only.In all scenarios, the set of available choice alternatives was restricted to taking an average quality trip targeting one of the available species or doing something other than saltwater fishing.The survey included questions asking respondents about average trip expenditures when targeting cobia as well as how these costs compared to those when targeting other species.Responses to these questions were used to derive reasonable approximations of the average trip costs used in the welfare calculations.All statistical modeling and data analyses were performed in the statistical software R.The mixed logit model was estimated using the "mlogit" package.The function "mvrnorm" contained in the "MASS" package was used in constructing multivariate normal random draws of parameter vectors.
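For readers unfamiliar with the "mlogit" interface, a sketch of how a panel mixed logit of this general form can be specified in R is shown below. The data object, variable names, formula, and number of simulation draws are illustrative assumptions and do not reproduce the authors' exact specification; the cobia regulatory interaction dummies are omitted for brevity.

library(mlogit)

# Hypothetical long-format data: one row per respondent x choice occasion x alternative,
# with a logical column 'choice' marking the selected alternative
cdat <- mlogit.data(survey_long, choice = "choice", shape = "long",
                    alt.var = "alt", chid.var = "scenario_id", id.var = "respondent_id")

ml_fit <- mlogit(
  choice ~ trip_cost + trip_a_asc + no_fish_asc +
           cobia + juv_red_drum + adult_red_drum + flounder +
           cobia_catch + cobia_weight + cobia_harvest + cobia_zero_harvest +
           other_catch + other_weight + other_harvest | 0,
  data   = cdat,
  rpar   = c(no_fish_asc = "n", cobia = "n", juv_red_drum = "n", adult_red_drum = "n",
             flounder = "n", cobia_catch = "n", cobia_weight = "n", cobia_harvest = "n",
             cobia_zero_harvest = "n", other_catch = "n", other_weight = "n",
             other_harvest = "n"),  # normally distributed random parameters
  R      = 500,                     # simulation draws per observation (assumed value)
  halton = NA,                      # use Halton sequences for the draws
  panel  = TRUE                     # repeated choice occasions per respondent
)
summary(ml_fit)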
Email and postcard invitations were distributed to 10,000 individuals.During the eight-week survey window, 2698 individuals visited the survey site and 2535 answered at least one question.From this sample, 1646 individuals indicated they had targeted cobia within the last five years and had been invited to participate in the survey through the invitation email or postcard.Responses from individuals who did not indicate targeting cobia or learning of the survey through the invitation email or postcard were removed from all subsequent analyses.Of these respondents, 90 % were residents of Virginia.The majority of non-resident respondents were from the neighboring states of Maryland and North Carolina.The average birth year of respondents was 1966 and approximately 60 % of individuals had completed an associate's, bachelor's, advanced, or professional degree.There was considerable variation in reported personal annual pre-tax income; however, 60 % of respondents indicated incomes of $75,000 or greater.Survey respondents reported taking 26 recreational fishing trips on average during the previous year and three recreational cobia trips during the 2017 season.Respondent demographics reported here were similar to those reported by Brinson and Wallmo, who note that respondents in their national saltwater angler survey on average fished for 25 days over the last year, were 53 years old, had completed an associate's degree or higher, and had household annual incomes greater than $60,000.Additionally, Jiorle reported that in 2017, 1882 anglers indicated taking 4,969 cobia trips.There were no significant differences across survey versions in any of the considered demographic or fishing-related variables.Respondents reported variable cobia avidity, with 15 % indicating they took zero cobia trips in 2017 while 10 % responded they took 10 or more.When asked how many cobia trips per month respondents would take under ideal weather conditions and 2017 regulations, 55 % selected 0–2 trips/month, 31 % chose 3–5 trips/month, and 13 % indicated they would take six or more cobia trips per month.The primary reason for recreationally targeting cobia selected by respondents was that cobia "provide a good fight/are fun to fish for".A majority of respondents also stated that they target cobia because they enjoy eating them.The primary mode indicated when targeting cobia recreationally during the 2017 season was private boat.A small number of individuals had, however, targeted cobia from shore or aboard a for-hire vessel at least once during the 2017 season.The primary fishing method reported when targeting cobia was chumming or bottom fishing.Sight fishing was also common and a small group of respondents indicated no primary method or switching between methods depending on conditions.No significant correlation existed between survey response day and the number of cobia trips taken in 2017 or average cobia trip expenditures, indicating early responding individuals did not appear to be more avid anglers.The median level of average trip expenditures for individuals on recreational trips targeting cobia was $140, with 80 % of respondents indicating average trip costs between $40 and $420.Boat fuel made up the largest share of trip costs, followed by fuel for a car or truck, food and drink from convenience stores, chum,
and live bait.Average trip expenditures differed across individuals depending on their stated primary fishing method, with individuals primarily sight fishing spending more on average, compared to those who primarily chum or bottom fish, fish from a pier or beach, or have no primary cobia fishing method.Individuals who were not residents of Virginia were found to spend more on cobia trips due to increased expenditures on lodging, fuel for a car or truck, and food and drink from restaurants.Most individuals who had targeted cobia owned a boat they had used for this purpose.Of those anglers with private vessels used to target cobia, however, many fewer had fishing towers installed on their boats.Approximately half of respondents indicated that fishing for cobia was about the same cost or less expensive when compared to other inshore or nearshore species they target.An approximately equal percentage indicated that targeting cobia was more expensive.In the subsequent analyses calculating changes in angler welfare arising from changes in cobia regulations, we considered two potential cost scenarios: 1) fishing trip expenditures are equivalent across target species and set to the median reported cobia trip costs of $140; and 2) targeting cobia is 25 % more expensive than targeting red drum or summer flounder, and thus costs of $140 and $112 were applied.Results from the former cost scenario are presented below while those of the latter are included in the Supplementary Material.Given differences in expenditures between resident and non-resident anglers, angler welfare estimates are best interpreted as representative of Virginia residents.Cobia trips were presented to respondents in choice scenarios at approximately twice the frequency of either summer flounder or red drum trips.Conditional on the number of times an option was presented and a choice made, trips targeting cobia were selected most frequently, followed by summer flounder, red drum, and targeting a saltwater species not described in either trip option.The no saltwater fishing trip alternative was selected least frequently.In total, 6214 choice scenario responses were analyzed using the mixed logit discrete choice model.Several variables included in the model were found to be important factors affecting decision-making, and comparison of the full model with an intercept-only model using a likelihood ratio test indicated inclusion of covariates significantly improved model fit.The average respondent preferred trips targeting cobia to those targeting summer flounder or red drum.All standard deviations associated with random parameters for species' dummy variables were statistically significant, indicating heterogeneous targeting preferences.Species targeting preferences were also found to be influenced by cobia regulations.Aside from a bag limit increase, which had no significant effect on cobia targeting preferences, all regulatory changes decreased the utility associated with targeting cobia.It is worth reiterating that these effects were independent of regulatory impacts on legal harvest.For example, the probability that an angler would select a trip targeting cobia, over one targeting red drum or summer flounder, under average catch conditions, a trip cost of $140, and zero legal harvest, ranged from 0.20 to 0.56.Several other variables in the model were also found to be statistically significant.For the average respondent, increases in cobia or non-cobia catch, average weight of catch, and legal harvest were positively related to trip utility.In
general, anglers derived more utility from increases in cobia trip attributes as compared to attributes for trips targeting non-cobia species.Significant heterogeneity was found with respect to preferences for all cobia trip attributes.Preferences for cobia and non-cobia catch were especially variable, suggesting substantial heterogeneity within the sampled population for increases in trip catch that could not be legally retained.Comparison across trip attribute coefficients, which were estimated using standardized covariates, indicated that changes in average weight of catch elicited the largest changes in trip utility.Preferences for legal harvest of cobia were found to be non-linear, with the first fish harvested producing substantially more trip utility as compared to the second.As expected, trip cost was negative and highly significant.The positive and significant parameter on "TRIP A" indicated that individuals tended to choose this option more frequently, irrespective of trip attributes.Including this parameter in the model ensures that estimates of other parameters are not confounded by factors related to the presentation of choice alternatives.Several additional specifications of the choice model were explored to better understand angler preferences.Models allowing for shifts in trip cost preferences for non-resident respondents and individuals responding early to the survey indicated angler WTP did not depend on these factors.Additionally, non-resident and early responding anglers were not found to have stronger cobia targeting preferences compared to residents or those responding to the survey later on.Removing legal harvest from the mixed logit model led to increases in magnitude and significance of cobia and non-cobia catch parameters, as these variables now captured the marginal value of increases in trip catch irrespective of changes in legal harvest.Under this specification, restrictive regulations were also found to cause greater disutility due to corresponding reductions in legal harvest.Models removing cobia regulatory terms and including total weight of catch and harvest produced results consistent with findings presented in Table 3, but had slightly weaker fits to the data.Predicted choice probabilities estimated using a multinomial probit model, which relaxed the restriction of error independence, were found to be strongly correlated with predictions from the mixed logit model.Across all model specifications evaluated, preference parameters appeared robust in sign, significance, and relative magnitude.Mean WTPs under status quo 2017 regulations for average trips targeting juvenile red drum, summer flounder, and cobia were $408.94, $517.08, and $576.85, respectively.Considering the full possible range of targeting preferences within the sampled population led to wide distributions of WTP values.Changes in cobia trip attributes and regulations were found to significantly affect trip WTP.Mean WTPs for an additional cobia caught and a one-pound increase in average weight of catch were $24.89 and $4.16, respectively.WTP for the first cobia harvested was $158.84, whereas subsequent legal harvest was valued at $34.89/fish.Mean WTP for an average trip targeting cobia increased to $590.31 under a bag limit increase but decreased to $551.63 under a minimum size limit decrease.Restrictive regulatory changes further reduced cobia trip WTP to $519.68 under a minimum size limit increase and $414.78 under a catch and release only recreational cobia fishery.The large decrease in
trip WTP under the catch and release only regulatory scenario was due to the direct influence of regulatory environment on trip utility as well as the reduction in legal harvest.Under a catch and release only recreational cobia fishery, 11.08 % of the full WTP distribution, which incorporates both sample uncertainty and preference heterogeneity, was less than zero.This suggests that the hypothetical regulatory change could make some portion of the population averse to targeting cobia at even nominal cost.Restrictive regulations were found to lead to statistically significant decreases in angler welfare.Across four available target species scenarios, reductions in angler welfare ranged from losses of $29.77 to $56.58 per trip for a minimum size limit increase and losses of $62.20 to $158.68 per trip for catch and release only regulations.Losses in angler welfare were found to increase as fewer or less desirable alternative target species options were available due to reduced target species substitution possibilities.In scenarios where summer flounder or red drum were available, target species substitution was found to be the dominant behavioral response to changes in cobia regulations.When cobia was considered a more expensive targeting option, the welfare effects of regulatory changes were slightly reduced as trips targeting alternative species now produced slightly higher net benefits.We developed and implemented a survey of recreational cobia anglers to evaluate preferences and decision-making under a variety of regulatory conditions.Responses to recreational fishing trip choice scenarios were modeled using a mixed logit specification, estimating heterogeneous angler preferences associated with targeting red drum, summer flounder, and cobia, as well as cobia and non-cobia trip attributes.Model findings indicated that, of the three species, cobia was most preferred under status quo regulations, followed by summer flounder, and red drum.Additionally, cobia trip attributes were found to be more important to anglers when compared with trip attributes for non-cobia species.Finally, angler behavior and resulting welfare impacts were shown to depend on both cobia regulations and alternative target fishery conditions.In this analysis, WTP was estimated for recreational trips targeting juvenile red drum, summer flounder, and cobia under a variety of regulatory conditions.Our trip WTPs were calculated using average 2017 values for catch, harvest, and average weight of harvest as reported by MRIP.Lew and Larson estimated WTP for resident and nonresident saltwater sportfishing trips in Alaska targeting Pacific halibut Hippoglossus stenolepis, Chinook salmon Oncorhynchus tshawytscha, and coho salmon O. 
kisutch.The authors found that for Alaska residents fishing from private boats, trip WTP ranged from $246 to $444 for single species trips and up to $718 for multispecies trips.Despite obvious differences in target species and the angling population surveyed, trip WTP estimates provided here are generally similar to those presented in Lew and Larson.Lew and Larson did however observe significant differences in trip WTP for resident and non-resident anglers.Non-resident anglers in our sample were found to have higher travel expenses associated with cobia trips but did not appear to exhibit different WTP or cobia targeting preferences.The majority of non-resident anglers responding to this survey were from close neighboring states, owned boats used to target cobia, and were similar to resident anglers in terms of primary fishing mode, motivations, and commonly targeted species.Nevertheless, future research should carefully consider differences in trip expenditures, opportunity costs of time, and fishing preferences in relation to travel distance and site access opportunities, which generally differ between resident and non-resident anglers.Cobia trip attribute WTPs were also estimated in this study.Johnston et al. performed a meta-analysis of 48 studies reporting WTP estimates for increased recreational catch of both fresh- and salt-water species.Our WTP estimate for an additional cobia caught is close to the mean reported in Johnston et al. and is well within a conservative estimate of their range.Note, however, that the coefficient on catch from the model presented in Table 3 corresponds to an increase in catch without an increase in legal harvest.Anglers were found to have a higher WTP for unconditional increases in catch.This study found that WTP for an additional cobia caught was ∼16 % of the WTP estimated for the first cobia harvested, yet ∼71 % of the WTP estimated for subsequent harvest.Goldsmith et al. 
report WTP for increases in catch of Atlantic bluefin tuna to be worth 35–74 % of WTP for increases in harvest, speculating that this may be because the species is both a highly desirable food fish as well as a valuable gamefish.In this study, 90 % of cobia anglers indicated targeting cobia because they “provide a good fight/are fun to fish for”.A large WTP for the first cobia harvested suggests that consumptive aspects of cobia fishing may be the primary source of derived value.However, a sharp decline in WTP for subsequent harvest indicates diminishing marginal returns, as has been found in other studies, and further suggests non-consumptive aspects may be relatively more important past this first-fish threshold.Several prior studies have examined angler preferences and values with respect to fisheries management and recreational regulations using stated preference approaches similar to those applied here.In these studies, regulations typically have been included as attributes in choice alternatives or anglers have been asked to rank or vote on management options directly.The experimental design used here varied regulations across but not within surveys, leading to species targeting preferences that were conditional on regulatory environment.Preferences for cobia regulatory options were estimated by evaluating shifts in the distribution of cobia targeting preferences under different regulatory treatments.This approach was used because it presents individuals choice scenarios that closely resemble decision-making occasions most anglers have some degree of familiarity with.It was also useful in disentangling preferences for regulations from their resulting impacts on legal harvest.However, it should be noted that respondent heterogeneity across regulatory treatments may confound estimates of regulatory preferences, and it is not recommended that this strategy be applied in surveys distributed to small and/or highly heterogenous populations.We found no significant differences in average responses across different regulatory treatments for several demographic and fishing-related variables, suggesting observed shifts in cobia targeting preferences were most likely the result of changes in cobia regulations.While other studies including regulatory attributes within choice experiments have generally found anglers prefer less restrictive regulations, here we found that anglers primarily preferred status quo management.As our model identified regulatory preferences as distinct from changes in legal harvest, these findings indicate that anglers value both consumptive aspects of a recreational trip as well as the regulatory context of that consumption.This was apparent when evaluating cobia targeting probabilities associated with identical zero-harvest trips occurring under different regulatory regimes, where changes in the minimum size limit or the introduction of catch and release only regulations reduced the probability of targeting cobia.Regulatory preferences found here may be related to concerns regarding future harvest opportunities or stock conservation.It is also possible, and perhaps more likely, that these preferences are the result of aversion towards change from a status quo baseline.Preference for the status quo is in part due to loss aversion, a behavior originally acknowledged in prospect theory.Loss aversion could further explain the asymmetric welfare effects found for restrictive versus liberalizing regulations, where the former were seen to significantly reduce angler welfare 
while the latter had no significant effects.Changes in fishing or regulatory conditions that affect the utility derived from recreational angling may cause individuals to substitute between different recreational activities or among alternative target species.Researchers investigating recreation substitution have frequently sought to identify alternatives yielding similar levels of benefits when compared to those derived from recreational fishing or targeting a particular species.The utility framework used here, conversely, considered substitution a behavior resulting from shifts in expected benefits among recreational alternatives due to changes in fishery or regulatory conditions.Analysis of the welfare effects resulting from changes in cobia regulations revealed that anglers were predicted to substitute alternative target species in response to shifts in cobia regulations.This behavior impacted potential losses resulting from restrictive regulations, which were larger when fewer or less desirable alternative targets were available.Substitution behavior might therefore be considered a loss mitigation strategy, and increased quality or availability of substitution possibilities would thus reduce the welfare effects associated with changes in an individual fishery.As a result of substitution behavior between alternative target species in response to regulatory change, cobia management decisions have the potential to undermine management of other species.Similarly, targeting pressure on cobia might also be expected to be influenced by conditions in other, substitute fisheries.The popularity of cobia as a recreational target in Virginia has increased substantially over the last two decades, with average annual directed private or rental boat trips from 2009 through 2018 nearly double the average level from a decade earlier.Concurrent with this increase has been a 47 % reduction in recreational catch of summer flounder and a decline in directed effort of over 50 % during the last five years.While it is unknown whether or not recent increases in cobia targeting are related to effort reductions in summer flounder, possibly due to decreases in summer flounder size and abundance and increasingly restrictive regulations, the analysis presented here suggests this explanation is at least plausible.Additional research is needed to identify both historical and predicted target species substitution patterns, which could ultimately inform ecosystem-based fisheries management policies.Anglers were found to strongly prefer recreational fishing to non-fishing alternatives, selecting “Do not go saltwater fishing” on only 2.41 % of choice scenarios.Though anglers responded to marginal changes in trip costs as expected—strongly preferring lower cost trips—the range of trip costs considered in choice alternatives was relatively low in comparison to trip costs reported by anglers, and could have been seen as unrealistic to anglers who typically spend several hundred dollars per cobia trip.A higher or broader range of trip costs used in hypothetical fishing trip choice scenarios would likely have resulted in more frequent selection of the no fishing alternative.The reference alternative, “Target a different saltwater species”, was meanwhile chosen 10.44 % of the time.“Target a different saltwater species” included no associated trip attributes and was therefore open to respondent interpretation.It is possible that respondents viewed this alternative as a trip targeting cobia, red drum, or summer flounder when a 
particular choice scenario presented trips that did not include these species.This would introduce correlation into unobserved components of alternative specific utility, and is thus a potential limitation of our survey design.Choice probabilities estimated using a multinomial probit model, which relaxes the restriction of error independence, were nevertheless found to generally agree with estimates from the mixed logit model.Estimates of changes in angler welfare were provided at the fishing trip level for hypothetical trips of average quality.Determining aggregate, fishery-wide changes in welfare would require consideration of anglers' recreational demand at the seasonal or annual level.Counterfactual cobia trips under hypothetical regulations were constructed assuming catch and average weight of catch would not change, and also that changes in legal harvest would be proportionately similar to those included in the experimental design of our survey.Detailed modeling of counterfactual fishery outcomes was beyond the scope of work considered here, but is a promising avenue for future research.
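The per-trip welfare changes reported above are consistent with the standard log-sum (expected maximum utility) measure for logit-type models. Assuming this is the form underlying the reported estimates, which is not stated explicitly in the text, the change in consumer surplus for angler n when cobia regulations move conditions from state 0 to state 1 over a restricted choice set C is

\Delta CS_n = \frac{1}{-\beta_{cost}} \left[ \ln \sum_{j \in C} \exp\left(V_{nj}^{1}\right) - \ln \sum_{j \in C} \exp\left(V_{nj}^{0}\right) \right],

where V_{nj} denotes the systematic utility of alternative j; under the mixed logit this quantity would be averaged over the simulated preference draws described in the methods.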
Fisheries economists typically assume recreational anglers make decisions that maximize individual angler utility, which may depend on fishery and regulatory conditions. Under this framework, changes in regulations can lead to target species substitution by anglers in response to shifts in expectations of trip utility. A stated preference survey was developed and distributed to recreational cobia (Rachycentron canadum) anglers in Virginia to explore the effects of regulatory change on angler decision-making, species targeting, and resulting economic outcomes. The survey included a series of hypothetical choice scenarios, where respondents were asked to select their most preferred alternative after being presented with different fishing trips targeting cobia, red drum (Sciaenops ocellatus), or summer flounder (Paralichthys dentatus). Seven regulatory treatments of the survey were distributed, providing anglers a variety of species targeting tradeoffs. A mixed logit model was used to estimate angler preferences associated with hypothetical trip attributes and regulatory environment. Changes in angler welfare resulting from changes in cobia regulations were then assessed. Anglers were found to prefer targeting cobia to red drum or summer flounder under status quo management. Increases in catch, average weight of catch, and legal harvest of cobia were also found to provide anglers greater improvements in trip utility compared to increases in these attributes for trips targeting red drum or summer flounder. The economic effects of regulatory change were asymmetric because restrictive regulations were found to reduce angler welfare whereas liberalizing regulations had no significant effects. Increased availability of alternative target species was found to dampen the negative welfare effects of restrictive cobia regulations due to predicted target species substitution by anglers.
293
Carbon-cryogel hierarchical composites as effective and scalable filters for removal of trace organic pollutants from water
A wide range of contaminants are continuously released into waterways as a result of anthropogenic activity, many of which are not effectively removed by current waste water treatment processes.Problem contaminants commonly comprise agrochemicals, pharmaceuticals, hormones, plasticisers, flame retardants, personal care products and food additives.These organic compounds are distributed across different environmental systems, and the most polar are bioavailable and may persist in the aqueous phase.These contaminants are now ubiquitous, and even at very low environmental concentrations can potentially cause ecological changes and contaminate water supplies with unknown consequences for human health.There is therefore a pressing need to purify water to safe levels.A range of technologies have been proposed to remove trace organic contaminants from waste and other waters, including chemical oxidation and improved adsorption methods, often using activated carbons.Indeed, in advanced waste water treatment processes, Granulated Activated Carbon is used as a relatively low cost bulk adsorbent and is recognised as an industry standard.However, the adsorptive capacity of GAC diminishes over time, and it shows relatively poor adsorptive uptake for a range of more polar organic contaminants such as estrogens, pharmaceuticals and some pesticides.This has led to a number of studies examining the potential of "designer" activated carbons, where control over the pore architecture is attempted to target particular dissolved species, to improve polar contaminant removal.In addition, use of hierarchical porous materials has been suggested to enhance mass transfer properties and provide more effective adsorption of trace contaminants from water.The present work explores the design of a flexible and scalable hierarchical composite filter composed of phenolic resin-derived carbon microbeads with optimised micro-, meso- and macroporous structure embedded in a monolithic, flow permeable poly(vinyl alcohol) cryogel for the removal of small, polar pesticides.Atrazine and malathion (log Kow 2.36–2.89; WHO, 2004), which enter waterways via runoff from agricultural land and have been frequently detected in surface or ground water, are used as model compounds to assess and optimise the water filters.Application and scalability are also discussed.The sources and purities of the chemicals used are provided in the Supporting Information S1.GAC was kindly provided by a UK water company.Statistical treatment: a Student's t-test, with P < 0.05, was used to assess the adsorption of atrazine by PVA scaffolds without carbon beads; to compare the content of carbon surface functional groups in the different carbons synthesised; and to compare the viability of cells when treated with fractions of water filtrate.Spherical activated carbon microbeads were obtained from Novolac phenol-formaldehyde resin with an average molecular weight of 700–800 Da following the technology patented by Tennison et al.Briefly, micro-, meso- and macroporous phenolic resin-derived carbon beads were prepared by pyrolysis and physical activation in carbon dioxide.Bead porosity was tailored by dissolving the Novolac resin and the cross-linking agent hexamethylenetetramine in varying quantities of ethylene glycol (smaller and larger amounts yielding the meso- and macroporous resins, respectively).The resin suspension was then heat-cured in mineral oil.After removing the ethylene glycol by hot water washing or vacuum drying, the resin beads were ready for processing into carbons.The amount of carbon loss
during activation with CO2 was 34% and 55% in the carbon beads denoted in this work as TE3 and TE7, respectively, and the particle size studied ranged from 40 to 250 μm.The structural features of the carbon beads are summarised in Supporting information S2.Monolithic macroporous polymer "scaffolds" produced by the authors, also referred to as cryogels, were prepared by chemical cross-linking of polyvinyl alcohol at −12 °C and used to support the carbon beads.PVA was dissolved at a concentration of 5% w/v in boiling water and cooled to room temperature.Hydrochloric acid was mixed with the PVA solution and kept in an ice bath for 30 min.Glutaraldehyde solution and carbon beads were added to the cooled PVA solution with stirring.The mixture was poured into ∅3 × 100 mm glass tubes.The sealed tubes were placed into a cryostat at −12 °C overnight.The cryogels were rinsed with water, then placed in sodium cyanoborohydride solution overnight to reduce the residual aldehyde groups, and finally rinsed with 4 bed volumes of water.The cryogels containing carbon beads were ∅3 × 85 mm in size when non-hydrated and were used in this configuration for flow through studies.The stress-strain curve obtained for the composite immediately after a filtration test is shown in Supporting Information S4.Adsorption tests were carried out in batch mode under equilibrium conditions using an orbital shaker at 90 rpm, 25 °C for 48 h or magnetic stirring.To assess breakthrough curves in the high capacity carbon-cryogel prototypes using realistic volumes of water at laboratory scale, it was necessary to filter highly contaminated water.Hence, pesticide concentrations were high relative to environmental concentrations.Ethanol was added to stabilise the contaminants in aqueous solution and prevent their precipitation, which would cause overestimation of the removal capacity of the developed prototypes.Batch adsorption tests were also carried out at more environmentally relevant contamination levels of 2 μg atrazine/L.For determination of the diffusion coefficient of atrazine and the kinetics of its uptake, carbon beads, or carbon-cryogel composites cut into cubes of 2 mm sides containing an equivalent mass of carbon beads, were incubated with spiked water.Details on the quantification of adsorption capacity and pore diffusion kinetics are given in Supporting Information S3.Contaminated water was filtered continuously through the carbon-cryogel composite or a column packed with only carbon beads until saturation, using a peristaltic pump equipped with silicone tubing.Fractions of the filtrate were collected for analysis.Selectivity for removal of atrazine and malathion was studied with: ultrapure water containing ethanol; a "model" water with high total carbon prepared by dissolving potassium hydrogen phthalate in aqueous solution and ethanol; and water from the Ouse estuary.The composition of the estuarine water before spiking was 19.2 mg/L total carbon; 12.7 mg/L inorganic carbon; 6.5 mg/L total organic carbon; Na+ 9320 mg/L; K+ 514 mg/L, Mg2+ 1261 mg/L and Ca2+ 339 mg/L as major ions.The pH was 8.1.The methodology used for the water analysis is given in the Supporting information S5.To assess adsorption at low concentrations, breakthrough curves were also determined using 2 μg atrazine/L spiked ultrapure water.Atrazine and malathion were quantified using d5-atrazine as an internal standard.Quantification was by LC-MS.Chromatographic and MS operating conditions are provided in the Supporting Information S5.The
dimensions of atrazine and the minimum pore volume in a 3D carbonaceous structure for its adsorption were modelled using the Build interface in Spartan ’10 software.Geometry optimisation was achieved using the semi-empirical quantum chemistry method PM6 from an initial structure calculated using molecular mechanics.The toxicity of ultrapure water filtered through the carbon-cryogel composites was tested on heterogeneous human epithelial colorectal adenocarcinoma cells.Cell viability was assessed in vitro using the lactate dehydrogenase and MTS assays.The cell culture methods and materials used are described in Supporting Information S6.A graphite-like lattice of activated carbon contains a cloud of delocalised electron density that favours the adsorption of organic compounds, especially if they are aromatic, via π−π interaction.Pores of appropriate size can add additional interaction between the pollutant and the carbon, and can significantly enhance removal, which is particularly poor for polar and small organic molecules.With atrazine as an example of a ubiquitous micropollutant in surface water, we have modelled and synthesised porous carbon sorbents for its optimal removal from water.The dimensions of the optimal pores in the carbonaceous material that would present the highest interaction with atrazine and produce a state of minimal energy were derived using molecular mechanics and semi-empirical methods and are shown in Fig. 1.Large micropores and mesopores in the low size range are predicted to be the preferential sites for the adsorption of the study contaminant.For standard GAC used in the waste water industry, the maximum adsorptive capacity for atrazine was 52.2 ± 6.8 mg/g.The surface area of GAC determined from the nitrogen adsorption-desorption isotherm was 552 m2/g.Given the dimensions of atrazine estimated in Fig. 1, approximately 18% of the surface area of GAC was covered with the herbicide at equilibrium conditions.The nitrogen adsorption-desorption isotherm also revealed that the GAC was mainly a microporous sorbent, with pores of 1.1–2.0 nm width, and hence appropriate for the removal of the herbicide according to the model shown in Fig. 
1.This model assumed that a size-exclusion effect found for bulky compounds would not occur.However, solvation could increase the effective solute dimensions and hinder access to micropores.Aside from well-developed microporosity, the GAC has very low mesoporosity, in the range of 2.1–3.8 nm, and macropores were not detected.The absence of transport pores hinders the accessibility of the micropores, which can explain in part the low adsorptive capacity observed under high concentrations of atrazine.To maximise pesticide removal, porous carbon microbeads with larger surface area and higher meso- and macropore volume compared to GAC were prepared.The porosity was primarily controlled by the amount of the pore former, ethylene glycol, which is entrapped in the liquid form within the phenolic resin when it solidifies in contact with hot mineral oil.Washing out the entrapped ethylene glycol creates the porous structure in the polymeric precursor of activated carbon.In contrast, in the absence of the pore former, the oligomer chains cross-link, increase in weight and precipitate.Secondly, the activation stage, which was carried out at 900 °C in a CO2 atmosphere, caused an increase in the number of micropores and the specific surface area and a slight widening of the meso- and macropores, depending on the activation degree, starting from 30% burn-off.The phenolic resin-derived carbon obtained by the described process has an SBET of 534 m2/g when non-activated, for a particle size of 125–250 μm.This increases to 1057 m2/g at 30% activation; 1432 m2/g at 47% activation; 1703 m2/g at 52% activation and 2018 m2/g at 65% activation.Phenolic carbons freshly activated in carbon dioxide had a point of zero charge of 10, indicating that at typical environmental water pH the surface of the carbon would be positively charged.Analysis of the surface chemistry of the carbon beads was carried out using Boehm titrations.The range of phenolic carbon beads prepared did not have significant differences in their surface chemistry.Total carboxylic functions were 0.58 mmol/g as determined by back titration after reaction with Na2CO3.Total acidic functionalities were 1.16 mmol/g as determined by back titration after reaction with NaOH.Hence, the phenolic OH content was 0.58 mmol/g.From the range of phenolic carbon prototypes synthesised, those with clear differences in their mesoporosity were tested for the adsorption of atrazine in a column flow through experiment.Phenolic carbon beads, denoted TE3 and TE7, had abundant microporosity, low and high meso- and macroporosity, and an SBET of 1057 and 1703 m2/g, respectively.The phenolic carbon beads with the highest active surface area and pore volume gave the greatest uptake of atrazine: 419 mg atrazine/g upon saturation.We previously showed that TE3 and TE7 effectively removed the emerging contaminant metaldehyde.To examine the effects of SBET on the uptake of atrazine, TE7 carbons were synthesised with smaller particle size, which had a 15% larger SBET: 1942 m2/g.In this case, the maximum adsorptive capacity for atrazine increased to 641 mg atrazine/g carbon.This maximum adsorptive capacity would give a coverage of 1238 m2/g with atrazine according to the dimensions of the herbicide shown in Fig. 1.This represents 64% of the surface area of the carbon covered, and indicates that there are specific sites on the carbon where the pesticide adsorbs.This contrasts with the 18% surface coverage found for GAC.
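As a rough consistency check on the surface-coverage figures quoted above, the covered area can be estimated from the adsorption capacity, the molar mass of atrazine (≈215.7 g/mol) and an assumed molecular footprint. A footprint of ≈0.69 nm² per molecule, inferred here because Fig. 1 is not reproduced, recovers both the ~18% coverage for GAC and the ~64% coverage for the small-particle TE7 beads:

# Fractional surface coverage from adsorption capacity (back-of-envelope check in R)
coverage <- function(q_mg_per_g, sbet_m2_per_g,
                     molar_mass = 215.7,       # g/mol, atrazine
                     footprint  = 0.69e-18) {  # m2 per molecule, assumed from the Fig. 1 dimensions
  n_molecules <- (q_mg_per_g / 1000) / molar_mass * 6.022e23
  (n_molecules * footprint) / sbet_m2_per_g
}

coverage(52.2, 552)    # GAC: ~0.18
coverage(641, 1942)    # small-particle TE7: ~0.64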
The TE7 carbons were found to be optimal: they have higher porosity and smaller particle size than GAC, both characteristics considered important for the removal of atrazine and potentially for micropollutants of similar size.High adsorptive capacities were found.Fig. 2 shows the structural properties of the carbons and their adsorption capacity under flow-through conditions.The pore size distribution diagrams show that while the size of the micropores is similar in all the carbons, the size range of meso- and macropores increases from TE3 to TE7 mainly due to the larger amount of the pore former used.Although the prepared carbon beads were highly efficient for atrazine removal compared to conventional GAC, their relatively small particle size makes their use in conventional large scale water treatment problematic.In addition, use of small packed beads may significantly increase back pressures in column filtration mode.To overcome this, we examine here the effects of incorporating the beads into a macroporous polymer scaffold, specifically a PVA-derived cryogel.This scaffold allows the preparation of elastic filters of various geometries and facilitates their repeated use for water filtration.The large macropores of the polymer gel matrix, approximately 100–200 μm in diameter, allow flow through with low flow resistance, as well as keeping the active carbon surface accessible for adsorbing contaminants from water.Under the filtration conditions used, a 7% reduction of the flow rate was observed when using the composite.In contrast, no reduction was observed when the filtration was carried out through a microbead-packed column with the same dimensions as the composite.Fig.
3 shows an SEM image of the composite, and illustrates the spherical carbon beads held by a thin PVA film and the macropores in the gel.Hence, the filter is a hierarchical material; it contains carbon beads with a range of macro-, meso- and micropores that play an important role in contaminant removal within a macroporous scaffold.The effect of coating part of the carbon surface with a polymer film on the adsorption kinetics was investigated by comparing the adsorption of atrazine on "free" carbon beads with that on the carbon-cryogel composite.The adsorbent and the atrazine solution were put in contact using two approaches: manual shaking for 10 s, with samples then left to incubate at room temperature without further agitation; and under magnetic stirring.While the adsorption was faster in the free carbon beads in the absence of agitation, the adsorption capacity of the carbon beads embedded in PVA was not noticeably suppressed by the PVA film since the final concentration of atrazine in solution was approximately the same in both free carbon and carbon-filled PVA systems.These static conditions were assayed to observe the maximum difference in diffusion between the "free" beads and the PVA-embedded beads.When the atrazine diffusion experiment was carried out under magnetic stirring, conditions which did not modify the structure of the particles, the adsorption was approximately twice as fast compared to the non-agitated samples.There was no significant adsorption of atrazine by PVA scaffolds without carbon beads.The Young's modulus of elasticity of the composites studied was 13.1 ± 0.8 MPa, and the loading achieved at 75% compression was 19.6 ± 2.1 N, conditions that did not break the monolith.Fig. S4, Supporting Information, displays the stress-strain curve of the composite, showing elastic behaviour under compression at moisture conditions representative of its state during filtration.Under the pressure exerted at 39 h−1 flow in the flow-through experiments, the carbon-cryogel composites were compressed by up to 23%.In contrast, no compression was observed in a column packed with beads only.The purification performance of the cryogel-carbon composites was tested at 39 h−1 for the removal of atrazine from spiked waters, including the estuarine water (the composition of which before spiking is described in Materials and methods).The 39 h−1 flow rate used was selected to exceed flows typically experienced in water treatment plants, to provide a more realistic flow-through scenario.Results are shown in Fig. 5 and Table 1.Testing the developed filter materials with water containing a high content of organic matter, and also inorganic substances in the case of estuarine water, mimics the effect of possible competing species at high concentration on atrazine removal.Tests with spiked water had TOC values an order of magnitude greater than typical surface water to determine filter performance under extreme conditions.Fig.
5 shows that a total water filtrate of >200 bed volumes contained concentrations of atrazine <0.1 μg/L compared to the inlet water containing 32 mg/L of atrazine.This was observed in the estuarine water sample and the water sample spiked at TOC levels an order of magnitude greater than typical surface water.The purification achieved when using concentrations of atrazine a million times greater than some environmental levels reported, or in the same order of magnitude, illustrates the high capacity of the composite.For ultrapure water contaminated at 2 μg/L, 27,400 bed volumes were filtered before the concentration of atrazine exceeded 0.1 μg/L.When these levels of atrazine were tested in batch mode, concentrations were below the limit of detection at equilibrium, hence the direct quantification of adsorption capacity was not possible at these concentrations.Notably, the purification of a more complex matrix than ultrapure water, with high concentrations of soluble organic matter or high salinity, caused a relatively small drop in adsorption capacity, i.e. the amount of filtered water with safe levels of pesticide decreased by less than 30%.This suggests that the sites adsorbing atrazine may have some size exclusion selectivity for the adsorption of small molecules such as atrazine, as the sites in the phenolic carbon retain their capacity to remove atrazine in the presence of high concentrations of potentially competing substances.This highlights their potential capacity for contaminant removal from other environmental, industrial or biological media.The slight decrease in adsorptive capacity observed when filtering estuarine water could be due to the blockage of transport pores under these high ionic strength conditions.Removal of malathion, another polar pesticide with chemical structure different to atrazine, was also tested.In this case ultrapure water containing 16 mg malathion/L was filtered through the composite filter.The adsorptive capacity reached at the last safe filtrate fraction was 244 mg malathion/g carbon and 591 mg malathion/g carbon at the saturation of the carbon-cryogel composites, at 745 and 2533 bed volumes, respectively.Supporting information Fig. 
S8 shows the breakthrough curve for the purification of malathion.This result indicates that the high purification capacity of the developed material is not specific to atrazine but also may apply to other polar pollutants.Studies of the effect of carbon-cryogel filtered water on the viability of Caco-2 cells, which are the first type of cells that would be exposed to filtered water when drinking, indicated a significant reduction in cell viability associated with the first 10 mL aliquot filtered through the composite.No significant reduction in viability was observed however with subsequent aliquots.While the MTS assay applied has been reliably used to assess cell viability and toxicity, the assay only assesses metabolic activities of mitochondria.As such, the reduction in the conversion of MTS into formazan may not necessarily mean cell death.An LDH assay was therefore carried out to complement the MTS assay results.LDH is a cytoplasmic enzyme found in all cells and it can only be found extracellularly, such as in culture medium, in the event of cell membrane damage and/or cell death, where the amount of LDH released is proportional to the amount of dead cells.In this work, LDH assay results show no significant toxicity associated with water filtrate aliquots across all volumes, including the first 10 mL.This suggests that the reduction in cell viability seen in the MTS assay is not due to cell death but rather to a compromised mitochondrial activity.Crucially however, this problem of reduced metabolic activity can be addressed by an initial pre-wash of the PVA-carbon membrane/column with a volume of water that is 5 times the bed volume prior to use.Previous work has indicated that the carbon beads used in the composite filter do not produce a cytotoxic response against a V79 cell line.The developed composites have shown high purification capacity at lab scale at liquid hourly space velocities which are greater than typical flow rates applied in water treatment plants.This makes them potentially useful both for low flow rate applications and for larger scale applications with higher flow rates and larger water volumes.Recent work by the authors shows that cryogel-based composites of the type described here can be produced in volumes of 400 mL or greater in a monolithic form with uniform pore size and carbon particle distribution, indicating the potential scalability of the filters.The use of a flexible polymer scaffold produced via cryogelation also means that the composite filters can be produced in a variety of shapes: sheets for reactive barriers, discs, beads or monoliths, or within robust plastic carriers for more aggressive physical settings.This provides significant flexibility in terms of device configuration.Effective filters with superior adsorptive capacity for the removal of polar micropollutants, based on PVA macroporous cryogels with embedded phenolic carbon microbeads with controllable porosity, have been developed and tested for the treatment of contaminated water with high TOC levels.Embedding the carbon beads in a polymer support retains the adsorptive capacity of the carbons while improving the pore diffusion.The structure of the phenolic carbons, their micro- and mesopores, high SBET and small particle size were shown to play an important role in the high adsorptive capacity of the filter.Adsorptive capacities of up to 641 mg atrazine/g carbon and 591 mg malathion/g carbon were obtained when filtering water containing 32 mg atrazine/L and 16 mg malathion/L in flow 
through experiments at 39 h−1.The cytotoxicity of water passed through the filter at the same flow rate was tested with human epithelial cells, and no toxicity was found from 17 to 1667 bed volumes.Prototypes developed at the lab scale are a promising water purification technology with superior adsorptive capacity.
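For reference, dynamic adsorptive capacities of the kind quoted above are typically obtained from breakthrough data as the mass of pollutant retained per unit mass of carbon; assuming near-complete removal up to the volume considered (the authors' exact procedure is given in Supporting Information S3), this is approximately

q \approx \frac{1}{m_{carbon}} \int_0^{V} \left( C_0 - C_{out} \right) \mathrm{d}V \approx \frac{C_0 V}{m_{carbon}},

where C_0 is the inlet concentration, C_out the outlet concentration, V the volume filtered and m_carbon the mass of carbon beads in the composite.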
Effective technologies are required to remove organic micropollutants from large fluid volumes to overcome present and future challenges in water and effluent treatment. A novel hierarchical composite filter material for rapid and effective removal of polar organic contaminants from water was developed. The composite is fabricated from phenolic resin-derived carbon microbeads with controllable porous structure and specific surface area embedded in a monolithic, flow permeable, poly(vinyl alcohol) cryogel. The bead-embedded monolithic composite filter retains the bulk of the high adsorptive capacity of the carbon microbeads while improving pore diffusion rates of organic pollutants. Water spiked with organic contaminants, both at environmentally relevant concentrations and at high levels of contamination, was used to determine the purification limits of the filter. Flow through tests using water spiked with the pesticides atrazine (32 mg/L) and malathion (16 mg/L) indicated maximum adsorptive capacities of 641 and 591 mg pollutant/g carbon, respectively. Over 400 bed volumes of water contaminated with 32 mg atrazine/L, and over 27,400 bed volumes of water contaminated with 2 μg atrazine/L, were treated before pesticide guideline values of 0.1 μg/L were exceeded. High adsorptive capacity was maintained when using water with high total organic carbon (TOC) levels and high salinity. The toxicity of water filtrates was tested in vitro with human epithelial cells with no evidence of cytotoxicity after initial washing.
294
Cost-optimal electricity systems with increasing renewable energy penetration for islands across the globe
Sustainable energy technologies are proliferating at a steady rate, as society embarks on the colossal, yet imperative, process of undertaking a paradigm shift from default dependence on fossil fuels to new systems built on renewable resources. Geopolitical tensions arising from dependence on energy imports, the climate goals and national contributions agreed upon at COP21 in Paris, increasing concern over the environmental impacts associated with fossil fuel extraction and use, and the opportunity for individuals to act as energy producers are all factors driving this growth. Furthermore, as deployment rises and manufacturing costs for sustainable technologies fall, the economic equation is increasingly favoring renewable energy technologies. A large number of small islands around the world are currently almost exclusively dependent on imported diesel and other oil products to meet their energy needs. Diesel and heavy fuel oil generation are the primary methods used for electricity generation on these islands. The smaller scale of electricity production and the volume and logistics of supply on islands result in very high comparative electricity costs. These high costs, coupled with oil price volatility, the desire for energy security, and the relatively higher vulnerability of islands to the impacts of climate change, build a strong rationale for islands to shift towards sustainable energy systems. Most islands with substantial populations possess a range of abundant renewable energy resources with high technical potential that can assist in this shift. While this is starting to happen with the more mature technologies of wind and solar photovoltaic energy, the door remains open for more novel technologies, such as wave, tidal and ocean thermal energy techniques, as well as geothermal, biofuels, concentrated solar power and concentrator photovoltaics, to compete. Consequently, islands provide a unique and appropriate test bed for the research and development of such technologies. A number of island governments have also set ambitious targets for achieving sustainable energy integration, however widespread progress is still limited to date, for a variety of reasons ranging from the technical to the social and political realms. An important question is: how can the optimal use of renewable energy resources on islands be achieved within the context of a full analysis of their electricity systems? The objective of this article is to compare cost-optimal renewable electricity system configurations for different islands with PV, wind and diesel generation, and battery and PHS storage technologies, and to determine how system costs and configurations vary with increased penetration of renewables. There is a growing body of research into optimal renewable energy configurations for island systems, which has predominantly focused on wind, PV and hydropower as generation technologies, coupled with battery, pumped hydro and, in a few cases, hydrogen as storage options. Overview studies on the energy situation and the development of renewables on islands indicate that renewable energy is typically in an early stage of development, but that the opportunities are considered very significant. Several articles present methodologies for performing hybrid renewable electricity system optimizations - as well as many applying them to specific case studies - based on various criteria. Net present value or levelized cost of energy were the most commonly used economic optimization criteria, and optimal systems
based on these were investigated in a number of case studies. Other, more system-performance-based criteria have also been adopted for optimization, namely loss of load probability, loss of power probability, loss of power supply probability and load coverage rate. It has been noted, though, that these constraints usually evaluate effectively the same thing: ensuring system reliability. A number of articles determined 'optimal systems' by identifying the best performing system among a specified range of proposed options, rather than by solving a pure optimization problem. A further group of articles has concentrated on approaches for solving the more complex optimization problems posed by hybrid renewable energy systems, which arise from multi-criteria optimization objectives that are often non-linear and non-convex. These focus on the various optimization algorithms and techniques. Overviews of existing research and future developments concerning the use of optimization algorithms for design, planning and control problems in the field of renewable and sustainable energy are also available. The majority of the literature has concentrated on single case studies, with only a select few analyzing multiple islands from the same island group: seven Greek islands, three Japanese islands, a further three Greek islands, and the Canary Islands have each been investigated in this way. However, none of these papers compared different islands from across the world. A notable exception is a GIS-based study in which the optimized configuration of solar PV, wind power and battery storage in the power supply system was determined for a large number of islands. In addition, Ioannidis et al. classify islands according to several qualitative metrics. The papers discussed here either analyze, for each island, one single renewable energy configuration or a few configurations, or provide a single optimum solution. However, in real life a new energy system on an island is not determined in one step, but gradually develops from a small contribution of renewable energy to a large penetration of such sources. Therefore, in this article we investigate how the optimum configuration and costs of renewable energy systems on islands change with increasing penetration of renewable energy sources. We do this for a spread of islands across the world, focusing on 6 case studies. We use hour-by-hour simulation of the electricity production system and apply a Sequential Quadratic Programming algorithm using gradient descent to find the optimum system for a given penetration of renewable energy. In this article, we first present the selected case study islands. Subsequently, the methodology and input data are described. Next the optimization results are presented. We finalize with discussions and conclusions. An overview was compiled of all islands in the population range of 10,000 to 1,000,000, totaling 300. From this group, six islands were selected as case studies for this analysis. We used the following criteria for selection: no connection to a mainland grid; fair representation of the different geographical water bodies, population sizes and island land areas; preference for islands with serious renewable energy ambitions; and data availability. This led to the selection of the islands described below. Streymoy is the largest island of the Faroe Islands, and lies isolated in the North Atlantic Ocean between Norway, the United Kingdom and Iceland. The island is quite mountainous, particularly in the northwest corner, and has a sub-polar oceanic climate,
with average monthly temperatures of 3.4 °C in the winter and 10.6 °C in the summer.Aruba is an island located in the southern part of the Caribbean Sea, around 30 km north of the coast of Venezuela, and is a constituent country of the Kingdom of the Netherlands.The island is relatively flat and river-less, with white sandy beaches on its western and southern coasts, protected from the strong ocean currents that affect the northern and eastern coasts.It has a tropical semi-arid climate, and unlike most of the Caribbean region is more dry and arid.The average monthly temperature varies within a narrow range between 26.7 °C and 29.2 °C, responsible for a steady constitution of tourists among its population.Sumba is an island located in the eastern section of the Indonesian Archipelago, west of West Timor and around 700 km north of Australia.The landscape consists of lower hills unlike the steep volcanoes found on many other Indonesian islands.It has a semi-arid, quite dry climate compared to the rest of Indonesia, where the dry season lasts for between eight and nine months while the wet season only lasts for around three to four .The average monthly temperature varies between 22.3 °C and 30.7 °C.Rhodes is the largest of the Greek Dodecanese islands, in the Mediterranean Sea, around 18 km from the southern shore of Turkey.It has a quite mountainous and forested interior, while also being home to long stretches of pristine beaches along its expansive coastline, making it one of the most popular islands for tourism in Greece.It has a hot-summer Mediterranean climate, with the average monthly temperature ranging from 13 °C in the winter to 27 °C in the summer.Gran Canaria is the third largest of the Spanish Canary Islands, situated in the Atlantic Ocean around 150 km west of the coast of Morocco.It is renowned for its variety of microclimates: it is generally warm; although inland the temperatures are quite mild, with occasional frost or snow in the winter.Due to the different climates and variety of landscapes found, with its long beaches and white sand dunes contrasting with green ravines and small villages, the island is a popular tourist destination.The average monthly temperature ranges from 17.9 °C in January to 24.6 °C in August.Rarotonga is the largest and most populous island of the Cook Islands, lying in the South Pacific Ocean around 3000 km north east of New Zealand.It is surrounded by a lagoon, and agricultural terraces, flats and swamps surround the central mountainous area.The islands typically have a tropical oceanic climate, with a wet season from December to March, and a mild dry season from April to November.The average monthly temperature varies very little, between 23 °C and 27 °C.Shown below is Table 1 summarizing the general island information and their respective electricity system details.In order to determine cost-optimal electricity system configurations, the hourly electricity production from solar PV, wind, diesel, pumped hydro storage and battery storage was required to be simulated.This was achieved by modelling their hourly production in MATLAB/Simulink.In this section, we will first discuss the input data, followed by the method for modelling each technology, and finally the optimization details.The simulation is based on the following inputs, for a period of one year:Ambient Air Temperature,Wind Speed at turbine hub height,Assumed available head height for pumped hydro system,The hourly solar irradiation data was obtained from the Meteonorm database.As stated 
in the General Assumptions section, real irradiation data was not available from any of the islands investigated, so the synthesized irradiation data from Meteonorm was the best available option.Shown below in Fig. 1 is the monthly averaged irradiation per day.Note that total global horizontal irradiation was used in the model.Shown below in Fig. 2 is the monthly average wind speed for the selected islands.The head height of the theoretical PHS system was assumed to be equal to half of the highest elevation on that island.The magnitudes of the PHS system head heights are shown in Table 2 below.A simple control logic is implemented, to prioritize production from renewable sources and allow them to meet demand where possible.When the renewable capacity is unable to meet demand, it is first checked to what extent the pumped hydro system has the capacity to do so, then in turn the battery, and finally, the remainder is requested of the diesel generators.The storage technologies can only provide electricity to within their own defined limits, detailed ahead in this section.In the case that the total generation capacity in any hour is unable to meet the requested demand, the remainder is categorized as ‘unmet demand’.Conversely, when there is a greater production from renewables than demand, the difference between the demand and production is categorized as ‘curtailed’ energy.It is important also to note that the system is built up from a zero installed capacity starting point, or a ‘greenfield’ situation, not considering the current installed capacities of generation and storage technologies already on the islands.This was done since the objective was purely to determine the optimal system, rather than building on top of what is already there.The production from solar PV, wind, diesel, pumped hydro storage and battery storage were modelled according to the following equations :The conversion from wind speed to electric power was modelled using the power curve of a Gamesa G87-2.0 MW turbine .For use in the model, a hub height of 78 m was selected.Electricity production via diesel generators was modelled as a dispatchable resource, with the diesel generators able to provide any amount of electricity required, up to its installed/rated capacity.The output-dependent efficiency of the diesel generators was not considered, and though relevant, it is of less importance for smaller island electricity systems, as they almost always have multiple generators that can be switched off to match supply and demand and avoid them running on low partial loads.Instead of using output-dependent efficiencies, average fuel costs of generation were determined per island by identifying the financial investments in fuel for electricity generation and the amount of electric energy produced from the generation.We see that there is considerable variation, which may be explained by transportation distances, fuel volumes used, and monopoly positions.Since the model operates with hourly time steps, the rate-limited ramping behavior of diesel/oil-based generator are not captured, as this is only relevant for shorter time scales.Hence, no rate-limiting factor has been included in the model.Lithium-ion battery technology has been selected for implementation in this model.Modelling the performance of a battery over the course of its life is a complex task, and it can be performed in high detail with regard to its chemical behavior and its subsequent influences on the cell voltage and current, as well as the influence of other 
factors such as temperature, charge/discharge rate and depth of discharge.For this analysis a basic battery model has been developed, simplifying the voltage and current relationship present in battery charging and discharging by relating the charge rate with the state of charge.Furthermore, the effects of charge/discharge rates and depth of charge/discharge on battery life have not been incorporated.It is also assumed in this model that the battery system is operated in ideal constant conditions of 20 °C, as batteries achieve optimum service life when used at 20 °C .The maximum charging rate of a battery decreases with increasing SOC.An approximation of the SOC during charging has been made according to , shown in Fig. 3, and used to represent the charging behavior of the battery in a simplified way.Consequently, the battery can be fully charged in 3 h, and the maximum SOC that can be reached after hours 1, 2 and 3 of charging are 80%, 95% and 100%, corresponding with maximum charging rates of 0.8 C, 0.15 C and 0.05 C respectively.The ‘C-rate’ is a measure of the rate at which a battery is theoretically charged/discharged, relative to the capacity of the battery.A 1 C charge rate for example, would fully charge the battery in 1 h, a 0.5 C rate in 2 h, and so on.Naturally, when charging at a lower rate than the corresponding maximum charging rate, the time taken to fully charge the battery is longer.Fully charging the battery to 100% has been permitted in the model.For discharging, the relationship between the SOC of the battery and the maximum possible discharge rate however can be fairly well approximated as a linear relationship, provided that the SOC is kept above the point at which the voltage rapidly drops off.Furthermore, batteries are prone to self-discharging, and Lithium-ion batteries are known to self-discharge at a rate of approximately 2–3% of the maximum capacity per month .A self-discharge rate of 2.5% per month has been incorporated into the battery model, scaled linearly per hour.A DoD limit of 90% has been used in the model.It was determined in a study on estimation of the state of charge and state of health of Lithium-ion batteries, that at a discharge rate of 1 C, the battery capacity is marginally reduced, by 1.8% of its nominal capacity .A maximum discharge rate of 1 C has been used in the model, as since the simulation model runs with one-hour time steps it is therefore not possible to draw an amount of energy from the battery in one hour that is greater than the battery’s capacity, i.e. 
it is not possible to exceed a 1 C discharge rate. Consequently, the battery's nominal energy capacity has been reduced by 1.8% to represent the functional total discharge capacity. In the model, a charge efficiency of 100% has been assumed, and the discharge-rate-dependent efficiency of 98.2% is used, giving the battery a total efficiency of 98.2%. The pumped hydro storage system is modelled as an energy balance between the upper and lower reservoirs, with the respective pumping and generating efficiencies taken from the literature. In order to optimize the system configuration based on minimized cost, specific constraints and the respective costs of the generation technologies are introduced, which, together with the installed capacities as variables, allow an objective function to be defined. A well-established metric in the energy field for quantifying and comparing the costs of electricity generation technologies is the Levelized Cost of Electricity (LCOE). It is calculated by accounting for all of a technology's lifetime costs, which are then divided by the total energy production over the course of its lifetime. All cost and benefit values are adjusted for inflation and discounted to account for the time-value of money. This definition can be extended to include the levelized cost of storage, in order to assess the Levelized Cost of System (LCOS). The LCOS incorporates both the costs of electricity generation and of storage, in order to give an indication of the total levelized cost of electricity supply systems. For this project, a simplified formulation was used, omitting financing, taxes, insurance, incentives and any value that can be salvaged at the end of the life of the project. It should be noted that this definition does not include costs associated with the conversion, transportation and distribution of electricity, nor power quality management services, which are also significant when considering all of the costs attributed to the reliable functioning of an electricity system. Nonetheless, the LCOS does serve as a useful basis for the comparison of various electricity systems. The system variables to be optimized are the installed capacities of PV power (ICPV), wind power (ICW), the PHS turbine/pump power (ICPHS), the PHS upper reservoir energy capacity (ICres,up), the PHS lower reservoir energy capacity (ICres,low), the battery system energy capacity (ICB) and the diesel generators (ICD). The objective function is subject to the following constraints: ICPV, ICW, ICPHS, ICres,up, ICres,low, ICB, ICD ≥ 0; Ed,unmet ≤ 0.001 · Ed,total; and ED = γ · ES,prod, where γ = 0.1, 0.3, 0.5, 0.7 or 0.9. All variables were logically subjected to the constraint of being greater than or equal to zero. Additionally, constraints were placed on the unmet demand energy, and on the diesel penetration at intervals of 20% of the produced system energy. In this context, the produced system energy is defined as the total electricity demand minus the unmet demand energy. The unmet energy demand was restricted to less than 0.1% of the total electricity demand for this 'ideal' simulation model. Of course, in practice, the goal is always to have demand met at all times, but due to changes from year to year and unplanned availability/system failures, the unmet demand can increase beyond the limit set, especially in cases where there are limited backup generation reserves. Renewable energy penetrations of 10%, 30%, 50%, 70% and 90% of
the produced system energy were investigated.In order to find optimal solutions to the objective function, the ‘Response Optimization’ tool was utilized within the Simulink model .The gradient descent method was implemented, with a Sequential Quadratic Programming Algorithm.This selection was suitable for handling the ‘continuous’ signals and cost function produced in the Simulink model, as the Pattern Search and Simplex Search methods could not deal with these adequately.The gradient descent method uses the function fmincon, “a gradient-based method that is designed to work on problems where the objective and constraint functions are both continuous and have continuous first derivatives,” .The parameter tolerance, constraint tolerance and function tolerance were all set to 1e−3 in the Optimization Options.The data used for the Investment, Fixed Maintenance, Variable Maintenance and Fuel Costs are shown in Table 4.The optimizations returned a number of interesting results.Fig. 4 shows that as the renewable energy penetration is increased, the levelized system costs for electricity generation decrease considerably up to an optimal point in the range of 40%, to 80%.Beyond the minimum LCOS points, the ability for PV and wind to meet higher shares of the electricity demand directly is strained, and the requirement for storage becomes essential - associated with the increasing LCOS.Despite this increase, renewable electricity integration in the order of 60–90% can still be achieved with no added cost from the initial situation of 0% penetration of renewables.A general trend can be observed in the system configurations and the amount of curtailed energy for the cost-optimal systems.The installed capacities of renewables range from 50% of total installed capacity in Gran Canaria up to 80% in Aruba and Rarotonga, see Fig. 5.Also of interest, is the fact that none of the cost-optimal systems include any storage.As for the amount of curtailment, it emerges that all of the optimal systems require a moderate level of curtailment, varying in the range of 10% in Sumba and Gran Canaria up to 37% in Aruba.Shown below in Fig. 
7 are the installed capacities of PV, wind and diesel with increasing penetrations of renewables, as well as the average and peak electricity demands for the 6 islands.The installed capacities largely exceed the average and peak demands, due to the variable nature of the PV and wind generation.The capacity for the renewables to meet the electricity demands, and the need for generation reserves are addressed in Section 4.It can be seen below that the cost-optimal means of meeting the electricity demands as the renewable energy penetration is increased, is by first adding wind capacity.This is due to the fact that wind energy was the cheapest production method on all the islands except for Sumba.Increasing wind capacity was effective up to a critical point between 30 and 70% of renewable energy penetration, where the installed capacity of PV becomes more significant.This is explained by the fact that diesel penetration is limited, and wind energy alone - regardless of how many turbines are installed - cannot manage to meet the system electricity demand to within the specified limit of 0.1% of unmet demand, as there will always be periods of no wind.As a result, it consequently becomes effective to produce with PV, also because there is a stronger correlation between the demand and PV production pattern, meaning that more of the PV energy is directly utilized.It is important to note also that these optimal systems are still requiring considerable amounts of curtailed energy, as seen in Fig. 6.Increasing the renewable energy penetration further above 70% sees the need for storage become more significant, as the ability for PV and wind alone to meet the demand becomes strained, and the system costs increase due to the addition of storage.Illustrated in Fig. 8 below, is the variation of the shares of renewables and storage with increased RES penetrations, for the 6 islands investigated.As can be seen, Sumba stands out as the only island with a significant contribution from the pumped hydro storage, and a curbing of its dumped energy at high penetration.This is attributed to the fact that the periods of no solar production are more strongly correlated with periods of no wind production in Sumba, than in any of the other islands.This means scaling up the PV and wind capacities as higher renewable energy penetrations are desired, makes little contribution to meeting demand at these times, and therefore the requirement for storage is more significant.Also of interest, was that the batteries were installed to meet very small storage demands, however as the storage requirements became more significant, the PHS became the favorable option.This is discussed further in Section 4.As significant reductions in battery costs are expected, also a sensitivity analysis was made for the battery costs.It can be inferred from Fig. 
9 that – at least for Rhodes - a reduction in the battery investment costs of between 50% and 70% sees battery storage become a more favorable storage than pumped hydro.A large amount of battery storage is also installed when costs are reduced by 90%, reducing the required capacity of wind installed.In reality, as islands transition to higher shares of renewable energy this will make it more likely that they opt for battery storage rather than PHS, due to their costs forecast to decrease in the coming years.The optimal RES penetration range of 40–80% achieved is fairly consistent with results obtained in literature for island systems, with RES penetrations of: 55% in the Atlantic and Arctic Oceans, 64% in the Caribbean Sea, 40% in the Indian Ocean, 58% in the Mediterranean Sea, and 49% in the Pacific Ocean , 61% on the Island of El Hierro , 55–60% on Dongfushan Island , 78%, 92% and 85% on the islands of Kithnos, Ikaria, and Karpathos respectively , and 77% on a small island in China .The LCOS range observed of $US 0.08–0.5/kWh is consistent with that seen on El Hierro of $US 0.07/kWh , although considerably lower than values obtained in other studies which were found, in the range of $US 0.7–1.4/kWh in .Potential reasons for this difference are the reduction in PV and wind generation costs since those papers were published, the inclusion of costs for additional equipment such as power converters, and the inclusion of more expensive storage methods.Further cost reductions e.g. for PV and wind can be expected.Of course, this will impact the optimum configuration, moving to slightly higher optimum percentages of renewable energy penetration.However, curtailment and costs of storage will eventually limit much further increase.Further sensitivity analysis was carried out for other variables including PV investment costs, and conclusions were found to be robust.An important consideration in this discussion is the additional costs associated with the production and actual integration of the produced renewable electricity into the island grids.Costs associated with sub hourly balancing like intra-hour drops of wind speed resulting in sudden losses of production for example – and power quality management due to the increased penetration of the more variable renewable sources – are not included in the LCOS.These smaller time scale concerns could require additional generation/storage capacity, like diesel, batteries, flywheel storage or demand response, that are able to maintain the smooth and reliable functioning of the system.These technologies however also come at a price, which would need to be added to the calculated LCOS.It was found however, that adding battery capacity for half an hour at peak demand only marginally increased the LCOS.The same is likely true for taking measures to maintain sufficient inertia in the power system, e.g. 
through adding idling generators.Additionally, it is likely that the installation of renewables on islands comes at an elevated price when compared to continental installations, due to the transport of required materials and equipment.Island-specific costs were used for the storage technologies, but continental values were used for the PV and wind production; though these continental prices are also likely to decrease in the coming years, partially negating the effect this might have.Another important factor to consider is the large over-production of renewable energy present in each of the island systems examined.In the optimization, no costs/penalties were assigned to the curtailment of renewable energy, allowing for situations of large renewable energy over-production that are seen in the results.In the optimization process, it is evident that in order to meet the electricity system demand while limiting the diesel output, the optimization algorithm needs to find the cheapest way of meeting that demand given the constraints applied.As can be seen from the results, wind energy is favored as the preferred source for meeting the required renewable energy contribution, as it is the cheapest production method on every island usually in the range of $US 0.03–0.06/kWh, compared to that of PV at $US 0.08–0.11/kWh, and storage in the range of $US 0.8–2.0/kWh.Sumba is an exception where PV is cheaper than wind, and Streymoy another exception, where PV costs are around $US 0.22/kWh, double what is seen on the rest of the islands.The reason for the high levelized cost for PV on Streymoy is likely a combination of the fact it has the lowest average irradiation of all the islands, and the effects of not optimally tilting the PV panels, which results in less efficient production than seen at latitudes closer to the equator.Another consideration for the high comparative price of the PHS relative to PV and wind, was that the round-trip efficiency for the PHS was around 59%, calculated according to efficiencies stated in a study on a hybrid wind and PHS system for the Island of Ikaria .However, it is generally stated that round-trip efficiencies for PHS systems are usually more in the order of 70–80%, so the efficiencies selected are a little conservative - though our assumptions only very marginally increased the levelized cost of the PHS.The fact that over-production and curtailment emerges as the favorable option gives merit for investigation of additional, flexible uses for the energy that would otherwise be curtailed.Possible applications for islands could be: fresh water production by desalination since many islands also face issues in providing a sufficient fresh water supply, charging electric vehicles, or even hydrogen production as a means of storage coupled with fuel cells, or as a fuel for transportation.A general trend was observed during the optimization where battery storage appeared favorable to PHS for very low-power demands.This can be explained by the generation limitations of the PHS, where the PHS system requires a minimum flow of 10% of its rated flow.Hence, as the installed capacity of the PHS system is increased, this minimum flow - and thus minimum power output - is increased, rendering the PHS system incapable of meeting power demands less than its rated minimum.It also appeared that utilizing a combination of battery and PHS systems in considerable magnitudes was unfavorable.The possible reasons for this are:High comparative costs of storage opposed to renewable 
generation; the fact that diesel generation, being unlimited in magnitude and fairly cheap in its ability to meet the high peaks of the residual demand, is reserved for that purpose, so that the limited allowable diesel energy production is quickly used up and the storage must meet the lower-magnitude residual demands of the system; the fact that PHS was usually found to produce at lower cost than battery storage, so that a larger PHS system would be installed, restricting the possibility of a significantly sized battery system also being incorporated without system costs swelling; the fact that it is easier for the PHS to meet the high peaks, since the turbine and reservoirs are sized separately, whereas the battery capacity would have to become very large just to meet the high, infrequent peaks; and the fact that the PHS system was prioritized in the control logic to supply power ahead of the battery, so the PHS naturally provides more energy to the system even when the battery is able to, decreasing the economic viability of a significant battery capacity. The decision was made to model Pelton turbines for the PHS system. As a result, it was also assumed that the required pump costs were incorporated in the non-power-generation portion of the PHS costs. Other turbine types could also have fulfilled the purpose of PHS power generation, such as the Francis turbine. In that case, since the Francis turbine can operate reversibly both as a turbine and as a pump, the pump costs would have inherently been incorporated in the power generation costs, and the previously mentioned assumption would not have been required. The optimal system configurations are determined for an ideal system, in which the generation and storage technologies are available 100% of the time. As previously mentioned, due to changes from year to year in demand and renewable resource availability, and to planned and unplanned unavailability, the unmet demand can increase beyond the limit specified in the optimization. As a result, and because the goal is generally to maintain a balanced system meeting demand at all times, either over-sizing the system or adding generation reserves may be necessary. Generation reserves were not considered in this project; they would increase the LCOS, but likely only to a small extent, particularly if diesel is selected as the back-up technology. Because of the nature of the fmincon solver used - which terminates its search once a local minimum satisfying the optimization constraints is found - it is not possible to guarantee with certainty that global minima were found in each of the optimizations performed. The fmincon solver is highly dependent on the initial starting point, and this was experienced in practice, where local minima were returned from the optimization process depending on how close to the 'real' global minimum the initial point was. In order to make every effort to ensure that global - rather than local - optima were returned, a range of initial starting points was experimented with in order to heuristically determine an initial configuration that was already close to satisfying the required constraints. Knowledge of the individual LCOE per technology was of great assistance in this process, as the technologies could be ordered by cost of production, and it was therefore understood which technologies should be prioritized and installed in larger capacities. Additionally, as an 'insurance' check, once the 'global' minimum was found, small variations to the system
configuration were made to test the nearby points, in order to ensure that it was not possible to find a slightly more optimal system configuration.Weighing all of this up means that although possible, it is quite unlikely that even more cost-efficient configurations exist that are also able to satisfy the constraints in place.The assumption that PHS is feasible on every island, with an achievable head height of half of the maximum elevation of the island is quite a crude one, and brings uncertainty to the actual cost-effectiveness of PHS.It is entirely possible that local site conditions may not allow for the construction of reservoirs with the head heights assumed in the model, likely altering the amount of storage installed and ultimately, the entire optimal system configuration.However, as mentioned, battery costs are set to fall to the point which they overtake PHS anyway and are feasible everywhere, so the storage contribution should remain.As mentioned earlier, the time frame for the modelling undertaken was a single year with hourly time-steps, using averaged power demands and resource data.Cost-optimal systems were determined under the assumption that the demand and production were constant for the entire lifetime of the system, however this neglects the fact that both the electricity demands, and renewable resource availability vary from year to year.Thus the optimal system in one year may be sub-optimal in the following year, depending on the demand and resource availability.Hence, performing this optimization over multiple years, or even just incorporating expected future demand developments, could allow for the provision of a system configuration that is cost-optimal over a duration closer to the system lifetime.It is likely that due to the number of variables, and the level of accuracy for which the optimal LCOS was determined, different system configurations could exist that fulfil the constraints at a quite comparable LCOS, while not strictly being an optimum.In this situation, a decision-maker should be aware of the various other configuration possibilities, and determine what the highest priorities are, consequently determining the most favorable configuration for their particular system.In any case, the renewable penetration is unlikely to be very different as can be seen from Fig. 
4 where a clear optimal penetration exists, although the ratio of PV and wind installed capacities could slightly vary.Islands have a genuine reason to invest in renewable energy technologies for their electricity generation needs.Levelized system costs for electricity generation decrease considerably with increasing renewable energy penetration, up to an optimal point in the range of 40% to 80%.At these optimal points, the system configurations predominantly comprise of a considerable portion of wind energy, in the order of 40 to 70%, coupled with diesel generation.Photovoltaic solar energy makes a significant contribution on only half of the islands.Beyond the 40 to 80% optimal penetration point, the ability for photovoltaic solar energy and wind energy to meet higher shares of the electricity demand is strained, and large over-production occurs with the requirement for storage becoming more significant given the increasingly limited amount of diesel production permitted.Despite this increase, renewable electricity integration in the order of 60 to 90% of total system energy can still be achieved with no added cost from the initial situation of 0% penetration of renewables.The relatively high costs of storage meant that significant over-production and curtailment of renewable energy was preferred over the implementation of storage.Battery storage appeared favorable to pumped-hydro storage for low-power demands, however the contribution of storage in general to the optimal system configurations only became pronounced at renewable penetrations of greater than 70%, with Sumba being the only exception.In all cases, pumped-hydro storage was favored to battery storage as renewable energy penetration exceeded 70%.A reduction in the investment cost of batteries of between 50 and 70% caused battery storage to become more favorable than pumped-hydro storage, and with lithium-ion battery costs forecast to fall by almost half in the coming 5 years , larger-scale battery storage will likely overtake PHS and may well become the best approach for island grid applications.For renewable penetrations up to the optimal points in the range of 40–75%, opting not to make investment in renewables for islands would be a missed opportunity considering the associated cost reductions.A practical way forward would be to add 10 or 20% renewable energy penetration each year in a staged process.This would allow islands time for battery costs to fall to a price competitive with pumped-hydro storage, and they could then be installed at a later date when renewable penetrations of 50–80% are achieved.
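To make the modelling approach described above more concrete, the sketch below reproduces the essence of the hourly dispatch logic (renewables first, then pumped hydro, then battery, then diesel) and a simplified levelized cost of system. It is a minimal illustration, not the authors' MATLAB/Simulink implementation: charge-rate limits, self-discharge and reservoir dynamics are collapsed into single round-trip efficiencies, and all numerical inputs are placeholders.

```python
# Minimal sketch of the hour-by-hour dispatch and simplified LCOS described
# above. Not the authors' MATLAB/Simulink model: charge-rate limits,
# self-discharge and reservoir dynamics are reduced to single round-trip
# efficiencies, and all numeric inputs are placeholders.
import numpy as np

def dispatch(demand, pv, wind, phs_power, phs_energy, batt_energy, diesel_cap,
             eta_phs=0.59, eta_batt=0.982):
    """Serve hourly demand with renewables, then PHS, then battery, then diesel."""
    n = len(demand)
    soc_phs = soc_batt = 0.0
    diesel_out, curtailed, unmet = np.zeros(n), np.zeros(n), np.zeros(n)
    for t in range(n):
        surplus = pv[t] + wind[t] - demand[t]
        if surplus >= 0:
            # Store the surplus (PHS first), curtail whatever cannot be stored.
            to_phs = min(surplus, phs_power, (phs_energy - soc_phs) / eta_phs)
            soc_phs += to_phs * eta_phs
            to_batt = min(surplus - to_phs, (batt_energy - soc_batt) / eta_batt)
            soc_batt += to_batt * eta_batt
            curtailed[t] = surplus - to_phs - to_batt
        else:
            need = -surplus
            from_phs = min(need, phs_power, soc_phs)
            soc_phs -= from_phs
            need -= from_phs
            from_batt = min(need, soc_batt)  # at most 1 C within a one-hour step
            soc_batt -= from_batt
            need -= from_batt
            diesel_out[t] = min(need, diesel_cap)
            unmet[t] = need - diesel_out[t]
    return diesel_out, curtailed, unmet

def lcos_usd_per_kwh(annualized_cost_usd, energy_served_kwh):
    """Simplified levelized cost of system: annualized lifetime cost per kWh served."""
    return annualized_cost_usd / energy_served_kwh
```

Sweeping the installed capacities subject to the diesel-share and unmet-demand constraints, and evaluating the levelized cost for each feasible configuration, yields cost curves of the kind shown in Fig. 4.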
Cost-optimal electricity system configurations with increasing renewable energy penetration were determined in this article for six islands of different geographies, sizes and contexts, utilizing photovoltaic energy, wind energy, pumped hydro storage and battery storage. The results of the optimizations showed strong reasoning for islands to invest in renewable energy technologies (particularly wind energy), as compared to conventional power generation. Levelized cost of systems for electricity generation decrease considerably with increasing renewable energy penetrations, to an optimal point in the range of 40–80% penetration. Furthermore, renewable electricity integration in the order of 60–90% could still be achieved with no added cost from the initial situation. Cost increases after these optimal points are attributed to the growing inclusion of storage, required to meet the higher renewable energy shares. However, with battery costs forecast to fall in the coming years, and a cost reduction of 50–70% already causing lithium-ion batteries to overtake pumped hydro as a cost-favorable storage option in this model, there is a real case for islands to begin their transition in a staged process; first installing wind and PV generation, and then - as storage costs decrease and their renewable energy capacities increase - investing in storage options.
295
Data for phase angle shift with frequency
The shift in phase angle between current and voltage is evaluated by applying a sinusoidal excitation of 10 mV across a single phosphoric acid fuel cell. Figures depicting the variation of phase angle with frequency are presented here for a wide range of cell temperatures and humidifier temperatures. A single phosphoric acid fuel cell unit consists of an anode, an electrolyte and a cathode. Here, 88 wt% phosphoric acid was used as the electrolyte. The details of the experimental set-up have been reported elsewhere. A glass mat soaked in the phosphoric acid was used as a solid electrolyte. The two electrodes were composed of thin layers of 20 wt% Pt/C deposited onto carbon plates. The electrolyte/electrode assembly was placed between two grooved graphite plates. Pure H2 was passed through a humidifier and the humidified gas was fed to the anode through the grooved graphite plates, while O2 was fed to the cathode. The outlets of the grooved graphite plates on both the cathode and anode sides were connected to an adsorber to collect moisture. Two stainless steel plates placed at the two ends were used as current collectors. A heating plate was placed on the lower current collector to maintain the cell temperature. The whole arrangement was held together by two pusher plates. Throughout all the experiments the inflows of H2 and O2 were maintained at 100 and 50 cc/min, respectively, using rotameters. Measurement of the phase angle shift between current and voltage with the electrochemical workbench was performed 3 h after starting the gas flow, allowing the open circuit potential to develop from the electrochemical reactions inside the cell. Both the cell temperature and the humidifier temperature were varied as parameters. The variation of phase angle shift with frequency is shown in Fig. 1. A well-defined peak in the phase angle shift is observed at higher humidification. The peak shifts towards the lower-frequency region as the humidifier temperature decreases (Fig. 1), and it vanishes at a humidifier temperature of 40 °C. Scatter in the data at very low frequencies indicates measurement uncertainty beyond the experimental error. The electrochemical reaction time is evaluated from the peak of the phase angle shift at the investigated temperatures and is presented in Table 1. τ ranges from 0.8 ms at the highest humidifier temperature to 1.45 s at the lowest.
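The text does not spell out how the reaction time is obtained from the peak, but assuming the commonly used single-relaxation relation τ = 1/(2πf_peak), the reported values are straightforward to reproduce: a peak near 200 Hz gives roughly 0.8 ms, and a peak near 0.11 Hz gives roughly 1.45 s. The sketch below shows this calculation; the relation and the example peak frequencies are assumptions, not values stated in the data description.

```python
# Sketch of extracting a reaction time from the phase-angle peak, assuming a
# single relaxation process so that tau = 1 / (2 * pi * f_peak). Both the
# relation and the example peak frequencies are assumptions for illustration.
import numpy as np

def reaction_time_from_peak(frequency_hz, phase_deg):
    """Return the frequency of the largest phase-angle shift and the corresponding tau."""
    idx = np.argmax(np.abs(phase_deg))
    f_peak = frequency_hz[idx]
    return f_peak, 1.0 / (2.0 * np.pi * f_peak)

if __name__ == "__main__":
    for f_peak in (200.0, 0.11):  # example peak frequencies in Hz
        tau = 1.0 / (2.0 * np.pi * f_peak)
        print(f"f_peak = {f_peak} Hz -> tau = {tau:.3g} s")
```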
Phase angle shift between the current and voltage with frequency has been reported for a single phosphoric acid fuel cell in the cell temperature from 100 °C to 160 °C and the humidifier temperature from 40 °C to 90 °C. An electrochemical workbench is employed to find the shift. The figure of phase angle shift shows a peak in high humidifier temperatures. The peak in phase angle shift directs to lower frequency side with decreasing humidifier temperature. The estimation of electrochemical reaction time is also evaluated in the humidifier temperature zone from 50 °C to 90 °C.
296
Socio-economic aspects of domestic groundwater consumption, vending and use in Kisumu, Kenya
Between 2012 and 2050, the urban population of Sub-Saharan Africa will increase from about 40% to nearly 60% and is projected to exceed 1.26 billion.Rapid population growth in SSA is predicted for smaller towns with populations under 200,000, as well as large cities."More than 60% of SSA's urban population live in informal settlements and slums.Safe drinking-water from centralised distribution systems rarely meets demand in these settlements.Residents are forced to ‘self-supply’ from wells, surface waters, vendors, and illegal connections to the mains distribution system.For many slum residents, groundwater is a vital domestic water source because of its affordability and availability, but rapid population growth, unplanned land development and climate change are putting it under increasing strain.Urban groundwater quality may be poor due to contamination from adjacent pit latrines, surface waste, and other hazards.Use of such poor quality groundwater could contribute to diarrhoeal disease and infant mortality.The magnitude and locations of those affected remain unclear, but an estimated 41.4 million people in urban SSA use non-piped ‘improved’ sources, a source class that includes protected wells and boreholes.Safe water provision to the urban poor remains an international priority, given the emphasis on reducing inequality in safe water access in post-2015 monitoring, and a national goal in strategic plans across SSA.Alongside formal water services installed and initiated by government, international donors and in some instances non-governmental organisations, the water sources developed by households themselves may also play an important role in securing domestic water access.Such so-called ‘self supply’ water service solutions include rainwater collection, shallow hand-dug wells, home water treatment in some instances, and various community-led solutions to cope with the partial and often interrupted coverage of piped supplies.However, although the quantities of water vended through some of these systems has been documented, the specific contribution of hand-dug wells to urban water supply remains unclear.There is increasing recognition that households use a variety of water sources for a range of different purposes, a perspective embodied in the multiple use water services approach to water provision.To date, this concept has largely been applied in rural areas, although urban residents may also use water from multiple sources for multiple purposes.Many household surveys and censuses continue to focus on the main water source, and thereby may miss the complexities of multiple source use, including population exposure to contaminants from subsidiary water sources and the economic contribution of such sources.There are some interventions that specifically target hand-dug wells and springs, most notably spring and well upgrading programmes.Given the growing policy emphasis on reducing inequalities relating to water and sanitation, an important question is the extent to which such interventions can be considered pro-poor and how the incidence of benefits from well upgrading might vary across different socio-economic groups.With urban hand-dug well upgrading, the analysis of benefits is often complicated by the presence of a supply chain, through which well owners may supply vendors who in turn sell groundwater on to consumers lacking reliable piped water connections.This paper seeks to quantify the contribution of one particular self-supply solution, namely shallow hand-dug wells, within 
an urban Kenyan setting."The study examines the contribution of shallow hand-dug wells to the city's domestic water supply, alongside other types of water source.In particular, the study aims to quantify the contribution of groundwater from hand-dug wells to water supply in two neighbourhoods and quantify patterns of groundwater vending.It also aims to assess how urban water consumers use the generally cheaper and lower quality groundwater alongside more expensive, higher quality piped water and rainwater."Finally, it examines the socio-economic profile of those consuming vended groundwater, relative to hand-dug well owners and assesses whether contamination risks are greater for poorer owners' wells. "Kisumu is Kenya's third largest city, with an estimated population in its urban core of 259,258 at the time of the 2009 census.Within the city, there are eight informal settlement areas that generally lack access to sewered sanitation and reliable piped water, which surround higher income, more centrally located neighbourhoods such as Milimani.These informal settlements are Bandani, Kaloleni, Manyatta A,Manyatta B, Nyalenda A, Nyalenda B, Nyamasaria and Obunga.In informal settlements, rainwater and groundwater are used to supplement piped water.Alongside these informal settlements, there are other settlements that were originally formally planned but which have subsequently been subject to unplanned infill development.In the informal settlements, pit latrines are the dominant form of sanitation.Groundwater is obtained either through shallow hand-dug wells, which are typically privately owned, or through springs, which are communal.The groundwater abstracted from wells is sometimes sold on to others, though the extent of groundwater vending is unclear.However, springs and wells are both known to be microbially contaminated, with E. 
coli densities often over 1000 cfu/100 ml.Whilst the domestic tariff for a piped utility connection is the cheapest source of water at US$0.49/m3, for those without such connections, well water is cheaper than all other alternatives.Well water has a median price of $1.15/m3, standpipe water $2.23 m3, whilst piped water vended from handcarts costs $6.72 m3.The study areas were Manyatta A and Migosi, the focus of an earlier study but extended to Obunga, Nyalenda A and B, and Bandani where groundwater use is also common.According to the 2009 census, population density in Manyatta A and Migosi was 203 and 103 people per Ha respectively.Thirty nine percent of households in Manyatta A and 24% in Migosi used groundwater as their main domestic water source, with 31% and 23% respectively using piped water sources.29% of households in Manyatta A and 50% in Migosi purchased water from vendors.The proportion of vended water originating from piped supplies versus groundwater is unclear.Ninety one percent of households in Manyatta A and 38% in Migosi used pit latrines.Mains sewerage was common in Migosi but rare in Manyatta A and in both areas, maintenance issues lead to frequent episodes of sewage overflowing into open storm drainages and low lying areas.A further 4% in Migosi and 29% in Manyatta were using septic tanks as a main means of sanitation.For all sanitation facilities, construction quality can impact on well water quality and consequently human health.Both settlements have fractured basalt geology, overlaid with pyroclastic deposits that become deeper to the east.The old weathered surfaces between successive lava flows and older formations hold groundwater and perched aquifers are common, with their recharge often localised.Interconnected fractures are often intersected by pit latrines, soak pits, and sometimes cracked septic tanks and so act as pathways not only for groundwater movement and recharge, but also fecal contamination.Most groundwater is extracted from perched aquifers via hand-dug wells, with a mean depth of approximately 6 m and diameters of 1–1.5 m.The shallow depth and presence of both fractures and onsite sanitation mean that the hand-dug wells draw on a highly vulnerable aquifer system.Although several donor-funded boreholes have previously been drilled into the deeper second aquifer and been found to be free from microbial contamination, these boreholes were no longer functioning by 2014.To enable assessment of socio-economic status in a manner that would facilitate comparison with a nationally representative population, we examined household asset ownership in the 2008–9 Demographic and Health Survey.Since urban and rural households sometimes have very different sets of assets, making use of a single asset index for both types of household problematic, we examined asset ownership among the 2910 urban DHS households only.We undertook a Principal Components Analysis of 17 assets and services.After examining factor loadings onto the first component, we dropped radio, bicycle, motorbike, cell phone, agricultural land, and livestock from these assets, since these had weak loadings, and undertook a second PCA.The PCA scores derived from this restricted set of assets remained strongly correlated with pre-calculated asset index scores provided with the DHS.We subsequently asked about ownership of this restricted set of assets and services in our fieldwork, drawing on the same question and response wording as those used in the DHS.We then used the PCA factor loadings derived from 
the national DHS sample of households to create an asset index for households in our study, drawing on an identical set of questions.In this way, we were able to relate asset index values for households in our survey and position these households in terms of socio-economic status relative to a nationally representative Kenyan urban population.Ethical approval for the human subjects part of the study was obtained from the Faculty of Social and Human Sciences, University of Southampton and the University of Surrey.Fieldwork drew on a previous study of groundwater sources in the Manyatta A and Migosi informal settlements, which took place between 2002 and 2004.As part of this earlier study, the location of all 438 wells in both neighbourhoods was initially mapped from aerial photography and GPS-based ground survey in 1999.In this earlier study, a sample of 46 wells was selected from this full inventory of wells in Manyatta A and Migosi, so as to be representatively distributed across the two neighbourhoods.Well water from these 46 wells was tested for contamination on at least two occasions.In March–April 2014, these 46 wells were revisited and well owners interviewed where available.The sample from this earlier study was further extended to include 21 further wells from four other informal settlements, namely Obunga, Nyalanda A and B, and Bandani."Households in these additional settlements were recruited by generating random locations within each settlement's perimeter and selecting the well closest to these locations.Questionnaires were initially piloted in a neighbouring area to the study sites.After piloting and then seeking informed consent from participants, questionnaire-based interviews with well owners and their customers were conducted by locally recruited enumerators under the supervision of a researcher.Well owners were asked for the total amount of water abstracted from their well on the previous day.Answers were given in 20 L jerry cans which is the most familiar water unit for the local population.Well owners were also asked about the subsequent use, handling and treatment of well water and water from other sources, drawing on the wording of existing questions and closed-ended responses wherever possible.Each owner was asked to describe their reasons for using a given water source for a particular purpose and all the reasons mentioned were recorded, but not ranked.Well owners were also asked about access to the restricted set of services and ownership of durable goods from the earlier PCA analysis of the DHS, using identical question and answer wording as the DHS questionnaire.At each well, owners were asked whether they sold groundwater, and groundwater customers were then identified.Customers were identified either as they approached the well to purchase groundwater at the time of interview, or else were identified by the well owner.Groundwater customers were then asked the same questions about water use and ownership of assets and services as well owners.A sanitary risk assessment was conducted at each well.These assessments consist of a standard observation checklist to identify potential pathways of water contamination at the well itself and in the immediately surrounding area.Following our earlier study, a standard World Health Organization inspection protocol was used.To rapidly update the 1999 map of wells and enable subsequent estimation of total domestic groundwater abstraction, a transect survey was carried out in August 2014, with a belt of 0.1 by 1.7 km being used 
in Migosi and a 0.1 by 2.4 km belt in Manyatta.The start and end points of each transect were randomly generated.Within each transect belt, the locations of shallow wells were identified and any changes noted in comparison to the 1999 1:2500 basemap.This transect enabled us to update the 1999 estimate of the total number of wells in the study area.To estimate the amount of water consumed by well owners and sold on to customers, we calculated the mean quantity of water abstracted for household use and sold per day in the Manyatta A and Migosi neighbourhoods where we had additional data on the total number of wells.Average amounts abstracted for the two neighbourhoods were then multiplied by the number of wells in the two neighbourhoods to estimate the total volume abstracted.To account for the difference in well distribution between the creation of the well distribution map and the situation in 2014, the results were scaled on the basis of the 2014 transect survey.We applied the factor loadings from our preliminary DHS analysis to the well owners, their customers, and owners or customers drinking or cooking with untreated well water in our own field survey.In this way, we generated an asset index that could be compared to a nationally representative population.The non-parametric Mann–Whitney test was used to compare socio-economic status between well owners and well customers using SPSS.We also compared matched asset scores for well owners and consumers using the same wells, calculating the correlation coefficient between the paired owner and consumer scores.Finally, based on the sanitary risk inspection, we calculated the proportion of observable contamination risks that were present at each well and therefore potentially controllable by the owner.We separately calculated the percentage of such risks in the immediately surrounding area that could only be controlled through community action or planning controls."We used Spearman's correlation coefficient to compare the proportion of both types of risk present at each well to the asset score for the well's owner.Of the 46 Manyatta A and Migosi wells from the earlier study, 7were lost to follow-up.4 wells had been built over, 2 had been abandoned, and 1 was inaccessible within a private compound.A total of 51 well owners were interviewed from all informal settlements.The 9 owners not interviewed were not at home during the survey period and therefore could not be included.Of these well owners, 20 sold their well water to others.137 consumers of this vended well water were interviewed during the survey.Table 1 shows the characteristics of well owners and customer water handling and usage practices, in addition to other household characteristics.Well water was most commonly used for washing clothes and cleaning.Drinking well water was less common, though well owners more often drank the water from their well than customers.For the customers and owners that responded to the water storage question, a covered small container was commonly used and 5.9% used a covered large container.Storing water in uncovered containers was less common, with 13.8% responding that they stored water in a small uncovered container and 0.5% responding that they stored water in a large uncovered container.Table 2 shows the results of the transect survey of wells in Migosi and Manyatta A. 
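A minimal sketch of the total-abstraction estimate just described is given below (per-well daily means and the transect-based scaling factors are taken from the text; the split of the 438 mapped wells between the two neighbourhoods is not restated here, so the counts used are placeholders chosen only so that the total reproduces the reported figure of roughly 472 m3/day).

# Sketch of the total daily abstraction estimate (illustrative values).
mean_abstraction_m3 = {"Manyatta A": 0.76, "Migosi": 0.81}   # m3 per well per day (reported)
scaling_2014 = {"Manyatta A": 1.43, "Migosi": 1.30}          # transect-based correction factors (reported)
wells_1999 = {"Manyatta A": 320, "Migosi": 118}              # placeholder split of the 438 wells mapped in 1999

total_m3_per_day = sum(
    mean_abstraction_m3[n] * wells_1999[n] * scaling_2014[n]
    for n in mean_abstraction_m3
)
print(f"Estimated daily abstraction: {total_m3_per_day:.0f} m3/day")  # article reports ~472 m3/day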
Overall, although 15% of the wells present in 1999 had disappeared, a much larger number of new wells had been constructed in both settlements.As a result, the total number of wells had risen by 39% between 1999 and 2014.Fig. 2 shows the spatial distribution of well water abstraction across Manyatta A and Migosi.Of the 39 Manyatta A and Migosi wells targeted, there were 27 wells for which both abstraction data and location data were available.The average reported daily abstraction per well was 0.76 m3 in Manyatta and 0.81 m3 in Migosi.Daily abstraction rates were highly variable between wells and ranged from 0.02 m3 to 3 m3.No spatial pattern was identifiable in the data.On the basis of the transect survey results, we increased the number of wells in Manyatta A and Migosi by a factor of 1.43 and 1.30 respectively to account for the change in well numbers over time between the creation of the well map and the current survey.The overall contribution of shallow well water to the domestic water supply in these two neighbourhoods was calculated as 472 m3 per day.Fig. 3 shows the proportions of well owners and customers using water for drinking versus washing clothes, broken down by the more widely accessible water source types.Most respondents avoided drinking borehole and hand-dug well water, but did use spring water for drinking.Piped water and to a lesser extent rainwater were preferred for drinking.In contrast, respondents were much more likely to use groundwater for washing clothes and less than half used piped water for the same purpose.The pattern of source use for cooking was similar to that for drinking.The proportion of households using groundwater sources for personal hygiene, flushing toilets/latrines, and irrigation was greater than those using piped water for each of these purposes.Fig. 4 shows the reasons given by respondents for their choice of water source for different domestic purposes.Water quality/safety was overwhelmingly cited as the reason for choosing a source for drinking or cooking, whilst other considerations such as quantity, constancy of supply, and ease of access and cost became increasingly important for other domestic uses.Fig. 5 shows the distribution of wealth quintiles in five separate groups of urban households, with these quintiles being based on the same set of assets and services across all five groups.These groups are urban households nationally across Kenya; urban households in Nyanza province; well owners in our study settlements of Manyatta A and Migosi; those purchasing well water in our study settlements; and those well owners or customers drinking or cooking with untreated well water.The figures for Kenya nationally are taken from the 2008–9 DHS and show an even split, since this was the basis on which quintile boundaries were defined.For the DHS sample of urban households in Nyanza province, there are proportionately more households in the wealthiest and poorest quintiles and greater variation than among the national group.Among households participating in our study, of the 51 well owners interviewed in our survey, we were unable to calculate wealth quintiles for 9 of these, because of missing data on one or more of the 11 assets or services used to calculate the index.Similarly, among the 137 customers interviewed, we were unable to calculate wealth quintiles for 27, who lacked data on one or more assets or services.Fig. 
5 shows the breakdown of wealth quintiles among the remaining 42 well owners and 110 customers.Whilst only 14% of well owners were from the two poorest quintiles, 42% of those purchasing well water were from the two poorest quintiles, suggesting that well owners were generally wealthier than those who purchased well water from them.This was confirmed using a Mann–Whitney test; well owners were significantly wealthier than well customers.Among the 25 well owners or customers who drank or cooked with untreated drinking-water, we were unable to calculate wealth quintiles for 4 of these households.The socio-economic profile of this final group of consumers exposed to untreated well water was broadly similar to that of those purchasing well water.We matched well owner and customer asset index scores for 20 wells.For 65% of the matches, asset score of the owner was higher than the asset score of the customer, meaning that the owner was wealthier than the customer."There was a statistically significant positive correlation between asset scores for owners and customers.This suggests that poorer customers bought water from poorer owners and that wealthier customers bought water from wealthier owners."When asset scores for well owners were compared to sanitary risk scores at the well itself, there was no apparent relationship between the two. "However, there was an inverse relationship between the sanitary risk score for contamination hazards in the surrounding area and the asset score for the owner that was significant. "Our findings suggest most well water consumers appear aware of the health risks of shallow well water and do not drink such water untreated, in accordance with other studies of Kisumu's informal settlements. "Residents' decisions about which water to use for drinking and cooking appear driven by a recognition of the importance of water quality.In contrast, the ease of access, affordability and constancy of supply of water from shallow wells mean it is commonly used for purposes other than consumption, such as washing clothes.However, despite limited borehole availability in these neighbourhoods, many respondents reported using borehole water, suggesting that they may be mistaking enclosed hand-dug wells for boreholes and presuming such wells to be safer.Similarly, although households appear well aware of the health risks of water from hand-dug wells, water from springs was frequently used for drinking and cooking."Given reported high levels of microbial contamination in Kisumu's springs, any belief in the safety of such sources may be misplaced.Similarly, although most households are aware of the health risks from hand-dug wells, a minority continue to consume untreated well water.In Kenya nationally, the predominant form of home water treatment is boiling, practiced by 38% of households in 2008–9, with chlorination practiced by 21.5%.In our study households, those who did treat water overwhelmingly used home chlorination rather than boiling.Boiling may be a more acceptable alternative to chlorination for those not currently treating their water.We estimated that 472 m3 of groundwater was abstracted per day in the two informal settlements of Mnayatta A and Migosi."This compares with an estimated 18,700 m3/day leaving Kisumu's two piped water treatment plants in 2008 and an estimated total water demand of 12,520 m3/day in 2011 for an area including both these settlements. 
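The owner-versus-customer wealth comparison and the risk-score correlations reported above rest on standard non-parametric tests; a minimal sketch using scipy is given below. The arrays are made-up placeholders (sized to the 42 owners and 110 customers with complete asset data), not the study data, and the variable names are illustrative.

# Illustrative re-creation of the non-parametric tests described above.
import numpy as np
from scipy.stats import mannwhitneyu, spearmanr

rng = np.random.default_rng(0)
owner_scores = rng.normal(0.5, 1.0, size=42)       # asset index scores, well owners (placeholder)
customer_scores = rng.normal(-0.2, 1.0, size=110)  # asset index scores, customers (placeholder)

# Mann-Whitney U test: are owners wealthier than customers?
u_stat, p_mw = mannwhitneyu(owner_scores, customer_scores, alternative="two-sided")

# Spearman correlation: owner asset score vs. proportion of sanitary risks
# observed in the surrounding area (the article reports an inverse association).
surrounding_risk = rng.uniform(0, 1, size=42)      # placeholder risk proportions
rho, p_sp = spearmanr(owner_scores, surrounding_risk)

print(f"Mann-Whitney U={u_stat:.1f}, p={p_mw:.3f}")
print(f"Spearman rho={rho:.2f}, p={p_sp:.3f}")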
"Although this figure is small in proportion to estimates of the volume of piped water leaving Kisumu's treatment plants and demand in such settlements, it suggests such wells form an affordable source of domestic water for poorer households in the city.The daily average amount of water abstracted per well seems plausible from a hydrogeological standpoint, given that evidence suggests that yields upwards of 5 m3/day are often encountered in fractured basement geology of the type present in Manyatta and Migosi.In a study of the urban core of Kisumu that incorporated Manyatta, Migosi and several other surrounding neighbourhoods, Sima et al. found that 109 m3/day of groundwater was abstracted for subsequent vending through kiosks for subsequent sale to both households and businesses.This study largely focussed on water vending through kiosks and therefore did not measure groundwater directly abstracted from wells for subsequent consumption by households or directly abstracted from wells by vendors using water carts, who by-passed kiosks."Our estimate of the amount of daily abstracted groundwater, which incorporates such direct groundwater abstraction, suggests that groundwater contributes a higher proportion of the city's domestic water supply.Given recent efforts to mobilise inhabitants to map and provide information about their own communities, there may be potential to quantify the contribution of hand-dug wells in a greater number of informal settlements by combining mapping of groundwater sources with household-based abstraction estimates.Our findings also suggest that whilst well owners are seldom among the poorest of urban households, those who purchased well water are largely from the poorest three quintiles, as are those consuming untreated well water.Thus, interventions to improve hand-dug well water, such as well protection and lining, may bring benefits to poorer households purchasing such water, provided consideration is given to issues such as post-collection contamination and subsequent use.This is because other studies have shown that the quality of water deteriorates significantly in the household storage containers.More generally, water vending is now a widespread practice in many urban settings and any interventions targeting the poor and points along water supply chains need to be informed by an understanding of the socio-economic characteristics of all actors in such chains.Many nationally representative household surveys now document selling of domestic water as well as its consumption.It should thus be possible to adapt the small-scale socio-economic profiling of actors at different points in the urban water supply chain that we present here and apply it to a much larger group of nationally representative households."We examined the relationship between well owners' socio-economic status and sanitary risk scores, distinguishing between observed hazards at the well itself versus observed hazards like pit latrines and uncollected refuse in the well's vicinity.Analysis of the latter sanitary risk scores suggested that poorer well owners and poorer well consumers were living in neighbourhoods where there was a greater concentration of contamination hazards such as pit latrines immediately around wells.However, in terms of observed hazards that could be controlled by owners, there was no significant evidence from sanitary risk scores that the wells of poorer owners were less well maintained.Wells owned by wealthier households were just as likely to have poor sanitary risk 
scores for potential hazards at the wellhead and from inadequate well lining, so there was no evidence that poorer well owners lacked the finances to invest in protecting their wells.More generally in an urban setting, it seems valuable to separate sanitary risk inspection checklist items into those controllable by well owners and those only controllable through community action and/or formal improved planning and regulation.If more widely adopted, this separation could provide insights into the underlying causes of contamination hazards in other, similar urban settlements.Our estimate of the amount of groundwater abstracted is based on a typical weekday during February or March, which would fail to capture annual and weekly variation in abstraction.We have relied on well owners to estimate the amount of water abstracted from their wells in the previous day and these estimates may be subject to recall bias.There is growing interest in using automated water level loggers and other forms of sensor to estimate abstraction directly, thereby overcoming this difficulty.However, the field use of such devices remains experimental.Since our sample drew on a set of wells that were selected in a baseline study in 2002, our finding may be biased towards longer term residents, given that our transect survey suggested widespread subsequent well construction in both Manyatta A and Migosi.Similarly, we were unable to interview absentee well owners, who may have different socio-economic characteristics and whose wells may experience different usage patterns.In developing our measure of SES, although we used the same question and response wording as the DHS in our survey, several methodological issues may have produced differences in question responses between our survey and the DHS.For example, responses to questions can be influenced by questionnaire length and question sequence and by differences in survey implementation and enumerator training.Similarly, the DHS represents historic conditions in 2008–9 rather than 2014.Our conclusions from the above findings are that shallow hand-dug wells and springs are an affordable source of water for washing clothes, flushing toilets/latrines and irrigation among informal settlements in Kisumu.It is also clear that most residents are aware of the health risks from microbial contamination of such water and use it for purposes other than drinking and cooking.However, a minority of residents continue to drink untreated well water, and there is some evidence that some may mistake hand-dug wells for boreholes and mistakenly consider spring water to be safer than well water.Whilst well owners are wealthy relative to those who purchase well water, both their customers and those drinking untreated well water are drawn from the poorest three quintiles of urban Kenyan households.Thus, interventions that seek to improve well water quality, such as wellhead chlorine dispensers and well protection and lining programmes, may still benefit the poorest households, provided attention is given to post-collection contamination of groundwater and its subsequent use.Although the public health risks from shallow hand-dug wells are well documented, they provide a means of increasing the quantity of water available to poorer households for purposes other than consumption.However, poorer households do require an alternative, affordable means of securing safe water for drinking and cooking alongside such well water, such as through effective home water treatment or hygienically vended piped 
water.It may thus be premature to consider closure of such wells before there is an affordable alternative for poorer households to use for purposes such as washing clothes and irrigation.Until affordable safe water becomes accessible to all urban households, the interim challenge remains to manage the contamination risks to urban shallow wells and springs as far as possible, and promote safer handling, storage, and treatment by groundwater consumers.
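Returning to the asset-index construction described in the methods, the sketch below shows how first-component loadings derived from a national survey might be applied to a household's binary asset and service indicators and the resulting score placed against national quintile cut-points. The 11 indicators match the count given in the text, but their names, the loadings, the standardisation constants and the cut-points are all hypothetical, not DHS values.

# Hypothetical sketch: scoring a surveyed household with externally derived PCA loadings.
import numpy as np

assets = ["electricity", "radio", "tv", "fridge", "bicycle", "motorcycle",
          "car", "phone", "improved_water", "improved_sanitation", "finished_floor"]
loadings = np.array([0.32, 0.11, 0.30, 0.28, 0.05, 0.12, 0.25, 0.20, 0.18, 0.22, 0.27])
national_mean = np.full(len(assets), 0.45)   # proportion owning each asset (hypothetical)
national_sd = np.full(len(assets), 0.50)     # standard deviation of each indicator (hypothetical)

def asset_index(ownership):
    """Score = loadings applied to standardised 0/1 indicators."""
    z = (np.asarray(ownership, dtype=float) - national_mean) / national_sd
    return float(np.dot(loadings, z))

quintile_cuts = [-1.2, -0.4, 0.3, 1.1]       # hypothetical national quintile boundaries

def wealth_quintile(score):
    return 1 + sum(score > c for c in quintile_cuts)  # 1 = poorest ... 5 = wealthiest

household = [1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1]         # example responses
score = asset_index(household)
print(score, wealth_quintile(score))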
Shallow hand-dug wells are commonly used to supplement partial or intermittent piped water coverage in many urban informal settlements in sub-Saharan Africa. Such wells are often microbially contaminated. This study aimed to quantify the amount of such groundwater consumed, identify the socio-economic profile of well owners and consumers, and patterns of domestic water usage in informal settlements in Kisumu, Kenya. Building on a previous study, 51 well owners and 137 well customers were interviewed about well water abstraction, water usage and handling patterns, asset ownership, and service access. An estimated 472m3 of groundwater per day was abstracted in two informal settlements, with most groundwater consumers using this water for purposes other than drinking or cooking. According to an asset index, well owners were significantly wealthier than both the customers purchasing their groundwater and those drinking or cooking with untreated groundwater. This suggests that shallow groundwater sources provide poorer urban households with a substantial volume of water for domestic purposes other than drinking and cooking. Ongoing challenges are thus to raise awareness of the health risks of such water among the minority of consumers who consume untreated groundwater and find means of working with well owners to manage well water quality.
297
Data on changes in red wine phenolic compounds and headspace aroma compounds after treatment of red wines with chitosans with different structures
The data reported include information about the X-ray diffraction patterns of chitins and chitosans, FTIR spectra and band assignments of chitins and chitosans, and the amount of chitosan dissolved in red wine when applied at 10, 100 and 500 g/h L.The headspace aroma abundance of red wines before and after treatment at 10, 100 and 500 g/h L application doses of crustacean and fungal chitosans was determined, and the correlation between the reduction in headspace aroma compound abundance and the chitin and chitosan deacetylation degree was calculated.Total phenols, flavonoid phenols, non-flavonoid phenols, total anthocyanins, colour intensity, hue and chromatic characteristics of treated and untreated wines were determined.Phenolic acids and flavonoids of wines were determined by RP-HPLC, and monomeric anthocyanins for the 10 g/h L application dose.Total phenols, flavonoid phenols, non-flavonoid phenols, total anthocyanins, colour intensity, hue and chromatic characteristics for red wines before and after treatment with 10, 100 and 500 g/h L application doses of crustacean and fungal chitosans were determined.Phenolic acids, flavonoids and monomeric anthocyanins of wines before and after treatment with 10, 100 and 500 g/h L application doses of crustacean and fungal chitosans were determined by RP-HPLC.A commercial crustacean chitin, two commercial crustacean chitosans and one fungal chitosan were used.One additional chitin and one additional chitosan were produced by alkaline deacetylation of CHTN and CHTB, respectively.For deacetylation of chitin and chitosan, 15 g of the initial material were dispersed in 150 mL NaOH solution with NaBH4 and heated for 12 h under reflux with stirring, at 130–150 °C under nitrogen.For chitin deacetylation, the commercial chitin was previously ground to a particle size of less than 0.15 mm.After cooling to room temperature, the solution was neutralised to pH 6–8 with HCl 12 M and ethanol was added up to 75% to precipitate the chitosan.The precipitate was washed thoroughly with ethanol at 75%.The material was dried at 50 °C in a forced air oven for 24 h.Chitin and chitosan FTIR spectra were recorded in the wavenumber range 4000–450 cm−1 and 128 scans were taken at 2 cm−1 resolution, using a Unicam Research Series FTIR spectrometer.Pellets were prepared by thoroughly mixing samples with KBr at a 1:40 sample/KBr weight ratio in a small agate mortar.The resulting mixture was placed in a manual hydraulic press, and a force of 10 t was applied for 10 min.The spectra obtained were background corrected and smoothed using the Savitzky-Golay algorithm in PeakFit v4.Analyses were performed in duplicate.Powder X-ray diffraction data were recorded on solid samples using a PANalytical X’Pert Pro X-ray diffractometer equipped with an X’Celerator detector and secondary monochromator.The measurements were carried out using Cu Kα radiation in Bragg-Brentano geometry over the 7–60° 2θ angular range.Analyses were performed in duplicate.For studying the effect of DD on the headspace volatile phenol reduction performance of chitins and chitosans, two chitins and four different chitosans were used at 10 g/h L.The wine was previously spiked at two levels of 4-EP and 4-EG, named 4-EP750, 4-EP1500, and 4-EG150, 4-EG300, respectively, according to the ranges usually found in the literature.Chitins and chitosans were added at 10 g/h L to the wine placed in 250 mL graduated cylinders.For studying the effect of chitosan application dose, the chitosans CHTD and CHTF were also tested in a second trial at 10, 100 and 500 g/h L.
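The FTIR spectra described above were background corrected and smoothed with the Savitzky-Golay algorithm in PeakFit v4; as a rough illustration only, an equivalent smoothing step can be sketched in Python with scipy. The synthetic spectrum, window length and polynomial order below are arbitrary choices, not those of the original work.

# Illustrative Savitzky-Golay smoothing of a synthetic absorbance spectrum.
import numpy as np
from scipy.signal import savgol_filter

wavenumbers = np.arange(4000, 450, -2.0)                  # 2 cm-1 steps over 4000-450 cm-1
raw = 0.8 * np.exp(-((wavenumbers - 1650.0) / 30.0) ** 2) # a single synthetic band
raw += 0.02 * np.random.default_rng(1).normal(size=wavenumbers.size)  # added noise

# window_length must be odd and larger than polyorder
smoothed = savgol_filter(raw, window_length=21, polyorder=3)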
After 6 days the wine was centrifuged at 10,956g for 10 min at 20 °C for analysis.All experiments were performed in duplicate.The conventional oenological parameters were analysed using a FTIR Bacchus Micro.Two blended red wines from the Douro Valley were used in this work.The main characteristics of the wine used in the first assay were as follows: alcohol content 13.3%, specific gravity 0.9921 g/mL, titratable acidity 5.7 g of tartaric acid/L, pH 3.52, volatile acidity 0.54 g of acetic acid/L, total phenolic compounds 1907 mg of gallic acid equivalents/L, total anthocyanins 343 mg of malvidin-3-glucoside equivalents/L.In the second assay the wine used presented an alcohol content of 13.4%, specific gravity 0.9935 g/mL, titratable acidity 5.5 g of tartaric acid/L, pH 3.56, volatile acidity 0.43 g of acetic acid/L, total phenolic compounds 1921 mg of gallic acid equivalents/L, total anthocyanins 364 mg of malvidin-3-glucoside equivalents/L.For the determination of the headspace aroma abundance of red wines, a validated method confirmed in our laboratory was used.Briefly, the fibre used was coated with Divinylbenzene/Carboxen/Polydimethylsiloxane 50/30 μm and was conditioned before use by insertion into the GC injector at 270 °C for 60 min.To a 20 mL headspace vial, 10 mL of wine, 2.5 g/L of NaCl and 50 µL of 3-octanol as an internal standard were added.The vial was sealed with a Teflon septum.The fibre was inserted through the vial septum and exposed for 60 min to perform the extraction with an automatic CombiPal system at 35 °C.The fibre was then inserted into the injection port of the GC for 3 min at 270 °C.For separation an Optima-FFAP column was used.The temperature program was as follows: initial temperature 40 °C held for 2 min, followed by an increase at 2 °C/min to 220 °C and then at 10 °C/min to 250 °C, held for 3 min.The flow rate was set at 1.5 mL/min and maintained constant during the run.The transfer line temperature was 250 °C and the ion source was set at 220 °C.The mass scan was performed between m/z 45 and 650, with a scan event of 0.59 s. All analyses were performed in quadruplicate.In the second assay, for quantification of the wine glucosamine content, wines treated with 10, 100 and 500 g/h L of chitosans CHTD and CHTF were analysed as follows: to 4 mL of wine, 400 μL of 72% H2SO4 were added, and the samples were heated at 100 °C for 2.5 h.After hydrolysis, 500 μL of 2-deoxyglucose at 1 mg/mL was added as an internal standard and the glucosamine content was determined by anion-exchange chromatography using the method described by Ribeiro et al.
Under the conditions of the analytical method the lowest standard in the calibration curve corresponds to 4.5 mg of anhydrous glucosamine/L of wine.Analyses were performed in quadruplicate.Colour intensity and hue were determined according to OIV.The content of total anthocyanins was determined according to Ribéreau-Gayon and Stonestreet.Wine chromatic characteristics were calculated using the CIELab method according to OIV.The chroma (C*ab = [(a*)² + (b*)²]^(1/2)) and hue-angle values were also determined.To distinguish the colour more accurately, the colour difference was calculated using the following equation: ΔE* = [(ΔL*)² + (Δa*)² + (Δb*)²]^(1/2).All analyses were performed in duplicate.The wine non-flavonoid content was quantified according to Kramling and Singleton.The results were expressed as gallic acid equivalents by means of calibration curves with standard gallic acid.The total phenolic content was determined according to Ribéreau-Gayon et al.Analyses were performed in duplicate.HPLC analyses were performed with an Ultimate 3000 HPLC equipped with a PDA-100 photodiode array detector and an Ultimate 3000 pump.The separation was performed on a C18 column with a flow rate of 1 mL/min at 35 °C.The injection volume was 50 μL and detection was performed from 200 to 650 nm, with 75 min per sample.The analyses were carried out using 5% aqueous formic acid (A) and methanol (B), and the gradient was as follows: 5% B from zero to 5 min, followed by a linear gradient up to 65% B until 65 min, and from 65 to 67 min down to 5% B.Quantification was carried out with calibration curves of the standards caffeic acid, coumaric acid, ferulic acid, gallic acid and catechin.The results of trans-caftaric acid, 2-S-glutathionylcaftaric acid and caffeic acid ethyl ester were expressed as caffeic acid equivalents by means of calibration curves with standard caffeic acid.On the other hand, coutaric acid, coutaric acid isomer and p-coumaric acid ethyl ester were expressed as coumaric acid equivalents by means of calibration curves with standard coumaric acid.Calibration curves of cyanidin-3-glucoside (y = 2.70x + 0.00; r = 0.99980), malvidin-3-glucoside (y = 1.62x + 0.14; r = 0.99985), peonidin-3-glucoside (y = 2.49x + 0.19; r = 0.99994) and pelargonidin-3-glucoside (y = 1.66x + 0.99; r = 0.99990) were used for quantification of anthocyanins.Using the coefficients of molar absorptivity and by extrapolation, it was possible to obtain the slopes for delphinidin-3-glucoside, petunidin-3-glucoside and malvidin-3-coumaroylglucoside to perform the quantification.The results of delphinidin-3-acetylglucoside, petunidin-3-acetylglucoside, peonidin-3-acetylglucoside, cyanidin-3-acetylglucoside and cyanidin-3-coumarylglucoside were expressed as the respective glucoside equivalents.The data are presented as means ± standard deviation.To determine whether there is a statistically significant difference between the data obtained for the diverse parameters quantified in the red wines, an analysis of variance and comparison of treatment means were carried out.The Tukey honestly significant difference test was applied to the physicochemical data to determine significant differences between treatments.All analyses were performed using Statistica 10 software.
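A minimal sketch of the CIELab chroma, hue-angle and colour-difference (ΔE*) calculations referred to above is given below; the two sets of L*, a*, b* values are arbitrary examples, not measured wine data.

# CIELab chroma, hue angle and colour difference for two example wines.
import math

def chroma(a, b):
    return math.hypot(a, b)                       # C*ab = [(a*)^2 + (b*)^2]^(1/2)

def hue_angle(a, b):
    return math.degrees(math.atan2(b, a)) % 360   # hab in degrees

def delta_e(lab1, lab2):
    dL, da, db = (x - y for x, y in zip(lab1, lab2))
    return math.sqrt(dL**2 + da**2 + db**2)       # ΔE* = [(ΔL*)^2 + (Δa*)^2 + (Δb*)^2]^(1/2)

untreated = (35.2, 52.1, 28.4)   # (L*, a*, b*), arbitrary example
treated = (37.0, 49.8, 26.9)     # (L*, a*, b*), arbitrary example
print(chroma(*untreated[1:]), hue_angle(*untreated[1:]), delta_e(untreated, treated))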
Data in this article presents the changes on phenolic compounds and headspace aroma abundance of a red wine spiked with 4-ethylphenol and 4-ethylguaiacol and treated with a commercial crustacean chitin (CHTN), two commercial crustacean chitosans (CHTB, CHTD), one fungal chitosan (CHTF), one additional chitin (CHTNA) and one additional chitosan (CHTC) produced by alkaline deacetylation of CHTN and CHTB, respectively. Chitin and chitosans presented different structural features, namely deacetylation degree (DD), average molecular weight (MW), sugar and mineral composition (“Reducing the negative sensory impact of volatile phenols in red wine with different chitosan: effect of structure on efficiency” (Filipe-Ribeiro et al., 2018) [1]. Statistical data is also shown, which correlates the changes in headspace aroma abundance of red wines with the chitosans structural features at 10 g/h L application dose.
298
Prevalence and factors associated with the co-occurrence of health risk behaviors in adolescents
Over the past decades, exposure to health risk behaviors has become one of the most widely investigated subjects in studies with young populations.1,2,The interest in investigations focusing on this subject can be explained, at least in part, by the fact that such behaviors can be established and incorporated into the lifestyle at an early age,3,4 and due to their connection with biological risk factors5 and the presence of established metabolic or cardiovascular disease.6,The prevalence of co-occurrence of health risk behaviors in adolescents has been described in several studies.7–17,However, it was observed that the studies developed in Brazil, except for the survey performed by Farias Júnior et al.,15 relied on very specific samples: laboratory school students17 and day-shift students from public schools in a city from southern Brazil.16,Therefore, the results of these studies cannot be extrapolated to other regions of the country due to socioeconomic and cultural contrasts, which are known to differentiate the exposure to health risk behaviors, as reported by Nahas et al.18,Epidemiological surveys on the co-occurrence of health risk behaviors in adolescents and their associated factors can help to identify risk groups and to monitor the health levels of this population, which can support the development of public policies to promote health, using earlier intervention strategies to prevent these habits.Thus, the aim of this study was to analyze the prevalence and factors associated with co-occurrence of health risk behaviors in adolescents.This is a secondary analysis of data from a cross-sectional epidemiological survey, school-based and statewide, called “Lifestyle and health risk behaviors in adolescents: from prevalence study to intervention”.The research protocol was approved by the Institutional Review Board of Hospital Agamenon Magalhães, in compliance with the standards established in Resolutions 196 and 251 by the National Health Council.The target population, estimated at 352,829 individuals, according to data from the Education and Culture Secretariat of the State of Pernambuco, consisted of high-school students enrolled in public schools, aged 14–19 years.The following parameters were used to calculate sample size: 95% confidence interval; sampling error of 3% points; prevalence estimated at 50%, and the effect of sample design, established at four times the minimum sample size.Based on these parameters, the calculated sample size was 4217 students.Considering the sampling process, we tried to ensure that the selected students represented the target population regarding the geographic regions, school size and shift.The regional distribution was analyzed based on the number of students enrolled in each of the 17 GEREs.Schools were classified according to the number of students enrolled in high school, according to the following criteria: small – less than 200 students; medium – 200–499 students, and large – 500 students or more.Students enrolled in the morning and afternoon periods were grouped into a single category.All students in the selected classes were invited to participate.We used cluster sampling in two stages, using the school and class as the primary and secondary sampling units, respectively.In the first stage, we performed the random selection of the schools, aiming to include at least one school of each size by GERE.In the second stage, 203 classes were randomly selected among those existing in the schools selected in the first stage.Data collection was 
performed using an adapted version of the Global School-Based Student Health Survey questionnaire.This tool had both face and content validity evaluated by experts, and had its indicators of co-occurrence validity and reproducibility tested in a pilot study.Consistency indicators of the test–retest measures ranged from moderate to high19–21 for most items.The test–retest reproducibility coefficients of the measures used in this study were: 0.86 for physical activity; 0.66 for the consumption of fruits; 0.77 for the consumption of vegetables; 0.76 for alcohol consumption; 0.62 for tobacco use, and 0.74 for sedentary behavior.Data collection was carried out from April to October 2006.The questionnaires were applied in the classroom.The students were assisted by two previously trained administrators, who clarified questions and helped with filling out the questionnaire.All students were informed that their participation was voluntary and that the questionnaires did not contain any personal identification.Students were also informed that they could leave the study at any stage of data collection.A passive informed consent form was used to obtain the permission of parents for students younger than 18 years to participate in the study.Participants aged 18 or older signed the consent form themselves, indicating their agreement to participate in the study.The dependent variable was obtained from the sum of five risk behaviors: low level of physical activity; sedentary behavior; occasional consumption of fruits and vegetables; alcohol consumption, and smoking.These factors were chosen because they are modifiable lifestyle factors that appear to be more strongly associated with non-communicable chronic diseases, and represent the highest global burden of disease and mortality worldwide.22,Sedentary behavior was included because it is treated as a behavior distinct from low levels of physical activity, has a high prevalence in the population, and has an important impact on adolescent health.23,Information regarding the description of these variables can be found in previous studies.19–21,The obtained responses resulted in an outcome with zero to five identified risk behaviors.Subsequently, for analysis purposes, the occurrence of risk behaviors was divided into four categories.The independent variables were: gender; age; school shift; school size; maternal education; occupational status; ethnicity; geographic region and place of residence.The data tabulation procedure was carried out in a database created with the EpiData Entry software.To perform the analysis, Stata software was used.In the bivariate analysis, the chi-square test was used for heterogeneity and for trend to determine the prevalence of co-occurrence of health risk behaviors by categories of the independent variables.To evaluate possible associations between independent and dependent variables, an ordinal logistic regression analysis was performed with a proportional odds model.The assumption of proportionality was assessed by the likelihood ratio test, and the significance of coefficients, by the Wald test.Analyses were carried out in two stages: first, by fitting simple regressions of the independent variables in relation to the outcome.Then, a multivariate analysis was performed to determine whether the demographic and school-related factors were associated or not with the outcome.All independent variables entered the multivariate model at the same level of analysis and were excluded by a stepwise method with backward elimination, using a p-value <0.2 as an exclusion criterion of variables during the modeling stages.
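As a sketch of the proportional-odds (ordinal logistic) model described above, the example below fits statsmodels' OrderedModel to simulated data; the predictors, effect sizes and category cut-points are illustrative placeholders, not the survey data.

# Illustrative proportional-odds model: ordinal outcome = number of co-occurring
# risk behaviors grouped into four categories; predictors = age group and work status.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(42)
n = 500
X = pd.DataFrame({
    "older": rng.integers(0, 2, n),   # 1 = older age group (placeholder coding)
    "works": rng.integers(0, 2, n),   # 1 = reports working (placeholder coding)
})
latent = 0.16 * X["older"] - 0.15 * X["works"] + rng.logistic(size=n)
y = pd.cut(latent, bins=[-np.inf, -1.0, 0.0, 1.0, np.inf],
           labels=["0", "1", "2", "3+"])        # ordered outcome categories (illustrative cuts)

model = OrderedModel(y, X, distr="logit")
res = model.fit(method="bfgs", disp=False)
print(np.exp(res.params[["older", "works"]]))   # odds ratios for the two predictors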
These results are shown as odds ratios and respective confidence intervals.After selecting the variables that would comprise the regression model, we tested the existence of possible collinearity between the geographic region and place of residence covariates, and no linear association (variance inflation factor values <10) was identified between these two variables.Of the adolescents attending the selected classes in the 76 assessed schools, 55 refused to participate in the study, and seven were excluded due to incomplete or inconsistent data in the questionnaire.The final sample consisted of 4207 adolescents, aged between 14 and 19 years.Other sample characteristics are shown in Table 1.Among the analyzed variables in the study, with the exception of maternal education, the rate of unanswered questions did not exceed 2.0%.Fig. 1 shows the prevalence of exposure to the five health risk behaviors targeted in this study.The results for these behaviors will not be explored in this study, as they have already been presented separately in previous investigations.19–21,Fig. 2 shows the prevalence of co-occurrence of health risk behavior exposure observed in the sample by gender.In the bivariate analysis, we observed that the proportion of adolescents simultaneously exposed to three or more risk behaviors was statistically higher among older students, adolescents with higher maternal education, students living in the urban area and those who lived in the semi-arid region when compared to their peers.Table 3 shows the results of the ordinal logistic regression analysis for the co-occurrence of health risk behaviors according to demographic and school-related factors.In the adjusted analysis, it was observed that age, occupational status, maternal education, geographic region and place of residence were statistically associated with higher co-occurrence of health risk behaviors.It was verified that older adolescents had a 17% higher chance of simultaneous exposure to more than three health risk behaviors when compared to younger ones.Students who reported working had a 14% lower chance of having more than three risk behaviors when compared to those who did not work.On the other hand, adolescents who reported mothers with intermediate education had a 21% higher chance of having co-occurrence of risk behaviors, compared to those who reported lower maternal education.The chance of co-occurrence of a higher number of health risk behaviors was 22% lower among adolescents who reported residing in rural areas, when compared to those living in urban areas.Adolescents who reported living in the semi-arid region showed a 39% higher chance of exposure to multiple health risk behaviors when compared to adolescents living in the metropolitan area.The results of this study show that the prevalence of simultaneous exposure to health risk behaviors among adolescents from the state of Pernambuco was high, as observed in similar studies.8,10–12,Another important result was the identification of five significant factors associated with the higher co-occurrence of these behaviors, namely: age range, maternal education, geographic region, working status, and place of residence.The results of this survey indicated that 58.5% of adolescents were simultaneously exposed to two or more risk behaviors, as observed in a study carried out in the city of João Pessoa, state of Paraiba.15,The importance of this finding lies in the fact that health problems can be caused by a set of aggregated risk
behaviors, such as throat cancer, which can be explained by the simultaneous occurrence of two habits, as highlighted by the World Health Organization.24,In this study, simultaneous exposure to a higher number of health risk behaviors was higher among older adolescents.As seen in the available studies, the prevalence of simultaneous exposure to health risk behaviors increases with age.8,13,25,That can be explained by the fact that adolescents acquire greater autonomy and social and economic independence with age,26 favoring access to places that sell alcoholic beverages, cigarettes and other drugs.It is worth mentioning in this study the association between intermediate maternal education and higher co-occurrence of health risk behaviors among adolescents.This is an interesting fact, because the higher the educational level of the mother, supposedly the better understanding she would have on the benefits of having a healthier life style, and therefore would have a greater chance of providing more support to her children.27,One of the possible explanations lies in the fact that higher levels of education are seen among those mothers who probably work out of their households and, therefore, spend less time with their adolescent children.It was also observed that adolescents who reported having a job had lower chances of simultaneous exposure to a higher number of health risk behaviors, when compared to those who did not work.In a society where young individuals face great challenges to enter the labor market, it is possible to assume that young individuals who engage in some labor activity have higher self-esteem, autonomy and personal responsibility, characteristics that may favor the adoption of healthier behaviors.Adolescents who live in the semi-arid region of Pernambuco showed a 39% increase in the chance of simultaneous exposure to a higher number of health risk behaviors compared to their peers living in the metropolitan area.Comparative studies with analysis of simultaneous exposure to lifestyle habits are scarce, making the comparisons impossible.However, Matsudo et al.28 carried out a study in the state of São Paulo, observing that the individuals who lived on the coast were more active than those living in the countryside.This may be related to the low supply of leisure and physical facilities for physical activities in the countryside.Moreover, it may be related to the availability, accessibility and quality of food preservation in this region, where there is an acknowledged shortage of water resources, indispensable for both the production and the processing of fresh food.On the other hand, adolescents who live in rural areas had a 22% decrease in the chance of simultaneous exposure to a higher number of health risk behaviors when compared to those living in urban areas.This can be explained by the specific characteristics of the types of activities carried out in rural areas, which require greater energy expenditure to be performed,29 in addition to greater access to foods such as cereals and derivatives and tubers, which are essentially products of family agriculture, as well as the lower access to ready-made meals and industrialized mixes.30,The lack of similar studies makes it difficult to compare the findings of the present study.What was found in the literature was limited to studies that evaluated the association of these factors with isolated exposure to one or another risky behavior.Similar studies available13–17 used very different methodological procedures, 
particularly regarding the type, quantity and definition of characterizing risk variables.The generalization of the results of this study must be made with caution, as only adolescents attending public schools were included.One can assume that the results are different in samples of adolescents attending private schools and among those who are not enrolled in the formal educational system.On the other hand, the decision to not include private schools in the sampling planning was due to the fact that more than 80% of adolescents from Pernambuco were enrolled in public schools.It is noteworthy that the prevalence shown in this article discloses a scenario observed some time ago and, therefore, the interpretation of these parameters should be made carefully, as social and demographic changes that have occurred in the Brazilian northeast region during this period may have affected these indicators.On the other hand, it is not plausible to assume that the associations that were identified and reported in this study would be different due to possible changes in the prevalence of some factor.Despite the good reproducibility levels of the tool, one cannot rule out the possibility of information bias, as adolescents tend to overestimate or, at other times, underestimate the exposure to risk behaviors.However, the findings of this survey add important evidence to the available body of knowledge on the prevalence and factors associated with co-occurrence of health risk behaviors in adolescents.Additionally, the study was performed with a relatively large sample, representative of high-school students from public schools in the state of Pernambuco.It is believed that the evidence shown in this study may help identify more vulnerable subgroups, thus contributing to decision-making and appropriate intervention strategy planning.Moreover, it can lead to the development of other investigations.Considering these findings, it can be concluded that there is a large portion of adolescents exposed to simultaneous health risk behaviors.It was also verified that older adolescents, with mothers of intermediate educational levels and living in the semi-arid region had higher chance of simultaneous exposure to a higher number of health risk behaviors, thus configuring higher-risk subgroups, whereas adolescents who worked and those living in rural areas were less likely to have simultaneous exposure to a higher number of health risk behaviors.Study supported with financial assistance from the Conselho Nacional de Desenvolvimento Científico e Tecnológico, Coordenação de Aperfeiçoamento de Pessoal de Nível Superior and Fundação de Amparo à Ciência e Tecnologia do Estado de Pernambuco by granting of scholarships.The authors declare no conflicts of interest.
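As a sketch of the sample-size calculation described in the methods above, the example below uses the stated parameters (95% confidence, 3-percentage-point error, 50% prevalence, design effect of 4, target population of 352,829); the reported figure of 4217 presumably reflects rounding or adjustment details not restated in the text.

# Sample-size sketch using the parameters stated in the methods.
import math

z = 1.96          # 95% confidence
p = 0.50          # assumed prevalence
e = 0.03          # absolute sampling error (3 percentage points)
deff = 4          # design effect for cluster sampling
N = 352_829       # target population of public high-school students

n0 = z**2 * p * (1 - p) / e**2        # simple random sample, infinite population
n_fpc = n0 / (1 + (n0 - 1) / N)       # finite-population correction
n_final = math.ceil(n_fpc * deff)

print(round(n0), round(n_fpc), n_final)   # ~1067, ~1064, ~4256 (the article reports 4217)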
Objective To analyze the prevalence and factors associated with the co-occurrence of health risk behaviors in adolescents. Methods A cross-sectional study was performed with a sample of high school students from state public schools in Pernambuco, Brazil (n=4207, 14-19 years old). Data were obtained using a questionnaire. The co-occurrence of health risk behaviors was established based on the sum of five behavioral risk factors (low physical activity, sedentary behavior, low consumption of fruits/vegetables, alcohol consumption and tobacco use). The independent variables were gender, age group, time of day attending school, school size, maternal education, occupational status, skin color, geographic region and place of residence. Data were analyzed by ordinal logistic regression with proportional odds model. Results Approximately 10% of adolescents were not exposed to health risk behaviors, while 58.5% reported being exposed to at least two health risk behaviors simultaneously. There was a higher likelihood of co-occurrence of health risk behaviors among adolescents in the older age group, with intermediate maternal education (9-11 years of schooling), and who reported living in the driest (semi-arid) region of the state of Pernambuco. Adolescents who reported having a job and living in rural areas had a lower likelihood of co-occurrence of risk behaviors. Conclusions The findings suggest a high prevalence of co-occurrence of health risk behaviors in this group of adolescents, with a higher chance in five subgroups (older age, intermediate maternal education, the ones that reported not working, those living in urban areas and in the driest region of the state).
299
Partial degradation of carbofuran by natural pyrite
Pyrite is a stable metal sulfide that has received significant attention in a range of industrial and geochemical processes due to its ubiquity and unique features.For example, the acid mine drainage occurred as a result of pyrite oxidation in the presence of H2O and O2.Pyrite is considered as a primary energy supplier for primitive life.Further, pyrite shows intrinsic conductivity and high light absorption capacity.Recently the band gap of pyrite is shown as ∼0.55 eV as opposed to widely accepted value of 0.95 eV.The presence of reduced band gap on the surface as well as the existence of defects sites within this band gap hold implications for electrons transfer as required for the degradation of organic pollutants.In this context, the S22− on pyrite surface is argued as an electron donor.The reactivity of pyrite was largely attributed to fast or slow production of OH radicals in the presence or absence of Fenton precursors, respectively.Pyrite is employed as a heterogeneous catalyst for pollution control in several chemical processes such as aerobic degradation, photolysis etc.For example, it plays a catalytic role in the degradation of organic pollutants generated from pharmaceutical wastes, polyaromatic hydrocarbons, pesticides or pesticide precursors and domestic wastes.In this research we examined the chemical kinetics of 2,2-dimethyl-2,3-dihydrobenzofuran-7-yl-methylcarbamate degradation by natural pyrite under anaerobic conditions and the results were used to postulate plausible degradation mechanism/s based on experimentally identified intermediates.When such an approach is not feasible due to limitations in experimental procedures, the most probable intermediates were predicted theoretically.All the experiments were conducted in the dark under anaerobic environmental conditions.Finally, a carbofuran degradation mechanism was proposed, and the time domain distribution of different degradation products was simulated computationally.The investigation of systems in this nature is very important both from industrial and geochemical viewpoints due to its importance as an industrial catalyst, and the role played in numerous natural processes.Carbofuran was selected mainly due to following reasons; it is an insecticide widely used in agriculture and has a high toxicity which acts as an inhibitor of acetyl cholinesterase.Carbofuran has a high mobility in soils and is soluble in water, i.e., solubility ∼700 mg/L.The maximum contaminant level of carbofuran in drinking water is 0.04 mg L−1.Further due to high toxicity of carbofuran, new regulations have been promulgated banning its use.Semi-empirical calculations of pyrite and carbofuran systems have shown that the energy values of ELUMO and EHOMO are −3.14 and −6.14 eV, respectively, which indicate that carbofuran degradation by pyrite is theoretically feasible.However, investigations into the degradation of carbofuran by pyrite are depleting to date and the available information is of limited use in elucidating mechanistic pathways of carbofuran degradation.In contrast, there is substantial information on the degradation of carbofuran by photochemical, chemical, pyrolytic and biological methods and in most of them are routinely used in advanced oxidation processes of water treatment.In pyrolytic degradation, carbofuran has disintegrated into 83 low molecular mass fragments, viz. m/z 39–164.Both in biological and chemical processes, however the carbofuran degradation occurred via the formation of high molecular mass fragments, viz. 
m/z 178–221.Standards 2,2-dimethyl-2,3-dihydrobenzofuran-7-yl-N-methylcarbamate and 2,3-dihydro-2,2-dimethylbenzofuran-7-ol were purchased from Chem Service, USA.HClO4, NaOH, NaClO4, Na2S, methanol, tert-butyl alcohol, HOCH2C2OH were obtained from Fluk.N-hydroxyamine hydrochloride, sodium acetate, and 2.2′-bypyridyl, atomic absorption iron standard solutions were from BDH.Water was purified passing through a mixed bed resin to remove any anionic or cationic constituents before distillation.Stock solutions of carbofuran and 2,3-dihydro-2,2-dimethylbenzofuran-7-ol were prepared in methanol — water mixtures from the chemicals received from Sigma.The half-lives of carbofuran in water ranged from 690 days at pH 5 to 1 days at pH 9.Therefore, the pH of the stock solution was always kept below 5 to minimize self-degradation.Pyrite samples were collected from a graphite mine and they were purified according to the methods given elsewhere, particularly to remove any oxidative products such as iron sulfate, melanterite etc., from the mineral phase. .For all experiments sample preparations were conducted under anaerobic conditions in a glove box which was flushed with 99.95% N2 according to following procedure: three times prior to use; two times after each sample change and two times daily to remove any slow diffusive atmospheric contaminants from the glove box.Confirmatory evidence for the identification of mineral phase was received by X-ray diffraction and FTIR analyses.The physico-chemical properties of pyrite/water interface and carbofuran used are shown in Table 1.The chemical kinetics of carbofuran degradation by pyrite was examined as a function of pH, carbofuran and solid content.In control experiments, identical solutions were prepared without pyrite.The blank solutions were prepared by membrane filtering pyrite–water suspensions that were synthesized at desired experimental conditions.In selected experiments ∼0.10 M tert-butyl alcohol was added as the OH radical scavenger.As discussed in ref., the OH scavenging can be monitored by detecting HOCH2C2OH.The detection of HOCH2C2OH was carried out by a GC-FID system using Porapak 3 capillary column under split–split less mode at 9.4 min retention time as confirmed against a known standard.All experiments were conducted in amber borosilicate vessels capped with a glass lid with five outlets to facilitate solution transfer and pH monitoring under controlled conditions.In most cases, the mass ratio of the solid to solution was around 1:5.The uptake of water by pyrite slurry was negligible; hence errors due to changes in volume by pyrite addition were neglected because they were within the range of analytical and experimental errors.To prepare samples, first a clean vessel was capped with the lid equipped with pH electrode, anti-magnetic stirring rod, temperature probe, and N2 gas inlet/outlet tubes.The batch slurry prepared with pyrite was transferred to the vessel, and stirring was initiated.Subsequently, the solution was spiked with known concentration of carbofuran and pH was recorded.In a typical kinetic experiment, the pyrite batch solution was spiked with a known carbofuran concentration.The background ionic strength was adjusted to 0.01 M with 5 M NaClO4.The pH values within 1–5 were adjusted either with 0.0921 M HClO4 or 0.935 M NaOH.The temperature was maintained at 298 K.At pre-defined time intervals, samples were withdrawn into a syringe.Samples were placed in 3-ml glass vials.For the analysis of organic compounds, a 0.5-ml sample was 
added into a vial.A known volume was taken into gas tight syringe for HPLC analysis.Normally three samples were taken from each vial.In most of the cases, the conditions of pseudo order kinetics were imposed by performing experiments at excess surface sites while varying one experimental parameter at a time.Dependence of the carboruan degradation rate on proton concentration was investigated by varying system pH between ∼2.0 and ∼5.0.To determine the effect of solid concentration on carbofuran degradation, the pyrite content in the batch reactor was varied between 1 and 48 μM around pH ∼2.Carbofuran and its degradation products were analyzed by reversed phase HPLC using Supersil C-18 column with a UV detector at 276 nm.The mobile phase was composed of a mixture of 60/40% acetonitrile and water under 1 mL min−1 isocratic flow conditions.The inorganic degradation products, nitrate, nitrite and ammonium were determined photometrically according to the procedures given in APHA, 2005).Carbofuran degraded product identifications were carried out by Hewlett–Packard GC–MS. The time resolved IR spectra of carbofuran-pyrite complexes were obtained under transmission mode.The carbofuran-pyrite-solid suspensions were mixed with KBr at a 1:3 ratio, and the samples were introduced to the FTIR spectrometer with time resolved spectral software.The FITR machine was programmed to collect spectral data at user defined different time intervals for a specified duration.Total iron analysis was carried out with the flame atomic absorption spectrometry.Ferrous ion concentration was determined photmetrically with 2.2′-bypyridyl method.Sulfide concentration was determined by Orion Ag2S/S2− ion selective electrode and double junction reference electrode coupled to auto-chemistry Analyzer.All calibrations of the sulfide electrode were made under matching solution conditions to minimize matrix effects.The pH measurements were made with Rose combined electrode.Table 2 shows the parameters examined for different analytical methods to ascertain quality control of data.The kinetic data were analyzed with Chemical Kinetic Simulator.The CKS algorithm assumed a stochastic method based on reaction probabilities to calculate the history of a given system using a specified reaction mechanism.It treats the reaction system as a volume filled with limited number of particles representing reactants and products.More technical details about the algorithm is given in reference no.The postulated pathways of carbofuran degradation were developed based on experimental measurements identifying possible intermediates.If such an approach is not possible due to measurement limitations ab initio molecular modeling method based on DFT theory was used to predict possible intermediates.The details of these calculations were reported elsewhere.The postulated mechanism is given in Fig. 5.The initial concentrations of carbofuran and pyrite were inputted into CKS code along with the experimentally defined rate constants, viz. 
Other rate constants in the proposed mechanism were optimized until the calculated concentrations of the intermediates matched the experimentally measured values. As shown elsewhere, carbofuran interacts directly with reactive sites on the pyrite surface, as evidenced by variations in the lattice vibration modes of the solid in the region 700–1200 cm−1. Furthermore, the presence of carbofuran is characterized by a band at 3363 cm−1 due to CHN stretching; bare pyrite shows no bands at or near 3363 cm−1. In the presence of pyrite, the intensity of this band was monitored as a function of time by time-resolved IR spectroscopy. As time elapsed, the relative intensity of the band decreased, providing evidence that carbofuran degradation proceeds through the C–N bond. In time-resolved IR spectroscopy, band intensities can be recorded at 0.5-s intervals, a resolution that cannot be achieved with the residual-concentration method; in the latter method, however, the sampling period can comfortably be extended to the order of hours. Although a similar extension of the time interval is possible in the IR method, the band at 3363 cm−1 becomes obscured by a broad feature, possibly arising from carbofuran degradation products, since changes on the pyrite surface itself should appear as bands in the 600–800 cm−1 region. No attempt was made here to elucidate the spectral variations arising from interactions of pyrite with these degradation products; therefore, time-resolved IR spectroscopy was not used further in this work. The rate constants of carbofuran degradation were also examined as a function of pH. The data in Fig. 4 show that carbofuran degrades fastest under acidic conditions and that the rate decreases toward the basic end. As the solution pH varies, the speciation of the pyrite surface also changes. In previous work, the distribution of surface species as a function of pH was modeled with a 1-pK surface complexation mechanism; the points pertinent to this work are briefly outlined here. Pyrite carries two kinds of surface sites, iron-hydroxyl sites and sulfide sites. The hydroxyl functional groups are created by hydrolysis of H2O at iron sites, whereas the sulfide groups occur differently, yielding several distinct sulfide surface species. The pHzpc of pyrite is 1.7; when pH < pHzpc the net surface charge of pyrite is positive and protonated surface sites are present. One type of surface site dominates over the other below pH 5. The presence of terminal sulfur species enhances hydrophobic behavior, and this behavior is strongly influenced by the adjacent iron species; accordingly, one type of site appears more hydrophobic than the other, and their relative abundances show opposite trends with pH. The variation of kobs with pH broadly tracked the abundance of one of these surface species, implying a dominant role for that species in the reduction of carbofuran by pyrite.
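The way kobs is obtained at each pH can be illustrated with a short fit under the pseudo-first-order assumption −d[carbofuran]/dt = kobs[carbofuran], i.e. ln C(t) = ln C0 − kobs·t. The sketch below uses made-up time points and concentrations purely as placeholders; it is not the measured data behind Fig. 4.

import numpy as np

# Pseudo-first-order fit: ln C is linear in t with slope -k_obs.
# The arrays below are illustrative placeholders, not measured values.
t_h = np.array([0.0, 10.0, 25.0, 50.0, 75.0, 100.0])   # sampling times (h)
c_uM = np.array([5.0, 4.7, 4.3, 3.8, 3.4, 3.0])        # residual carbofuran (uM)

slope, intercept = np.polyfit(t_h, np.log(c_uM), 1)    # linear regression of ln C vs t
k_obs = -slope                                          # apparent rate constant (h^-1)
half_life = np.log(2.0) / k_obs                         # h

print(f"k_obs ~ {k_obs:.2e} h^-1, half-life ~ {half_life:.0f} h")

Repeating such a fit at each pH (and at each pyrite loading) gives the kobs values whose trends are discussed above and below.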
Borda et al. showed that highly reactive OH radicals are formed spontaneously in pyrite slurries. We hypothesized that the degradation of carbofuran occurs via the formation of OH. However, following the evolution of the OH concentration is challenging because there is no straightforward method for measuring it in situ. Separate experiments were therefore carried out in the presence of the OH scavenger tert-butyl alcohol. As shown earlier, tert-butyl alcohol appears to react with OH and Fe3+ to give HOCH2C2OH, whose presence was confirmed qualitatively by its experimental detection. In the present work, the pyrite site concentrations were 2.73 μM and 48 μM; in both cases, 0.1 M tert-butyl alcohol serving as an OH scavenger decreased the carbofuran degradation by pyrite by about 96%. Furthermore, strictly anaerobic conditions were maintained in all experiments. In the absence of O2, OH generation is known to occur at pyrite defect sites by water splitting. In a perfect pyrite crystal, the sulfur of the persulfide (S22−) unit has an oxidation state of −I, whereas sulfur present at defect sites has an oxidation state of −II; as shown below, charge balance then requires that iron in the vicinity of such sites have an oxidation state of +III. In all instances, OH generation is presumed to occur at defect sites, creating an acidic environment that lowered the solution pH by about 0.7–1.5 units; during the course of the reaction the pH was restored to the set values with 0.935 M NaOH:

Fe3+ + H2O → Fe2+ + OH + H+

The intermediates resulting from carbofuran degradation by pyrite were examined by RP-HPLC and gas chromatography–mass spectrometry. The experimental conditions were pH 2.5, a pyrite loading of 35 μM, and an initial carbofuran concentration of 5 μM. The RP-HPLC chromatogram showed six peaks, indicating six compounds, not all of which could be identified conclusively because authenticated samples of some intermediates were unavailable. Only two of the peaks were identified, as carbofuran and 2,3-dihydro-2,2-dimethylbenzofuran-3,7-diol. Bachman and Patterson reported seven peaks when carbofuran was photo-decomposed; of those seven compounds, only two products were identified on the basis of retention times. Further identification of the degradation products was therefore carried out by GC–MS. However, the total ion current (TIC) spectrum of the same sample used for HPLC analysis showed only four peaks; the absence of a particular peak in the TIC may be due to the low resolution of the mass detector.
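The charge-balance argument above can be written out explicitly. The relations below simply restate, in equation form, the reasoning already given in the text (persulfide sulfur at −I in the bulk versus monosulfide sulfur at −II at defects, forcing a neighbouring Fe(III) that splits water to give OH); they introduce no additional results.

\begin{align*}
\text{bulk pyrite:}\quad & \mathrm{Fe^{2+}} + \mathrm{S_2^{2-}} \quad (\text{each S at } -\mathrm{I})\\
\text{defect site:}\quad & \mathrm{Fe^{3+}} + \mathrm{S^{2-}} \quad (\text{S at } -\mathrm{II})\\
\text{OH generation:}\quad & \mathrm{Fe^{3+}} + \mathrm{H_2O} \;\rightarrow\; \mathrm{Fe^{2+}} + {}^{\bullet}\mathrm{OH} + \mathrm{H^{+}}
\end{align*}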
Identification of the four compounds shown in the mass spectrometric data is given in Fig. S-2: 2,2-dimethyl-2,3-dihydrobenzofuran-7-yl N-methylcarbamate (carbofuran), 2,2-dimethyl-2,3-dihydrobenzofuran-7-yl formate, 2,2-dimethyl-2,3-dihydrobenzofuran-3,7-diol, and 7-hydroxy-2,2-dimethylbenzofuran-3-one. The possible structures of the two unidentified compounds were determined from ab initio calculations at the B3LYP/6-311+G level. Accordingly, the conversion of 2,2-dimethyl-2,3-dihydrobenzofuran-7-yl formate to 2,2-dimethyl-2,3-dihydrobenzofuran-3,7-diol is believed to occur via the formation of 2,2-dimethyl-2,3-dihydrobenzofuran-7-ol. Degradation of 2,2-dimethyl-2,3-dihydrobenzofuran-3,7-diol yielded 7-hydroxy-2,2-dimethylbenzofuran-3-one, with subsequent conversion to 3-hydroxy-2-methoxybenzaldehyde. Although exact structural elucidation is not possible by HPLC, the unidentified peaks are likely due to the presence of these degradation products, 7-hydroxy-2,2-dimethylbenzofuran-3-one, 2,2-dimethyl-2,3-dihydrobenzofuran-7-ol, and 3-hydroxy-2-methoxybenzaldehyde. The intermediates 2,2-dimethyl-2,3-dihydrobenzofuran-7-ol and 3-hydroxy-2-methoxybenzaldehyde were predicted by theoretical calculations at the B3LYP/6-311+G level. Most of the carbofuran degradation products reported in this work are similar to those reported for carbofuran photolysis, but they differ from the products obtained by carbofuran pyrolysis. From the data presented so far it can be deduced that carbofuran degradation is controlled by the generation of OH on pyrite. Owing to the high reactivity of OH, the relevant reactions can be assumed to lie far from equilibrium; back reactions are therefore assumed not to occur to any significant extent. As shown in Fig. 5, a plausible mechanism for carbofuran degradation by pyrite was proposed, comprising the following steps: (i) cleavage of the C–N bond, forming 2,2-dimethyl-2,3-dihydrobenzofuran-7-yl formate and methylamine; and (ii) cleavage of the C–O bond of 2,2-dimethyl-2,3-dihydrobenzofuran-7-yl formate, forming 2,2-dimethyl-2,3-dihydrobenzofuran-7-ol and carbamic acid. Carbamic acid is unstable and rapidly degrades to methylamine, inorganic nitrogen species, and CO2. Bachman and Patterson proposed a three-step mechanism for carbofuran photo-degradation; its first step is broadly equivalent to the steps suggested here, in that the carbamate group is cleaved from carbofuran to form 2,3-dihydro-2,2-dimethylbenzofuran-7-ol and carbamic acid, and the latter decomposes readily into methylamine and CO2. The formation of gaseous products is a driving force for the forward reaction. Furthermore, as a result of the degradation of carbamic acid or methylamine, the solution also contained NH4+, NO3−, and NO2−; the equivalent nitrogen content was 1.7 × 10−6 mol, whereas the nitrogen content of the carbofuran used was 5.0 × 10−6 mol. The discrepancy in the nitrogen mass balance is attributed to the escape of some nitrogen as gaseous ammonia or methylamine. A comparable carbon mass balance could not be constructed because of the extensive release of CO2 and the presence of possible intermediates that have not yet been quantified. Pyrite is an intrinsic semiconductor that is believed to split water to produce OH at defect or surface sites. The material and its oxidation products are environmentally benign, and it is a low-cost source of Fe2+, which makes it an ideal substrate for heterogeneous catalysis in most environmental applications.
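The nitrogen recovery implied by these figures can be stated explicitly; the calculation below is simply the arithmetic behind the mass-balance statement above, not a new measurement.

\[
\text{N recovery in solution} \;=\; \frac{1.7\times10^{-6}\ \text{mol}}{5.0\times10^{-6}\ \text{mol}} \;\approx\; 0.34
\]

That is, roughly two-thirds of the nitrogen originally present in the carbofuran is unaccounted for in solution, consistent with the proposed loss of nitrogen as gaseous ammonia or methylamine.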
The pyrite-based Fenton process showed enhanced efficiency owing to the self-regulation of Fe2+ in solution. Although the exact mechanism remains inconclusive, the production of H2O2 has also been demonstrated in the vicinity of surface sites. In the classical Fenton process both substrates, Fe2+ and H2O2, must be added externally, whereas in pyrite–water systems both are believed to be produced in situ. Pyrite therefore has great potential as a starting material for the degradation of organic pollutants. Pyrite has already been used as a natural Fenton reactor for the degradation of organic pollutants in several instances, and it has been proposed as an active substrate for domestic water treatment and for nitrate and metalloid remediation. This study indicates that natural pyrite spontaneously initiates slow degradation of carbofuran. Toxicity data for most of the degradation products are not yet available; nonetheless, such information is vital for assessing the environmental impact of carbofuran. When properly functionalized in accordance with green chemistry principles, pyrite can serve as a starting material for pollution control and remediation because of its simplicity, cost effectiveness, and environmental inertness. Partial degradation of carbofuran occurred in the presence of pyrite, with the highest efficiency around pH ∼2.5. The dominant degradation products were 2,2-dimethyl-2,3-dihydrobenzofuran-7-yl-methylcarbamate, 2,2-dimethyl-2,3-dihydrobenzofuran-7-yl formate, 2,2-dimethyl-2,3-dihydrobenzofuran-3,7-diol, 7-hydroxy-2,2-dimethylbenzofuran-3-one, and two unidentified intermediates, whose plausible structures were postulated theoretically. In the presence of natural pyrite, nearly 40% of the carbofuran was degraded within 100 h. At a given temperature, pH, and initial carbofuran loading, the degradation efficiency increased with the pyrite loading. Research aimed at improving the efficiency of pyrite as a geo-catalyst is currently in progress.
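As a rough consistency check on the figures quoted above (not a result reported in the study), the sketch below converts "~40% degradation in 100 h" into an apparent pseudo-first-order rate constant, assuming that first-order loss holds over the whole run.

import math

# Back-of-envelope estimate: if carbofuran loss stays pseudo-first-order,
# ~40% degradation in 100 h implies the apparent rate constant below.
fraction_remaining = 0.60        # ~40% of the carbofuran degraded
elapsed_h = 100.0                # duration of the run (h)

k_obs = -math.log(fraction_remaining) / elapsed_h
print(f"implied k_obs ~ {k_obs:.1e} h^-1")                  # ~5.1e-03 h^-1
print(f"implied half-life ~ {math.log(2.0) / k_obs:.0f} h") # ~136 h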
This work provides new insight into the degradation of 2,2-dimethyl-2,3-dihydrobenzofuran-7-yl-N-methylcarbamate (hereafter carbofuran) by natural pyrite as a function of pH and adsorbent loading. In the presence of tert-butyl alcohol, an OH scavenger, the degradation of carbofuran was almost completely suppressed. In acidic solutions (pH < 5) the degradation kinetics were pseudo-first-order in carbofuran, −d[carbofuran]/dt = kobserved × [carbofuran]. The dependence of kobserved on [FeS2] was given by kobserved = k0 + k1 × [FeS2], where k0 = 1.16 × 10−7 h−1 and k1 = 0.137 h−1. The precise steps of carbofuran degradation by pyrite remain to be elucidated.